Sample records for distance-based linear models

  1. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
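The "appropriate Euclidean distance" over mixed continuous and categorical covariates that the detrending step relies on can be sketched as follows; this is a minimal illustration (standardized continuous columns plus 0/1 dummies), not necessarily the exact construction used in the paper:

```python
import numpy as np

def mixed_euclidean(X_cont, X_cat):
    """Euclidean distance matrix over mixed covariates: standardize the
    continuous columns and append 0/1 dummy columns for each level of
    each categorical column, then take pairwise Euclidean distances."""
    Z = (X_cont - X_cont.mean(0)) / X_cont.std(0)
    dummies = []
    for col in X_cat.T:
        for level in np.unique(col):
            dummies.append((col == level).astype(float))
    M = np.column_stack([Z] + dummies)
    diff = M[:, None, :] - M[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))
```

The resulting symmetric matrix is the kind of object the distance-based detrending step would consume.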

  2. The effect of different distance measures in detecting outliers using clustering-based algorithm for circular regression model

    NASA Astrophysics Data System (ADS)

    Di, Nur Faraidah Muhammad; Satari, Siti Zanariah

    2017-05-01

    Outlier detection in linear data sets has been studied extensively, but only a small amount of work has been done on outlier detection in circular data. In this study, we propose multiple-outlier detection in circular regression models based on a clustering algorithm. Clustering techniques rely on a distance measure to define the separation between data points. Here, we introduce a similarity distance based on Euclidean distance for the circular model and obtain a cluster tree using the single-linkage clustering algorithm. Then, a stopping rule for the cluster tree, based on the mean direction and circular standard deviation of the tree height, is proposed. We classify cluster groups that exceed the stopping rule as potential outliers. Our aim is to demonstrate the effectiveness of the proposed algorithms with the similarity distances in detecting the outliers. We find that the proposed methods perform well and are applicable to circular regression models.
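A minimal sketch of the two ingredients the abstract describes, a distance suitable for circular (angular) data and single-linkage merge heights, assuming the similarity distance is the angular separation on the circle (the paper's exact definition may differ):

```python
import numpy as np

def circular_distance(theta_i, theta_j):
    """Angular separation on the circle, in [0, pi] radians."""
    d = np.abs(theta_i - theta_j) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def single_linkage_heights(angles):
    """Merge heights of a single-linkage cluster tree built from
    pairwise circular distances (naive agglomerative pass)."""
    clusters = [[i] for i in range(len(angles))]
    heights = []
    while len(clusters) > 1:
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: minimum distance over cross-cluster pairs
                d = min(circular_distance(angles[i], angles[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        heights.append(d)
        clusters[a] += clusters.pop(b)  # b > a, so index a stays valid
    return heights
```

A stopping rule like the paper's would then be applied to the distribution of these heights to cut the tree and flag small, distant clusters as potential outliers.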

  3. One new method for road data shape change detection

    NASA Astrophysics Data System (ADS)

    Tang, Luliang; Li, Qingquan; Xu, Feng; Chang, Xiaomeng

    2009-10-01

    Similarity is a psychological construct. This paper defines the Difference Distance and puts forward a Similarity Measuring Model for linear spatial data (SMM-L) that integrates the Distance View and the Feature Set View, two established views of similarity cognition. Based on a study of the relationship between spatial data change and similarity, a change detection algorithm for linear spatial data is developed, and a test on road data change detection is realized.

  4. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations, or model forms, based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with a distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
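The "identified directly from data" step can be illustrated with exact dynamic mode decomposition (DMD), a standard way to estimate Koopman spectral properties; the spectral distance below is one illustrative choice, not necessarily any of the distances introduced in the paper:

```python
import numpy as np

def dmd_eigenvalues(X):
    """Eigenvalues of the best-fit linear map X[:, k+1] ~= A X[:, k]
    (exact DMD): a simple data-driven estimate of Koopman spectral
    properties of the dynamics that generated the snapshots X."""
    X1, X2 = X[:, :-1], X[:, 1:]
    A = X2 @ np.linalg.pinv(X1)
    return np.linalg.eigvals(A)

def spectral_distance(X, Y):
    """Compare two time series (with equal state dimension) via the
    Euclidean distance between their sorted DMD spectra."""
    ex = np.sort_complex(dmd_eigenvalues(X))
    ey = np.sort_complex(dmd_eigenvalues(Y))
    return float(np.linalg.norm(ex - ey))
```

In practice a rank-truncated SVD of X1 is used before forming A; the full-pseudoinverse version here keeps the sketch short.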

  5. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks

    PubMed Central

    Avsec, Žiga; Cheng, Jun; Gagneur, Julien

    2018-01-01

    Motivation: Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as the transcription start site, exon boundaries or the polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its ability to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results: Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for the human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation: Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact: avsec@in.tum.de or gagneur@in.tum.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29155928
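The core idea, expanding a scalar distance into a smooth spline basis whose linear combination the network learns, can be sketched with a truncated-power cubic basis (an illustrative stand-in for the B-spline basis used by the CONCISE layer):

```python
import numpy as np

def spline_basis(dist, knots):
    """Cubic truncated-power spline basis for a distance covariate.
    Columns: 1, d, d^2, d^3, and (d - k)^3_+ for each knot k."""
    d = np.asarray(dist, dtype=float)
    cols = [np.ones_like(d), d, d ** 2, d ** 3]
    cols += [np.clip(d - k, 0, None) ** 3 for k in knots]
    return np.stack(cols, axis=1)

def spline_transform(dist, knots, weights):
    """Smooth scalar feature f(d) = basis(d) @ w: the kind of learned
    positional effect a spline-transformation layer represents."""
    return spline_basis(dist, knots) @ weights
```

In the network, `weights` would be trainable parameters, so the positional effect f(d) is learned jointly with the sequence features.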

  6. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
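The distance-matrix construction described above is straightforward to sketch, assuming a shared covariance estimate `cov` for the fitted coefficient vectors (the paper's generalized version may pool covariances differently):

```python
import numpy as np

def mahalanobis_dist(beta_i, beta_j, cov):
    """Mahalanobis distance between two fitted coefficient vectors
    under a shared covariance estimate."""
    diff = np.asarray(beta_i, float) - np.asarray(beta_j, float)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

def distance_matrix(betas, cov):
    """Symmetric species-by-species distance matrix from per-species
    habitat-model coefficient vectors; feed this to clustering/MDS."""
    n = len(betas)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = mahalanobis_dist(betas[i], betas[j], cov)
    return D
```

With `cov` set to the identity this reduces to the Euclidean-distance alternative the simulations compare against.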

  7. Wind Characterization for the Assessment of Collision Risk During Flight Level Changes

    NASA Technical Reports Server (NTRS)

    Carreno, Victor; Chartrand, Ryan

    2009-01-01

    A model of vertical wind gradient is presented based on National Oceanic and Atmospheric Administration (NOAA) wind data. The objective is to have an accurate representation of wind for use in Collision Risk Models (CRM) of aircraft procedures. Depending on how an aircraft procedure is defined, the wind and its different characteristics will have a more or less severe impact on distances between aircraft. For the In-Trail Procedure, the non-linearity of the vertical wind gradient has the greatest impact on longitudinal distance. The analysis in this paper extracts standard deviation, mean, maximum, and linearity characteristics from the NOAA data.

  8. Research of the impact of coupling between unit cells on performance of linear-to-circular polarization conversion metamaterial with half transmission and half reflection

    NASA Astrophysics Data System (ADS)

    Guo, Mengchao; Zhou, Kan; Wang, Xiaokun; Zhuang, Haiyan; Tang, Dongming; Zhang, Baoshan; Yang, Yi

    2018-04-01

    In this paper, the impact of coupling between unit cells on the performance of a linear-to-circular polarization conversion metamaterial with half transmission and half reflection is analyzed by changing the distance between the unit cells. An equivalent electrical circuit model is then built, based on this analysis, to explain the behavior. The simulated results show that, when the distance between the unit cells is 23 mm, the metamaterial converts half of the incident linearly-polarized wave into a reflected left-hand circularly-polarized wave and the other half into a transmitted left-hand circularly-polarized wave at 4.4 GHz; when the distance is 28 mm, the metamaterial reflects all of the incident linearly-polarized wave at 4.4 GHz; and when the distance is 32 mm, the metamaterial converts half of the incident linearly-polarized wave into a reflected right-hand circularly-polarized wave and the other half into a transmitted right-hand circularly-polarized wave at 4.4 GHz. Tunability is thus realized successfully. The analysis shows that changes in the coupling between unit cells lead to the changes in performance of this metamaterial. The coupling between the unit cells is therefore considered when building the equivalent electrical circuit model. The resulting equivalent circuit model can be used to explain the simulated results, which confirms its validity, and it can also aid the design of tunable polarization conversion metamaterials.

  9. Feature Extraction of High-Dimensional Structures for Exploratory Analytics

    DTIC Science & Technology

    2013-04-01

    Comparison of Euclidean vs. geodesic distance: LDRs use a metric based on the Euclidean distance between two points, while NLDRs are based on geodesic distance. An NLDR successfully unrolls a curved manifold, whereas an LDR fails. [Fragmentary indexing excerpt; the source also notes that methods such as classical metric multidimensional scaling are linear dimensionality reductions (LDRs), an LDR being based on a linear combination of ...]

  10. A fast community detection method in bipartite networks by distance dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Hong-liang; Ch'ng, Eugene; Yong, Xi; Garibaldi, Jonathan M.; See, Simon; Chen, Duan-bing

    2018-04-01

    Many real bipartite networks are found to be divided into two-mode communities. In this paper, we formulate a new two-mode community detection algorithm, BiAttractor. It is based on the distance dynamics model Attractor proposed by Shao et al., extended from unipartite to bipartite networks. Since the Jaccard coefficient of the distance dynamics model cannot measure distances between vertices of different types in bipartite networks, our main contribution is to extend the distance dynamics model from unipartite to bipartite networks using a novel measure, the Local Jaccard Distance (LJD). Furthermore, distances between vertices of different types are not affected by common neighbors in the original method. This new idea makes clear assumptions and yields interpretable results in linear time complexity O(|E|) in sparse networks, where |E| is the number of edges. Experiments on synthetic networks demonstrate that it can overcome the resolution limit of existing methods. Further experiments on real networks show that this model can accurately detect interpretable community structures in a short time.
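For reference, the classic Jaccard distance between two same-side vertices of a bipartite graph, based on shared neighbours on the other side, looks as follows; the paper's Local Jaccard Distance (LJD) extends this to pairs of vertices of different types, and its exact form is not reproduced here:

```python
def jaccard_distance(adj, u, v):
    """Jaccard distance between two vertices on the same side of a
    bipartite graph; adj maps a vertex to its set of neighbours on
    the other side."""
    nu, nv = adj[u], adj[v]
    union = nu | nv
    if not union:
        return 1.0  # no neighbourhood information at all
    return 1.0 - len(nu & nv) / len(union)
```

Distance dynamics then iteratively updates such edge distances until communities "attract" their members to distance 0 and repel outsiders to distance 1.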

  11. Neural Decoding and "Inner" Psychophysics: A Distance-to-Bound Approach for Linking Mind, Brain, and Behavior.

    PubMed

    Ritchie, J Brendan; Carlson, Thomas A

    2016-01-01

    A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and of observers under decision-boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
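The two quantities the approach links, signed distance to a linear decision boundary and reaction time, can be sketched as below; the linear RT link and its parameter values are hypothetical placeholders, not estimates from the paper:

```python
import numpy as np

def distance_to_bound(w, b, x):
    """Signed distance from activation pattern x to the hyperplane
    w.x + b = 0 of a linear classifier."""
    w = np.asarray(w, float)
    return (w @ np.asarray(x, float) + b) / np.linalg.norm(w)

def predicted_rt(dist, rt0=600.0, slope=-40.0):
    """Hypothetical linear link: patterns farther from the boundary
    yield faster responses (shorter RT), per distance-to-bound
    accounts. rt0 and slope are illustrative (ms, ms per unit)."""
    return rt0 + slope * abs(dist)
```

Fitting such a link per subject is what lets decoder geometry be tested against behavior.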

  12. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  13. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
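The calibration and conversion steps can be sketched as a two-point linear fit; the function names and the offset term are illustrative assumptions (the paper parameterizes the model by a magnification ratio):

```python
def calibrate(p1, d1, p2, d2):
    """Fit real_cm = a * pixels + b from two reference lengths with
    known real sizes: the two-known-distances calibration step."""
    a = (d2 - d1) / (p2 - p1)  # cm per pixel (magnification ratio)
    return a, d1 - a * p1

def pixels_to_cm(pix, a, b):
    """Linear pixel-to-centimeter model from the calibration above."""
    return a * pix + b
```

Once `(a, b)` are known, any pixel distance specified in the (back-projected) image maps to a real-world length.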

  14. Can a one-layer optical skin model including melanin and inhomogeneously distributed blood explain spatially resolved diffuse reflectance spectra?

    NASA Astrophysics Data System (ADS)

    Karlsson, Hanna; Pettersson, Anders; Larsson, Marcus; Strömberg, Tomas

    2011-02-01

    Model-based analysis of calibrated diffuse reflectance spectroscopy can be used for determining the oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as a chromophore, and of compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber optic probe with source-detector separations of 0.4 and 1.2 mm. Absolutely calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance. The two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller one was a different light scattering decay with wavelength, while the tissue fraction of Hb and the saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.

  15. An extended car-following model considering random safety distance with different probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi

    2018-02-01

    Because of differences in vehicle type or driving skill, driving strategies are not exactly the same: different vehicles may travel at different speeds for the same headway. Since the optimal velocity function is determined by the safety distance in addition to the maximum velocity and headway, an extended car-following model accounting for a random safety distance with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. Cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances selected with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
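A Bando-type optimal velocity function with a safety-distance parameter, plus a draw from a discrete safety-distance distribution, gives a minimal sketch of the model ingredients; the numeric values are illustrative, not taken from the paper:

```python
import math
import random

def optimal_velocity(headway, v_max=2.0, safety=4.0):
    """Bando-type optimal velocity function; the safety distance
    shifts the tanh profile along the headway axis."""
    return (v_max / 2.0) * (math.tanh(headway - safety) + math.tanh(safety))

def random_safety(choices=((3.8, 0.5), (4.2, 0.5))):
    """Draw a safety distance from a discrete (value, probability)
    distribution, as in the 'different probabilities' extension."""
    r, acc = random.random(), 0.0
    for s, p in choices:
        acc += p
        if r <= acc:
            return s
    return choices[-1][0]
```

In a simulation, each vehicle's acceleration would relax its speed toward `optimal_velocity(headway, safety=random_safety())`.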

  16. Light propagation and the distance-redshift relation in a realistic inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Futamase, Toshifumi; Sasaki, Misao

    1989-01-01

    The propagation of light rays in a clumpy universe constructed by a cosmological version of the post-Newtonian approximation was investigated. It is shown that the linear approximation to the propagation equations is valid in the region where zeta is approximately less than 1, even if the density contrast is much larger than unity. Based on a general order-of-magnitude statistical consideration, it is argued that the linear approximation is still valid where zeta is approximately greater than 1. A general formula for the distance-redshift relation in a clumpy universe is given. An explicit expression is derived for a simplified situation in which the effect of the gravitational potential of inhomogeneities dominates. In light of the derived relation, the validity of the Dyer-Roeder distance is discussed. Also, statistical properties of light rays are investigated for a simple model of an inhomogeneous universe. The result of this example supports the validity of the linear approximation.

  17. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.

  18. Accuracy of digital models generated by conventional impression/plaster-model methods and intraoral scanning.

    PubMed

    Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru

    2018-04-17

    We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups using the total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy compared to the AI and SI groups. Intraoral scans may be more accurate than scans of conventional impression/plaster models.

  19. Rainfall induced landslide susceptibility mapping using weight-of-evidence, linear and quadratic discriminant and logistic model tree method

    NASA Astrophysics Data System (ADS)

    Hong, H.; Zhu, A. X.

    2017-12-01

    Climate change is a common and serious phenomenon worldwide. The intensification of rainfall extremes with climate change is of key importance to society, and it may have a large impact through landslides. This paper presents new GIS-based ensemble data mining techniques, weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis, for landslide spatial modelling. This research was applied in Anfu County, a landslide-prone area in Jiangxi Province, China. Based on a literature review and research on the study area, we selected the landslide influencing factors, and their maps were digitized in a GIS environment. These landslide influencing factors are altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), stream power index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index, and land use. According to historical information on individual landslide events, interpretation of aerial photographs, and field surveys supported by the Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, namely training and validating (70/30), based on a random selection scheme. In this research, Pearson's correlation was used to evaluate the relationship between the landslides and the influencing factors. In the next step, the three data mining techniques combined with weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis were used for landslide spatial modelling and zonation. Finally, the landslide susceptibility maps produced by these models were evaluated by the ROC curve. The results showed that the area under the curve (AUC) of all of the models was > 0.80. At the same time, the highest AUC value was for the linear and quadratic discriminant model (0.864), followed by logistic model tree (0.832) and weight-of-evidence (0.819). In general, the landslide maps can be applied for land use planning and management in the Anfu area.

  20. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  1. Baryonic Force for Accelerated Cosmic Expansion and Generalized U1b Gauge Symmetry in Particle-Cosmology

    NASA Astrophysics Data System (ADS)

    Khan, Mehbub; Hao, Yun; Hsu, Jong-Ping

    2018-01-01

    Based on baryon charge conservation and a generalized Yang-Mills symmetry for Abelian (and non-Abelian) groups, we discuss a new baryonic gauge field and its linear potential for two point-like baryon charges. The force between two point-like baryons is repulsive, extremely weak and independent of distance. However, for two extended baryonic systems, we have a dominant linear force ∝ r. Thus, only in the later stage of cosmic evolution, when two baryonic galaxies are separated by an extremely large distance, can the new repulsive baryonic force overcome the gravitational attractive force. Such a model provides a gauge-field-theoretic understanding of the late-time accelerated cosmic expansion. The baryonic force can be tested by measuring the accelerated Wu-Doppler frequency shifts of supernovae at different distances.
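The distance dependence described above can be written compactly (our paraphrase of the abstract, not the paper's notation):

```latex
V_{\mathrm{pt}}(r) \propto r
\quad\Longrightarrow\quad
\left| F_{\mathrm{pt}} \right| = \left| \frac{dV_{\mathrm{pt}}}{dr} \right| = \text{const},
\qquad
F_{\mathrm{ext}}(r) \propto r \quad \text{(extended baryonic systems)},
```

so the repulsive force between point charges is distance-independent, while for extended systems the integrated force grows linearly with separation, eventually dominating gravity at very large r.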

  2. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
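The static, linear, and mean forecast models described above can be sketched for a single scalar feature. The function below is an illustrative simplification (the paper's models act on whole morphology feature vectors, not one coordinate), and all names and numbers are hypothetical:

```python
import numpy as np

def forecast(observed_times, observed_vals, t_query, model="static"):
    """Forecast a scalar morphology feature at time t_query from values
    observed at earlier adjustment points (illustrative simplification)."""
    t = np.asarray(observed_times, dtype=float)
    y = np.asarray(observed_vals, dtype=float)
    if model == "static":          # carry the latest observation forward
        return y[-1]
    if model == "mean":            # predict the mean of what was seen
        return float(y.mean())
    if model == "linear":          # extrapolate a least-squares trend
        slope, intercept = np.polyfit(t, y, 1)
        return float(slope * t_query + intercept)
    raise ValueError(model)

# toy example: a landmark coordinate shrinking ~0.5 mm per fraction
times, vals = [0, 5, 10], [20.0, 17.5, 15.0]
print(forecast(times, vals, 20, "static"))  # 15.0
print(forecast(times, vals, 20, "linear"))  # ≈ 10.0
```

Adjustment points correspond to refreshing `observed_times`/`observed_vals` with newly measured data before forecasting again.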

  3. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.

  4. Solving the aerodynamics of fungal flight: How air viscosity slows spore motion

    PubMed Central

    Fischer, Mark W. F.; Stolze-Rybczynski, Jessica L.; Davis, Diana J.; Cui, Yunluan; Money, Nicholas P.

    2010-01-01

    Viscous drag causes the rapid deceleration of fungal spores after high-speed launches and limits discharge distance. Stokes' law posits a linear relationship between drag force and velocity. It provides an excellent fit to experimental measurements of the terminal velocity of free-falling spores and other instances of low Reynolds number motion (Re<1). More complex, non-linear drag models have been devised for movements characterized by higher Re, but their effectiveness for modeling the launch of fast-moving fungal spores has not been tested. In this paper, we use data on spore discharge processes obtained from ultra-high-speed video recordings to evaluate the effects of air viscosity predicted by Stokes' law and a commonly used non-linear drag model. We find that discharge distances predicted from launch speeds by Stokes' model provide a much better match to measured distances than estimates from the more complex drag model. Stokes' model works better over a wide range of projectile sizes, launch speeds, and discharge distances, from microscopic mushroom ballistospores discharged at <1 m/s over a distance of <0.1 mm (Re<1.0), to macroscopic sporangia of Pilobolus that are launched at >10 m/s and travel as far as 2.5 m (Re>100). PMID:21036338
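Under Stokes' (linear) drag the discharge range has a closed form: integrating m dv/dt = -6πμrv gives v(t) = v0·exp(-t/τ) with τ = m/(6πμr), so the total distance is x∞ = v0·τ. A minimal sketch with illustrative spore parameters (not the paper's measured values):

```python
import math

def stokes_range(radius_m, density_kg_m3, v0_m_s, mu_pa_s=1.8e-5):
    """Total distance of a spore launched at v0 and decelerated only by
    Stokes (linear) drag: x_inf = v0 * tau, tau = m / (6*pi*mu*r)."""
    m = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    tau = m / (6.0 * math.pi * mu_pa_s * radius_m)
    return v0_m_s * tau

# a ~5-micron ballistospore launched at 1 m/s travels well under a millimetre
d = stokes_range(radius_m=5e-6, density_kg_m3=1200.0, v0_m_s=1.0)
print(f"{d*1000:.3f} mm")  # ≈ 0.37 mm
```

The quadratic dependence of mass on radius makes τ scale with r², which is why small spores stop so quickly despite fast launches.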

  5. Geometric model of pseudo-distance measurement in satellite location systems

    NASA Astrophysics Data System (ADS)

    Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.

    2018-04-01

    The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide a precise solution of the problem, but rather an approximate one. This inaccuracy, together with bias in the measurement of the distance from satellite to receiver, results in errors of several meters. Refinement of the current mathematical model is therefore clearly relevant. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is refinement of the current mathematical model and derivation of an analytical solution of the system of equations on its basis. In order to attain this objective, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations, which allows an analytical solution, is derived. An example of the analytical solution's implementation is presented. Application of the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems improves the accuracy of such measurements.
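For context, the linearization-based (approximate) scheme the paper sets out to refine can be sketched as a Gauss-Newton iteration on the range equations |x - s_i| = r_i. The satellite geometry, solver structure, and iteration count below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def trilaterate(sats, ranges, x0=(0.0, 0.0, 0.0), iters=25):
    """Gauss-Newton solution of |x - s_i| = r_i: linearize the predicted
    ranges around the current estimate and solve for the correction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sats - x, axis=1)    # predicted ranges
        J = (x - sats) / d[:, None]             # Jacobian of |x - s_i|
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x = x + dx
    return x

# four "satellites" with exact (noise-free) ranges to a known receiver
sats = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0],
                 [0.0, 0.0, 10.0], [10.0, 10.0, 10.0]])
receiver = np.array([1.0, 2.0, 3.0])
ranges = np.linalg.norm(sats - receiver, axis=1)
print(trilaterate(sats, ranges))  # recovers ~[1, 2, 3]
```

With noisy ranges and clock bias the iteration still converges but only to an approximate point, which is the inaccuracy the paper's analytical solution addresses.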

  6. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment

    PubMed Central

    Hashim, Mazlan

    2015-01-01

    This research presents GIS-based statistical models for landslide susceptibility mapping using a geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors, including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road, were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified using GIS-based statistical models, including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which contains a total of 92 landslide locations, was created from numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (96% accuracy) is a better predictor than the AHP (91% accuracy) and WLC (89% accuracy) models. These landslide susceptibility maps would be useful for hazard mitigation and regional planning. PMID:25898919
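A weighted linear combination (WLC) susceptibility score is simply a weighted sum of rescaled factor layers. The weights, toy layers, and min-max rescaling below are illustrative assumptions (for brevity, distance layers are not inverted even though greater distance from a fault usually means lower hazard):

```python
import numpy as np

def wlc_susceptibility(factors, weights):
    """Weighted linear combination: each factor layer is min-max rescaled
    to [0, 1] and combined with expert weights summing to 1 (sketch)."""
    w = np.asarray(weights, dtype=float)
    assert abs(w.sum() - 1.0) < 1e-9
    scored = []
    for layer in factors:                     # one raster layer per factor
        a = np.asarray(layer, dtype=float)
        scored.append((a - a.min()) / (a.max() - a.min()))
    return sum(wi * s for wi, s in zip(w, scored))

# two hypothetical 2x2 factor rasters: slope (degrees), distance to fault (m)
slope = [[0, 30], [45, 15]]
dist_to_fault = [[500, 100], [50, 400]]
s = wlc_susceptibility([slope, dist_to_fault], [0.6, 0.4])
print(s)  # per-cell susceptibility scores in [0, 1]
```

AHP differs only in how the weights are derived (pairwise comparison matrices), while the combination step stays linear.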

  7. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment.

    PubMed

    Shahabi, Himan; Hashim, Mazlan

    2015-04-22

    This research presents GIS-based statistical models for landslide susceptibility mapping using a geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors, including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road, were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified using GIS-based statistical models, including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which contains a total of 92 landslide locations, was created from numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (96% accuracy) is a better predictor than the AHP (91% accuracy) and WLC (89% accuracy) models. These landslide susceptibility maps would be useful for hazard mitigation and regional planning.

  8. Efficient distance calculation using the spherically-extended polytope (s-tope) model

    NASA Technical Reports Server (NTRS)

    Hamlin, Gregory J.; Kelley, Robert B.; Tornero, Josep

    1991-01-01

    An object representation scheme which allows for Euclidean distance calculation is presented. The object model extends the polytope model by representing objects as the convex hull of a finite set of spheres. An algorithm for calculating distances between objects is developed which is linear in the total number of spheres specifying the two objects.
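The paper's linear-time algorithm computes the exact hull-to-hull distance; as a simple illustration of the representation itself, the minimum pairwise sphere gap gives an upper bound on that distance, since every sphere is contained in its s-tope's convex hull. Names and geometry below are illustrative:

```python
import itertools
import math

def sphere_gap(c1, r1, c2, r2):
    """Euclidean gap between two spheres (negative if they overlap)."""
    return math.dist(c1, c2) - r1 - r2

def stope_distance_upper_bound(spheres_a, spheres_b):
    """Minimum pairwise sphere gap between two s-topes, each given as a
    list of (center, radius) pairs. An upper bound on the true distance,
    not the paper's exact linear-time result."""
    return min(sphere_gap(ca, ra, cb, rb)
               for (ca, ra), (cb, rb) in itertools.product(spheres_a, spheres_b))

A = [((0, 0, 0), 1.0), ((2, 0, 0), 1.0)]   # capsule-like s-tope
B = [((6, 0, 0), 1.0)]                      # a single sphere
print(stope_distance_upper_bound(A, B))     # 2.0
```

For spheres of equal radius the hull of the centers swept by that radius is exactly the s-tope, which is what makes the representation convenient for collision checks.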

  9. Restricted DCJ-indel model: sorting linear genomes with DCJ and indels

    PubMed Central

    2012-01-01

    Background The double-cut-and-join (DCJ) is a model that is able to efficiently sort a genome into another, generalizing the typical mutations (inversions, fusions, fissions, translocations) to which genomes are subject, but allowing the existence of circular chromosomes at the intermediate steps. In the general model many circular chromosomes can coexist in some intermediate step. However, when the compared genomes are linear, it is more plausible to use the so-called restricted DCJ model, in which we proceed with the reincorporation of a circular chromosome immediately after its creation. These two consecutive DCJ operations, which create and reincorporate a circular chromosome, mimic a transposition or a block-interchange. When the compared genomes have the same content, it is known that the genomic distance for the restricted DCJ model is the same as the distance for the general model. If the genomes have unequal contents, in addition to DCJ it is necessary to consider indels, which are insertions and deletions of DNA segments. Linear time algorithms were proposed to compute the distance and to find a sorting scenario in a general, unrestricted DCJ-indel model that considers DCJ and indels. Results In the present work we consider the restricted DCJ-indel model for sorting linear genomes with unequal contents. We allow DCJ operations and indels with the following constraint: if a circular chromosome is created by a DCJ, it has to be reincorporated in the next step (no other DCJ or indel can be applied between the creation and the reincorporation of a circular chromosome). We then develop a sorting algorithm and give a tight upper bound for the restricted DCJ-indel distance. Conclusions We have given a tight upper bound for the restricted DCJ-indel distance. The question of whether this bound can be reduced so that both the general and the restricted DCJ-indel distances are equal remains open. PMID:23281630

  10. A comparison of the two approaches of the theory of critical distances based on linear-elastic and elasto-plastic analyses

    NASA Astrophysics Data System (ADS)

    Terekhina, A. I.; Plekhov, O. A.; Kostina, A. A.; Susmel, L.

    2017-06-01

    The problem of determining the strength of engineering structures while considering the effects of non-local fracture in the area of stress concentrators is of great scientific and industrial interest. This work aims to modify the classical theory of critical distances (TCD), known as a method of failure prediction based on linear-elastic analysis, for the case of elasto-plastic material behaviour, in order to improve the accuracy of lifetime estimation of notched components. Plasticity has been accounted for with the use of the simplified Johnson-Cook model. Mechanical tests were carried out using a 300 kN electromechanical testing machine (Shimadzu AG-X Plus). Cylindrical un-notched specimens and specimens with stress concentrators made of titanium alloy Grade 2 were tested under tensile loading at different gripper travel speeds, which spanned several orders of magnitude in strain rate. The results of elasto-plastic analyses of stress distributions near a wide variety of notches are presented. The results showed that the modification of the TCD based on elasto-plastic analysis gives estimates falling within an error interval of ±5-10%, more accurate predictions than the linear-elastic TCD solution. The use of an improved description of the stress-strain state at the notch tip allows the critical distance to be introduced as a material parameter.
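In the classical (linear-elastic) TCD, the critical distance is defined from the threshold stress intensity range and the plain-specimen fatigue limit as L = (1/π)(ΔK_th/Δσ_0)². The numbers below are illustrative placeholders, not the Grade 2 titanium values from the paper:

```python
import math

def critical_distance_L(delta_K_th, delta_sigma_0):
    """Material critical distance from the classical linear-elastic TCD:
    L = (1/pi) * (dK_th / dsigma_0)^2.
    Units: dK_th in MPa*sqrt(m), dsigma_0 in MPa, L in metres."""
    return (delta_K_th / delta_sigma_0) ** 2 / math.pi

# hypothetical material constants for illustration only
L = critical_distance_L(delta_K_th=7.0, delta_sigma_0=450.0)
print(f"L = {L * 1000:.3f} mm")
# the point method then samples the (elastic or elasto-plastic) stress
# at a distance L/2 from the notch tip and compares it with dsigma_0
```

The paper's modification keeps this framework but evaluates the notch-tip stress field from an elasto-plastic rather than linear-elastic analysis.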

  11. Solving the aerodynamics of fungal flight: how air viscosity slows spore motion.

    PubMed

    Fischer, Mark W F; Stolze-Rybczynski, Jessica L; Davis, Diana J; Cui, Yunluan; Money, Nicholas P

    2010-01-01

    Viscous drag causes the rapid deceleration of fungal spores after high-speed launches and limits discharge distance. Stokes' law posits a linear relationship between drag force and velocity. It provides an excellent fit to experimental measurements of the terminal velocity of free-falling spores and other instances of low Reynolds number motion (Re<1). More complex, non-linear drag models have been devised for movements characterized by higher Re, but their effectiveness for modeling the launch of fast-moving fungal spores has not been tested. In this paper, we use data on spore discharge processes obtained from ultra-high-speed video recordings to evaluate the effects of air viscosity predicted by Stokes' law and a commonly used non-linear drag model. We find that discharge distances predicted from launch speeds by Stokes' model provide a much better match to measured distances than estimates from the more complex drag model. Stokes' model works better over a wide range of projectile sizes, launch speeds, and discharge distances, from microscopic mushroom ballistospores discharged at <1 m s(-1) over a distance of <0.1 mm (Re<1.0), to macroscopic sporangia of Pilobolus that are launched at >10 m s(-1) and travel as far as 2.5 m (Re>100). Copyright © 2010 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  12. Estimating Genomic Distance from DNA Sequence Location in Cell Nuclei by a Random Walk Model

    NASA Astrophysics Data System (ADS)

    van den Engh, Ger; Sachs, Rainer; Trask, Barbara J.

    1992-09-01

    The folding of chromatin in interphase cell nuclei was studied by fluorescent in situ hybridization with pairs of unique DNA sequence probes. The sites of DNA sequences separated by 100 to 2000 kilobase pairs (kbp) are distributed in interphase chromatin according to a random walk model. This model provides the basis for calculating the spacing of sequences along the linear DNA molecule from interphase distance measurements. An interphase mapping strategy based on this model was tested with 13 probes from a 4-megabase pair (Mbp) region of chromosome 4 containing the Huntington disease locus. The results confirmed the locations of the probes and showed that the remaining gap in the published maps of this region is negligible in size. Interphase distance measurements should facilitate construction of chromosome maps with an average marker density of one per 100 kbp, approximately ten times greater than that achieved by hybridization to metaphase chromosomes.
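The key prediction of the random walk model, that mean-square interphase distance grows linearly with genomic separation, can be checked with a small simulation. Segment counts and step lengths below are arbitrary stand-ins for kbp spacing, not calibrated chromatin parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_sq_interphase_distance(n_steps, step_len=1.0, trials=2000):
    """Random-walk chromatin model: the end-to-end mean-square distance
    grows linearly with the number of persistence-length segments,
    i.e. with genomic separation."""
    # isotropic 3-D steps with E[|step|^2] = step_len^2
    steps = rng.normal(size=(trials, n_steps, 3)) * (step_len / np.sqrt(3))
    ends = steps.sum(axis=1)
    return float((ends ** 2).sum(axis=1).mean())

# doubling the genomic separation should roughly double <r^2>
r2_100 = mean_sq_interphase_distance(100)
r2_200 = mean_sq_interphase_distance(200)
print(r2_200 / r2_100)  # ~2 up to sampling noise
```

Inverting this relation is what lets the paper estimate kbp spacing from measured interphase distances.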

  13. A Recursive Partitioning Method for the Prediction of Preference Rankings Based Upon Kemeny Distances.

    PubMed

    D'Ambrosio, Antonio; Heiser, Willem J

    2016-09-01

    Preference rankings usually depend on the characteristics of both the individuals judging a set of objects and the objects being judged. This topic has been handled in the literature with log-linear representations of the generalized Bradley-Terry model and, recently, with distance-based tree models for rankings. A limitation of these approaches is that they only work with full rankings or with a pre-specified pattern governing the presence of ties, and/or they are based on quite strict distributional assumptions. To overcome these limitations, we propose a new prediction tree method for ranking data that is totally distribution-free. It combines Kemeny's axiomatic approach to define a unique distance between rankings with the CART approach to find a stable prediction tree. Furthermore, our method is not limited by any particular design of the pattern of ties. The method is evaluated in an extensive full-factorial Monte Carlo study with a new simulation design.
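Kemeny's distance counts pairwise disagreements between rankings. A common convention, assumed here with ties allowed, charges 2 for a pair ordered oppositely and 1 when one ranking ties a pair the other orders strictly:

```python
from itertools import combinations

def kemeny_distance(r1, r2):
    """Kemeny distance between two rankings given as rank vectors
    (smaller rank = more preferred; equal ranks = tie)."""
    d = 0
    for i, j in combinations(range(len(r1)), 2):
        s1 = (r1[i] > r1[j]) - (r1[i] < r1[j])   # -1, 0, or 1
        s2 = (r2[i] > r2[j]) - (r2[i] < r2[j])
        d += abs(s1 - s2)
    return d

print(kemeny_distance([1, 2, 3], [3, 2, 1]))  # 6: all three pairs reversed
print(kemeny_distance([1, 2, 3], [1, 2, 2]))  # 1: one pair becomes a tie
```

Because this distance is defined for any pattern of ties, a CART-style split can score candidate partitions by the Kemeny distance of members to the branch's consensus ranking, which is the combination the method exploits.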

  14. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies, semi-quantitative assessment of lung perfusion, etc. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the candidate point x to the lines connecting corresponding points between In. and Ex. Using the non-linear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
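The optimization step can be sketched as follows. For a closed-form illustration this minimizes *squared* point-to-line distances (which reduces to normal equations), whereas the paper minimizes the plain sum of distances with an unconstrained non-linear optimizer; all data below are toy values:

```python
import numpy as np

def common_expansion_point(points, directions):
    """Least-squares point closest to a bundle of 3-D lines, where line i
    passes through points[i] with direction directions[i]. Minimizing
    sum_i |(I - u_i u_i^T)(x - p_i)|^2 gives the linear system below."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(points, directions):
        u = np.asarray(u, dtype=float)
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the line
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

# three motion lines that all pass through the origin -> optimum at origin
pts = [(1, 0, 0), (0, 2, 0), (0, 0, 3)]
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(common_expansion_point(pts, dirs))  # ~[0, 0, 0]
```

In the paper each line joins a landmark's In. and Ex. positions, and the fitted point is compared against the apex, hilar point, and COI candidates.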

  15. Simulation of changes in heavy metal contamination in farmland soils of a typical manufacturing center through logistic-based cellular automata modeling.

    PubMed

    Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming

    2016-01-01

    A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios >1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land. Negative variables (odds ratios <1) included elevation, distance from the road, distance from the key polluting enterprises, distance from the town center, soil pH, and distance from bodies of water. Categorical variables, including soil type, parent material type, organic content grade, and land use type, also significantly influenced HMC according to Wald statistics. The relative operating characteristic and kappa coefficients were 0.91 and 0.64, respectively, which proved the validity and accuracy of the model. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the current polluted area to effectively mitigate HMC.
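The logistic backbone of such a CA model is simple: the transition probability is a logistic function of the covariates, and each coefficient's exponential is the odds ratio used above to classify variables as positive or negative. All coefficients and covariate values below are hypothetical:

```python
import math

def transition_probability(x, betas, beta0=0.0):
    """Logistic CA transition rule: probability that a farmland cell
    becomes heavy-metal contaminated given covariate vector x (sketch)."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

# exp(beta) is the odds ratio for a one-unit covariate increase:
# >1 marks a positive variable, <1 a negative one
print(math.exp(0.42) > 1, math.exp(-0.31) < 1)  # True True

# e.g. drainage density 1.2 (positive), scaled distance to road 0.5 (negative)
p = transition_probability([1.2, 0.5], [0.42, -0.31], beta0=-1.0)
print(round(p, 3))
```

In the CA step, each cell's state is updated stochastically (or by thresholding) from this probability, and scenario runs vary the covariates.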

  16. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension, efficient approaches for metric learning that aim to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest neighbor relations and the descriptor space.

  17. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    NASA Astrophysics Data System (ADS)

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension, efficient approaches for metric learning that aim to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest neighbor relations and the descriptor space.
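A metric of the kind LMNN learns is a Mahalanobis distance parameterized by a positive semidefinite matrix M. The sketch below only illustrates how a non-identity M reshapes neighbor relations; here M is hand-picked, not learned:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance under a (learned) metric: d_M(x, y) = sqrt((x-y)^T M (x-y)),
    with M symmetric positive semidefinite. M = I recovers Euclidean."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

x, y = [1.0, 0.0], [0.0, 1.0]
I = np.eye(2)
M = np.array([[4.0, 0.0],
              [0.0, 1.0]])   # stretches the first descriptor axis
print(mahalanobis(x, y, I))  # Euclidean: ≈ 1.414
print(mahalanobis(x, y, M))  # ≈ 2.236 under the reshaped metric
```

LMNN chooses M so that same-class neighbors are pulled within a margin while differently labeled "impostors" are pushed out, which is why the learned metric changes both nearest-neighbor relations and the visualized descriptor space.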

  18. Theoretical and experimental study of a wireless power supply system for moving low power devices in ferromagnetic and conductive medium

    NASA Astrophysics Data System (ADS)

    Safour, Salaheddine; Bernard, Yves

    2017-10-01

    This paper focuses on the design of a wireless power supply system for low power devices (e.g. sensors) located in a harsh electromagnetic environment with ferromagnetic and conductive materials. Such an environment can be found in linear and rotating actuators. The studied power transfer system is based on resonant magnetic coupling between a fixed transmitter coil and a moving receiver coil. The technique has been utilized successfully for rotary machines; the aim of this paper is to extend it to linear actuators. A modeling approach based on a 2D axisymmetric finite element model and an electrical lumped model based on two-port network theory is introduced. The study shows the limitations of the technique in transferring the required power in the presence of ferromagnetic and conductive materials. Parametric and circuit analyses were conducted in order to design a resonant magnetic coupler that ensures good power transfer capability and efficiency. A design methodology is proposed based on this study. Measurements on the prototype show efficiency up to 75% at a linear distance of 20 mm.
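A standard two-port result for resonant inductive links (used here as a generic illustration; the paper's lumped model is more detailed) bounds the achievable efficiency by the coupling coefficient k and the coil quality factors Q1, Q2. The numeric values below are arbitrary:

```python
import math

def max_link_efficiency(k, q1, q2):
    """Theoretical maximum efficiency of a two-coil resonant link:
    eta_max = x / (1 + sqrt(1 + x))^2, with x = k^2 * Q1 * Q2."""
    x = k ** 2 * q1 * q2
    return x / (1.0 + math.sqrt(1.0 + x)) ** 2

# coupling falls with coil separation, so efficiency falls with distance
e_far = max_link_efficiency(0.05, 100.0, 100.0)   # weak coupling
e_near = max_link_efficiency(0.10, 100.0, 100.0)  # stronger coupling
print(e_far, e_near)  # ≈ 0.67, ≈ 0.82
```

Ferromagnetic and conductive surroundings lower both k (flux shunting) and Q (eddy-current losses), which is the limitation the paper quantifies.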

  19. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the changes in PACS measurements according to tilt value have no significant correlation (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.
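The DFOV-based correction amounts to fitting a linear correlation expression between adjusted PACS measurements and real (caliper) values. The data below are synthetic stand-ins for illustration, not the study's measurements:

```python
import numpy as np

# hypothetical DFOV-adjusted PACS linear measurements (mm) and the
# matching caliper ("real") values
pacs_dfov = np.array([38.1, 52.4, 61.0, 74.8, 90.2])
real      = np.array([38.0, 52.6, 60.8, 75.1, 90.0])

# fit the correlation expression real ~= a * pacs_dfov + b
a, b = np.polyfit(pacs_dfov, real, 1)
predicted = a * pacs_dfov + b
print(a, b)
print(np.abs(predicted - real).max())  # residuals stay sub-millimetre here
```

Once a and b are established for a DFOV interval, any new PACS measurement in that interval can be converted to an estimated physical length.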

  20. Molecular dynamics studies of a DNA-binding protein: 2. An evaluation of implicit and explicit solvent models for the molecular dynamics simulation of the Escherichia coli trp repressor.

    PubMed Central

    Guenot, J.; Kollman, P. A.

    1992-01-01

    Although aqueous simulations with periodic boundary conditions more accurately describe protein dynamics than in vacuo simulations, these are computationally intensive for most proteins. Trp repressor dynamic simulations with a small water shell surrounding the starting model yield protein trajectories that are markedly improved over gas phase, yet computationally efficient. Explicit water in molecular dynamics simulations maintains surface exposure of protein hydrophilic atoms and burial of hydrophobic atoms by opposing the otherwise asymmetric protein-protein forces. This properly orients protein surface side chains, reduces protein fluctuations, and lowers the overall root mean square deviation from the crystal structure. For simulations with crystallographic waters only, a linear or sigmoidal distance-dependent dielectric yields a much better trajectory than does a constant dielectric model. As more water is added to the starting model, the differences between using distance-dependent and constant dielectric models become smaller, although the linear distance-dependent dielectric yields an average structure closer to the crystal structure than does a constant dielectric model. Multiplicative constants greater than one, for the linear distance-dependent dielectric simulations, produced trajectories that are progressively worse in describing trp repressor dynamics. Simulations of bovine pancreatic trypsin inhibitor were used to ensure that the trp repressor results were not protein dependent and to explore the effect of the nonbonded cutoff on the distance-dependent and constant dielectric simulation models. The nonbonded cutoff markedly affected the constant but not distance-dependent dielectric bovine pancreatic trypsin inhibitor simulations. 
As with trp repressor, the distance-dependent dielectric model with a shell of water surrounding the protein produced a trajectory in better agreement with the crystal structure than a constant dielectric model, and the physical properties of the trajectory average structure, both with and without a nonbonded cutoff, were comparable. PMID:1304396
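The linear distance-dependent dielectric discussed above replaces the constant ε in Coulomb's law with ε(r) = r, so pair energies fall off as 1/r² rather than 1/r, crudely mimicking aqueous screening without explicit water. A minimal sketch (force-field-style prefactor; charges and distances are illustrative):

```python
def coulomb_energy(q1, q2, r, dielectric="linear"):
    """Electrostatic pair energy in kcal/mol for charges in e and r in
    Angstroms (332 is the usual force-field prefactor). With the linear
    distance-dependent dielectric, eps(r) = r, so E ~ 1/r^2."""
    eps = r if dielectric == "linear" else 80.0   # 80 ~ bulk water constant
    return 332.0 * q1 * q2 / (eps * r)

# screening strengthens with separation under eps(r) = r
e3 = coulomb_energy(1, -1, 3.0, "linear")
e6 = coulomb_energy(1, -1, 6.0, "linear")
print(e3, e6)  # ≈ -36.9, ≈ -9.2: doubling r quarters the energy
```

A multiplicative constant c > 1 corresponds to ε(r) = c·r, weakening all interactions uniformly, which is the variation the study found to degrade the trajectories.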

  1. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
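The first-order Taylor error propagation the paper applies to the trilateration equations reduces, for independent input errors, to summing squared gradient-weighted sigmas. It is sketched here with a numeric gradient and a deliberately trivial two-beacon example (names and numbers are illustrative):

```python
import numpy as np

def propagate_error(f, x, sigma):
    """First-order error propagation: for y = f(x) with independent input
    standard deviations sigma_i, var(y) ~= sum_i (df/dx_i * sigma_i)^2.
    The gradient is estimated by central differences."""
    x = np.asarray(x, dtype=float)
    eps = 1e-6
    grads = np.empty_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        grads[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return float(np.sqrt(np.sum((grads * np.asarray(sigma)) ** 2)))

# inter-beacon distance from two 1-D beacon coordinates
dist = lambda p: abs(p[1] - p[0])
print(propagate_error(dist, [0.0, 10.0], [0.03, 0.04]))  # ≈ 0.05
```

Since the approximation degrades when f is strongly non-linear over the error scale, a confidence parameter like the paper's τ is a natural reliability check on the propagated value.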

  2. A Time-constrained Network Voronoi Construction and Accessibility Analysis in Location-based Service Technology

    NASA Astrophysics Data System (ADS)

    Yu, W.; Ai, T.

    2014-11-01

    Accessibility analysis usually requires special models of spatial location analysis based on geometric constructions such as the Voronoi diagram (abbreviated to VD). There are many achievements in classic Voronoi model research; however, they suffer from the following limitations for location-based services (LBS) applications. (1) It is difficult to objectively reflect the actual service areas of facilities by using traditional planar VDs, because human activities in LBS are usually constrained to the network portion of planar space. (2) Although some researchers have adopted network distance to construct VDs, their approaches are used in a static environment, where unrealistic shortest-path distances based on assumptions of constant travel speed through the network are often used. (3) Due to the computational complexity of shortest-path distance calculation, previous approaches tend to be very time consuming, especially for large datasets and when multiple runs are required. To solve the above problems, a novel algorithm is developed in this paper. We apply a network-based quadrat system and 1-D sequential expansion to find the corresponding subnetwork for each focus. The idea is inspired by the natural phenomenon that water flow extends along certain linear channels until it meets others or arrives at the end of its route. In order to accommodate changes in traffic conditions, the length of each network quadrat is set according to the traffic condition of the corresponding street. The method has the advantage over Dijkstra's algorithm that the repeated shortest-path cost is avoided and replaced with a linear-time operation.
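The assignment the quadrat expansion is designed to reproduce, each network node labeled with its nearest facility, can be computed directly with a multi-source Dijkstra sweep; this sketch (graph and weights are illustrative) shows the baseline the paper's linear-time expansion replaces:

```python
import heapq

def network_voronoi(adj, facilities):
    """Assign every node of a weighted graph to its nearest facility via
    multi-source Dijkstra. adj maps node -> list of (neighbor, weight)."""
    dist, owner = {}, {}
    heap = [(0.0, f, f) for f in facilities]   # (distance, node, facility)
    while heap:
        d, node, f = heapq.heappop(heap)
        if node in dist:                       # already settled closer
            continue
        dist[node], owner[node] = d, f
        for nbr, w in adj.get(node, []):
            if nbr not in dist:
                heapq.heappush(heap, (d + w, nbr, f))
    return owner

adj = {"a": [("b", 1)],
       "b": [("a", 1), ("c", 5)],
       "c": [("b", 5), ("d", 1)],
       "d": [("c", 1)]}
print(network_voronoi(adj, ["a", "d"]))  # b joins a's cell, c joins d's
```

Time-varying traffic can be approximated by reweighting edges (travel time instead of length) before the sweep, which is the role the adaptive quadrat length plays in the paper.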

  3. Can We Speculate Running Application With Server Power Consumption Trace?

    PubMed

    Li, Yuanlong; Hu, Han; Wen, Yonggang; Zhang, Jun

    2018-05-01

    In this paper, we propose to detect the running applications in a server by classifying the observed power consumption series, for the purpose of data center energy consumption monitoring and analysis. The time series classification problem has been extensively studied, with various distance measures developed; recently, deep learning-based sequence models have also proved promising. In this paper, we propose a novel distance measure and build a time series classification algorithm hybridizing the nearest neighbor classifier and the long short term memory (LSTM) neural network. More specifically, first we propose a new distance measure termed local time warping (LTW), which utilizes a user-specified index set for local warping, and is designed to be noncommutative and to avoid dynamic programming. Second, we hybridize the 1-nearest neighbor (1NN)-LTW and LSTM together. In particular, we combine the prediction probability vectors of 1NN-LTW and LSTM to determine the label of the test cases. Finally, using the power consumption data from a real data center, we show that the proposed LTW can improve the classification accuracy of dynamic time warping (DTW) from about 84% to 90%. Our experimental results show that the proposed LTW is competitive on our data set compared with existing DTW variants and that its noncommutative feature is indeed beneficial. We also test a linear version of LTW and find that it performs similarly to state-of-the-art DTW-based methods while running as fast as linear-runtime lower-bound methods like LB_Keogh for our problem. With the hybrid algorithm, we achieve an accuracy of up to about 93% for the power series classification task. Our research can inspire more studies on time series distance measures and on hybrids of deep learning models with traditional models.
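LTW is the paper's own contribution and is not reproduced here; what can be sketched safely is the baseline it improves on, classic DTW inside a 1-nearest-neighbour classifier over toy power traces. The function names and training data below are illustrative only:

```python
def dtw(a, b):
    """Classic dynamic time warping distance between two series,
    computed by O(len(a)*len(b)) dynamic programming."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def nn1(train, query):
    """1-nearest-neighbour label under the DTW distance."""
    return min(train, key=lambda item: dtw(item[1], query))[0]

# Toy power traces: an idle server versus a bursty workload.
train = [("idle", [0, 0, 0, 0]), ("busy", [0, 5, 5, 0])]
label = nn1(train, [0, 4, 5, 1])
```

The paper's hybrid replaces this single vote with a combination of the 1NN prediction probabilities and an LSTM's output probabilities.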

  4. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    NASA Astrophysics Data System (ADS)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify individuals according to sex. Colour measurements were performed on gonads extracted from female and male Mackerel (fresh and defrosted) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbours (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers, inducing diversity by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm improves on classifiers based on a single dissimilarity.

  5. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
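The simple linear error model the abstract describes can be sketched as an ordinary least-squares fit of measured versus true range, then inverted to correct new measurements. The calibration numbers below (5% gain, 10 cm offset) are synthetic illustrations, not the paper's values:

```python
import numpy as np

def fit_error_model(true_d, measured_d):
    """Least-squares fit of measured = a*true + b; the fit is then
    inverted to correct later ranges: corrected = (measured - b)/a."""
    A = np.column_stack([true_d, np.ones_like(true_d)])
    (a, b), *_ = np.linalg.lstsq(A, measured_d, rcond=None)
    return a, b

# Synthetic calibration campaign: a 5% multiplicative gain plus a
# constant 10 cm offset in the UWB range estimates (illustrative).
true_d = np.array([1.0, 2.0, 3.0, 4.0])
meas_d = 1.05 * true_d + 0.10
a, b = fit_error_model(true_d, meas_d)
corrected = (meas_d - b) / a
```

Feeding the corrected inter-node distances into a localization algorithm, rather than the raw ones, is the mechanism by which the paper reports its accuracy gains.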

  6. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
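The linearization the abstract refers to can be illustrated with a single predict/update EKF cycle for one CW sensor: the Doppler measurement is the radial velocity, a nonlinear function of the state, so its Jacobian is evaluated at the predicted state. This 2-D, single-sensor sketch (the paper fuses several sensors in 3-D; the noise levels and names here are assumptions) shows only the structure:

```python
import numpy as np

def ekf_step(x, P, z, sensor, dt, q=1e-2, r=1e-3):
    """One EKF predict/update cycle with a Doppler-only measurement.
    State x = [px, py, vx, vy]; constant-velocity process model."""
    # Predict with a constant-velocity model.
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # Measurement model: radial velocity h = v . u, u = (p - s)/|p - s|.
    p, v = x[:2], x[2:]
    d = p - sensor
    rho = np.linalg.norm(d)
    u = d / rho
    h = v @ u
    # Jacobian of h at the predicted state (the linearization step).
    dh_dp = (v - (v @ u) * u) / rho
    H = np.concatenate([dh_dp, u])[None, :]
    # Standard EKF update.
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - h)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Target at (1, 0) heading straight for a sensor at the origin with
# speed 1, so the true radial velocity is -1 m/s.
x0 = np.array([1.0, 0.0, -1.0, 0.0])
x1, P1 = ekf_step(x0.copy(), np.eye(4), z=-1.0,
                  sensor=np.array([0.0, 0.0]), dt=0.0)
```

With a measurement that exactly matches the prediction, the innovation is zero and the state is unchanged; the filter's accuracy then hinges, as the abstract stresses, on how well the linearization point and noise models match reality.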

  7. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
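The static, linear, and mean models can be illustrated on a single scalar feature (the real features are high-dimensional centroid-plus-shape vectors, and the adjustment-point mechanism is omitted). All values below are invented toy data:

```python
import numpy as np

# Per-fraction value of one morphology feature (say, a radial
# centroid-to-surface distance in mm) for three prior patients over
# five fractions; the new patient starts at 10 mm.
prior = np.array([[10.0, 9.5, 9.0, 8.6, 8.2],
                  [12.0, 11.4, 10.9, 10.5, 10.0],
                  [ 8.0, 7.7, 7.3, 7.0, 6.8]])
x0 = 10.0

static = np.full(5, x0)                    # static: no change predicted
slope = np.mean(np.diff(prior, axis=1))    # population per-fraction trend
linear = x0 + slope * np.arange(5)         # linear: shared constant slope
mean_traj = x0 + (prior - prior[:, :1]).mean(axis=0)  # mean offset model
```

An adjustment point would re-anchor these forecasts to the value actually observed at a given fraction, which is why the paper finds adjustment points more powerful than the choice of model.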

  8. New spatial clustering-based models for optimal urban facility location considering geographical obstacles

    NASA Astrophysics Data System (ADS)

    Javadi, Maryam; Shahrabi, Jamal

    2014-03-01

    The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for organizations or governments to locate their resources and facilities optimally and to manage resources efficiently, so that all demand points are covered and all needs are met. Most recent studies that solve facility location problems by spatial clustering have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that must be traveled between two geographical locations. When calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic location-allocation results. In this article, new models are presented for locating urban facilities while taking geographical obstacles into account. In these models, three new distance functions are proposed. The first function is based on shortest-path analysis in a linear network, and is called the SPD function. The other two functions, namely PD and P2D, are based on algorithms for robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 using the Visual Basic programming language and evaluated on synthetic and real data sets. Overall performance was evaluated as the sum of distances from demand points to their corresponding facilities. Because the distances between demand points and facilities are more realistic under the proposed functions, the results indicate the desired quality of the proposed models in terms of the allocation of points to centers and logistic cost. The results show promising improvements in allocation, logistics costs and response time. It can also be inferred from this study that the P2D-based and SPD-based models yield similar results in terms of facility location and demand allocation, while the P2D-based model shows better execution time than the SPD-based model. Considering logistic costs, facility location and response time, the P2D-based model is an appropriate choice for the urban facility location problem with geographical obstacles.

  9. Conceptual problems in detecting the evolution of dark energy when using distance measurements

    NASA Astrophysics Data System (ADS)

    Bolejko, K.

    2011-01-01

    Context. Dark energy is now one of the most important and topical problems in cosmology. The first step to reveal its nature is to detect the evolution of dark energy or to prove beyond doubt that the cosmological constant is indeed constant. However, in the standard approach to cosmology, the Universe is described by the homogeneous and isotropic Friedmann models. Aims: We aim to show that in the perturbed universe (even if the perturbations vanish when averaged over sufficiently large scales) the distance-redshift relation is not the same as in the unperturbed universe. This has a serious consequence when studying the nature of dark energy and, as shown here, can impair the analysis and studies of dark energy. Methods: The analysis is based on two methods: the linear lensing approximation and the non-linear Szekeres Swiss-Cheese model. The inhomogeneity scale is ~50 Mpc, and both models have the same density fluctuations along the line of sight. Results: The comparison between linear and non-linear methods shows that non-linear corrections are not negligible. When inhomogeneities are present the distance changes by several percent. To show how this change influences the measurements of dark energy, ten future observations with 2% uncertainties are generated. It is shown that, using the standard methods (i.e. under the assumption of homogeneity), systematics due to inhomogeneities can distort the analysis and may lead to the conclusion that dark energy evolves when in fact it is constant (or vice versa). Conclusions: Therefore, if future observations are analysed only within the homogeneous framework then the impact of inhomogeneities (such as voids and superclusters) can be mistaken for evolving dark energy. Since the robust distinction between the evolution and non-evolution of dark energy is the first step to understanding the nature of dark energy, a proper handling of inhomogeneities is essential.

  10. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment of the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm: linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the "bag of acoustic features" representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used Euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
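The directional-scattering argument, that the direction of a supervector carries the speaker identity while its magnitude does not, is easy to make concrete. The toy vectors below stand in for GMM mean supervectors; this sketch does not implement LSDA:

```python
import numpy as np

def cosine_distance(x, y):
    """1 minus the cosine of the angle between x and y; insensitive
    to vector length, so only the direction of a supervector counts."""
    return 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Two "utterances" pointing the same way but with different norms
# (toy stand-ins for same-speaker supervectors), plus an orthogonal
# vector standing in for a different speaker.
a = np.array([1.0, 0.0])
b = 5.0 * a
c = np.array([0.0, 1.0])
```

Under the cosine metric, a and b are identical and c is maximally far; under the Euclidean metric, a is closer to the other speaker c than to its own scaled copy b, which is exactly the failure mode the paper advocates avoiding.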

  11. A spatial model for a stream networks of Citarik River with the environmental variables: potential of hydrogen (PH) and temperature

    NASA Astrophysics Data System (ADS)

    Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.

    2018-03-01

    Applying existing geostatistical theory to stream networks raises a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on Euclidean distance, which is not permissible for stream data, where stream distance must be used instead. To overcome this, an autocovariance model based on stream distance is developed using a convolution-kernel (moving average) construction. Spatial models for stream networks are widely used for environmental monitoring on river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyse the predictive capability of ordinary kriging for two environmental variables, potential of hydrogen (pH) and temperature. The empirical results show that: (1) the best-fitting autocovariance function for temperature and pH of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); and (2) the spatial correlation between locations upstream and downstream of the Citarik River decreases with distance.
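The linear autocovariance family the study found best-fitting has a simple closed form: full sill at lag zero, decaying linearly to zero at the range. This sketch shows only that form (the moving-average construction over the stream network, which makes the model valid for stream distance, is omitted); parameter values are illustrative:

```python
import numpy as np

def linear_autocov(h, sill, rng):
    """Linear autocovariance: sill*(1 - h/rng) for h < rng, else 0.
    Here h would be a stream distance, though the same form is used
    for Euclidean lags."""
    h = np.asarray(h, dtype=float)
    return np.where(h < rng, sill * (1.0 - h / rng), 0.0)

cov = linear_autocov([0.0, 5.0, 10.0, 20.0], sill=2.0, rng=10.0)
```

Fitting sill and range to empirical autocovariances at observed stream lags, then minimizing RMSPE, is how one family (linear, spherical, exponential, ...) is selected over another.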

  12. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by removing path effects, based on spectral ratios of the observed seismograms. We estimated empirical relationships among depths of burial and yields based on theoretical source models.

  13. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement, which can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
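"Dependency beyond linear correlation" can be demonstrated on synthetic increments: a series whose lag-1 linear correlation is essentially zero, yet whose increment magnitudes are strongly dependent. This toy (a deterministic volatility pattern, not real pore-scale data, and far simpler than the paper's copula analysis) shows why checking only linear correlation can miss dependence:

```python
import numpy as np

rs = np.random.default_rng(0)
n = 20000
# Alternating blocks of calm and agitated increments: a crude toy
# for pore-scale memory effects.
vol = np.where(np.arange(n) % 2000 < 1000, 0.2, 2.0)
inc = rs.normal(0.0, 1.0, n) * vol

def lag_corr(x, k=1):
    """Sample correlation between x_t and x_{t+k}."""
    return np.corrcoef(x[:-k], x[k:])[0, 1]

lin = lag_corr(inc)        # linear correlation of increments: near zero
nonlin = lag_corr(inc**2)  # correlation of magnitudes: clearly positive
```

A purely correlation-based (multi-Gaussian) model would declare these increments independent; a higher-order, copula-based description of the kind the abstract proposes would not.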

  14. DCJ-indel and DCJ-substitution distances with distinct operation costs

    PubMed Central

    2013-01-01

    Background Classical approaches to compute the genomic distance are usually limited to genomes with the same content and take into consideration only rearrangements that change the organization of the genome (i.e. positions and orientation of pieces of DNA, number and type of chromosomes, etc.), such as inversions, translocations, fusions and fissions. These operations are generically represented by the double-cut and join (DCJ) operation. The distance between two genomes, in terms of number of DCJ operations, can be computed in linear time. In order to handle genomes with distinct contents, also insertions and deletions of fragments of DNA – named indels – must be allowed. More powerful than an indel is a substitution of a fragment of DNA by another fragment of DNA. Indels and substitutions are called content-modifying operations. It has been shown that both the DCJ-indel and the DCJ-substitution distances can also be computed in linear time, assuming that the same cost is assigned to any DCJ or content-modifying operation. Results In the present study we extend the DCJ-indel and the DCJ-substitution models, considering that the content-modifying cost is distinct from and upper bounded by the DCJ cost, and show that the distance in both models can still be computed in linear time. Although the triangular inequality can be disrupted in both models, we also show how to efficiently fix this problem a posteriori. PMID:23879938
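For the base case the abstract starts from, equal gene content, no duplicates, the linear-time DCJ distance for circular chromosomes reduces to d = n - c, where n is the number of genes and c the number of cycles in the adjacency graph. This sketch covers only that base formula, not the indel/substitution extensions with distinct costs that the paper contributes; the extremity encoding ("g" + "h"/"t" for head/tail) is a common convention:

```python
def dcj_distance(adjA, adjB):
    """DCJ distance between two circular genomes with identical gene
    content and no duplicates: d = n - c, with c the number of cycles
    in the adjacency graph. Adjacencies pair gene extremities such as
    '1h' (head of gene 1) and '2t' (tail of gene 2)."""
    nbrA, nbrB = {}, {}
    for x, y in adjA:
        nbrA[x], nbrA[y] = y, x
    for x, y in adjB:
        nbrB[x], nbrB[y] = y, x
    seen, cycles = set(), 0
    for start in nbrA:
        if start in seen:
            continue
        cycles += 1
        x, use_a = start, True
        while x not in seen:          # walk the cycle, alternating
            seen.add(x)               # A-adjacencies and B-adjacencies
            x = nbrA[x] if use_a else nbrB[x]
            use_a = not use_a
    n = len(nbrA) // 2                # two extremities per gene
    return n - cycles

# Circular genome (1, 2, 3) versus (1, -2, 3): one inversion apart.
A = [("1h", "2t"), ("2h", "3t"), ("3h", "1t")]
B = [("1h", "2h"), ("2t", "3t"), ("3h", "1t")]
d = dcj_distance(A, B)
```

Handling linear chromosomes adds path bookkeeping to this cycle count, and the paper's models additionally weight content-modifying operations with their own cost.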

  15. An empirical model of H2O, CO2 and CO coma distributions and production rates for comet 67P/Churyumov-Gerasimenko based on ROSINA/DFMS measurements and AMPS-DSMC simulations

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team

    2016-10-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.

  16. Self-potential response to periodic pumping test: a numerical study

    NASA Astrophysics Data System (ADS)

    Konosavsky, Pavel; Maineult, Alexis; Narbut, Mikhail; Titov, Konstantin

    2017-09-01

    We numerically model self-potential responses associated with periodic pumping test experiments by sequential calculation of the hydraulic response and the coupled electrical potential. We assume pumping test experiments with a fully saturated confined aquifer. Application of different excitation functions leads to quasi-linear trends in the electrical records, whose direction and intensity depend on the form of the excitation function. The hydraulic response is phase-shifted compared to the excitation function; the phase shift increases quasi-linearly with the distance from the pumping well. For the electrical signals, we investigated separately the cases of conducting and insulating casings of the pumping well. For the conducting casing the electrical signals are larger in magnitude than for the insulating casing; they reproduce the drawdown signals in the pumping well at any distance from the well, without any phase shift as the distance increases. For the insulating casing, the electrical signals are phase-shifted and their shape depends on the distance from the pumping well. Three characteristic regimes were found for the phase shift, φ, as a function of distance and for various hydraulic diffusivity values: at small distances φ increases quasi-linearly; at intermediate distances φ attains the value of π/2 and stays about this value (for relatively small diffusivity values); and at large distances φ attains the value of π and stays about this value at larger distances. This behaviour of the electrical signals can be explained by two electrical sources of reverse polarity: (i) a linear, time-independent source located at the pumping interval of the well; and (ii) a volumetric, time-dependent source, with maximum value located in the aquifer at the distance corresponding to the maximum variation of the hydraulic head magnitude with time. We also model the variation of the amplitude and phase of the hydraulic and electrical signals with increasing excitation function period, and we show the characteristic periods corresponding to the transition from the periodic pumping test regime to the classical pumping test regime, in which the excitation function is considered as a step function. This transition depends on the distance from the pumping well and the hydraulic diffusivity of the aquifer. Finally, with this modelling of saturated flow we reproduce in sufficient detail the field data previously obtained by Maineult et al.

  17. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
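A protograph is a small base graph that is expanded ("lifted") into a full parity-check matrix. One common lifting style, shown here as an assumption rather than the paper's specific constructions, replaces each base-matrix entry with a circulant permutation (quasi-cyclic lifting); the base matrix and shift values below are arbitrary toys, not an optimized code:

```python
import numpy as np

def lift(proto, Z, shifts):
    """Expand a protograph base matrix into a parity-check matrix by
    replacing each nonzero entry with a Z x Z circulant permutation
    (quasi-cyclic style lifting)."""
    m, n = proto.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if proto[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

proto = np.array([[1, 1, 1, 0],     # toy base graph, not one of the
                  [0, 1, 1, 1]])    # paper's protographs
H = lift(proto, Z=4, shifts=[[0, 1, 2, 0], [0, 3, 1, 2]])
```

Lifting preserves the protograph's degree profile (including its proportion of degree-2 variable nodes), which is why the ensemble properties the paper analyzes on the small base graph carry over to the full code.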

  18. wayGoo recommender system: personalized recommendations for events scheduling, based on static and real-time information

    NASA Astrophysics Data System (ADS)

    Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.

    2016-05-01

    wayGoo is a fully functional application whose main functionalities include content geolocation, event scheduling, and indoor navigation. However, significant information about events does not reach users' attention, either because of its volume or because some of it comes from real-time data sources. The purpose of this work is to facilitate event management by prioritizing the presented events according to users' interests, using both static and real-time data. Through the wayGoo interface, users select conceptual topics that interest them. These topics constitute a browsing-behaviour vector, which is used to learn users' interests implicitly, without being intrusive. The system then estimates user preferences and returns a list of events sorted from most to least preferred. User preferences are modelled via a Naïve Bayesian Network consisting of: a) the 'decision' random variable, corresponding to the user's decision on attending an event; b) the 'distance' random variable, modelled by a linear regression that estimates the probability that the distance between a user and an event's venue is not discouraging; c) the 'seat availability' random variable, modelled by a linear regression that estimates the probability that the seat availability is encouraging; and d) the 'relevance' random variable, modelled by clustering-based collaborative filtering, which determines the relevance of each event to the user's interests. Finally, experimental results show that the proposed system contributes substantially to assisting users in browsing and selecting events to attend.
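The Naïve Bayes combination step can be sketched by multiplying the per-factor probabilities (distance acceptable, seats available, topics relevant) under the conditional-independence assumption. In the real system these probabilities come from the regressions and the collaborative filter; here they are hard-coded toy numbers, and the function name is an assumption:

```python
def attend_probability(p_feats, prior=0.5):
    """Naive-Bayes combination: each entry of p_feats is the
    probability that one factor favours attendance; factors are
    assumed conditionally independent given the decision."""
    s_yes, s_no = prior, 1.0 - prior
    for p in p_feats:
        s_yes *= p
        s_no *= 1.0 - p
    return s_yes / (s_yes + s_no)

# Rank two hypothetical events for one user: [distance-ok,
# seats-ok, relevance] probabilities per event.
events = {"concert": [0.9, 0.8, 0.95], "lecture": [0.5, 0.5, 0.5]}
ranked = sorted(events, key=lambda e: attend_probability(events[e]),
                reverse=True)
```

Sorting events by this posterior is what produces the most-to-least-preferred list the abstract describes.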

  19. An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Lin, Yu; Moret, Bernard M E

    2015-05-01

    Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.

  20. A New Computational Methodology for Structural Dynamics Problems

    DTIC Science & Technology

    2008-04-01

    by approximating the geometry of the midsurface of the shell (as in continuum-based finite element models), are prevented from the beginning… curvilinear coordinates θ^i, such that the surface θ^3 = 0 defines the midsurface M_R(t) of the region B_R(t). The coordinate θ^3 is the measure of the distance… assumption for the shell model: “the displacement field is considered as a linear expansion of the thickness coordinate around the midsurface. The

  1. Learning Human Actions by Combining Global Dynamics and Local Appearance.

    PubMed

    Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J

    2014-12-01

    In this paper, we address the problem of human action recognition through combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift invariant subspace angles based distance to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the Chi-Squared histogram distance in the bag-of-words framework. Finally we perform classification using the maximum margin distance learning method by combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short clips data sets, namely Weizmann, KTH, UCF sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results as compared with current state-of-the-art methods.

  2. The robustness and accuracy of in vivo linear wear measurements for knee prostheses based on model-based RSA.

    PubMed

    van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L

    2011-10-13

    Accurate in vivo measurement methods of wear in total knee arthroplasty are required for a timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effect of knee positioning and true separation distance was determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between the measurements with 0° and 10° anterior tilt (difference=0.08 mm, p=0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Valuing water resources in Switzerland using a hedonic price model

    NASA Astrophysics Data System (ADS)

    van Dijk, Diana; Siber, Rosi; Brouwer, Roy; Logar, Ivana; Sanadgol, Dorsa

    2016-05-01

    In this paper, linear and spatial hedonic price models are applied to the housing market in Switzerland, covering all 26 cantons in the country over the period 2005-2010. Besides structural house, neighborhood and socioeconomic characteristics, we include a wide variety of new environmental characteristics related to water to examine their role in explaining variation in sales prices. These include water abundance, different types of water bodies, the recreational function of water, and water disamenity. Significant spatial autocorrelation is found in the estimated models, as well as nonlinear effects for distances to the nearest lake and large river. Significant effects are furthermore found for water abundance and the distance to large rivers, but not to small rivers. Although in both linear and spatial models water-related variables explain less than 1% of the price variation, the distance to the nearest bathing site has a larger marginal contribution than many neighborhood-related distance variables. The housing market is shown to differentiate between different water-related resources in terms of their relative contribution to house prices, which could help the housing development industry pursue more geographically targeted planning activities.

  4. Distance correction system for localization based on linear regression and smoothing in ambient intelligence display.

    PubMed

    Kim, Dae-Hee; Choi, Jae-Hun; Lim, Myung-Eun; Park, Soo-Jun

    2008-01-01

    This paper suggests a method of correcting the distance between an ambient intelligence display and a user based on linear regression and a smoothing method, by which the distance information of a user approaching the display can be accurately output even in unanticipated conditions, using a passive infrared (PIR) sensor and an ultrasonic device. The developed system consists of an ambient intelligence display, an ultrasonic transmitter, and a sensor gateway. The modules communicate with each other through RF (radio frequency) links. The ambient intelligence display includes an ultrasonic receiver and a PIR sensor for motion detection. In particular, the system dynamically selects between smoothing and linear regression to process the current input data, based on a judgment process that uses the previous reliable data stored in a queue. In addition, we implemented GUI software in Java for real-time location tracking and the ambient intelligence display.
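    A minimal sketch of the queue-plus-judgment idea described above; the window size, smoothing factor, and tolerance below are assumptions for illustration, not the paper's values.

```python
from collections import deque

class DistanceCorrector:
    """Keep a queue of recent reliable distances; when a new ultrasonic
    reading is close to the regression-predicted value, smooth and accept
    it, otherwise fall back on the trend prediction."""

    def __init__(self, window=5, alpha=0.4, tolerance=30.0):
        self.history = deque(maxlen=window)  # recent accepted distances (cm)
        self.alpha = alpha                   # exponential smoothing factor
        self.tolerance = tolerance           # max plausible jump (cm)

    def _predict(self):
        # Least-squares line over the queue, extrapolated one step ahead.
        n = len(self.history)
        xs = range(n)
        mx, my = sum(xs) / n, sum(self.history) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = 0.0 if sxx == 0 else sum(
            (x - mx) * (y - my) for x, y in zip(xs, self.history)) / sxx
        return my + slope * (n - mx)         # predicted value at next index

    def update(self, reading):
        if not self.history:
            self.history.append(reading)
            return reading
        predicted = self._predict()
        if abs(reading - predicted) > self.tolerance:
            corrected = predicted            # implausible jump: trust trend
        else:                                # plausible: smooth and accept
            corrected = self.alpha * reading + (1 - self.alpha) * self.history[-1]
        self.history.append(corrected)
        return corrected
```

    Feeding the corrector a spurious spike (e.g. an ultrasonic echo of 500 cm in a sequence around 100 cm) yields a corrected value continuing the recent trend instead of the outlier.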

  5. Is Linear Displacement Information Or Angular Displacement Information Used During The Adaptation of Pointing Responses To An Optically Shifted Image?

    NASA Technical Reports Server (NTRS)

    Bautista, Abigail B.

    1994-01-01

    Twenty-four observers looked through a pair of 20 diopter wedge prisms and pointed to an image of a target which was displaced vertically from eye level by 6 cm at a distance of 30 cm. Observers pointed 40 times, using only their right hand, and received error-corrective feedback upon termination of each pointing response (terminal visual feedback). At three testing distances, 20, 30, and 40 cm, ten pre-exposure and ten post-exposure pointing responses were recorded for each hand as observers reached to a mirror-viewed target located at eye level. The difference between pre- and post-exposure pointing response (adaptive shift) was compared for both Exposed and Unexposed hands across all three testing distances. The data were assessed according to the results predicted by two alternative models for processing spatial information: one using angular displacement information and another using linear displacement information. The angular model of spatial mapping best predicted the observer's pointing response for the Exposed hand. Although the angular adaptive shift did not change significantly as a function of distance (F(2,44) = 1.12, n.s.), the linear adaptive shift increased significantly over the three testing distances (F(2,44) = 4.90, p < 0.01).

  6. Transmission of linearly polarized light in seawater: implications for polarization signaling.

    PubMed

    Shashar, Nadav; Sabbah, Shai; Cronin, Thomas W

    2004-09-01

    Partially linearly polarized light is abundant in the oceans. The natural light field is partially polarized throughout the photic range, and some objects and animals produce a polarization pattern of their own. Many polarization-sensitive marine animals take advantage of the polarization information, using it for tasks ranging from navigation and finding food to communication. In such tasks, the distance to which the polarization information propagates is of great importance. Using newly designed polarization sensors, we measured the changes in linear polarization underwater as a function of distance from a standard target. In the relatively clear waters surrounding coral reefs, partial (%) polarization decreased exponentially as a function of distance from the target, resulting in a 50% reduction of partial polarization at a distance of 1.25-3 m, depending on water quality. Based on these measurements, we predict that polarization sensitivity will be most useful for short-range (in the order of meters) visual tasks in water and less so for detecting objects, signals, or structures from far away. Navigation and body orientation based on the celestial polarization pattern are predicted to be limited to shallow waters as well, while navigation based on the solar position is possible through a deeper range.
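    The reported exponential decay can be turned into a back-of-the-envelope range estimate; the exponential model P(d) = P0·exp(−k·d) matches the abstract's description, while the 10% visibility threshold below is our assumption for illustration.

```python
import math

def decay_constant(half_distance_m):
    """Attenuation constant k (1/m) in P(d) = P0 * exp(-k*d),
    derived from the distance at which polarization halves."""
    return math.log(2) / half_distance_m

def distance_to_fraction(half_distance_m, fraction):
    """Distance at which partial polarization falls to `fraction` of P0."""
    return -math.log(fraction) / decay_constant(half_distance_m)

# Range over which at least 10% of the polarization signal survives,
# for the reported 50%-reduction distances of 1.25 m and 3 m.
signal_range_m = {d: round(distance_to_fraction(d, 0.10), 2)
                  for d in (1.25, 3.0)}
```

    Even in the clearest reported water, the 10%-signal range stays on the order of meters, consistent with the abstract's conclusion that polarization sensitivity suits short-range tasks.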

  7. A regularized approach for geodesic-based semisupervised multimanifold learning.

    PubMed

    Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun

    2014-05-01

    Geodesic distance, as an essential measure of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing so, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors built from geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
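    The "geodesic distance as kernel" view rests on graph shortest paths: connect each point to its nearest neighbours and take shortest-path lengths as geodesic distances. A stdlib-only sketch of that construction (ours, not the paper's exact semisupervised algorithm):

```python
import heapq
import math

def knn_graph(points, k):
    """Symmetric k-nearest-neighbour graph with Euclidean edge weights."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    graph = [dict() for _ in range(n)]
    for i in range(n):
        # Skip index 0 of the sort: that is the point itself (distance 0).
        for j in sorted(range(n), key=lambda m: dist[i][m])[1:k + 1]:
            graph[i][j] = dist[i][j]
            graph[j][i] = dist[i][j]
    return graph

def geodesic_distances(graph, source):
    """Dijkstra shortest paths: graph geodesic distances from `source`."""
    d = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > d.get(u, math.inf):
            continue
        for v, w in graph[u].items():
            if du + w < d.get(v, math.inf):
                d[v] = du + w
                heapq.heappush(heap, (d[v], v))
    return d

# Points sampled along a curve: the geodesic between the endpoints follows
# the chain of neighbours and exceeds the straight-line Euclidean distance.
pts = [(0, 0), (1, 0), (2, 0.5), (3, 1.5), (4, 3)]
g = geodesic_distances(knn_graph(pts, 2), 0)
```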

  8. One Hundred Ways to be Non-Fickian - A Rigorous Multi-Variate Statistical Analysis of Pore-Scale Transport

    NASA Astrophysics Data System (ADS)

    Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko

    2015-04-01

    Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro-CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equation. This flow field realistically describes flow inside the pore space and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk and simulate pore-scale transport. Finally, we use the obtained particle trajectories to do a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions. The copula represents the dependence structure of those univariate marginals and is therefore useful to observe correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We are investigating three different transport distances: 1) The distance where the statistical dependence between particle increments can be modelled as an order-one Markov process. 
This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks starts. 2) The distance where bivariate statistical dependence simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal the characteristic dependencies influencing transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear dependence as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the dependence we found is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.

  9. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    PubMed

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.

  10. Statistical analysis of dendritic spine distributions in rat hippocampal cultures

    PubMed Central

    2013-01-01

    Background Dendritic spines serve as key computational structures in brain plasticity. Much remains to be learned about their spatial and temporal distribution among neurons. Our aim in this study was to perform exploratory analyses based on the population distributions of dendritic spines with regard to their morphological characteristics and period of growth in dissociated hippocampal neurons. We fit a log-linear model to the contingency table of spine features such as spine type and distance from the soma to first determine which features were important in modeling the spines, as well as the relationships between such features. A multinomial logistic regression was then used to predict the spine types using the features suggested by the log-linear model, along with neighboring spine information. Finally, an important variant of Ripley’s K-function applicable to linear networks was used to study the spatial distribution of spines along dendrites. Results Our study indicated that in the culture system, (i) dendritic spine densities were "completely spatially random", (ii) spine type and distance from the soma were independent quantities, and most importantly, (iii) spines had a tendency to cluster with other spines of the same type. Conclusions Although these results may vary with other systems, our primary contribution is the set of statistical tools for morphological modeling of spines which can be used to assess neuronal cultures following gene manipulation such as RNAi, and to study induced pluripotent stem cells differentiated to neurons. PMID:24088199

  11. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
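    A toy version of the pipeline (our sketch, not the authors' implementation, which builds on generalized additive modeling software): extract the leading principal coordinate from a distance matrix by double centring plus power iteration, then ridge-regress the response on it. A real analysis would retain several coordinates and tune the penalty.

```python
def leading_principal_coordinate(D, iters=100):
    """Leading coordinate of classical MDS: top eigenpair of the
    Gower-centred matrix G = -0.5 * J * D^2 * J, via power iteration."""
    n = len(D)
    sq = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in sq]
    grand = sum(row) / n
    G = [[-0.5 * (sq[i][j] - row[i] - row[j] + grand) for j in range(n)]
         for i in range(n)]
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(G[i][j] * v[j] for j in range(n)) for i in range(n))
    return [lam ** 0.5 * x for x in v]   # coordinate value for each sample

def ridge_fit(coord, y, penalty=1.0):
    """Single-predictor ridge regression: slope = S_xy / (S_xx + penalty)."""
    mx, my = sum(coord) / len(coord), sum(y) / len(y)
    xc = [a - mx for a in coord]
    slope = sum(a * (b - my) for a, b in zip(xc, y)) / (
        sum(a * a for a in xc) + penalty)
    return lambda t: my + slope * (t - mx)
```

    For points lying on a line with Euclidean distances, the leading principal coordinate recovers the line's (centred) coordinates up to sign, so a response that is linear in position is fit almost exactly for a small penalty.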

  12. 3D finite element modeling of epiretinal stimulation: Impact of prosthetic electrode size and distance from the retina.

    PubMed

    Sui, Xiaohong; Huang, Yu; Feng, Fuchen; Huang, Chenhui; Chan, Leanne Lai Hang; Wang, Guoxing

    2015-05-01

    A novel 3-dimensional (3D) finite element model was established to systematically investigate the impact of the diameter (Φ) of disc electrodes and the electrode-to-retina distance on the effectiveness of stimulation. The 3D finite element model was established based on a disc platinum stimulating electrode and a 6-layered retinal structure. The ground electrode was placed in the extraocular space in direct attachment with sclera and treated as a distant return electrode. An established criterion of electric-field strength of 1000 Vm-1 was adopted as the activation threshold for RGCs. The threshold current (TC) increased linearly with increasing Φ and electrode-to-retina distance and remained almost unchanged with further increases in diameter. However, the threshold charge density (TCD) increased dramatically with decreasing electrode diameter. TCD exceeded the electrode safety limit for an electrode diameter of 50 µm at an electrode-to-retina distance of 50 to 200 μm. The electric field distributions illustrated that smaller electrode diameters and shorter electrode-to-retina distances were preferred due to more localized excitation of RGC area under stimulation of different threshold currents in terms of varied electrode size and electrode-to-retina distances. Under the condition of same-amplitude current stimulation, a large electrode exhibited an improved potential spatial selectivity at large electrode-to-retina distances. Modeling results were consistent with those reported in animal electrophysiological experiments and clinical trials, validating the 3D finite element model of epiretinal stimulation. The computational model proved to be useful in optimizing the design of an epiretinal stimulating electrode for prosthesis.

  13. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

    A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.

  14. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    NASA Astrophysics Data System (ADS)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

    The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions in the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and to accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for HF communication users. The model provides a valuable tool for measuring the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station over Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

  15. Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.

    PubMed

    Miah, M Suruz; Gueaieb, Wail

    2014-03-01

    Most RF beacons-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially-observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments. ©2013 Published by ISA. All rights reserved.

  16. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
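    Case (2) above, measurement error dominating the scatter, is the classic attenuation problem: noise in the x-variable biases the ordinary least-squares slope toward zero by the reliability ratio var(x_true)/var(x_obs). A hedged illustration (ours, not one of the paper's estimators verbatim), with the error variance assumed known:

```python
import random

random.seed(7)
n, true_slope, err_sd = 2000, 2.0, 1.0
x_true = [random.gauss(0, 2) for _ in range(n)]
x_obs = [x + random.gauss(0, err_sd) for x in x_true]  # noisy measurements
y = [true_slope * x for x in x_true]

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx

naive = ols_slope(x_obs, y)               # attenuated (biased toward zero)
mean_obs = sum(x_obs) / n
var_obs = sum((a - mean_obs) ** 2 for a in x_obs) / n
reliability = (var_obs - err_sd ** 2) / var_obs  # err_sd assumed known
corrected = naive / reliability           # de-biased slope estimate
```

    With var(x_true) = 4 and unit error variance, the naive slope is pulled down by roughly the factor 4/5, and dividing by the estimated reliability recovers the true slope.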

  17. Estimation of elimination half-lives of organic chemicals in humans using gradient boosting machine.

    PubMed

    Lu, Jing; Lu, Dong; Zhang, Xiaochen; Bi, Yi; Cheng, Keguang; Zheng, Mingyue; Luo, Xiaomin

    2016-11-01

    Elimination half-life is an important pharmacokinetic parameter that determines the exposure duration needed to approach the steady state of drugs and regulates drug administration. The experimental evaluation of half-life is time-consuming and costly. Thus, it is attractive to build an accurate prediction model for half-life. In this study, several machine learning methods, including gradient boosting machine (GBM), support vector regressions (RBF-SVR and Linear-SVR), local lazy regression (LLR), SA, SR, and GP, were employed to build high-quality prediction models. Two strategies for building consensus models were explored to improve the accuracy of prediction. Moreover, the applicability domains (ADs) of the models were determined by using a distance-based threshold. Among the seven individual models, GBM showed the best performance (R(2)=0.820 and RMSE=0.555 for the test set), and Linear-SVR produced the lowest prediction accuracy (R(2)=0.738 and RMSE=0.672). The use of distance-based ADs effectively determined the scope of the QSAR models. However, consensus models combining the individual models did not improve the prediction performance. Some essential descriptors relevant to half-life were identified and analyzed. An accurate prediction model for elimination half-life was built by GBM, which was superior to the reference model (R(2)=0.723 and RMSE=0.698). Encouraged by the promising results, we expect that the GBM model for elimination half-life will have potential applications in early pharmacokinetic evaluations and provide guidance for designing drug candidates with a favorable in vivo exposure profile. This article is part of a Special Issue entitled "System Genetics" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Modeling interpopulation dispersal by banner-tailed kangaroo rats

    USGS Publications Warehouse

    Skvarla, J.L.; Nichols, J.D.; Hines, J.E.; Waser, P.M.

    2004-01-01

    Many metapopulation models assume rules of population connectivity that are implicitly based on what we know about within-population dispersal, but especially for vertebrates, few data exist to assess whether interpopulation dispersal is just within-population dispersal "scaled up." We extended existing multi-stratum mark-release-recapture models to incorporate the robust design, allowing us to compare patterns of within- and between-population movement in the banner-tailed kangaroo rat (Dipodomys spectabilis). Movement was rare among eight populations separated by only a few hundred meters: seven years of twice-annual sampling captured >1200 individuals but only 26 interpopulation dispersers. We developed a program that implemented models with parameters for capture, survival, and interpopulation movement probability and that evaluated competing hypotheses in a model selection framework. We evaluated variants of the island, stepping-stone, and isolation-by-distance models of interpopulation movement, incorporating effects of age, season, and habitat (short or tall grass). For both sexes, QAICc values clearly favored isolation-by-distance models, or models combining the effects of isolation by distance and habitat. Models with probability of dispersal expressed as linear-logistic functions of distance and as negative exponentials of distance fit the data equally well. Interpopulation movement probabilities were similar among sexes (perhaps slightly biased toward females), greater for juveniles than adults (especially for females), and greater before than during the breeding season (especially for females). These patterns resemble those previously described for within-population dispersal in this species, which we interpret as indicating that the same processes initiate both within- and between-population dispersal.

  19. Soil pH determines fungal diversity along an elevation gradient in Southwestern China.

    PubMed

    Liu, Dan; Liu, Guohua; Chen, Li; Wang, Juntao; Zhang, Limei

    2018-01-03

    Fungi play important roles in ecosystem processes, and the elevational pattern of fungal diversity is still unclear. Here, we examined the diversity of fungi along a 1,000 m elevation gradient on Mount Nadu, Southwestern China. We used MiSeq sequencing to obtain fungal sequences that were clustered into operational taxonomic units (OTUs) and to measure the fungal composition and diversity. Though the species richness and phylogenetic diversity of the fungal community did not exhibit significant trends with increasing altitude, they were significantly lower at mid-altitudinal sites than at the base. The Bray-Curtis distance clustering also showed that the fungal communities varied significantly with altitude. A distance-based linear model multivariate analysis (DistLM) identified that soil pH dominated the explanatory power of the species richness (23.72%), phylogenetic diversity (24.25%) and beta diversity (28.10%) of the fungal community. Moreover, the species richness and phylogenetic diversity of the fungal community increased linearly with increasing soil pH (P<0.05). Our study provides evidence that pH is an important predictor of soil fungal diversity along elevation gradients in Southwestern China.
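    The DistLM calculation reported here has a compact core: the fraction of distance-matrix variation explained by a predictor is trace(HGH)/trace(G), where G is the Gower-centred matrix of squared distances and H is the predictor's hat matrix. A sketch with invented data (not the study's community matrix or pH values):

```python
def gower_centre(D):
    """G = -0.5 * J * D^2 * J, the double-centred squared-distance matrix."""
    n = len(D)
    sq = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in sq]
    grand = sum(row) / n
    return [[-0.5 * (sq[i][j] - row[i] - row[j] + grand) for j in range(n)]
            for i in range(n)]

def hat_matrix(x):
    """Hat matrix of the design [1, x], via the closed-form 2x2 inverse."""
    n = len(x)
    mx = sum(x) / n
    sxx = sum((a - mx) ** 2 for a in x)
    return [[1 / n + (x[i] - mx) * (x[j] - mx) / sxx for j in range(n)]
            for i in range(n)]

def explained_variation(D, x):
    """DistLM-style R^2 of predictor x for distance matrix D."""
    G, H = gower_centre(D), hat_matrix(x)
    n = len(D)
    HG = [[sum(H[i][k] * G[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    tr_hgh = sum(sum(HG[i][k] * H[k][i] for k in range(n)) for i in range(n))
    tr_g = sum(G[i][i] for i in range(n))
    return tr_hgh / tr_g
```

    If the distances are generated entirely by the predictor (e.g. D[i][j] = |pH_i − pH_j|), the explained fraction is 1; for distances driven by an unrelated variable it falls between 0 and 1.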

  20. By Ounce or By Calorie: The Differential Effects of Alternative Sugar-Sweetened Beverage Tax Strategies

    PubMed Central

    Zhen, Chen; Brissette, Ian F.; Ruff, Ryan R.

    2014-01-01

    The obesity epidemic and excessive consumption of sugar-sweetened beverages have led to proposals of economics-based interventions to promote healthy eating in the United States. Targeted food and beverage taxes and subsidies are prominent examples of such potential intervention strategies. This paper examines the differential effects of taxing sugar-sweetened beverages by calories and by ounces on beverage demand. To properly measure the extent of substitution and complementarity between beverage products, we developed a fully modified distance metric model of differentiated product demand that endogenizes the cross-price effects. We illustrated the proposed methodology in a linear approximate almost ideal demand system, although other flexible demand systems can also be used. In the empirical application using supermarket scanner data, the product-level demand model consists of 178 beverage products with combined market share of over 90%. The novel demand model outperformed the conventional distance metric model in non-nested model comparison tests and in terms of the economic significance of model predictions. In the fully modified model, a calorie-based beverage tax was estimated to cost $1.40 less in compensating variation than an ounce-based tax per 3,500 beverage calories reduced. This difference in welfare cost estimates between two tax strategies is more than three times as much as the difference estimated by the conventional distance metric model. If applied to products purchased from all sources, a 0.04-cent per kcal tax on sugar-sweetened beverages is predicted to reduce annual per capita beverage intake by 5,800 kcal. PMID:25414517

  1. Field Effect Flow Control in a Polymer T-Intersection Microfluidic Network

    NASA Technical Reports Server (NTRS)

    Sniadecki, Nathan J.; Chang, Richard; Beamesderfer, Mike; Lee, Cheng S.; DeVoe, Don L.

    2003-01-01

    We present a study of induced pressure pumping in a polymer microchannel due to differential electroosmotic flow (EOF) rates via field-effect flow control (FEFC). The experimental results demonstrate that the induced pressure pumping is dependent on the distance of the FEFC gate from the cathodic gate. A proposed flow model based on a linearly-decaying zeta potential profile is found to successfully predict experimental trends.

  2. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
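The two residual tests described above can be sketched as follows; the residual vectors, covariance, and chi-square cutoff are illustrative assumptions, not values from the study:

```python
# Hedged sketch: test residual vectors for faults with univariate z-scores
# per variable and a joint Mahalanobis distance (2-D case for brevity).

def mahalanobis2(r, mean, cov_inv):
    """Squared Mahalanobis distance of residual vector r (2-D case)."""
    d = [r[0] - mean[0], r[1] - mean[1]]
    return (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1])
            + d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))

mean = [0.0, 0.0]                   # residuals are zero-mean if no fault
cov_inv = [[4.0, 0.0], [0.0, 4.0]]  # inverse covariance (std = 0.5 each)

ok_residual = [0.2, -0.1]    # consistent with the model
leak_residual = [1.5, -1.2]  # volume disappearing from a tank

for r in (ok_residual, leak_residual):
    z = [abs(x) / 0.5 for x in r]        # univariate z-scores
    m2 = mahalanobis2(r, mean, cov_inv)  # joint test statistic
    flagged = m2 > 9.21                  # ~chi-square(2) cutoff, alpha = 0.01
    print([round(v, 2) for v in z], round(m2, 2), flagged)
```

The joint test can catch a diversion spread thinly across several variables that no single z-score would flag.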

  3. Modeling protein conformational changes by iterative fitting of distance constraints using reoriented normal modes.

    PubMed

    Zheng, Wenjun; Brooks, Bernard R

    2006-06-15

    Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints and simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reorientated to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved a near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of approximately 1-2 angstroms from the native end state structures.

  4. Multiscale Shannon's Entropy Modeling of Orientation and Distance in Steel Fiber Micro-Tomography Data.

    PubMed

    Chiverton, John P; Ige, Olubisi; Barnett, Stephanie J; Parry, Tony

    2017-11-01

    This paper is concerned with the modeling and analysis of the orientation and distance between steel fibers in X-ray micro-tomography data. The advantage of combining both orientation and separation in a model is that it helps provide a detailed understanding of how the steel fibers are arranged, which is easy to compare. The developed models are designed to summarize the randomness of the orientation distribution of the steel fibers both locally and across an entire volume based on multiscale entropy. Theoretical modeling, simulation, and application to real imaging data are shown here. The theoretical modeling of multiscale entropy for orientation includes a proof showing the final form of the multiscale taken over a linear range of scales. A series of image processing operations are also included to overcome interslice connectivity issues to help derive the statistical descriptions of the orientation distributions of the steel fibers. The results demonstrate that multiscale entropy provides unique insights into both simulated and real imaging data of steel fiber reinforced concrete.

  5. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    ERIC Educational Resources Information Center

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, K; Li, X; Liu, B

    Purpose: To accurately measure CT bow-tie profiles from various manufacturers and to provide non-proprietary information for CT system modeling. Methods: A GOS-based linear detector (0.8 mm per pixel and 51.2 cm in length) with a fast data sampling speed (0.24 ms/sample) was used to measure the relative profiles of bow-tie filters from a collection of eight CT scanners by three different vendors, GE (LS Xtra, LS VCT, Discovery HD750), Siemens (Sensation 64, Edge, Flash, Force), and Philips (iBrilliance 256). The linear detector was first calibrated for its energy response within typical CT beam quality ranges and compared with an ion chamber and analytical modeling (SPECTRA and TASMIP). A geometrical calibration process was developed to determine key parameters including the distance from the focal spot to the linear detector, the angular increment of the gantry at each data sampling, the location of the central x-ray on the linear detector, and the angular response of the detector pixel. Measurements were performed under axial-scan modes for most representative bow-tie filters and kV selections from each scanner. Bow-tie profiles were determined by re-binning the measured rotational data with an angular accuracy of 0.1 degree using the calibrated geometrical parameters. Results: The linear detector demonstrated the energy response of a solid-state detector, which is close to that of the CT imaging detector. The geometrical calibration was proven to be sufficiently accurate (< 1 mm in error for distances > 550 mm) and the bow-tie profiles measured from rotational mode matched closely to those from the gantry-stationary mode. Accurate profiles were determined for a total of 21 bow-tie filters and 83 filter/kV combinations from the abovementioned scanner models. Conclusion: A new improved approach of CT bow-tie measurement was proposed and accurate bow-tie profiles were provided for a broad list of CT scanner models.

  7. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
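The repeated-measures approach described above amounts to scoring each participant with a linear contrast across the factor levels and testing the scores against zero. A hedged sketch with hypothetical reaction-time data:

```python
# Sketch of estimating a linear (distance) effect as a within-subject
# contrast. Reaction times below are hypothetical means per numerical
# distance per subject, not data from the paper.

# rows = subjects, columns = distances 1..4 (hypothetical data, ms)
rt = [
    [620, 600, 585, 570],
    [640, 622, 610, 590],
    [655, 640, 620, 610],
]

# centered linear contrast weights for 4 equally spaced levels
weights = [-3, -1, 1, 3]

# per-subject linear contrast score (proportional to the slope)
scores = [sum(w * x for w, x in zip(weights, row)) for row in rt]

n = len(scores)
mean_score = sum(scores) / n
var = sum((s - mean_score) ** 2 for s in scores) / (n - 1)
se = (var / n) ** 0.5
t = mean_score / se  # one-sample t-test on the contrast, df = n - 1
print(round(mean_score, 1), round(t, 2))
```

A negative mean contrast score here corresponds to the classic distance effect: responses get faster as numerical distance grows.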

  8. A descriptive model of resting-state networks using Markov chains.

    PubMed

    Xie, H; Pal, R; Mitra, S

    2016-08-01

    Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of the resting-state networks (RSNs) based on the steady-state distributions and provides a novel angle to investigate the RSFC of multiple functional nodes. This paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that generated steady-state distributions of default mode network have higher consistency across subjects than random nodes from various RSNs.
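A minimal sketch of the two ingredients named above, the steady-state distribution of an inferred Markov chain and the Hellinger distance between two such distributions, using made-up transition matrices rather than anything fMRI-derived:

```python
# Hedged sketch: stationary distribution of a 2-state Markov chain via
# power iteration, and the Hellinger distance between two distributions.
import math

def steady_state(P, iters=200):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

def hellinger(p, q):
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

P1 = [[0.9, 0.1], [0.2, 0.8]]     # hypothetical subject-1 chain
P2 = [[0.85, 0.15], [0.25, 0.75]]  # hypothetical subject-2 chain

pi1 = steady_state(P1)
pi2 = steady_state(P2)
print([round(x, 3) for x in pi1], round(hellinger(pi1, pi2), 4))
```

A small Hellinger distance between two subjects' stationary distributions is what the abstract reads as high cross-subject consistency.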

  9. Influence of air humidity and the distance from the source on negative air ion concentration in indoor air.

    PubMed

    Wu, Chih Cheng; Lee, Grace W M; Yang, Shinhao; Yu, Kuo-Pin; Lou, Chia Ling

    2006-10-15

    Although negative air ionizers are commonly used for indoor air cleaning, few studies examine the concentration gradient of negative air ion (NAI) in indoor environments. This study investigated the concentration gradient of NAI at various relative humidities and distances from the source in indoor air. The NAI was generated by single-electrode negative electric discharge; the discharge was kept at dark discharge and 30.0 kV. The NAI concentrations were measured at various distances (10-900 cm) from the discharge electrode in order to identify the distribution of NAI in an indoor environment. The profile of NAI concentration was monitored at different relative humidities (38.1-73.6% RH) and room temperatures (25.2+/-1.4 degrees C). Experimental results indicate that the influence of relative humidity on the concentration gradient of NAI was complicated. There were four trends for the relationship between NAI concentration and relative humidity at different distances from the discharge electrode. The changes of NAI concentration with an increase in relative humidity at different distances were quite steady (10-30 cm), strongly declining (70-360 cm), approaching stability (420-450 cm) and moderately increasing (560-900 cm). Additionally, the regression analysis of NAI concentrations and distances from the discharge electrode indicated a logarithmic linear (log-linear) relationship; the distance of log-linear tendency (lambda) decreased with an increase in relative humidity such that the log-linear distance of 38.1% RH was 2.9 times that of 73.6% RH. Moreover, an empirical curve fit based on this study for the concentration gradient of NAI generated by negative electric discharge in indoor air was developed for estimating the NAI concentration at different relative humidities and distances from the source of electric discharge.
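The reported log-linear tendency can be illustrated with an ordinary least-squares fit of log concentration against distance; the data points below are invented, and only the functional form comes from the abstract:

```python
# Hedged sketch: fit log(C) = a + b * distance by least squares.
import math

# hypothetical (distance from electrode in cm, NAI concentration in ions/cm^3)
data = [(10, 9.0e4), (70, 3.5e4), (200, 8.0e3), (360, 1.2e3)]

xs = [d for d, _ in data]
ys = [math.log(c) for _, c in data]  # log-transform the concentration

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))  # slope of log(C) vs distance
a = my - b * mx                         # intercept: log(C) at the source
print(round(b, 5), round(a, 3))
```

The negative slope b is the decay rate; a characteristic log-linear distance like the lambda in the abstract would scale with 1/|b|.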

  10. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
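The top-level fusion rule, linear minimum variance, reduces in the scalar case to inverse-variance weighting. A sketch of that step alone (the adaptive fading UKF local filters are not reproduced; the estimates and variances below are illustrative assumptions):

```python
# Hedged sketch of linear-minimum-variance fusion of two independent
# scalar estimates (inverse-variance weighting).

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w_a = var_b / (var_a + var_b)
    w_b = var_a / (var_a + var_b)
    fused = w_a * est_a + w_b * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# e.g. a position estimate from an INS-based local filter and one from a
# GNSS-based local filter (hypothetical numbers)
fused, fused_var = fuse(10.2, 4.0, 9.8, 1.0)
print(fused, fused_var)
```

The fused variance is always below either input variance, which is why adding local filters can only tighten the global estimate under this rule.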


  12. Chromosome structures: reduction of certain problems with unequal gene content and gene paralogs to integer linear programming.

    PubMed

    Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin

    2017-12-06

    Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself including the identification of paralogs in two structures. We use the DCJ model which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign the structures to inner nodes of the tree to minimize the sum of distances between terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), the reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. 
We proved that these problems can be reduced to integer linear programming formulations, which allows the problems to be solved with a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were reduced to a very special case of integer linear programming, which is a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept in our model of the reconstruction.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippov, A. V., E-mail: fav@triniti.ru

    The interaction of two charged point macroparticles located in Wigner–Seitz cells of simple cubic (SC), body-centered cubic (BCC), or face-centered cubic (FCC) lattices in an equilibrium plasma has been studied within the Debye approximation or, more specifically, based on the linearized Poisson–Boltzmann model. The shape of the outer boundary is shown to exert a strong influence on the pattern of electrostatic interaction between the two macroparticles, which transforms from repulsion at small interparticle distances to attraction as the interparticle distance approaches half the length of the computational cell. The macroparticle pair interaction potential in an equilibrium plasma is shown to be nevertheless the Debye one and purely repulsive for like-charged macroparticles.

  14. Classification and recognition of dynamical models: the role of phase, independent components, kernels and optimal transport.

    PubMed

    Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano

    2007-11-01

    We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.

  15. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
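The min-sum difference equations mentioned above are what a two-pass discrete distance transform implements. A small sketch with the city-block (L1) metric on an invented grid:

```python
# Hedged sketch: discrete distance transform via min-sum difference
# equations, computed in two raster passes over a tiny grid.

INF = 10 ** 9
grid = [
    [INF, INF, INF, INF],
    [INF, 0,   INF, INF],  # 0 marks the feature (seed) pixel
    [INF, INF, INF, INF],
]
H, W = len(grid), len(grid[0])

# forward pass: propagate minima from the top-left
for i in range(H):
    for j in range(W):
        if i > 0:
            grid[i][j] = min(grid[i][j], grid[i - 1][j] + 1)
        if j > 0:
            grid[i][j] = min(grid[i][j], grid[i][j - 1] + 1)

# backward pass: propagate minima from the bottom-right
for i in range(H - 1, -1, -1):
    for j in range(W - 1, -1, -1):
        if i < H - 1:
            grid[i][j] = min(grid[i][j], grid[i + 1][j] + 1)
        if j < W - 1:
            grid[i][j] = min(grid[i][j], grid[i][j + 1] + 1)

for row in grid:
    print(row)
```

Each cell update is a min-sum (tropical) recurrence, the discrete analogue of the eikonal PDE the abstract relates these transforms to.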

  16. SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, J

    2015-06-15

    Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, the gamma index of 2% dose, 2mm DTA is quite sufficient to see curves not corrected for effective point of measurement. Also, data imported into the database is analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.

  17. Effects of motivation on car-following

    NASA Technical Reports Server (NTRS)

    Boesser, T.

    1982-01-01

    Speed- and distance control by automobile-drivers is described best by linear models when the leading vehicle's speed varies randomly and when the driver is motivated to keep a large distance. A car-following experiment required subjects to follow at 'safe' or at 'close' distance. Transfer characteristics of the driver were extended by 1 octave when following 'closely'. Nonlinear properties of drivers' control movements are assumed to reflect different motivation-dependent control strategies.

  18. Design and indoor testing of a compact optical concentrator

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Li, Qiyuan; Rosengarten, Gary; Hawkes, Evatt; Taylor, Robert A.

    2017-01-01

    We propose and analyze designs for stationary and compact optical concentrators. The designs are based on a catadioptric assembly with a linear focus line. They have a focal distance of around 10 to 15 cm and a concentration ratio of 4.5 to 5.9 times. The concentrator employs an internal linear-tracking mechanism, making it suitable for rooftop solar applications. The optical performance of the collector has been simulated with ray tracing software (Zemax), and laser-based indoor experiments were carried out to validate this model. The results show that the system is capable of achieving an average optical efficiency of around 66% to 69% during the middle 6 (sunniest) h of the day. The design process and principles described in this work will help enable a new class of rooftop solar thermal concentrators.

  19. Identification of the focal plane wavefront control system using E-M algorithm

    NASA Astrophysics Data System (ADS)

    Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert

    2017-09-01

    In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the efficient controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used and calculated based on lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our lack of knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for measuring/estimating the linear state space model of a FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results in the Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm can well handle both the amplitude and phase errors and accurately recover the system. Using the recovered state space model, the controller creates dark holes with faster speed. The final accuracy of the model depends on the amount of data used for learning.

  20. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivations are to gain linear minimum distance to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 for rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  1. GeneOnEarth: fitting genetic PC plots on the globe.

    PubMed

    Torres-Sánchez, Sergio; Medina-Medina, Nuria; Gignoux, Chris; Abad-Grau, María M; González-Burchard, Esteban

    2013-01-01

    Principal component (PC) plots have become widely used to summarize genetic variation of individuals in a sample. The similarity between genetic distance in PC plots and geographical distance has been shown to be quite impressive. However, in most situations, individual ancestral origins are not precisely known or they are heterogeneously distributed; hence, they are hardly linked to a geographical area. We have developed GeneOnEarth, a user-friendly web-based tool to help geneticists to understand whether a linear isolation-by-distance model may apply to a genetic data set; thus, genetic distances among a set of individuals resemble geographical distances among their origins. Its main goal is to allow users to first apply a by-view Procrustes method to visually learn whether this model holds. To do that, the user can choose the exact geographical area from an on line 2D or 3D world map by using, respectively, Google Maps or Google Earth, and rotate, flip, and resize the images. GeneOnEarth can also compute the optimal rotation angle using Procrustes analysis and assess statistical evidence of similarity when a different rotation angle has been chosen by the user. An online version of GeneOnEarth is available for testing and use at http://bios.ugr.es/GeneOnEarth.
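The optimal rotation angle that such a tool computes has a closed form in 2-D Procrustes analysis. A sketch on invented, origin-centered point sets (one set standing in for PC coordinates, the other for geographic coordinates):

```python
# Hedged sketch: closed-form optimal rotation angle for 2-D Procrustes
# alignment of two centered point sets.
import math

pc = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
# the same shape rotated by 30 degrees plays the role of geography
t = math.radians(30)
geo = [(x * math.cos(t) - y * math.sin(t),
        x * math.sin(t) + y * math.cos(t)) for x, y in pc]

# theta = atan2(sum of cross products, sum of dot products)
num = sum(x * v - y * u for (x, y), (u, v) in zip(pc, geo))
den = sum(x * u + y * v for (x, y), (u, v) in zip(pc, geo))
theta = math.atan2(num, den)
print(round(math.degrees(theta), 2))
```

With real data the two shapes differ, and the residual misfit after applying this rotation measures how well the isolation-by-distance model holds.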

  2. Distance determinations to shield galaxies from Hubble space telescope imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McQuinn, Kristen B. W.; Skillman, Evan D.; Cannon, John M.

    The Survey of H I in Extremely Low-mass Dwarf (SHIELD) galaxies is an ongoing multi-wavelength program to characterize the gas, star formation, and evolution in gas-rich, very low-mass galaxies. The galaxies were selected from the first ∼10% of the H I Arecibo Legacy Fast ALFA (ALFALFA) survey based on their inferred low H I mass and low baryonic mass, and all systems have recent star formation. Thus, the SHIELD sample probes the faint end of the galaxy luminosity function for star-forming galaxies. Here, we measure the distances to the 12 SHIELD galaxies to be between 5 and 12 Mpc by applying the tip of the red giant branch method to the resolved stellar populations imaged by the Hubble Space Telescope. Based on these distances, the H I masses in the sample range from 4 × 10^6 to 6 × 10^7 M_☉, with a median H I mass of 1 × 10^7 M_☉. The tip of the red giant branch distances are up to 73% farther than flow-model estimates in the ALFALFA catalog. Because of the relatively large uncertainties of flow-model distances, we are biased toward selecting galaxies from the ALFALFA catalog where the flow model underestimates the true distances. The measured distances allow for an assessment of the native environments around the sample members. Five of the galaxies are part of the NGC 672 and NGC 784 groups, which together constitute a single structure. One galaxy is part of a larger linear ensemble of nine systems that stretches 1.6 Mpc from end to end. Three galaxies reside in regions with 1-9 neighbors, and four galaxies are truly isolated with no known system identified within a radius of 1 Mpc.

  3. Analysis of redox additive-based overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, and standard reduction potential of the redox couple, and to the interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to a numerical representation of the potential transients and an estimate of the influence of the diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. Analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
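    The maximum permissible overcharge rate implied by a finite-linear-diffusion treatment is the steady-state shuttle-limited current density, i_lim = nFDc/L. A minimal sketch, with illustrative parameter values rather than the paper's experimental ones:

```python
F = 96485.0  # Faraday constant, C/mol

def limiting_current_density(D_cm2_s, c_mol_cm3, L_cm, n=1):
    """Steady-state diffusion-limited overcharge current density
    i_lim = n*F*D*c/L (A/cm^2) for a redox shuttle between electrodes
    separated by the interelectrode distance L."""
    return n * F * D_cm2_s * c_mol_cm3 / L_cm

# Illustrative numbers: D = 1e-5 cm^2/s, 0.05 M additive, 0.5 mm gap
i_lim = limiting_current_density(1e-5, 5e-5, 0.05)  # ≈ 9.6e-4 A/cm^2
```

    Doubling the interelectrode distance halves the permissible overcharge current, which is the design trade-off the model quantifies.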

  4. The Kinematics Parameters of the Galaxy Using Data of Modern Astrometric Catalogues

    NASA Astrophysics Data System (ADS)

    Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.

    Based on the Ogorodnikov-Milne model, we analyze the proper motions of XPM2, UCAC4 and PPMXL stars. To estimate distances to the stars, we used the method of statistical parallaxes; the random errors of these distance estimates do not exceed 10%. The linear solar velocity relative to the local standard of rest, which is well determined for the local centroid (d < 150 pc), was used as a reference. We have established that the model component that describes the rotation of all stars under consideration about the Galactic Y axis differs from zero. For the distant (d < 1000 pc) PPMXL and UCAC4 stars, the mean rotation about the Galactic Y axis has been found to be M13 = -0.75 ± 0.04 mas yr⁻¹. For distances greater than 1 kpc, M13 derived from the data of the XPM2 catalogue alone becomes positive and exceeds 0.5 mas yr⁻¹. We interpret this rotation found using the distant stars as a residual rotation of the ICRS/Tycho-2 system relative to the inertial reference frame.

  5. Analytical Debye-Huckel model for electrostatic potentials around dissolved DNA.

    PubMed

    Wagner, K; Keyes, E; Kephart, T W; Edwards, G

    1997-07-01

    We present an analytical, Green-function-based model for the electric potential of DNA in solution, treating the surrounding solvent with the Debye-Huckel approximation. The partial charge of each atom is accounted for by modeling DNA as linear distributions of atoms on concentric cylindrical surfaces. The condensed ions of the solvent are treated with the Debye-Huckel approximation. The resultant leading term of the potential is that of a continuous shielded line charge, and the higher order terms account for the helical structure. Within several angstroms of the surface there is sufficient information in the electric potential to distinguish features and symmetries of DNA. Plots of the potential and equipotential surfaces, dominated by the phosphate charges, reflect the structural differences between the A, B, and Z conformations and, to a smaller extent, the difference between base sequences. As the distances from the helices increase, the magnitudes of the potentials decrease. However, the bases and sugars account for a larger fraction of the double helix potential with increasing distance. We have found that when the solvent is treated with the Debye-Huckel approximation, the potential decays more rapidly in every direction from the surface than it did in the concentric dielectric cylinder approximation.
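    The leading term described above, a continuous shielded line charge, decays with radial distance r as the modified Bessel function K0(κr), where κ is the inverse Debye length. A sketch using the large-argument asymptotic form of K0; the helper below is an approximation valid for κr ≳ 2, not an exact Bessel evaluation, and the prefactor is left symbolic:

```python
import math

def k0_asymptotic(x):
    """Large-argument approximation K0(x) ~ sqrt(pi/(2x)) * exp(-x)."""
    return math.sqrt(math.pi / (2.0 * x)) * math.exp(-x)

def shielded_line_potential(r, kappa, prefactor=1.0):
    """Leading (helically averaged) term of the Debye-Huckel potential
    of a line charge: phi(r) ~ prefactor * K0(kappa * r)."""
    return prefactor * k0_asymptotic(kappa * r)
```

    The exponential factor is what makes the screened potential fall off faster in every direction than the unscreened concentric-dielectric-cylinder result.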

  6. Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.

    PubMed

    Griffith, Daniel A; Peres-Neto, Pedro R

    2006-10-01

    Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed to explicitly incorporate spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows the equivalencies of, and differences between, the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
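    The distance-based (DB) branch of this family can be sketched as follows: truncate the inter-site distance matrix, double-centre it, and keep the eigenvectors with positive eigenvalues as spatial predictors for an ordinary regression. This is a generic sketch of the dbMEM idea under the classic "4t" truncation rule; the function name and numerical thresholds are illustrative.

```python
import numpy as np

def dbmem_predictors(coords, t):
    """Distance-based eigenvector maps: spatial predictors from the
    eigen-decomposition of a truncated, double-centred distance matrix."""
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Dt = np.where(D <= t, D, 4.0 * t)       # '4t' truncation rule
    A = -0.5 * Dt ** 2
    n = len(A)
    J = np.eye(n) - np.ones((n, n)) / n     # centring operator
    G = J @ A @ J                           # Gower double-centring
    vals, vecs = np.linalg.eigh(G)
    keep = vals > 1e-8                      # positive-eigenvalue axes only
    return vecs[:, keep], vals[keep]
```

    The retained eigenvectors enter a linear or generalized linear model exactly like any other covariate, which is the flexibility the abstract emphasizes.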

  7. Dissimilarity based Partial Least Squares (DPLS) for genomic prediction from SNPs.

    PubMed

    Singh, Priyanka; Engel, Jasper; Jansen, Jeroen; de Haan, Jorn; Buydens, Lutgarde Maria Celina

    2016-05-04

    Genomic prediction (GP) allows breeders to select plants and animals based on their breeding potential for desirable traits, without lengthy and expensive field trials or progeny testing. We propose using Dissimilarity-based Partial Least Squares (DPLS) for GP. As a case study, we use the DPLS approach to predict bacterial wilt (BW) in tomatoes using SNPs as predictors. DPLS was compared with Genomic Best Linear Unbiased Prediction (GBLUP) and with single-SNP regression (SNP as a fixed effect) to assess its performance. Eight genomic distance measures were used to quantify relationships between the tomato accessions from the SNPs, and each was then used to predict BW with the DPLS prediction model. The DPLS model was found to be robust to the choice of distance measure; similar prediction performances were obtained for each. DPLS greatly outperformed the single-SNP regression approach, showing that BW is a complex trait dependent on several loci. Next, the performance of DPLS was compared to that of GBLUP. Although GBLUP and DPLS are conceptually very different, the prediction quality of the DPLS models was similar to the prediction statistics obtained from GBLUP. A considerable advantage of DPLS is that the genotype-phenotype relationship can easily be visualized in a 2-D scatter plot. This so-called score plot gives breeders insight for selecting candidates for their future breeding programs. DPLS is thus a highly appropriate method for GP: its prediction performance was similar to GBLUP and far better than the single-SNP approach. The proposed method can be used in combination with a wide range of genomic dissimilarity measures and genotype representations such as allele counts, haplotypes or allele-intensity values. Additionally, the data can be insightfully visualized by the DPLS model, allowing for selection of desirable candidates from the breeding experiments. In this study, we assessed DPLS performance on a single trait.

  8. Adaptive cruise control with stop&go function using the state-dependent nonlinear model predictive control approach.

    PubMed

    Shakouri, Payman; Ordys, Andrzej; Askari, Mohamad R

    2012-09-01

    In the design of adaptive cruise control (ACC) systems, two separate control loops are commonly used: an outer loop to maintain a safe distance from the vehicle traveling in front, and an inner loop to control the brake pedal and throttle opening position. In this paper a different approach is proposed, in which a single control loop is utilized. The objective of distance tracking is incorporated into a single nonlinear model predictive control (NMPC) scheme by extending the original linear time-invariant (LTI) models obtained by linearizing the nonlinear dynamic model of the vehicle. This is achieved by introducing additional states corresponding to the relative distance between the leading and following vehicles, and to the velocity of the leading vehicle. Control of the brake and throttle position is implemented using the state-dependent approach. The model proves more effective in tracking speed and distance by eliminating the need to switch between two controllers. It also offers smooth variation of the brake and throttle control signals, which results in more uniform acceleration of the vehicle. The results of the proposed method are compared with ACC systems using two separate control loops. Furthermore, simulation results for a stop&go scenario are shown, demonstrating better fulfillment of the design requirements. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    PubMed

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study the temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance has not been evaluated against more direct methods (i.e., canonical ordination). This study fills that gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinates of neighbour matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, the Mantel test, and RDA based on linear time trends) on all real data. In addition, RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulations also revealed that only strong emphasis on uniform deterministic change, combined with low variability at other temporal scales, decreases the statistical power of the RDA-PCNM approach relative to the other methods. Based on its statistical performance and information content, RDA-PCNM serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-)communities.
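    The TLA idea is simple enough to sketch: compute the mean community dissimilarity at every time lag and examine its trend (a rising curve indicates directional change). A minimal sketch with a pluggable dissimilarity function; the names are illustrative:

```python
def time_lag_analysis(series, dissim):
    """Mean between-sample dissimilarity at each time lag.
    `series` is an ordered list of community samples; `dissim` is any
    pairwise dissimilarity function (e.g. Bray-Curtis, Euclidean)."""
    T = len(series)
    means = {}
    for lag in range(1, T):
        vals = [dissim(series[t], series[t + lag]) for t in range(T - lag)]
        means[lag] = sum(vals) / len(vals)
    return means
```

    In practice the lag means are regressed on the square root of the lag; distance-based TLA, unlike RDA-PCNM, stops at this aggregate curve and cannot attribute the trend to individual species.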

  10. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    PubMed Central

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
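    The voxel-wise modeling procedure, fitting a linear encoding model per voxel and scoring it by variance predicted on withheld data, can be sketched as follows. Plain least squares is used for brevity (the study used regularised regression), and all names are illustrative:

```python
import numpy as np

def voxelwise_model(features, bold, n_train):
    """Fit a linear encoding model to every voxel at once and return
    the weights plus held-out variance explained (R^2) per voxel."""
    Xtr, Xte = features[:n_train], features[n_train:]
    Ytr, Yte = bold[:n_train], bold[n_train:]
    # One least-squares solve fits all voxels simultaneously
    W, *_ = np.linalg.lstsq(Xtr, Ytr, rcond=None)
    resid = Yte - Xte @ W
    r2 = 1.0 - resid.var(axis=0) / Yte.var(axis=0)
    return W, r2
```

    Comparing r2 across feature spaces (Fourier power, subjective distance, object categories) is the model-comparison step the abstract describes.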

  11. Practical robustness measures in multivariable control system analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.

    1981-01-01

    The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency-domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. Robustness tests that explicitly utilize model error structure can guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is also analyzed.

  12. Safe distance car-following model including backward-looking and its stability analysis

    NASA Astrophysics Data System (ADS)

    Yang, Da; Jin, Peter Jing; Pu, Yun; Ran, Bin

    2013-03-01

    The focus of this paper is car-following behavior that includes backward-looking, called here bi-directional looking car-following behavior. This study is motivated by potential changes in the physical properties of traffic flow caused by the fast-developing intelligent transportation system (ITS), especially new connected vehicle technology. Existing studies on this topic have focused on General Motors (GM) models and optimal velocity (OV) models. The safe distance car-following model, Gipps' model, which is more widely used in practice, has not drawn much attention in the bi-directional looking context. This paper explores the properties of a bi-directional looking extension of Gipps' safe distance model. The stability condition of the proposed model is derived using linear stability theory and is verified using numerical simulations. The impacts of the driver and vehicle characteristics appearing in the proposed model on traffic flow stability are also investigated. It is found that taking the backward-looking effect into account in car-following has three types of effect on traffic flow: stabilizing, destabilizing and producing non-physical phenomena. This conclusion is more nuanced than results based on the OV bi-directional looking car-following models. Moreover, drivers who have shorter reaction times or larger additional delays, and who assume larger maximum decelerations for the other vehicles, can stabilize traffic flow.

  13. An Algorithm for Finding Candidate Synaptic Sites in Computer Generated Networks of Neurons with Realistic Morphologies

    PubMed Central

    van Pelt, Jaap; Carnell, Andrew; de Ridder, Sander; Mansvelder, Huibert D.; van Ooyen, Arjen

    2010-01-01

    Neurons make synaptic connections at locations where axons and dendrites are sufficiently close in space. Typically the required proximity is based on the dimensions of dendritic spines and axonal boutons. Based on this principle, one can search for such locations in networks formed by reconstructed neurons or computer-generated neurons. Candidate synapses are then located where axons and dendrites are within a given criterion distance of each other. Both experimentally reconstructed and model-generated neurons are usually represented morphologically by piecewise-linear structures (line pieces or cylinders). Proximity tests are then performed on all pairs of line pieces from both axonal and dendritic branches. Applying only a distance test between line pieces may result in local clusters of synaptic sites when more than one pair of nearby line pieces from axonal and dendritic branches is sufficiently close, and may introduce a dependency on the length scale of the individual line pieces. The present paper describes a new algorithm for defining locations of candidate synapses which is based on a crossing requirement for a line piece pair, while the orthogonal distance between the line pieces is subjected to the distance criterion for testing 3D proximity. PMID:21160548
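    The basic proximity test underlying both the traditional criterion and the new algorithm is the minimum distance between two line pieces. A standard clamped closest-point computation is sketched below; it implements the plain distance test only, not the paper's additional crossing requirement, and the criterion value is illustrative.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between 3-D segments [p1, q1] and [p2, q2]."""
    p1, q1, p2, q2 = (np.asarray(v, float) for v in (p1, q1, p2, q2))
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b                 # ~0 when segments are parallel
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0:                           # clamp t, then recompute s
        t, s = 0.0, (np.clip(-c / a, 0.0, 1.0) if a > 1e-12 else 0.0)
    elif t > 1.0:
        t, s = 1.0, (np.clip((b - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def is_candidate_synapse(axon_piece, dend_piece, criterion=2.0):
    """Flag a line-piece pair whose distance is within the criterion
    (e.g. spine length plus bouton radius, in micrometres)."""
    return segment_distance(*axon_piece, *dend_piece) <= criterion
```

    Running this test over all axonal-dendritic line-piece pairs is exactly the step whose length-scale dependence motivates the paper's crossing-based refinement.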

  14. Fit Point-Wise Ab Initio Calculation Potential Energies to a Multi-Dimension Long-Range Model

    NASA Astrophysics Data System (ADS)

    Zhai, Yu; Li, Hui; Le Roy, Robert J.

    2016-06-01

    A potential energy surface (PES) is a fundamental tool and source of understanding for theoretical spectroscopy and for dynamical simulations. Making correct assignments for high-resolution rovibrational spectra of floppy polyatomic and van der Waals molecules often relies heavily on predictions generated from a high-quality ab initio potential energy surface. Moreover, having an effective analytic model to represent such surfaces can be as important as the ab initio results themselves. For the one-dimensional potentials of diatomic molecules, the most successful such model to date is arguably the ``Morse/Long-Range'' (MLR) function developed by R. J. Le Roy and coworkers. It is very flexible and is everywhere differentiable to all orders. It incorporates correct predicted long-range behaviour, extrapolates sensibly at both large and small distances, and two of its defining parameters are always the physically meaningful well depth D_e and equilibrium distance r_e. Extensions of this model, called the Multi-Dimensional Morse/Long-Range (MD-MLR) function, have been applied successfully to atom-plus-linear-molecule, linear-molecule-linear-molecule and atom-non-linear-molecule systems. However, there are several technical challenges faced in modelling the interactions of general molecule-molecule systems, such as the absence of radial minima for some relative alignments, difficulties in fitting short-range potential energies, and challenges in determining relative-orientation-dependent long-range coefficients. This talk will illustrate some of these challenges and describe our ongoing work in addressing them. Mol. Phys. 105, 663 (2007); J. Chem. Phys. 131, 204309 (2009); Mol. Phys. 109, 435 (2011); Phys. Chem. Chem. Phys. 10, 4128 (2008); J. Chem. Phys. 130, 144305 (2009); J. Chem. Phys. 132, 214309 (2010); J. Chem. Phys. 140, 214309 (2010).

  15. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique largely avoids the unexpected increase of the objective function value that constraint adding inevitably causes, and it can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm was tested on four cases (head-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is, to some extent, a more efficient optimization technique for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.

  16. The physical characteristics of match-play in English schoolboy and academy rugby union.

    PubMed

    Read, Dale B; Jones, Ben; Phibbs, Padraic J; Roe, Gregory A B; Darrall-Jones, Joshua; Weakley, Jonathon J S; Till, Kevin

    2018-03-01

    The aim was to compare the physical characteristics of under-18 academy and schoolboy rugby union competition by position (forwards and backs). Using a microsensor unit, match characteristics were recorded in 66 players. Locomotor characteristics were assessed by maximum sprint speed (MSS) and total, walking, jogging, striding and sprinting distances. The slow component (<2 m·s⁻¹) of PlayerLoad™ (PLslow), which is the accumulated acceleration from the three axes of movement, was analysed as a measure of low-speed activity (e.g., rucking). A linear mixed model with magnitude-based inferences was used. Academy forwards and backs almost certainly and very likely covered greater total distance than school forwards and backs. Academy players in both positions were also very likely to cover greater jogging distances. Academy backs were very likely to accumulate greater PLslow, and academy forwards a likely greater sprinting distance, than school players in their respective positions. MSS and total, walking and sprinting distances were greater in backs (likely to almost certainly), while forwards accumulated greater PLslow (almost certainly) and jogging distance (very likely). The results suggest that academy-standard rugby better prepares players to progress to senior competition compared with schoolboy rugby.

  17. Predicting performance for ecological restoration: A case study using Spartina alterniflora

    USGS Publications Warehouse

    Travis, S.E.; Grace, J.B.

    2010-01-01

    The success of population-based ecological restoration relies on the growth and reproductive performance of selected donor materials, whether consisting of whole plants or seed. Accurately predicting performance requires an understanding of a variety of underlying processes, particularly gene flow and selection, which can be measured, at least in part, using surrogates such as neutral marker genetic distances and simple latitudinal effects. Here we apply a structural equation modeling approach to understanding and predicting performance in a widespread salt marsh grass, Spartina alterniflora, commonly used for ecological restoration throughout its native range in North America. We collected source materials from throughout this range, consisting of eight clones each from 23 populations, for transplantation to a common garden site in coastal Louisiana and monitored their performance. We modeled performance as a latent process described by multiple indicator variables (e.g., clone diameter, stem number) and estimated direct and indirect influences of geographic and genetic distances on performance. Genetic distances were determined by comparison of neutral molecular markers with those from a local population at the common garden site. Geographic distance metrics included dispersal distance (the minimum distance over water between donor and experimental sites) and latitude. Model results indicate direct effects of genetic distance and latitude on performance variation among the donor sites. Standardized effect strengths indicate that performance was roughly twice as sensitive to variation in genetic distance as to latitudinal variation. Dispersal distance had an indirect influence on performance through effects on genetic distance, indicating a typical pattern of genetic isolation by distance. Latitude also had an indirect effect on genetic distance through its linear relationship with dispersal distance. 
Three performance indicators had significant loadings on performance alone (mean clone diameter, mean number of stems, mean number of inflorescences), while the performance indicators mean stem height and mean stem width were also influenced by latitude. We suggest that dispersal distance and latitude should provide an adequate means of predicting performance in future S. alterniflora restorations and propose a maximum sampling distance of 300 km (holding latitude constant) to avoid the sampling of inappropriate ecotypes. © 2010 by the Ecological Society of America.

  18. Cost estimation for slope stability improvement in Muara Enim

    NASA Astrophysics Data System (ADS)

    Juliantina, Ika; Sutejo, Yulindasari; Adhitya, Bimo Brata; Sari, Nurul Permata; Kurniawan, Reffanda

    2017-11-01

    The case study area of SP. Sugihwaras-Baturaja is typologically classified as C-zone because it lies at the foot of the mountains, with slopes of 0% to 20%. Generally, the factors that cause landslides in Muara Enim Regency are soil/rock conditions, water, geological factors, and human activities. The slope at KM.273+642-KM.273+774, 132 m long, was improved using soil nailing with 19 mm diameter tendon bars at an angle of 20°, a 75 mm shotcrete thickness, and K-250 concrete grouting material. The soil-nailing cost model (y) is based on four variables: X1 = length, X2 = horizontal distance, X3 = safety factor (SF), and X4 = time. Nine variations were fitted by multiple linear regression and analyzed with the SPSS 16.0 program. Based on the SPSS output, classical-assumption and model-feasibility tests were then performed, yielding the model Cost = 1,512,062 + 194,354·length − 1,649,135·distance + 187,831·SF + 54,864·time (in thousands of Rupiah). The budget plan includes preparatory work, the drainage system, soil nailing, and shotcrete. An efficient cost estimate with an 8 m nail length, 1.5 m installation distance, safety factor SF = 1.742, and a 30-day construction time resulted in a cost of Rp. 2,566,313,000.00 (two billion five hundred sixty-six million three hundred thirteen thousand rupiah).
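    The fitted regression reproduces the quoted budget figure when its output is read in thousands of Rupiah (the stated total of Rp. 2,566,313,000.00 follows directly from the published coefficients), which can be checked by evaluating the model:

```python
def soil_nailing_cost(length_m, distance_m, sf, time_days):
    """Evaluate the fitted cost model; coefficients from the study,
    result in thousands of Rupiah."""
    return (1_512_062 + 194_354 * length_m - 1_649_135 * distance_m
            + 187_831 * sf + 54_864 * time_days)

# The paper's efficient design: 8 m nails, 1.5 m spacing, SF = 1.742, 30 days
cost_thousand_idr = soil_nailing_cost(8, 1.5, 1.742, 30)  # ~2,566,313
```

    Note the negative coefficient on installation distance: wider nail spacing means fewer nails and a lower total cost, at the price of a lower safety factor.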

  19. Tried and True: Springing into Linear Models

    ERIC Educational Resources Information Center

    Darling, Gerald

    2012-01-01

    In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…

  20. Modeling and controlling a robotic convoy using guidance laws strategies.

    PubMed

    Belkhouche, Fethi; Belkhouche, Boumediene

    2005-08-01

    This paper deals with the problem of modeling and controlling a robotic convoy. Guidance law techniques are used to provide a mathematical formulation of the problem. The guidance laws used for this purpose are the velocity pursuit, the deviated pursuit, and proportional navigation. The velocity pursuit equations model the robot's path under various sensor-based control laws. A systematic study of the tracking problem based on this technique is undertaken. These guidance laws are applied to derive decentralized control laws for the angular and linear velocities. For the angular velocity, the control law is derived directly from the guidance laws after considering the relative kinematics equations between successive robots. The second control law keeps the distance between successive robots constant by controlling the linear velocity; it is derived by considering the kinematics equations between successive robots under the considered guidance law. Properties of the method are discussed and proven. Simulation results confirm the validity of our approach, as well as the validity of the properties of the method.

  1. Modeling when and where a secondary accident occurs.

    PubMed

    Wang, Junhua; Liu, Boya; Fu, Ting; Liu, Shuo; Stipancic, Joshua

    2018-01-31

    The occurrence of secondary accidents leads to traffic congestion and road safety issues, and secondary accident prevention has become a major consideration in traffic incident management. This paper investigates the location and time of a potential secondary accident after the occurrence of an initial traffic accident. With accident data and traffic loop data collected over three years from California interstate freeways, a shock wave-based method was introduced to identify secondary accidents. A linear regression model and two machine learning algorithms, a back-propagation neural network (BPNN) and a least squares support vector machine (LSSVM), were implemented to explore the distance and time gap between the initial and secondary accidents using inputs of crash severity, violation category, weather condition, tow-away status, road surface condition, lighting, parties involved, traffic volume, duration, and the shock wave speed generated by the primary accident. From the results, the linear regression model was inadequate in describing the effect of most variables, and its goodness-of-fit and prediction accuracy were relatively poor. In training, the BPNN and LSSVM demonstrated adequate goodness-of-fit, though the BPNN was superior, with a higher correlation coefficient (CORR) and lower mean squared error (MSE). The BPNN model also outperformed the LSSVM in time prediction, while both failed to provide adequate distance prediction. Therefore, the BPNN model could be used to forecast the time gap between initial and secondary accidents, which could help decision makers and incident management agencies prevent or reduce secondary collisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
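    The shock wave-based identification step can be sketched as a simple spatiotemporal filter: a later crash is flagged as secondary if it lies upstream of the primary within the region the queue shock wave could have reached. All thresholds and field names below are illustrative, not the paper's calibrated values.

```python
def is_secondary(primary, candidate, wave_speed_kmh=20.0, max_hours=2.0):
    """Flag `candidate` as a secondary accident of `primary` if it
    occurred later, upstream, and inside the shock-wave envelope."""
    dt_h = candidate["time_h"] - primary["time_h"]
    dx_km = primary["milepost_km"] - candidate["milepost_km"]  # upstream > 0
    if dt_h <= 0.0 or dt_h > max_hours or dx_km < 0.0:
        return False
    # The queue tail propagates upstream at the shock-wave speed
    return dx_km <= wave_speed_kmh * dt_h
```

    Using the measured shock-wave speed rather than fixed distance/time thresholds is what distinguishes this identification method from static-window approaches.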

  2. Alternative (G-16v2) Ground Motion Prediction Equations for the Central and Eastern North America

    NASA Astrophysics Data System (ADS)

    Graizer, V.

    2016-12-01

    We introduce a ground motion prediction equation model for Central and Eastern North America that represents an alternative, more physically justified approach to ground-motion attenuation modeling than the previous Graizer (2016) G-16 model. The new model has a bilinear slope of R^-1 within 70 km of the fault and a slope of R^-0.5 at larger distances, corresponding to the geometrical spreading of body and surface waves. The new (G-16v2) model is based in part on the NGA-East database for the horizontal peak ground acceleration and 5%-damped pseudo spectral acceleration (SA) and also on comparisons with Western U.S. data and ground motion simulations. Based on data, I estimated the average slope of the distance attenuation within 50-70 km of the fault to be -1.0 at most frequencies, supporting regular geometrical spreading of body waves. Multiple inversions are performed to estimate apparent (combined intrinsic and scattering) attenuation of SA amplitudes from the NGA-East database for incorporation into the model. These estimates demonstrate a difference between the seismological Q(f) and the above-mentioned attenuation factor, which I recommend calling QSA(f). I adjusted the previously developed site correction, which was based on multiple runs of representative VS30 (time-averaged shear-wave velocity in the upper 30 m) profiles through SHAKE-type equivalent-linear codes. Site amplifications are calculated relative to the hard-rock definition used in the nuclear industry (VS=2800 m/s). These improvements resulted in a modest reduction in standard deviation in the new G-16v2 relative to the G-16 model. The number of model predictors is limited to a few measurable parameters: moment magnitude M, closest distance to the fault rupture plane Rrup, VS30, and apparent attenuation factor QSA(f). The model is applicable to stable continental regions and covers the following range: 4.0≤M≤8.5, 0≤Rrup≤1000 km, 450≤VS30≤2800 m/s, and frequencies 0.1≤f≤100 Hz.
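The bilinear attenuation shape described above can be sketched as a piecewise geometric-spreading function, matched for continuity at the 70 km transition. The 1 km near-fault floor is my assumption for illustration, not part of the published G-16v2 model:

```python
import math

R_TRANSITION_KM = 70.0  # slope-change distance quoted in the abstract

def geometric_spreading(r_km):
    """Bilinear geometric-spreading factor: decays as R^-1 within 70 km
    of the fault and as R^-0.5 beyond, with the two branches matched for
    continuity at the transition. Distances below 1 km are clipped to
    avoid the near-fault singularity (an illustrative assumption)."""
    r = max(r_km, 1.0)
    if r <= R_TRANSITION_KM:
        return 1.0 / r
    # beyond the transition: value at 70 km times sqrt(70 / r)
    return (1.0 / R_TRANSITION_KM) * math.sqrt(R_TRANSITION_KM / r)
```

Quadrupling the distance beyond the transition halves the amplitude (R^-0.5), whereas within 70 km it would quarter it (R^-1).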

  3. Asthma exacerbation and proximity of residence to major roads: a population-based matched case-control study among the pediatric Medicaid population in Detroit, Michigan

    PubMed Central

    2011-01-01

    Background The relationship between asthma and traffic-related pollutants has received considerable attention. The use of individual-level exposure measures, such as residence location or proximity to emission sources, may avoid ecological biases. Method This study focused on the pediatric Medicaid population in Detroit, MI, a high-risk population for asthma-related events. A population-based matched case-control analysis was used to investigate associations between acute asthma outcomes and proximity of residence to major roads, including freeways. Asthma cases were identified as all children who made at least one asthma claim, including inpatient and emergency department visits, during the three-year study period, 2004-06. Individually matched controls were randomly selected from the rest of the Medicaid population on the basis of non-respiratory related illness. We used conditional logistic regression with distance as both categorical and continuous variables, and examined non-linear relationships with distance using polynomial splines. The conditional logistic regression models were then extended by considering multiple asthma states (based on the frequency of acute asthma outcomes) using polychotomous conditional logistic regression. Results Asthma events were associated with proximity to primary roads, with an odds ratio of 0.97 (95% CI: 0.94, 0.99) for a 1 km increase in distance using conditional logistic regression, implying that asthma events are less likely as the distance between the residence and a primary road increases. Similar relationships and effect sizes were found using polychotomous conditional logistic regression. Another plausible exposure metric, a reduced-form response surface model representing atmospheric dispersion of pollutants from roads, showed no association.
Conclusions There is moderately strong evidence of elevated risk of asthma close to major roads based on the results obtained in this population-based matched case-control study. PMID:21513554

  4. Ignition-and-Growth Modeling of NASA Standard Detonator and a Linear Shaped Charge

    NASA Technical Reports Server (NTRS)

    Oguz, Sirri

    2010-01-01

    The main objective of this study is to quantitatively investigate the ignition and shock sensitivity of NASA Standard Detonator (NSD) and the shock wave propagation of a linear shaped charge (LSC) after being shocked by NSD flyer plate. This combined explosive train was modeled as a coupled Arbitrary Lagrangian-Eulerian (ALE) model with LS-DYNA hydro code. An ignition-and-growth (I&G) reactive model based on unreacted and reacted Jones-Wilkins-Lee (JWL) equations of state was used to simulate the shock initiation. Various NSD-to-LSC stand-off distances were analyzed to calculate the shock initiation (or failure to initiate) and detonation wave propagation along the shaped charge. Simulation results were verified by experimental data which included VISAR tests for NSD flyer plate velocity measurement and an aluminum target severance test for LSC performance verification. Parameters used for the analysis were obtained from various published data or by using CHEETAH thermo-chemical code.

  5. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias.
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended.
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
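The grid-based estimator above computes density as D̂ = N̂/Â, with the effective area Â enlarged by a boundary strip related to MMDM. A minimal sketch, assuming a square grid and a strip of width equal to the full MMDM (a simplified reading of the adjustment; all numbers are invented):

```python
def density_estimate(n_hat, grid_side_m, mmdm_m):
    """Grid-based density D = N / A with the effective sampling area
    enlarged by a boundary strip of width MMDM on every side of the
    square grid (a simplified, illustrative version of the CAPTURE-style
    adjustment). Returns animals per hectare."""
    side = grid_side_m + 2.0 * mmdm_m  # strip added on each side
    area_ha = side * side / 10_000.0   # m^2 -> ha
    return n_hat / area_ha
```

With N̂ = 50 animals on a 100 m grid and MMDM = 50 m, the effective area is 4 ha and D̂ = 12.5 animals/ha; ignoring the strip would double the estimate, which is why the area adjustment matters.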

  6. Generalizing a categorization of students' interpretations of linear kinematics graphs

    NASA Astrophysics Data System (ADS)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  7. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    PubMed

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
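One common software linearization approach of the kind compared above is a breakpoint table with piecewise-linear interpolation. A sketch, assuming a hypothetical calibration table for an optical distance sensor whose ADC reading falls as distance grows (the table values are invented, not from the paper):

```python
# Hypothetical calibration breakpoints: (ADC counts, distance in cm),
# ordered by decreasing ADC value as is typical for IR triangulation sensors.
TABLE = [(600, 10.0), (400, 20.0), (300, 30.0), (210, 50.0), (150, 80.0)]

def adc_to_distance(adc):
    """Piecewise-linear interpolation over the calibration table, with
    readings outside the table clamped to the nearest endpoint."""
    if adc >= TABLE[0][0]:
        return TABLE[0][1]
    if adc <= TABLE[-1][0]:
        return TABLE[-1][1]
    for (a_hi, d_lo), (a_lo, d_hi) in zip(TABLE, TABLE[1:]):
        if a_lo <= adc <= a_hi:
            frac = (a_hi - adc) / (a_hi - a_lo)
            return d_lo + frac * (d_hi - d_lo)
```

On an integer microcontroller the same idea is usually done in fixed-point arithmetic; the table size trades memory against linearization accuracy, exactly the axes the paper compares.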

  8. [Quantitative structure-gas chromatographic retention relationship of polycyclic aromatic sulfur heterocycles using molecular electronegativity-distance vector].

    PubMed

    Li, Zhenghua; Cheng, Fansheng; Xia, Zhining

    2011-01-01

    The chemical structures of 114 polycyclic aromatic sulfur heterocycles (PASHs) have been studied by the molecular electronegativity-distance vector (MEDV). The linear relationships between the gas chromatographic retention index and the MEDV have been established by a multiple linear regression (MLR) model. The results of variable selection by stepwise multiple regression (SMR), and the predictive ability of the optimized model appraised by leave-one-out cross-validation, showed that the optimized model, with a correlation coefficient (R) of 0.994 7 and a cross-validated correlation coefficient (Rcv) of 0.994 0, possessed the best statistical quality. Furthermore, when the 114 PASH compounds were divided into calibration and test sets in the ratio of 2:1, the statistical analysis showed that the models possess almost equal statistical quality, very similar regression coefficients, and good robustness. The quantitative structure-retention relationship (QSRR) model established may provide a convenient and powerful method for predicting the gas chromatographic retention of PASHs.
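The leave-one-out cross-validation used to appraise such a model can be sketched for the single-predictor case: each observation is held out in turn, the regression is refit, and the held-out point is predicted. The accumulated squared error (the PRESS statistic) underlies cross-validated measures like the Rcv reported above. A minimal pure-Python sketch:

```python
def loocv_press(x, y):
    """Leave-one-out cross-validation for simple linear regression:
    refit without each point, predict it, and accumulate the squared
    prediction error (PRESS)."""
    press = 0.0
    for i in range(len(x)):
        xs = [v for j, v in enumerate(x) if j != i]
        ys = [v for j, v in enumerate(y) if j != i]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((v - mx) ** 2 for v in xs)
        b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / sxx
        a = my - b * mx
        press += (y[i] - (a + b * x[i])) ** 2
    return press
```

A cross-validated R² then follows as 1 - PRESS/SS_tot; for multivariate MEDV descriptors the refit step is a full MLR rather than this one-predictor version.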

  9. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
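The kernel requirement stated above, that the function produce a positive semidefinite matrix when applied to all pairs of subjects, can be illustrated with a Gaussian (RBF) kernel, a standard choice rather than anything specific to this review:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    Larger entries mean more similar subjects."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# a valid kernel must yield a symmetric positive semidefinite Gram matrix
X = np.array([[0.0], [1.0], [2.5]])
K = rbf_kernel(X, gamma=0.5)
min_eig = np.linalg.eigvalsh(K).min()
```

Positive semidefiniteness (all eigenvalues non-negative) is what licenses using K as a covariance-like object in the mixed models, score statistics, and support vector machines the review surveys.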

  10. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the offline learning is extremely time- and memory-consuming, since it requires calculating the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.
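The premise that Hamming distance between binary codes approximates Euclidean neighborhoods can be illustrated with sign-of-random-projection codes, a classic locality-sensitive-hashing baseline rather than the paper's DLLH method:

```python
import random

def random_projection_codes(points, n_bits, seed=0):
    """Sign-of-random-projection binary codes: each bit is the sign of a
    dot product with a random Gaussian hyperplane, so nearby points in
    Euclidean space tend to agree on more bits."""
    rng = random.Random(seed)
    dim = len(points[0])
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    return [
        tuple(1 if sum(w * x for w, x in zip(p, pt)) >= 0 else 0 for p in planes)
        for pt in points
    ]

def hamming(a, b):
    """Number of differing bits between two equal-length codes."""
    return sum(x != y for x, y in zip(a, b))
```

DLLH's point is that such Euclidean-preserving codes can misorder neighbors on a curved manifold, motivating codes that preserve the local linear structure directly in the Hamming space.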

  11. Development of the Model of Galactic Interstellar Emission for Standard Point-Source Analysis of Fermi Large Area Telescope Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acero, F.

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) aboard the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point and extended source studies rely on the modeling of this diffuse emission for accurate characterization. We describe here the development of the Galactic Interstellar Emission Model (GIEM) that is the standard adopted by the LAT Collaboration and is publicly available. The model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse Compton emission produced in the Galaxy. We also include in the GIEM large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ~4° of the Galactic Center.

  12. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  13. Analytical Debye-Huckel model for electrostatic potentials around dissolved DNA.

    PubMed Central

    Wagner, K; Keyes, E; Kephart, T W; Edwards, G

    1997-01-01

    We present an analytical, Green-function-based model for the electric potential of DNA in solution, treating the surrounding solvent and its condensed ions with the Debye-Hückel approximation. The partial charge of each atom is accounted for by modeling DNA as linear distributions of atoms on concentric cylindrical surfaces. The resultant leading term of the potential is that of a continuous shielded line charge, and the higher-order terms account for the helical structure. Within several angstroms of the surface there is sufficient information in the electric potential to distinguish features and symmetries of DNA. Plots of the potential and equipotential surfaces, dominated by the phosphate charges, reflect the structural differences between the A, B, and Z conformations and, to a smaller extent, the differences between base sequences. As the distance from the helices increases, the magnitude of the potential decreases; however, the bases and sugars account for a larger fraction of the double-helix potential with increasing distance. We have found that when the solvent is treated with the Debye-Hückel approximation, the potential decays more rapidly in every direction from the surface than it does in the concentric dielectric cylinder approximation. PMID:9199767

  14. Quantitative model of diffuse speckle contrast analysis for flow measurement.

    PubMed

    Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua

    2017-07-01

    Diffuse speckle contrast analysis (DSCA) is a noninvasive optical technique capable of monitoring deep tissue blood flow. However, a detailed study of the speckle contrast model for DSCA has yet to be presented. We deduced the theoretical relationship between speckle contrast and exposure time and further simplified it to a linear approximation model. The feasibility of this linear model was validated using liquid phantoms, which demonstrated that the slope of the linear approximation can rapidly determine the Brownian diffusion coefficient of turbid media at multiple distances using multiexposure speckle imaging. Furthermore, we theoretically quantified the influence of the optical properties on measurements of the Brownian diffusion coefficient, a consequence of the fact that the slope of the linear approximation equals the inverse of the speckle correlation time.

  15. Biomechanical Model for Computing Deformations for Whole-Body Image Registration: A Meshless Approach

    PubMed Central

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-01-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945
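The edge-based Hausdorff distance used for the registration evaluation above can be sketched for small 2-D point sets (a brute-force version; the edge-extraction step that produces the point sets is omitted):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest point in the other.
    Brute force, O(|A| * |B|); fine for small edge sets."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

A small Hausdorff distance between the warped source edges and the target edges indicates the predicted deformation field aligned the image features well, which is the criterion the paper reports.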

  16. ON THE DISTANCE OF THE MAGELLANIC CLOUDS USING CEPHEID NIR AND OPTICAL-NIR PERIOD-WESENHEIT RELATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inno, L.; Bono, G.; Buonanno, R.

    2013-02-10

    We present the largest near-infrared (NIR) data sets, JHKs, ever collected for classical Cepheids in the Magellanic Clouds (MCs). We selected fundamental (FU) and first-overtone (FO) pulsators, and found 4150 (2571 FU, 1579 FO) Cepheids for the Small Magellanic Cloud (SMC) and 3042 (1840 FU, 1202 FO) for the Large Magellanic Cloud (LMC). The current sample is 2-3 times larger than any sample used in previous investigations with NIR photometry. We also discuss optical VI photometry from OGLE-III. NIR and optical-NIR Period-Wesenheit (PW) relations are linear over the entire period range (0.0 < log P_FU ≤ 1.65) and their slopes are, within the intrinsic dispersions, common between the MCs. These are consistent with recent results from pulsation models and observations suggesting that the PW relations are minimally affected by the metal content. The new FU and FO PW relations were calibrated using a sample of Galactic Cepheids with distances based on trigonometric parallaxes and Cepheid pulsation models. By using FU Cepheids we found true distance moduli of 18.45 ± 0.02(random) ± 0.10(systematic) mag (LMC) and 18.93 ± 0.02(random) ± 0.10(systematic) mag (SMC). These estimates are the weighted mean over 10 PW relations, and the systematic errors account for uncertainties in the zero point and in the reddening law. We found similar distances using FO Cepheids: 18.60 ± 0.03(random) ± 0.10(systematic) mag (LMC) and 19.12 ± 0.03(random) ± 0.10(systematic) mag (SMC). These new MC distances lead to the relative distance Δμ = 0.48 ± 0.03 mag (FU, log P = 1) and Δμ = 0.52 ± 0.03 mag (FO, log P = 0.5), which agrees quite well with previous estimates based on robust distance indicators.
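The true distance moduli quoted above convert to physical distances via μ = 5 log10(d / 10 pc). A sketch of the inversion, which also recovers the relative distance Δμ from the two moduli:

```python
import math

def distance_modulus_to_kpc(mu):
    """Invert mu = 5*log10(d / 10 pc): return the distance in kiloparsecs."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1000.0

# fundamental-mode moduli from the abstract
d_lmc = distance_modulus_to_kpc(18.45)  # LMC, roughly 49 kpc
d_smc = distance_modulus_to_kpc(18.93)  # SMC, roughly 61 kpc
delta_mu = 18.93 - 18.45                # relative distance, 0.48 mag
```

Because the modulus is logarithmic, the relative distance Δμ fixes the distance ratio of the Clouds independently of the zero-point calibration.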

  17. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is a supervised classification technique concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators were used in place of the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and a real-data study were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with an existing robust LDR.

  18. Bending of Light in Modified Gravity at Large Distances

    NASA Technical Reports Server (NTRS)

    Sultana, Joseph; Kazanas, Demosthenes

    2012-01-01

    We discuss the bending of light in a recent model for gravity at large distances containing a Rindler-type acceleration proposed by Grumiller. We consider the static, spherically symmetric metric with cosmological constant and Rindler-like term 2ar presented in this model, and we use the procedure of Rindler and Ishak to obtain the bending angle of light in this metric. Earlier work on light bending in this model by Carloni, Grumiller, and Preis, using the method normally employed for asymptotically flat space-times, led to a conflicting result (caused by the Rindler-like term in the metric) of a bending angle that increases with the distance of closest approach r_0 of the light ray from the centrally concentrated, spherically symmetric matter distribution. However, when using the alternative approach for light bending in non-asymptotically flat space-times, we show that the linear Rindler-like term produces a small correction to the general relativistic result that is inversely proportional to r_0. This will in turn affect the bounds on the Rindler acceleration obtained earlier from light bending, and it casts doubt on the nature of the linear term 2ar in the metric.

  19. Magnetized strange quark model with Big Rip singularity in f(R, T) gravity

    NASA Astrophysics Data System (ADS)

    Sahoo, P. K.; Sahoo, Parbati; Bishi, Binaya K.; Aygün, S.

    2017-07-01

    A locally rotationally symmetric (LRS) Bianchi type-I magnetized strange quark matter (SQM) cosmological model is studied in f(R, T) gravity. The exact solutions of the field equations are derived with a linearly time-varying deceleration parameter, which is consistent with observational data (from SNIa, BAO, and CMB) of standard cosmology. It is observed that the model begins with a big bang and ends with a Big Rip. The transition of the deceleration parameter from a decelerating phase to an accelerating phase with respect to redshift obtained in our model fits the recent observational data of Farook et al. [Astrophys. J. 835, 26 (2017)]. The well-known Hubble parameter H(z) and distance modulus μ(z) are discussed as functions of redshift.

  20. A statistical model of brittle fracture by transgranular cleavage

    NASA Astrophysics Data System (ADS)

    Lin, Tsann; Evans, A. G.; Ritchie, R. O.

    A model for brittle fracture by transgranular cleavage cracking is presented, based on the application of weakest-link statistics to the critical microstructural fracture mechanisms. The model permits prediction of the macroscopic fracture toughness, K_Ic, in single-phase microstructures containing a known distribution of particles, and defines the critical distance from the crack tip at which the initial cracking event is most probable. The model is developed for unstable fracture ahead of a sharp crack considering both linear elastic and nonlinear elastic ("elastic/plastic") crack-tip stress fields. Predictions are evaluated by comparison with experimental results on the low-temperature flow and fracture behavior of a low-carbon mild steel with a simple ferrite/grain-boundary-carbide microstructure.
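The weakest-link statistics underlying such models can be illustrated with the standard Weibull form, in which a body fails as soon as its worst flaw fails; this is a generic sketch, not the paper's specific formulation:

```python
import math

def weibull_failure_probability(sigma, sigma0, m, volume_ratio=1.0):
    """Weakest-link (Weibull) failure probability for a stressed volume:
    P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m). The body survives only if
    every volume element survives, so survival probabilities multiply and
    the worst flaw controls fracture."""
    return 1.0 - math.exp(-volume_ratio * (sigma / sigma0) ** m)
```

The Weibull modulus m sets the scatter (large m means sharply defined strength), and the explicit volume dependence is the size effect characteristic of brittle fracture.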

  1. DEVELOPMENT OF THE MODEL OF GALACTIC INTERSTELLAR EMISSION FOR STANDARD POINT-SOURCE ANALYSIS OF FERMI LARGE AREA TELESCOPE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acero, F.; Ballet, J.; Ackermann, M.

    2016-04-01

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ∼4° of the Galactic Center.

  2. Development of the Model of Galactic Interstellar Emission for Standard Point-Source Analysis of Fermi Large Area Telescope Data

    DOE PAGES

    Acero, F.

    2016-04-22

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) aboard the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point and extended source studies rely on the modeling of this diffuse emission for accurate characterization. We describe here the development of the Galactic Interstellar Emission Model (GIEM) that is the standard adopted by the LAT Collaboration and is publicly available. The model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse Compton emission produced in the Galaxy. We also include in the GIEM large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ~4° of the Galactic Center.

  3. Development of the Model of Galactic Interstellar Emission for Standard Point-Source Analysis of Fermi Large Area Telescope Data

    NASA Technical Reports Server (NTRS)

    Acero, F.; Ackermann, M.; Ajello, M.; Albert, A.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Brandt, T. J.; hide

    2016-01-01

    Most of the celestial gamma rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within approximately 4° of the Galactic Center.
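The core of the GIEM described above is a linear combination of template maps (gas column density in annuli plus inverse-Compton emission). A minimal sketch of that idea as a least-squares template fit follows; the real GIEM uses a maximum-likelihood fit to Poisson counts, and all map names and coefficient values here are toy stand-ins:

```python
import numpy as np

def fit_template_coefficients(counts, templates):
    """Fit coefficients a_i so that sum_i a_i * T_i approximates the
    observed counts map (ordinary least squares; a simplified stand-in
    for the GIEM's maximum-likelihood template fit)."""
    A = np.stack([t.ravel() for t in templates], axis=1)  # (npix, ntemplates)
    coeffs, *_ = np.linalg.lstsq(A, counts.ravel(), rcond=None)
    return coeffs

# toy example: two "gas annulus" maps plus an "inverse-Compton" map
rng = np.random.default_rng(0)
t1, t2, ic = rng.random((3, 32, 32))
truth = (1.5, 0.7, 2.0)
counts = truth[0] * t1 + truth[1] * t2 + truth[2] * ic
print(np.round(fit_template_coefficients(counts, [t1, t2, ic]), 3))
```

Because the toy counts map is an exact linear combination of the templates, the least-squares fit recovers the input coefficients.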

  4. Spatial analysis and land use regression of VOCs and NO(2) from school-based urban air monitoring in Detroit/Dearborn, USA.

    PubMed

    Mukerjee, Shaibal; Smith, Luther A; Johnson, Mary M; Neas, Lucas M; Stallings, Casson A

    2009-08-01

    Passive ambient air sampling for nitrogen dioxide (NO(2)) and volatile organic compounds (VOCs) was conducted at 25 school and two compliance sites in Detroit and Dearborn, Michigan, USA during the summer of 2005. Geographic Information System (GIS) data were calculated at each of 116 schools. The 25 selected schools were monitored to assess and model intra-urban gradients of air pollutants and to evaluate the impact of traffic and urban emissions on pollutant levels. Schools were chosen to be statistically representative of urban land use variables such as distance to major roadways, traffic intensity around the schools, distance to nearest point sources, population density, and distance to nearest border crossing. Two approaches were used to investigate spatial variability. First, Kruskal-Wallis analyses and pairwise comparisons on data from the schools examined coarse spatial differences based on city section and distance from heavily trafficked roads. Second, spatial variation on a finer scale and as a response to multiple factors was evaluated through land use regression (LUR) models via multiple linear regression. For weeklong exposures, VOCs did not exhibit spatial variability by city section or distance from major roads; NO(2) was significantly elevated in a section dominated by traffic and industrial influence versus a residential section. Somewhat in contrast to coarse spatial analyses, LUR results revealed spatial gradients in NO(2) and selected VOCs across the area. The process used to select spatially representative sites for air sampling and the results of coarse and fine spatial variability of air pollutants provide insights that may guide future air quality studies in assessing intra-urban gradients.
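A land use regression of the kind described above is an ordinary multiple linear regression of a measured pollutant on GIS-derived covariates. A minimal sketch with synthetic data; the covariate names and coefficients are illustrative and not taken from the study:

```python
import numpy as np

# Hypothetical LUR: NO2 ~ intercept + traffic intensity + distance to
# major road + population density, fit by ordinary least squares.
rng = np.random.default_rng(1)
n = 25                                    # 25 monitored school sites
traffic   = rng.uniform(0, 1, n)
road_dist = rng.uniform(0, 1, n)
pop_dens  = rng.uniform(0, 1, n)
no2 = 10 + 8 * traffic - 3 * road_dist + 2 * pop_dens + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), traffic, road_dist, pop_dens])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
print(np.round(beta, 1))                  # ≈ [10, 8, -3, 2]
```

With low measurement noise, the fitted coefficients recover the generating values, which is how an LUR attributes intra-urban gradients to land use variables.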

  5. Definition of Linear Color Models in the RGB Vector Color Space to Detect Red Peaches in Orchard Images Taken under Natural Illumination

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates. PMID:22969369

  6. Definition of linear color models in the RGB vector color space to detect red peaches in orchard images taken under natural illumination.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.
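The classification step in these two records scores each pixel by its distance to a linear color model, i.e. a line in RGB space. A minimal sketch of that point-to-line distance; the particular line parameters and threshold below are hypothetical, not the paper's fitted models:

```python
import numpy as np

def distance_to_color_line(pixels, p0, direction):
    """Perpendicular distance from each RGB pixel to a linear color
    model: the line p0 + t*direction in the RGB vector space."""
    d = direction / np.linalg.norm(direction)
    v = pixels - p0                     # vectors from the line origin to pixels
    proj = (v @ d)[:, None] * d         # components of v along the line
    return np.linalg.norm(v - proj, axis=1)

# classify pixels as "red peach" if they lie near a hypothetical red line
red_line_p0  = np.array([60.0, 20.0, 20.0])
red_line_dir = np.array([3.0, 1.0, 1.0])     # illustrative direction
pixels = np.array([[120.0, 40.0, 40.0],      # lies exactly on the red line
                   [40.0, 120.0, 40.0]])     # green foliage, far from it
dists = distance_to_color_line(pixels, red_line_p0, red_line_dir)
print(dists < 30.0)                          # [ True False]
```

Comparing each pixel's distance against several such lines (one per color class) and taking the nearest yields the segmentation described in the abstract.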

  7. Contact Prediction for Beta and Alpha-Beta Proteins Using Integer Linear Optimization and its Impact on the First Principles 3D Structure Prediction Method ASTRO-FOLD

    PubMed Central

    Rajgaria, R.; Wei, Y.; Floudas, C. A.

    2010-01-01

    An integer linear optimization model is presented to predict residue contacts in β, α + β, and α/β proteins. The total energy of a protein is expressed as the sum of a Cα–Cα distance-dependent contact energy contribution and a hydrophobic contribution. The model selects contacts that assign the lowest energy to the protein structure while satisfying a set of constraints that are included to enforce certain physically observed topological information. A new method based on hydrophobicity is proposed to find the β-sheet alignments. These β-sheet alignments are used as constraints for contacts between residues of β-sheets. This model was tested on three independent protein test sets and CASP8 test proteins consisting of β, α + β, α/β proteins and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) was approximately 61%. The average true positive and false positive distances were also calculated for each of the test sets and they are 7.58 Å and 15.88 Å, respectively. Residue contact prediction can be directly used to facilitate the protein tertiary structure prediction. This proposed residue contact prediction model is incorporated into the first principles protein tertiary structure prediction approach, ASTRO-FOLD. The effectiveness of the contact prediction model was further demonstrated by the improvement in the quality of the protein structure ensemble generated using the predicted residue contacts for a test set of 10 proteins. PMID:20225257

  8. The median problems on linear multichromosomal genomes: graph representation and fast exact solutions.

    PubMed

    Xu, Andrew Wei

    2010-09-01

    In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allow us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case, and this difficulty has been underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu .

  9. An eclipsing-binary distance to the Large Magellanic Cloud accurate to two per cent.

    PubMed

    Pietrzyński, G; Graczyk, D; Gieren, W; Thompson, I B; Pilecki, B; Udalski, A; Soszyński, I; Kozłowski, S; Konorski, P; Suchomska, K; Bono, G; Moroni, P G Prada; Villanova, S; Nardetto, N; Bresolin, F; Kudritzki, R P; Storm, J; Gallenne, A; Smolec, R; Minniti, D; Kubiak, M; Szymański, M K; Poleski, R; Wyrzykowski, L; Ulaczyk, K; Pietrukowicz, P; Górski, M; Karczmarek, P

    2013-03-07

    In the era of precision cosmology, it is essential to determine the Hubble constant to an accuracy of three per cent or better. At present, its uncertainty is dominated by the uncertainty in the distance to the Large Magellanic Cloud (LMC), which, being our second-closest galaxy, serves as the best anchor point for the cosmic distance scale. Observations of eclipsing binaries offer a unique opportunity to measure stellar parameters and distances precisely and accurately. The eclipsing-binary method was previously applied to the LMC, but the accuracy of the distance results was lessened by the need to model the bright, early-type systems used in those studies. Here we report determinations of the distances to eight long-period, late-type eclipsing systems in the LMC, composed of cool, giant stars. For these systems, we can accurately measure both the linear and the angular sizes of their components and avoid the most important problems related to the hot, early-type systems. The LMC distance that we derive from these systems (49.97 ± 0.19 (statistical) ± 1.11 (systematic) kiloparsecs) is accurate to 2.2 per cent and provides a firm base for a 3-per-cent determination of the Hubble constant, with prospects for improvement to 2 per cent in the future.

  10. Environmentally Dependent Density-Distance Relationship of Dispersing Culex tarsalis in a Southern California Desert Region.

    PubMed

    Antonić, Oleg; Sudarić-Bogojević, Mirta; Lothrop, Hugh; Merdić, Enrih

    2014-09-01

    The direct inclusion of environmental factors into the empirical model that describes a density-distance relationship (DDR) is demonstrated on dispersal data obtained in a capture-mark-release-recapture experiment (CMRR) with Culex tarsalis conducted around the community of Mecca, CA. Empirical parameters of the standard (environmentally independent) DDR were expressed as linear functions of environmental variables: relative orientation (azimuthal deviation from north) of the release point (relative to each recapture point) and proportions of habitat types surrounding each recapture point. The resulting regression model (R(2) = 0.5373, after optimization on the best subset of linear terms) suggests that the spatial density of recaptured individuals after 12 days of a CMRR experiment significantly depended on 1) distance from the release point, 2) orientation of recapture points in relation to the release point (preferring dispersal toward the south, probably due to wind drift and the position of periodically flooded habitats suitable for the species' egg clutches), and 3) the habitat spectrum in the surroundings of recapture points (increasing and decreasing population density in desert and urban environments, respectively).

  11. Secular Extragalactic Parallax and Geometric Distances with Gaia Proper Motions

    NASA Astrophysics Data System (ADS)

    Paine, Jennie; Darling, Jeremiah K.

    2018-06-01

    The motion of the Solar System with respect to the cosmic microwave background (CMB) rest frame creates a well measured dipole in the CMB, which corresponds to a linear solar velocity of about 78 AU/yr. This motion causes relatively nearby extragalactic objects to appear to move compared to more distant objects, an effect that can be measured in the proper motions of nearby galaxies. An object at 1 Mpc and perpendicular to the CMB apex will exhibit a secular parallax, observed as a proper motion, of 78 µas/yr. The relatively large peculiar motions of galaxies make the detection of secular parallax challenging for individual objects. Instead, a statistical parallax measurement can be made for a sample of objects with proper motions, where the global parallax signal is modeled as an E-mode dipole that diminishes linearly with distance. We present preliminary results of applying this model to a sample of nearby galaxies with Gaia proper motions to detect the statistical secular parallax signal. The statistical measurement can be used to calibrate the canonical cosmological “distance ladder.”
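The numbers in this abstract follow directly from the definition of the parsec: a transverse baseline of v AU/yr subtends v/d arcsec/yr at a distance of d parsecs, so the 78 AU/yr solar velocity gives 78 µas/yr at 1 Mpc, diminishing linearly with distance. A short sketch of that arithmetic:

```python
# Secular parallax: 1 AU subtends 1 arcsec at 1 pc, so a transverse
# velocity of v AU/yr produces a proper motion of v/d arcsec/yr at
# distance d pc (for an object perpendicular to the solar apex).
def secular_parallax_uas_per_yr(v_au_per_yr, d_mpc):
    d_pc = d_mpc * 1.0e6
    arcsec_per_yr = v_au_per_yr / d_pc
    return arcsec_per_yr * 1.0e6        # arcsec -> microarcsec

print(secular_parallax_uas_per_yr(78.0, 1.0))    # 78 µas/yr at 1 Mpc
print(secular_parallax_uas_per_yr(78.0, 10.0))   # ten times smaller at 10 Mpc
```

The linear falloff with distance is exactly why the statistical dipole signal in the abstract is modeled as diminishing linearly with distance.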

  12. Validation of the gravity model in predicting the global spread of influenza.

    PubMed

    Li, Xinhai; Tian, Huidong; Lai, Dejian; Zhang, Zhibin

    2011-08-01

    The gravity model is often used in predicting the spread of influenza. We use the data of influenza A (H1N1) to check the model's performance and validation, in order to determine the scope of its application. In this article, we proposed to model the pattern of global spread of the virus via a few important socio-economic indicators. We applied the epidemic gravity model for modelling the virus spread globally through the estimation of parameters of a generalized linear model. We compiled the daily confirmed cases of influenza A (H1N1) in each country as reported to the WHO and each state in the USA, and established the model to describe the relationship between the confirmed cases and socio-economic factors such as population size, per capita gross domestic production (GDP), and the distance between the countries/states and the country where the first confirmed case was reported (i.e., Mexico). The covariates we selected for the model were all statistically significantly associated with the global spread of influenza A (H1N1). However, within the USA, the distance and GDP were not significantly associated with the number of confirmed cases. The combination of the gravity model and generalized linear model provided a quick assessment of pandemic spread globally. The gravity model is valid if the spread period is long enough for estimating the model parameters. Meanwhile, the distance between donor and recipient communities has a good gradient. In addition, the spread should be at an early stage if a single source is taken into account.
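A gravity model of the kind used above posits expected cases proportional to pop^a · GDP^b · distance^-c; taking logs turns this into a linear model whose exponents can be estimated by regression (the authors use a generalized linear model, for which this least-squares fit on logs is a simplified stand-in). The exponents and data below are synthetic:

```python
import numpy as np

# Synthetic gravity-model data: cases ~ pop^a * gdp^b * dist^-c
rng = np.random.default_rng(2)
n = 200
pop, gdp, dist = rng.lognormal(size=(3, n))
a, b, c = 0.9, 0.4, 1.2                        # illustrative exponents
cases = pop**a * gdp**b * dist**-c * np.exp(rng.normal(0, 0.05, n))

# log-linear fit recovers the exponents
X = np.column_stack([np.ones(n), np.log(pop), np.log(gdp), np.log(dist)])
beta, *_ = np.linalg.lstsq(X, np.log(cases), rcond=None)
print(np.round(beta[1:], 2))                   # ≈ [0.9, 0.4, -1.2]
```

The fitted signs match the gravity intuition: more population and GDP attract more cases, greater distance from the source suppresses them.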

  13. A new predictive model for the bioconcentration factors of polychlorinated biphenyls (PCBs) based on the molecular electronegativity distance vector (MEDV).

    PubMed

    Qin, Li-Tang; Liu, Shu-Shen; Liu, Hai-Ling; Ge, Hui-Lin

    2008-02-01

    Polychlorinated biphenyls (PCBs) are some of the most prevalent pollutants in the total environment and are of increasing concern as a group of ubiquitous potential persistent organic pollutants. Using the variable selection and modeling based on prediction (VSMP), the molecular electronegativity distance vector (MEDV) derived directly from the molecular topological structures was employed to develop a linear model (MI) between the bioconcentration factors (BCF) and two MEDV descriptors of 58 PCBs. The MI model showed a good estimation ability with a correlation coefficient (r) of 0.9605 and a high stability with a leave-one-out cross-validation correlation coefficient (q) of 0.9564. The MEDV-based model (MI) is easier to use than the splinoid poset method reported by Ivanciuc et al. [Ivanciuc, T., Ivanciuc, O., Klein, D.J., 2006. Modeling the bioconcentration factors and bioaccumulation factors of polychlorinated biphenyls with posetic quantitative super-structure/activity relationships (QSSAR). Mol. Divers. 10, 133-145] and gives better statistics than the molecular connectivity index (MCI)-based model developed by Hu et al. [Hu, H.Y., Xu, F.L., Li, B.G., Cao, J., Dawson, R., Tao, S., 2005. Prediction of the bioconcentration factor of PCBs in fish using the molecular connectivity index and fragment constant models. Water Environ. Res. 77, 87-97]. The main structural factors influencing the BCF of PCBs are the substructures expressed by the two atomic groups >C= and -CH=. The 58 PCBs were divided into an "odd set" and an "even set" in order to assess the predictive potential of MI for external samples. It was shown that the three models, MI, MO for the "odd set", and ME for the "even set", can be used to predict the BCF of the remaining 152 PCBs for which experimental BCFs are not available.

  14. Large Local Void, Supernovae Type Ia, and the Kinematic Sunyaev-Zel'dovich Effect in a Lambda-LTB Model

    NASA Astrophysics Data System (ADS)

    Hoscheit, Benjamin L.; Barger, Amy J.

    2017-06-01

    There is substantial and growing observational evidence from the normalized luminosity density in the near-infrared that the local universe may be under-dense on scales of several hundred Megaparsecs. Our objective is to test whether a void described by a parameterization of the observational data is compatible with the latest data on supernovae type Ia and the linear kinematic Sunyaev-Zel'dovich (kSZ) effect. Our study is based on the large local void radial profile observed by Keenan, Barger, and Cowie (KBC) and a theoretical void description based on the Lemaître-Tolman-Bondi model with a nonzero cosmological constant (Lambda-LTB). We find consistency with the measured luminosity distance-redshift relation on radial scales relevant to the KBC void through a comparison with low-redshift supernovae type Ia from the `Supercal' dataset over the redshift range 0.01 < z < 0.10. We also find that previous linear kSZ constraints, as well as new ones from the South Pole Telescope, are fully compatible with the existence of the KBC void.

  15. Normal growth and development of the lips: a 3-dimensional study from 6 years to adulthood using a geometric model

    PubMed Central

    FERRARIO, VIRGILIO F.; SFORZA, CHIARELLA; SCHMITZ, JOHANNES H.; CIUSA, VERONICA; COLOMBO, ANNA

    2000-01-01

    A 3-dimensional computerised system with landmark representation of the soft-tissue facial surface allows noninvasive and fast quantitative study of facial growth. The aims of the present investigation were (1) to provide reference data for selected dimensions of lips (linear distances and ratios, vermilion area, volume); (2) to quantify the relevant growth changes; and (3) to evaluate sex differences in growth patterns. The 3-dimensional coordinates of 6 soft-tissue landmarks on the lips were obtained by an optoelectronic instrument in a mixed longitudinal and cross-sectional study (2023 examinations in 1348 healthy subjects between 6 y of age and young adulthood). From the landmarks, several linear distances (mouth width, total vermilion height, total lip height, upper lip height), the vermilion height-to-mouth width ratio, some areas (vermilion of the upper lip, vermilion of the lower lip, total vermilion) and volumes (upper lip volume, lower lip volume, total lip volume) were calculated and averaged for age and sex. Male values were compared with female values by means of Student's t test. Within each age group all lip dimensions (distances, areas, volumes) were significantly larger in boys than in girls (P < 0.05), with some exceptions in the first age groups and coinciding with the earlier female growth spurt, whereas the vermilion height-to-mouth width ratio did not show a corresponding sexual dimorphism. Linear distances in girls had almost reached adult dimensions in the 13–14 y age group, while in boys a large increase was still to occur. The attainment of adult dimensions was faster in the upper than in the lower lip, especially in girls. The method used in the present investigation allowed the noninvasive evaluation of a large sample of nonpatient subjects, leading to the definition of 3-dimensional normative data. 
Data collected in the present study could represent a data base for the quantitative description of human lip morphology from childhood to young adulthood. PMID:10853963
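The "linear distances" reported in this record are plain Euclidean distances between pairs of 3-D landmark coordinates. A minimal sketch with hypothetical landmark names and coordinates (not the study's data):

```python
import numpy as np

# Hypothetical soft-tissue landmarks (x, y, z in mm); names follow common
# anthropometric usage but the coordinates are invented for illustration.
landmarks = {
    "cheilion_r":  np.array([-25.0, 0.0, 0.0]),   # right mouth corner
    "cheilion_l":  np.array([ 25.0, 0.0, 0.0]),   # left mouth corner
    "stomion":     np.array([  0.0, 0.0, 0.0]),   # midline lip contact point
    "labiale_sup": np.array([  0.0, 8.0, 2.0]),   # upper lip midpoint
}

def linear_distance(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

mouth_width = linear_distance("cheilion_r", "cheilion_l")
print(mouth_width)   # 50.0 mm for these invented coordinates
```

Ratios such as the vermilion height-to-mouth width ratio in the abstract are then simple quotients of such distances.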

  16. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model that relates the distance from an inspection point to the number of samples contained inside a ball of that radius, we estimate the goodness of fit. Then, by using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
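The underlying relationship is that the number of neighbours within radius r of a point grows like r^d on a d-dimensional manifold, so the slope of log N(r) against log r estimates the local dimension. A least-squares sketch of that idea (a simplified stand-in for the paper's GLM/maximum-likelihood fit):

```python
import numpy as np

def local_dimension(points, center, radii):
    """Estimate intrinsic dimension at `center` from neighbour counts:
    N(r) ~ r^d, so the slope of log N versus log r estimates d."""
    dists = np.linalg.norm(points - center, axis=1)
    counts = np.array([(dists <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

# points drawn uniformly from a 3-D cube: estimated dimension ≈ 3
rng = np.random.default_rng(3)
cube = rng.uniform(-1, 1, size=(200_000, 3))
radii = np.linspace(0.2, 0.5, 8)
print(round(local_dimension(cube, np.zeros(3), radii), 1))
```

For data lying on a lower-dimensional manifold embedded in a higher-dimensional space, the same count-growth fit recovers the manifold dimension rather than the ambient one.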

  17. Visual Analytics for Exploration of a High-Dimensional Structure

    DTIC Science & Technology

    2013-04-01

    [Recovered fragments from the report's list of figures: Figure 3, comparison of Euclidean vs. geodesic distance; Figure 4, WEKA GUI for data mining high-dimensional data using FRFS-ACO.] Classical multidimensional scaling (CMDS) is a linear dimensionality reduction (LDR); an LDR is based on a linear combination of the feature data and keeps similar data points close together.

  18. Validation of the Gravity Model in Predicting the Global Spread of Influenza

    PubMed Central

    Li, Xinhai; Tian, Huidong; Lai, Dejian; Zhang, Zhibin

    2011-01-01

    The gravity model is often used in predicting the spread of influenza. We use the data of influenza A (H1N1) to check the model’s performance and validation, in order to determine the scope of its application. In this article, we proposed to model the pattern of global spread of the virus via a few important socio-economic indicators. We applied the epidemic gravity model for modelling the virus spread globally through the estimation of parameters of a generalized linear model. We compiled the daily confirmed cases of influenza A (H1N1) in each country as reported to the WHO and each state in the USA, and established the model to describe the relationship between the confirmed cases and socio-economic factors such as population size, per capita gross domestic production (GDP), and the distance between the countries/states and the country where the first confirmed case was reported (i.e., Mexico). The covariates we selected for the model were all statistically significantly associated with the global spread of influenza A (H1N1). However, within the USA, the distance and GDP were not significantly associated with the number of confirmed cases. The combination of the gravity model and generalized linear model provided a quick assessment of pandemic spread globally. The gravity model is valid if the spread period is long enough for estimating the model parameters. Meanwhile, the distance between donor and recipient communities has a good gradient. In addition, the spread should be at an early stage if a single source is taken into account. PMID:21909295

  19. Modelization of highly nonlinear waves in coastal regions

    NASA Astrophysics Data System (ADS)

    Gouin, Maïté; Ducrozet, Guillaume; Ferrant, Pierre

    2015-04-01

    The proposed work deals with the development of a highly non-linear model for water wave propagation in coastal regions. The accurate modelization of surface gravity waves is of major interest in ocean engineering, especially in the field of marine renewable energy. These marine structures are intended to be settled in coastal regions where the effect of variable bathymetry may be significant on local wave conditions. This study presents a numerical model for wave propagation over complex bathymetry. It is based on the High-Order Spectral (HOS) method, initially limited to the propagation of non-linear wave fields over a flat bottom. Such a model has been developed and validated at the LHEEA Lab. (Ecole Centrale Nantes) over the past few years and the current developments will enlarge its application range. This new numerical model will keep the interesting numerical properties of the original pseudo-spectral approach (convergence, efficiency with the use of FFTs, …) and enable the propagation of highly non-linear wave fields over long times and large distances. Different validations will be provided in addition to the presentation of the method. At first, Bragg reflection will be studied with the proposed approach. If the Bragg condition is satisfied, the reflected wave generated by a sinusoidal bottom patch should be amplified as a result of resonant quadratic interactions between incident wave and bottom. Comparisons will be provided with experiments and reference solutions. Then, the method will be used to consider the transformation of a non-linear monochromatic wave as it propagates up and over a submerged bar. As the wave travels up the front slope of the bar, it steepens and high harmonics are generated due to non-linear interactions. Comparisons with experimental data will be provided. The different test cases will assess the accuracy and efficiency of the proposed method.

  20. Null tests of the standard model using the linear model formalism

    NASA Astrophysics Data System (ADS)

    Marra, Valerio; Sapone, Domenico

    2018-04-01

    We test both the Friedmann-Lemaître-Robertson-Walker geometry and ΛCDM cosmology in a model-independent way by reconstructing the Hubble function H(z), the comoving distance D(z), and the growth of structure fσ8(z) using the most recent data available. We use the linear model formalism in order to optimally reconstruct the above cosmological functions, together with their derivatives and integrals. We then evaluate four of the null tests available in the literature that probe both background and perturbation assumptions. For all four tests, we find agreement, within the errors, with the standard cosmological model.

  1. The Two Modes of Distance Education.

    ERIC Educational Resources Information Center

    Keegan, Desmond

    1998-01-01

    Discusses two models of distance-education, group-based versus individual-based. Highlights include group-based distance education for full-time and part-time students; individual-based distance education with pre-prepared materials and without pre-prepared materials; and distance education management and research. (LRW)

  2. On Computing Breakpoint Distances for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Moret, Bernard M E

    2017-06-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of its higher level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided last year a solution for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer-linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignment, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
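For genomes without duplicate genes, the breakpoint distance mentioned above is straightforward: count the adjacencies of one genome (including the telomere ends) that are not conserved, in either order, in the other. A minimal sketch for unsigned linear genomes; the duplicate-gene case requires the matching formulations the article solves:

```python
def breakpoint_distance(g1, g2):
    """Breakpoint distance between two unsigned linear genomes over the
    same gene set with no duplicates: the number of adjacencies of g1
    (counting the telomere ends, marked "$") that do not appear,
    in either order, in g2."""
    def adjacencies(genome):
        ext = ["$"] + list(genome) + ["$"]          # pad with telomeres
        return {frozenset(pair) for pair in zip(ext, ext[1:])}
    return len(adjacencies(g1) - adjacencies(g2))

# one reversal of the block (2, 3) creates exactly two breakpoints
print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # 2
```

The model-free property noted in the abstract is visible here: no rearrangement operation is assumed, only which adjacencies are shared.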

  3. Theory of advection-driven long range biotic transport

    USDA-ARS?s Scientific Manuscript database

    We propose a simple mechanistic model to examine the effects of advective flow on the spread of fungal diseases dispersed by wind-blown spores. The model is defined by a set of two coupled non-linear partial differential equations for spore densities. One equation describes the long-distance advectiv...

  4. An LFMCW detector with new structure and FRFT based differential distance estimation method.

    PubMed

    Yue, Kai; Hao, Xinhong; Li, Ping

    2016-01-01

    This paper describes a linear frequency modulated continuous wave (LFMCW) detector designed for a collision avoidance radar. The detector estimates the distance between itself and pedestrians or vehicles, thereby helping to reduce the likelihood of traffic accidents. The detector consists of a transceiver and a signal processor. A novel structure based on the intermediate frequency signal (IFS) is designed for the transceiver, in contrast to the traditional LFMCW transceiver built around the beat frequency signal (BFS). In the signal processor, a novel fractional Fourier transform (FRFT) based differential distance estimation (DDE) method is used to detect the distance. The new IFS-based structure helps the FRFT-based DDE method reduce its computational complexity, because it does not need to scan for the optimal FRFT order. Low computational complexity ensures the feasibility of practical applications. Simulations are carried out and the results demonstrate the efficiency of the proposed detector.
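For context, the traditional BFS-based approach that this detector departs from recovers range from the beat frequency via the standard LFMCW relation R = c·f_b·T / (2·B). A hedged sketch with illustrative parameter values (not from the paper):

```python
def lfmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s, c=3.0e8):
    """Classic LFMCW range estimate from the beat frequency:
    R = c * f_b * T / (2 * B). Illustrative only; the paper's detector
    instead applies an FRFT-based differential estimate to the IF signal."""
    return c * f_beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# e.g. a 150 MHz sweep over 1 ms with a 10 kHz beat frequency -> 10 m
print(lfmcw_range(10e3, 150e6, 1e-3))  # 10.0
```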

  5. Propagation and stability characteristics of a 500-m-long laser-based fiducial line for high-precision alignment of long-distance linear accelerators.

    PubMed

    Suwada, Tsuyoshi; Satoh, Masanori; Telada, Souichi; Minoshima, Kaoru

    2013-09-01

    A laser-based alignment system with a He-Ne laser has been newly developed in order to precisely align accelerator units at the KEKB injector linac. The laser beam was first implemented as a 500-m-long fiducial straight line for alignment measurements. We experimentally investigated the propagation and stability characteristics of the laser beam passing through laser pipes in vacuum. The pointing stability at the last fiducial point was successfully obtained with the transverse displacements of ±40 μm level in one standard deviation by applying a feedback control. This pointing stability corresponds to an angle of ±0.08 μrad. This report contains a detailed description of the experimental investigation for the propagation and stability characteristics of the laser beam in the laser-based alignment system for long-distance linear accelerators.
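The reported angle follows from the small-angle relation between transverse displacement and baseline length; a quick arithmetic check of the quoted figures:

```python
displacement_m = 40e-6   # ±40 μm transverse displacement (one standard deviation)
baseline_m = 500.0       # length of the fiducial line
angle_urad = displacement_m / baseline_m * 1e6  # small-angle approximation
print(angle_urad)  # ≈ 0.08 μrad, matching the reported pointing stability
```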

  6. Propagation and stability characteristics of a 500-m-long laser-based fiducial line for high-precision alignment of long-distance linear accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suwada, Tsuyoshi; Satoh, Masanori; Telada, Souichi

    2013-09-15

    A laser-based alignment system with a He-Ne laser has been newly developed in order to precisely align accelerator units at the KEKB injector linac. The laser beam was first implemented as a 500-m-long fiducial straight line for alignment measurements. We experimentally investigated the propagation and stability characteristics of the laser beam passing through laser pipes in vacuum. The pointing stability at the last fiducial point was successfully obtained with the transverse displacements of ±40 μm level in one standard deviation by applying a feedback control. This pointing stability corresponds to an angle of ±0.08 μrad. This report contains a detailed description of the experimental investigation for the propagation and stability characteristics of the laser beam in the laser-based alignment system for long-distance linear accelerators.

  7. Prevalence, Correlates, and Impact of Uncorrected Presbyopia in a Multiethnic Asian Population.

    PubMed

    Kidd Man, Ryan Eyn; Fenwick, Eva Katie; Sabanayagam, Charumathi; Li, Ling-Jun; Gupta, Preeti; Tham, Yih-Chung; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse Luc

    2016-08-01

    To examine the prevalence, correlates, and impact of uncorrected presbyopia on vision-specific functioning (VF) in a multiethnic Asian population. Population-based cross-sectional study. We included 7890 presbyopic subjects (3909 female; age range, 40-86 years) of Malay, Indian, and Chinese ethnicities from the Singapore Epidemiology of Eye Disease study. Presbyopia was classified as corrected and uncorrected based on self-reported near correction use. VF was assessed with the VF-11 questionnaire validated using Rasch analysis. Multivariable logistic and linear regression models were used to investigate the associations of sociodemographic and clinical parameters with uncorrected presbyopia, and its impact on VF, respectively. As myopia may mitigate the impact of noncorrection, we performed a subgroup analysis on myopic subjects only (n = 2742). In total, 2678 of 7890 subjects (33.9%) had uncorrected presbyopia. In multivariable models, younger age, male sex, Malay and Indian ethnicities, presenting distance visual impairment (any eye), and lower education and income levels were associated with higher odds of uncorrected presbyopia (all P < .05). Compared with corrected presbyopia, noncorrection was associated with worse overall VF and reduced ability to perform individual near and distance vision-specific tasks even after adjusting for distance VA and other confounders (all P < .05). Results were very similar for myopic individuals. One-third of presbyopic Singaporean adults did not have near correction. Given its detrimental impact on both near and distance VF, public health strategies to increase uptake of presbyopic correction in younger individuals, male individuals, and those of Malay and Indian ethnicities are needed. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  9. An open-population hierarchical distance sampling model.

    PubMed

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  10. Statistics and Machine Learning based Outlier Detection Techniques for Exoplanets

    NASA Astrophysics Data System (ADS)

    Goel, Amit; Montgomery, Michele

    2015-08-01

    Architectures of planetary systems are observable snapshots in time that can indicate formation and dynamic evolution of planets. The observable key parameters that we consider are planetary mass and orbital period. If planet masses are significantly less than their host star masses, then Keplerian motion is described by P^2 = a^3, where P is the orbital period in units of years and a is the semi-major axis in units of Astronomical Units (AU). Keplerian motion works on small scales such as the size of the Solar System but not on large scales such as the size of the Milky Way Galaxy. In this work, for confirmed exoplanets of known stellar mass, planetary mass, orbital period, and stellar age, we analyze Keplerian motion of systems based on stellar age to seek if Keplerian motion has an age dependency and to identify outliers. For detecting outliers, we apply several techniques based on statistical and machine learning methods such as probabilistic, linear, and proximity-based models. In probabilistic and statistical models of outliers, the parameters of closed-form probability distributions are learned in order to detect the outliers. Linear models use regression-analysis-based techniques for detecting outliers. Proximity-based models use distance-based algorithms such as k-nearest neighbour, clustering algorithms such as k-means, or density-based algorithms such as kernel density estimation. In this work, we use unsupervised learning algorithms with only the proximity-based models. In addition, we explore the relative strengths and weaknesses of the various techniques by validating the outliers. The validation criterion for the outliers is that the ratio of planetary mass to stellar mass is less than 0.001. In this work, we present our statistical analysis of the outliers thus detected.
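As an illustration of the proximity-based family mentioned above (a generic sketch, not the authors' pipeline), a k-nearest-neighbour distance score flags the point whose k-th neighbour is farthest:

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Proximity-based outlier score: distance to the k-th nearest
    neighbour (larger = more outlying). Brute force, fine for small n."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)      # exclude self-distance
    return np.sort(d, axis=1)[:, k - 1]

pts = [[0, 0], [0, 1], [1, 0], [1, 1], [10, 10]]
scores = knn_outlier_scores(pts, k=2)
print(scores.argmax())  # 4 (the distant point)
```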

  11. Understanding pyrotechnic shock dynamics and response attenuation over distance

    NASA Astrophysics Data System (ADS)

    Ott, Richard J.

    Pyrotechnic shock events used during stage separation on rocket vehicles produce high-amplitude, short-duration structural response that can lead to malfunction or degradation of electronic components, cracks and fractures in brittle materials, and local plastic deformation, and can reduce the fatigue life of materials. These transient loads propagate as waves through the structural media, losing energy as they travel outward from the source. This work assessed available test data in an effort to better understand attenuation characteristics associated with wave propagation and attempted to update a historical standard defined by the Martin Marietta Corporation in the late 1960s using data acquisition systems that are outdated by today's standards. Two data sets were available for consideration. The first data set came from a test of a flight-like cylinder from NASA's Ares I-X program, and the second from a test conducted with a flat plate. Both data sets suggested that the historical standard was not a conservative estimate of shock attenuation with distance; however, the variation in the test data did not support recommending an update to the standard. Beyond considering attenuation with distance, an effort was made to model the flat plate configuration using finite element analysis. The available flat plate data consisted of three groups of tests, each with a unique charge density linear shape charge (LSC) used to cut an aluminum plate. The model was tuned to a representative test using the lowest charge density LSC as input. The correlated model was then used to predict the other two cases by linearly scaling the input load based on the relative difference in charge density. The resulting model predictions were then compared with available empirical data. Aside from differences in amplitude due to nonlinearities associated with scaling the charge density of the LSC, the model predictions matched the available test data reasonably well.
    Finally, modeling best practices were recommended when using industry standard software to predict shock response on structures. As part of the best practices documented, a frequency-dependent damping schedule is provided for use in model development when no data are available.

  12. Landslide susceptibility modeling in a landslide prone area in Mazandarn Province, north of Iran: a comparison between GLM, GAM, MARS, and M-AHP methods

    NASA Astrophysics Data System (ADS)

    Pourghasemi, Hamid Reza; Rossi, Mauro

    2017-10-01

    Landslides are identified as one of the most important natural hazards in many areas throughout the world. The essential purpose of this study is to compare general linear model (GLM), general additive model (GAM), multivariate adaptive regression spline (MARS), and modified analytical hierarchy process (M-AHP) models and to assess their performance for landslide susceptibility modeling in the west of Mazandaran Province, Iran. First, landslides were identified by interpreting aerial photographs and through extensive field work. In total, 153 landslides were identified in the study area. Among these, 105 landslides were randomly selected for model training and the remaining 48 (30 %) cases were used for model validation. Afterward, based on a deep literature review of 220 scientific papers published between 2005 and 2012, eleven conditioning factors including lithology, land use, distance from rivers, distance from roads, distance from faults, slope angle, slope aspect, altitude, topographic wetness index (TWI), plan curvature, and profile curvature were selected. The Certainty Factor (CF) model was used for managing uncertainty in rule-based systems and for evaluating the correlation between the dependent variable (landslides) and the independent variables. Finally, the landslide susceptibility zonation was produced using the GLM, GAM, MARS, and M-AHP models. For evaluation of the models, the area under the curve (AUC) method was used, and both success and prediction rate curves were calculated. The evaluation of the GLM, GAM, and MARS models yielded AUC values of 90.50, 88.90, and 82.10 % for the training data and 77.52, 70.49, and 78.17 % for the validation data, respectively. Furthermore, the landslide susceptibility map produced using M-AHP showed an AUC of 77.82 % for the training data and 82.77 % for the validation data.
    Based on the overall assessments, the proposed approaches showed reasonable results for landslide susceptibility mapping in the study area. Moreover, the results showed that the M-AHP model performed slightly better in prediction than the MARS, GLM, and GAM models. These algorithms can be very useful for landslide susceptibility and hazard mapping and for land-use planning at the regional scale.
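The AUC used for evaluation above can be computed from scores and labels via the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen positive outscores a randomly chosen negative. A minimal, library-free sketch with illustrative data (not from the study):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity.
    Ties counted as half a win. O(n^2) for clarity, not speed."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.75
```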

  13. Keep Your Distance! Using Second-Order Ordinary Differential Equations to Model Traffic Flow

    ERIC Educational Resources Information Center

    McCartney, Mark

    2004-01-01

    A simple mathematical model for how vehicles follow each other along a stretch of road is presented. The resulting linear second-order differential equation with constant coefficients is solved and interpreted. The model can be used as an application of solution techniques taught at first-year undergraduate level and as a motivator to encourage…
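A numerical sketch of such a follow-the-leader model (illustrative parameters, not from the article): the follower's acceleration is proportional to the speed difference with the leader, a linear second-order system that can be integrated with Euler steps:

```python
# Follow-the-leader model: a(t) = lam * (v_lead - v_follow).
# The follower's speed relaxes exponentially toward the leader's.
def simulate(lam=0.5, dt=0.01, t_end=20.0):
    x_l, v_l = 30.0, 15.0       # leader: position (m), speed (m/s)
    x_f, v_f = 0.0, 25.0        # follower starts faster than the leader
    t = 0.0
    while t < t_end:
        a_f = lam * (v_l - v_f)  # linear second-order dynamics
        v_f += a_f * dt
        x_f += v_f * dt
        x_l += v_l * dt
        t += dt
    return x_l - x_f, v_f        # final gap and follower speed

gap, v_f = simulate()
print(round(v_f, 2))  # 15.0 -- the follower has matched the leader's speed
```

The closed-form solution v_f(t) = v_l + (v_f(0) - v_l)·e^(−λt) agrees with the simulated decay, which is the kind of interpretation exercise the article targets.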

  14. Tracking the dynamics of divergent thinking via semantic distance: Analytic methods and theoretical implications.

    PubMed

    Hass, Richard W

    2017-02-01

    Divergent thinking has often been used as a proxy measure of creative thinking, but this practice lacks a foundation in modern cognitive psychological theory. This article addresses several issues with the classic divergent-thinking methodology and presents a new theoretical and methodological framework for cognitive divergent-thinking studies. A secondary analysis of a large dataset of divergent-thinking responses is presented. Latent semantic analysis was used to examine the potential changes in semantic distance between responses and the concept represented by the divergent-thinking prompt across successive response iterations. The results of linear growth modeling showed that although there is some linear increase in semantic distance across response iterations, participants high in fluid intelligence tended to give more distant initial responses than those with lower fluid intelligence. Additional analyses showed that the semantic distance of responses significantly predicted the average creativity rating given to the response, with significant variation in average levels of creativity across participants. Finally, semantic distance does not seem to be related to participants' choices of their own most creative responses. Implications for cognitive theories of creativity are discussed, along with the limitations of the methodology and directions for future research.

  15. Partial Correlation-Based Retinotopically Organized Resting-State Functional Connectivity Within and Between Areas of the Visual Cortex Reflects More Than Cortical Distance

    PubMed Central

    Dawson, Debra Ann; Lam, Jack; Lewis, Lindsay B.; Carbonell, Felix; Mendola, Janine D.

    2016-01-01

    Numerous studies have demonstrated functional magnetic resonance imaging (fMRI)-based resting-state functional connectivity (RSFC) between cortical areas. Recent evidence suggests that synchronous fluctuations in blood oxygenation level-dependent fMRI reflect functional organization at a scale finer than that of visual areas. In this study, we investigated whether RSFCs within and between lower visual areas are retinotopically organized and whether retinotopically organized RSFC merely reflects cortical distance. Subjects underwent retinotopic mapping and separately resting-state fMRI. Visual areas V1, V2, and V3, were subdivided into regions of interest (ROIs) according to quadrants and visual field eccentricity. Functional connectivity (FC) was computed based on Pearson's linear correlation (correlation), and Pearson's linear partial correlation (correlation between two time courses after the time courses from all other regions in the network are regressed out). Within a quadrant, within visual areas, all correlation and nearly all partial correlation FC measures showed statistical significance. Consistently in V1, V2, and to a lesser extent in V3, correlation decreased with increasing eccentricity separation. Consistent with previously reported monkey anatomical connectivity, correlation/partial correlation values between regions from adjacent areas (V1-V2 and V2-V3) were higher than those between nonadjacent areas (V1-V3). Within a quadrant, partial correlation showed consistent significance between regions from two different areas with the same or adjacent eccentricities. Pairs of ROIs with similar eccentricity showed higher correlation/partial correlation than pairs distant in eccentricity. Between dorsal and ventral quadrants, partial correlation between common and adjacent eccentricity regions within a visual area showed statistical significance; this extended to more distant eccentricity regions in V1. 
Within and between quadrants, correlation decreased approximately linearly with increasing distances separating the tested ROIs. Partial correlation showed a more complex dependence on cortical distance: it decreased exponentially with increasing distance within a quadrant, but was best fit by a quadratic function between quadrants. We conclude that RSFCs within and between lower visual areas are retinotopically organized. Correlation-based FC is nonselectively high across lower visual areas, even between regions that do not share direct anatomical connections. The mechanisms likely involve network effects caused by the dense anatomical connectivity within this network and projections from higher visual areas. FC based on partial correlation, which minimizes network effects, follows expectations based on direct anatomical connections in the monkey visual cortex better than correlation. Last, partial correlation-based retinotopically organized RSFC reflects more than cortical distance effects. PMID:26415043

  16. Partial Correlation-Based Retinotopically Organized Resting-State Functional Connectivity Within and Between Areas of the Visual Cortex Reflects More Than Cortical Distance.

    PubMed

    Dawson, Debra Ann; Lam, Jack; Lewis, Lindsay B; Carbonell, Felix; Mendola, Janine D; Shmuel, Amir

    2016-02-01

    Numerous studies have demonstrated functional magnetic resonance imaging (fMRI)-based resting-state functional connectivity (RSFC) between cortical areas. Recent evidence suggests that synchronous fluctuations in blood oxygenation level-dependent fMRI reflect functional organization at a scale finer than that of visual areas. In this study, we investigated whether RSFCs within and between lower visual areas are retinotopically organized and whether retinotopically organized RSFC merely reflects cortical distance. Subjects underwent retinotopic mapping and separately resting-state fMRI. Visual areas V1, V2, and V3, were subdivided into regions of interest (ROIs) according to quadrants and visual field eccentricity. Functional connectivity (FC) was computed based on Pearson's linear correlation (correlation), and Pearson's linear partial correlation (correlation between two time courses after the time courses from all other regions in the network are regressed out). Within a quadrant, within visual areas, all correlation and nearly all partial correlation FC measures showed statistical significance. Consistently in V1, V2, and to a lesser extent in V3, correlation decreased with increasing eccentricity separation. Consistent with previously reported monkey anatomical connectivity, correlation/partial correlation values between regions from adjacent areas (V1-V2 and V2-V3) were higher than those between nonadjacent areas (V1-V3). Within a quadrant, partial correlation showed consistent significance between regions from two different areas with the same or adjacent eccentricities. Pairs of ROIs with similar eccentricity showed higher correlation/partial correlation than pairs distant in eccentricity. Between dorsal and ventral quadrants, partial correlation between common and adjacent eccentricity regions within a visual area showed statistical significance; this extended to more distant eccentricity regions in V1. 
Within and between quadrants, correlation decreased approximately linearly with increasing distances separating the tested ROIs. Partial correlation showed a more complex dependence on cortical distance: it decreased exponentially with increasing distance within a quadrant, but was best fit by a quadratic function between quadrants. We conclude that RSFCs within and between lower visual areas are retinotopically organized. Correlation-based FC is nonselectively high across lower visual areas, even between regions that do not share direct anatomical connections. The mechanisms likely involve network effects caused by the dense anatomical connectivity within this network and projections from higher visual areas. FC based on partial correlation, which minimizes network effects, follows expectations based on direct anatomical connections in the monkey visual cortex better than correlation. Last, partial correlation-based retinotopically organized RSFC reflects more than cortical distance effects.
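The partial correlation described above amounts to correlating the residuals of two time courses after the other regions' time courses are regressed out of each. A small sketch with synthetic data (not the study's fMRI pipeline) showing how it removes a shared driver that inflates plain correlation:

```python
import numpy as np

def partial_corr(x, y, Z):
    """Partial correlation of x and y after regressing out the columns
    of Z (here standing in for the other regions' time courses)."""
    Z1 = np.column_stack([np.ones(len(x)), Z])   # include an intercept
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
z = rng.normal(size=500)
x = z + 0.1 * rng.normal(size=500)   # x and y correlate only through z
y = z + 0.1 * rng.normal(size=500)
print(round(np.corrcoef(x, y)[0, 1], 2))         # plain correlation: high
print(round(partial_corr(x, y, z[:, None]), 2))  # partial correlation: near zero
```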

  17. Trouble with diffusion: Reassessing hillslope erosion laws with a particle-based model

    NASA Astrophysics Data System (ADS)

    Tucker, Gregory E.; Bradley, D. Nathan

    2010-03-01

    Many geomorphic systems involve a broad distribution of grain motion length scales, ranging from a few particle diameters to the length of an entire hillslope or stream. Studies of analogous physical systems have revealed that such broad motion distributions can have a significant impact on macroscale dynamics and can violate the assumptions behind standard, local gradient flux laws. Here, a simple particle-based model of sediment transport on a hillslope is used to study the relationship between grain motion statistics and macroscopic landform evolution. Surface grains are dislodged by random disturbance events with probabilities and distances that depend on local microtopography. Despite its simplicity, the particle model reproduces a surprisingly broad range of slope forms, including asymmetric degrading scarps and cinder cone profiles. At low slope angles the dynamics are diffusion-like, with a short-range, thin-tailed hop length distribution, a parabolic, convex-upward equilibrium slope form, and a linear relationship between transport rate and gradient. As slope angle steepens, the characteristic grain motion length scale begins to approach the length of the slope, leading to planar equilibrium forms that show a strongly nonlinear relationship between transport rate and gradient. These high-probability, long-distance motions violate the locality assumption embedded in many common gradient-based geomorphic transport laws. The example of a degrading scarp illustrates the potential for grain motion dynamics to vary in space and time as topography evolves. This characteristic renders models based on independent, stationary statistics inapplicable. An accompanying analytical framework based on treating grain motion as a survival process is briefly outlined.

  18. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
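A compact rendering of the algorithm as described (randomized order, k-th-neighbour outlier score, prune an example once its running bound drops below the cutoff); details such as the paper's block processing are omitted:

```python
import math
import random

def top_outliers(points, k=2, n_out=1):
    """Distance-based outliers via the nested loop with randomization and
    pruning: score = distance to the k-th nearest neighbour; an example is
    pruned as soon as its running score falls below the current cutoff."""
    pts = points[:]
    random.shuffle(pts)          # random order is what yields near-linear time
    top = []                     # list of (score, point), best first
    cutoff = 0.0
    for p in pts:
        neigh = []               # k smallest distances seen so far for p
        for q in pts:
            if q is p:
                continue
            neigh.append(math.dist(p, q))
            neigh.sort()
            del neigh[k:]
            if len(neigh) == k and neigh[-1] < cutoff:
                break            # p cannot enter the top list: prune it
        else:
            top.append((neigh[-1], p))
            top.sort(reverse=True)
            del top[n_out:]
            if len(top) == n_out:
                cutoff = top[-1][0]   # weakest score still in the top list
    return [p for _, p in top]

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(top_outliers(pts, k=2, n_out=1))  # [(10, 10)]
```

The inner loop's bound on the k-th-neighbour distance only decreases, so once it falls below the cutoff the example can never rank among the top outliers, which is exactly why non-outliers are cheap to dismiss.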

  19. Real-time inextensible surgical thread simulation.

    PubMed

    Xu, Lang; Liu, Qian

    2018-03-27

    This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Due to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. In comparisons with existing methods, the surgical thread maintains a constant length under large deformation once the direct distance constraint is applied. The twining and knotting of multiple threads remain stable under contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
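For background, the basic per-constraint PBD distance projection that the paper's direct tridiagonal solve strengthens looks like this (a generic Gauss-Seidel-style sketch, not the authors' solver):

```python
import numpy as np

def project_distance(p1, p2, rest_len, w1=1.0, w2=1.0):
    """One PBD distance-constraint projection: move the two particle
    positions (inverse-mass weighted by w1, w2) so their separation
    returns to the rest length."""
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    if dist < 1e-12:
        return p1, p2            # degenerate: leave positions untouched
    corr = (dist - rest_len) / (dist * (w1 + w2)) * delta
    return p1 + w1 * corr, p2 - w2 * corr

a = np.array([0.0, 0.0, 0.0])
b = np.array([2.0, 0.0, 0.0])
a2, b2 = project_distance(a, b, rest_len=1.0)
print(a2, b2)  # both particles move inward; separation restored to 1.0
```

Iterating this projection along a chain converges only gradually, which is why a direct solve of all distance constraints at once (as in the paper) enforces inextensibility much more stiffly.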

  20. Clinical predictors of the optimal spectacle correction for comfort performing desktop tasks.

    PubMed

    Leffler, Christopher T; Davenport, Byrd; Rentz, Jodi; Miller, Amy; Benson, William

    2008-11-01

    The best strategy for spectacle correction of presbyopia for near tasks has not been determined. Thirty volunteers over the age of 40 years were tested for subjective accommodative amplitude, pupillary size, fusional vergence, interpupillary distance, arm length, preferred working distance, near and far visual acuity and preferred reading correction in the phoropter and trial frames. Subjects performed near tasks (reading, writing and counting change) using various spectacle correction strengths. Predictors of the correction maximising near task comfort were determined by multivariable linear regression. The mean age was 54.9 years (range 43 to 71) and 40 per cent had diabetes. Significant predictors of the most comfortable addition in univariate analyses were age (p<0.001), interpupillary distance (p=0.02), fusional vergence amplitude (p=0.02), distance visual acuity in the worse eye (p=0.01), vision at 40 cm in the worse eye with distance correction (p=0.01), duration of diabetes (p=0.01), and the preferred correction to read at 40 cm with the phoropter (p=0.002) or trial frames (p<0.001). Target distance selected wearing trial frames (in dioptres), arm length, and accommodative amplitude were not significant predictors (p>0.15). The preferred addition wearing trial frames holding a reading target at a distance selected by the patient was the only independent predictor. Excluding this variable, distance visual acuity was predictive independent of age or near vision wearing distance correction. The distance selected for task performance was predicted by vision wearing distance correction at near and at distance. Multivariable linear regression can be used to generate tables based on distance visual acuity and age or near vision wearing distance correction to determine tentative near spectacle addition. Final spectacle correction for desktop tasks can be estimated by subjective refraction with trial frames.

  1. Asymptotic analysis of noisy fitness maximization, applied to metabolism & growth

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele; Masoero, Davide

    2016-12-01

    We consider a population dynamics model coupling cell growth to diffusion in the space of metabolic phenotypes, as can be obtained from realistic constraint-based modeling. In the asymptotic regime of slow diffusion, which coincides with the relevant experimental range, the resulting non-linear Fokker-Planck equation is solved for the steady state in the WKB approximation, which maps it onto the ground state of a quantum particle in an Airy potential plus a centrifugal term. We retrieve scaling laws for growth-rate fluctuations and time response with respect to the distance from the maximum growth rate, suggesting that suboptimal populations can respond faster to perturbations.

  2. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and interpretation of high-dimensional data sets. In this manuscript, we propose a distance-order-preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and the low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets, based on the commonly used residual variance metric and a proposed percentage-of-violated-distance-orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
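
    The stress cost and the violated-orders idea above can be illustrated in a few lines; the random dataset, the random 2-D embedding, and the brute-force order comparison are assumptions for the sketch, not the authors' implementation (which approximates the non-decreasing relation with radial basis functions).

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix for the row vectors of X
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def stress(D_high, Y):
    # classical MDS stress: squared mismatch between original
    # distances and distances in the low-dimensional embedding Y
    D_low = pairwise_dists(Y)
    return ((D_high - D_low) ** 2).sum() / 2.0

def violated_order_fraction(D_high, D_low):
    # fraction of distance pairs whose relative order flips
    # between the original space and the embedding
    iu = np.triu_indices_from(D_high, k=1)
    dh, dl = D_high[iu], D_low[iu]
    order_h = dh[:, None] < dh[None, :]
    order_l = dl[:, None] < dl[None, :]
    return (order_h != order_l).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))        # hypothetical high-dimensional data
D = pairwise_dists(X)

# a perfect "embedding" (the data itself) has zero stress and
# violates no distance orders; a random 2-D embedding does neither
Y = rng.normal(size=(20, 2))
print(stress(D, X), violated_order_fraction(D, D))
print(stress(D, Y), violated_order_fraction(D, pairwise_dists(Y)))
```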

  3. Electrooptic converter to control linear displacements of the large structures of the buildings and facilities

    NASA Astrophysics Data System (ADS)

    Vasilev, Aleksandr S.; Konyakhin, Igor A.; Timofeev, Alexander N.; Lashmanov, Oleg U.; Molev, Fedor V.

    2015-05-01

    The paper analyzes the design and metrological parameters of an electrooptic converter used to monitor linear displacements of large structures of buildings and facilities. The converter includes a base module, a processing module and a set of reference marks. The base module is the main unit of the system; it includes the receiving optical system and a CMOS photodetector array that realizes the instrument coordinate system in which the mark coordinates are monitored. The detection algorithm, which locates the marks against a random contrast background, is based on frame-to-frame differencing, adaptive threshold filtering, binarization and connected-component search. The entire algorithm is executed during a single image readout and is implemented on an FPGA. The developed experimental model of the converter was tested under laboratory conditions on a metrological bench with a distance of 50±0.2 m between the base module and the mark. The static characteristic was recorded by displacing the reference mark in 5 mm steps in the horizontal and vertical directions over a 400 mm range. The experiment showed that the error of the converter experimental model does not exceed ±0.5 mm.

  4. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Some researchers have therefore proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to address these shortcomings of traditional decision trees. Cl-DT overcomes the limitation of existing DT algorithms, which create a node for each value of a given attribute, and achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distances between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy, and (2) the proposed method greatly reduces the total number of nodes and the number of leaf nodes compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
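
    The geodesic-versus-Euclidean distinction that motivates Isomap-based Cl-DT can be sketched as follows; the quarter-circle data and the neighbourhood size are illustrative assumptions, and the decision-tree part of the method is not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(X, k=5):
    """Approximate geodesic distances as in Isomap: Euclidean edges on a
    k-nearest-neighbour graph, then graph shortest paths (Dijkstra)."""
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    n = len(X)
    W = np.zeros_like(D)
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]   # skip the point itself
        W[i, nn] = D[i, nn]
    W = np.maximum(W, W.T)               # symmetrise the graph
    # dense zeros are treated as absent edges by scipy's csgraph routines
    return shortest_path(W, method="D", directed=False)

# points along a quarter circle: the straight-line (Euclidean) distance
# between the endpoints understates the along-curve geodesic distance
t = np.linspace(0.0, np.pi / 2, 40)
X = np.c_[np.cos(t), np.sin(t)]
G = geodesic_distances(X, k=3)
euclid = np.linalg.norm(X[0] - X[-1])    # chord length, about 1.414
print(euclid, G[0, -1])                  # geodesic is close to pi/2
```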

  5. Distribution of Orientation Selectivity in Recurrent Networks of Spiking Neurons with Different Random Topologies

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704

  6. Developing approaches for linear mixed modeling in landscape genetics through landscape-directed dispersal simulations

    USGS Publications Warehouse

    Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.

    2017-01-01

    Dispersal can impact population dynamics and geographic variation, and thus, genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap with zero across the entire model set. When we controlled for geographic distance, the confidence intervals of variables not in the underlying dispersal models (i.e., non-true variables) typically overlapped zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
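
    The AIC/BIC comparison the authors rely on can be illustrated with a toy polynomial-regression example; the data-generating model, the candidate degrees, and the Gaussian-likelihood form of the criteria are assumptions for the sketch, not the landscape-genetics models themselves.

```python
import numpy as np

def aic_bic(y, yhat, k):
    """Gaussian-likelihood AIC/BIC (up to an additive constant) for a
    fitted model with k estimated parameters."""
    n = len(y)
    rss = ((y - yhat) ** 2).sum()
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 1.5 * x + rng.normal(scale=0.3, size=200)   # truth is linear

scores = {}
for degree in (1, 2, 5):                        # candidate models
    coef = np.polyfit(x, y, degree)
    scores[degree] = aic_bic(y, np.polyval(coef, x), degree + 1)

# BIC's log(n) penalty punishes the 4 extra parameters of the degree-5
# model harder than AIC's fixed penalty of 2 per parameter
aic_gap = scores[5][0] - scores[1][0]
bic_gap = scores[5][1] - scores[1][1]
print(aic_gap, bic_gap)
```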

  7. Fourier Transform Fringe-Pattern Analysis of an Absolute Distance Michelson Interferometer for Space-Based Laser Metrology.

    NASA Astrophysics Data System (ADS)

    Talamonti, James Joseph

    1995-01-01

    Future NASA proposals include the placement of optical interferometer systems in space for a wide variety of astrophysical studies including a vastly improved deflection test of general relativity, a precise and direct calibration of the Cepheid distance scale, and the determination of stellar masses (Reasenberg et al., 1988). There are also plans for placing large array telescopes on the moon with the ultimate objective of being able to measure angular separations of less than 10 micro-arcseconds (Burns, 1990). These and other future projects will require interferometric measurement of the (baseline) distance between the optical elements comprising the systems. Eventually, space-qualifiable interferometers capable of picometer (10^{-12} m) relative precision and nanometer (10^{-9} m) absolute precision will be required. A numerical model was developed to emulate the capabilities of systems performing interferometric noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer using a frequency-scanned laser. By processing computer-simulated data through our model, the ultimate precision is projected for ideal data and for data containing AM/FM noise. The precision is shown to be limited by non-linearities in the laser scan. A laboratory system was developed by implementing ultra-stable external cavity diode lasers into existing interferometric measuring techniques. The capabilities of the system were evaluated and increased by using the computer modeling results as guidelines for the data analysis. Experimental results measured 1-3 meter baselines with <20 micron precision.
Comparison of the laboratory and modeling results showed that the laboratory precisions obtained were of the same order of magnitude as those predicted for computer generated results under similar conditions. We believe that our model can be implemented as a tool in the design for new metrology systems capable of meeting the precisions required by space-based interferometers.

  8. Optimal Design of Spring Characteristics of Damper for Subharmonic Vibration in Automatic Transmission Powertrain

    NASA Astrophysics Data System (ADS)

    Nakae, T.; Ryu, T.; Matsuzaki, K.; Rosbi, S.; Sueoka, A.; Takikawa, Y.; Ooi, Y.

    2016-09-01

    In the torque converter, the damper of the lock-up clutch is used to effectively absorb the torsional vibration. The damper is designed using a piecewise-linear spring with three stiffness stages. However, a nonlinear vibration, referred to as a subharmonic vibration of order 1/2, occurred around the switching point in the piecewise-linear restoring torque characteristics because of the nonlinearity. In the present study, we analyze vibration reduction for subharmonic vibration. The model used herein includes the torque converter, the gear train, and the differential gear. The damper is modeled by a nonlinear rotational spring of the piecewise-linear spring. We focus on the optimum design of the spring characteristics of the damper in order to suppress the subharmonic vibration. A piecewise-linear spring with five stiffness stages is proposed, and the effect of the distance between switching points on the subharmonic vibration is investigated. The results of our analysis indicate that the subharmonic vibration can be suppressed by designing a damper with five stiffness stages to have a small spring constant ratio between the neighboring springs. The distances between switching points must be designed to be large enough that the amplitude of the main frequency component of the systems does not reach the neighboring switching point.
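
    A piecewise-linear restoring torque of the kind described can be sketched directly; the stiffness stages and switching angles below are hypothetical values, not the damper's actual characteristics.

```python
import numpy as np

def restoring_torque(theta, switch_points, stiffnesses):
    """Piecewise-linear restoring torque, symmetric about zero:
    stiffness stiffnesses[i] applies between switching angles
    switch_points[i-1] and switch_points[i] (ascending, positive);
    stiffnesses has one more entry than switch_points."""
    a = abs(theta)
    torque, lower = 0.0, 0.0
    for s, k in zip(switch_points, stiffnesses):
        torque += k * (min(a, s) - lower)
        lower = s
        if a <= s:
            break
    else:
        torque += stiffnesses[-1] * (a - lower)  # beyond the last switch
    return np.sign(theta) * torque

# hypothetical three-stage damper: switch points at 1 and 2 rad,
# stiffness growing stage by stage
switch = [1.0, 2.0]
stiff = [10.0, 20.0, 40.0]
print(restoring_torque(0.5, switch, stiff),   # 5.0
      restoring_torque(1.5, switch, stiff),   # 20.0
      restoring_torque(3.0, switch, stiff))   # 70.0
```

    The proposed five-stage damper is simply a longer `switch_points`/`stiffnesses` pair with small stiffness ratios between neighbouring stages.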

  9. Application of Statistical Learning Theory to Plankton Image Analysis

    DTIC Science & Technology

    2006-06-01

    linear distance interval from 1 to 40 pixels and two directions formula (horizontal & vertical, and diagonals), EF2 is EF with 7 exponential distance...and four directions formula (horizontal, vertical and two diagonals). It is clear that exponential distance interval works better than the linear ...PS1 - PS by Vincent, linear and pseudo opening and closing spectra, each has 40 elements, total feature length of 160. PS2 - PS modified from Meijster

  10. Algorithms for sorting unsigned linear genomes by the DCJ operations.

    PubMed

    Jiang, Haitao; Zhu, Binhai; Zhu, Daming

    2011-02-01

    The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^{2k}n) time.

  11. A Programming System for School Location & Facility Utilization.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh.

    A linear program model designed to aid in site selection and the development of pupil assignment plans is illustrated in terms of a hypothetical school system. The model is designed to provide the best possible realization of any single stated objective (for example, "Minimize the distance that pupils must travel") given any number of specified…

  12. A comparison of linear interpolation models for iterative CT reconstruction.

    PubMed

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. 
Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between accuracy and computational cost. The distance-driven method offers the possibility to reduce bias but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interplay together.

  13. Support from the relationship of genetic and geographic distance in human populations for a serial founder effect originating in Africa

    PubMed Central

    Ramachandran, Sohini; Deshpande, Omkar; Roseman, Charles C.; Rosenberg, Noah A.; Feldman, Marcus W.; Cavalli-Sforza, L. Luca

    2005-01-01

    Equilibrium models of isolation by distance predict an increase in genetic differentiation with geographic distance. Here we find a linear relationship between genetic and geographic distance in a worldwide sample of human populations, with major deviations from the fitted line explicable by admixture or extreme isolation. A close relationship is shown to exist between the correlation of geographic distance and genetic differentiation (as measured by FST) and the geographic pattern of heterozygosity across populations. Considering a worldwide set of geographic locations as possible sources of the human expansion, we find that heterozygosities in the globally distributed populations of the data set are best explained by an expansion originating in Africa and that no geographic origin outside of Africa accounts as well for the observed patterns of genetic diversity. Although the relationship between FST and geographic distance has been interpreted in the past as the result of an equilibrium model of drift and dispersal, simulation shows that the geographic pattern of heterozygosities in this data set is consistent with a model of a serial founder effect starting at a single origin. Given this serial-founder scenario, the relationship between genetic and geographic distance allows us to derive bounds for the effects of drift and natural selection on human genetic variation. PMID:16243969
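
    The linear genetic-geographic relationship can be recovered with an ordinary least-squares fit; the pairwise values below are synthetic stand-ins for illustration, not the worldwide human dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical pairwise data: genetic differentiation (FST) increasing
# linearly with geographic distance, plus noise around the fitted line
geo_km = rng.uniform(100, 15000, 300)
fst = 1e-5 * geo_km + rng.normal(scale=0.01, size=300)

slope, intercept = np.polyfit(geo_km, fst, 1)
pred = slope * geo_km + intercept
r2 = 1 - ((fst - pred) ** 2).sum() / ((fst - fst.mean()) ** 2).sum()
print(slope, r2)   # positive slope, most variance explained by distance
```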

  14. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.

  15. A model of urban scaling laws based on distance dependent interactions

    NASA Astrophysics Data System (ADS)

    Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.; Neto, Camilo Rodrigues

    2017-03-01

    Socio-economic properties of a city grow faster than linearly with its population, following a power law that appears as superlinear scaling in a log-log plot. Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to a sublinear scaling of these variables. In this work, we propose a simple explanation for these scaling laws in cities based on the interaction range between citizens and on the fractal properties of cities. To this purpose, we introduced a measure of social potential which captures the influence of social interaction on economic performance and the benefits of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We discuss how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of the socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence.
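
    Estimating the scaling exponents from log-log fits can be sketched as follows; the exponent values, noise level, and city sizes are assumptions chosen only to illustrate the predicted sum of 2.

```python
import numpy as np

rng = np.random.default_rng(3)
pop = 10 ** rng.uniform(4, 7, 500)       # city populations, 10^4 to 10^7

beta_socio, beta_infra = 1.15, 0.85      # assumed exponents summing to 2
gdp = pop ** beta_socio * 10 ** rng.normal(scale=0.05, size=500)
roads = pop ** beta_infra * 10 ** rng.normal(scale=0.05, size=500)

def log_slope(y):
    # scaling exponent = slope of the log-log regression on population
    return np.polyfit(np.log10(pop), np.log10(y), 1)[0]

b1, b2 = log_slope(gdp), log_slope(roads)
print(b1, b2, b1 + b2)   # superlinear, sublinear, sum close to 2
```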

  17. Comparison between isotropic linear-elastic law and isotropic hyperelastic law in the finite element modeling of the brachial plexus.

    PubMed

    Perruisseau-Carrier, A; Bahlouli, N; Bierry, G; Vernet, P; Facca, S; Liverneaux, P

    2017-12-01

    Augmented reality could help the identification of nerve structures in brachial plexus surgery. The goal of this study was to determine which law of mechanical behavior is more suitable, by comparing the results of Hooke's isotropic linear elastic law to those of Ogden's isotropic hyperelastic law, applied to a biomechanical model of the brachial plexus. A finite element model was created using ABAQUS® from a 3D model of the brachial plexus acquired by segmentation and meshing of MRI images at 0°, 45° and 135° of shoulder abduction of a healthy subject. The offset between the reconstructed model and the deformed model was evaluated quantitatively by the Hausdorff distance and qualitatively by the identification of 3 anatomical landmarks. In every case the Hausdorff distance was shorter with Ogden's law than with Hooke's law. On a qualitative level, the model deformed by Ogden's law followed the concavity of the reconstructed model, whereas the model deformed by Hooke's law remained convex. In conclusion, the results of this study demonstrate that Ogden's isotropic hyperelastic mechanical model was better adapted to modeling the deformations of the brachial plexus. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
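
    The symmetric Hausdorff distance used for the quantitative comparison can be computed with SciPy's directed variant; the two point sets below are a toy stand-in for the reconstructed and deformed meshes.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(u, v):
    """Symmetric Hausdorff distance between two point sets:
    the larger of the two directed Hausdorff distances."""
    return max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])

# hypothetical surface samples: a unit segment and the same segment
# shifted vertically by 0.3, so every point is exactly 0.3 away
u = np.array([[x, 0.0] for x in np.linspace(0, 1, 11)])
v = u + np.array([0.0, 0.3])
print(hausdorff(u, v))   # about 0.3
```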

  18. Directional output distance functions: endogenous directions based on exogenous normalization constraints

    USDA-ARS?s Scientific Manuscript database

    In this paper we develop a model for computing directional output distance functions with endogenously determined direction vectors. We show how this model is related to the slacks-based directional distance function introduced by Fare and Grosskopf and show how to use the slacks-based function to e...

  19. Clustering of the human skeletal muscle fibers using linear programming and angular Hilbertian metrics.

    PubMed

    Neji, Radhouène; Besbes, Ahmed; Komodakis, Nikos; Deux, Jean-François; Maatouk, Mezri; Rahmouni, Alain; Bassez, Guillaume; Fleury, Gilles; Paragios, Nikos

    2009-01-01

    In this paper, we present a manifold clustering method for the classification of fibers obtained from diffusion tensor images (DTI) of the human skeletal muscle. Using a linear programming formulation of prototype-based clustering, we propose a novel fiber classification algorithm over manifolds that circumvents the need to embed the data in low-dimensional spaces and determines the number of clusters automatically. Furthermore, we propose the use of angular Hilbertian metrics between multivariate normal distributions to define a family of distances between tensors, which we generalize to fibers. These metrics are used to approximate the geodesic distances over the fiber manifold. We also discuss the case where only geodesic distances to a reduced set of landmark fibers are available. The experimental validation of the method is done using a manually annotated, significant dataset of DTI of the calf muscle for healthy and diseased subjects.

  20. Longitudinal train dynamics model for a rail transit simulation system

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2018-01-01

    The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
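
    A constrained non-linear calibration of this kind can be sketched with SciPy; the Davis-type resistance model, the synthetic coasting data, and the non-negativity bounds are all assumptions for illustration, not the paper's calibrated light-rail model.

```python
import numpy as np
from scipy.optimize import minimize

# fabricated "observed" decelerations from a coasting test, generated
# from a Davis-type resistance law a(v) = -(r0 + r1*v + r2*v^2)
rng = np.random.default_rng(4)
v = rng.uniform(2, 25, 120)                  # speeds, m/s
true = np.array([0.05, 0.004, 0.0006])       # r0, r1, r2
a_obs = -(true[0] + true[1] * v + true[2] * v**2) \
        + rng.normal(scale=0.002, size=v.size)

def loss(r):
    # sum-of-squares mismatch between model and observations
    pred = -(r[0] + r[1] * v + r[2] * v**2)
    return ((a_obs - pred) ** 2).sum()

# constrained fit: resistance coefficients must stay non-negative
x0 = np.array([0.1, 0.01, 0.001])
res = minimize(loss, x0=x0, bounds=[(0.0, None)] * 3, method="L-BFGS-B")
print(res.x)   # recovered coefficients close to the generating values
```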

  2. Precision of a CAD/CAM-engineered surgical template based on a facebow for orthognathic surgery: an experiment with a rapid prototyping maxillary model.

    PubMed

    Lee, Jae-Won; Lim, Se-Ho; Kim, Moon-Key; Kang, Sang-Hoon

    2015-12-01

    We examined the precision of a computer-aided design/computer-aided manufacturing (CAD/CAM)-engineered, facebow-based surgical guide template (facebow wafer) by comparing it with a bite splint-type CAD/CAM-engineered orthognathic surgical guide template (bite wafer). We used 24 rapid prototyping (RP) models of the craniofacial skeleton with maxillary deformities. Twelve RP models each were used for the facebow wafer group and the bite wafer group. Experimental maxillary orthognathic surgery was performed on the RP models of both groups. Errors were evaluated through comparisons with surgical simulations. We measured the minimum distances from 3 planes of reference to determine the vertical, lateral, and anteroposterior errors at specific measurement points. The measured errors were compared between experimental groups using a t test. There were significant intergroup differences in the lateral error when we compared the absolute values of the 3-D linear distance, as well as the vertical, lateral, and anteroposterior errors between experimental groups. The bite wafer method exhibited little lateral error overall and little error in the anterior tooth region. The facebow wafer method exhibited very little vertical error in the posterior molar region. The clinical precision of the facebow wafer method did not significantly exceed that of the bite wafer method. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

    Takahashi, Takuya; Sugiura, Junnosuke; Nagayama, Kuniaki

    2002-05-01

    To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly considers every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits results of both the all atom and the continuum models less accurately than the linear model which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve whose slope is almost 4. 
To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved and charges distributed near the molecular surface were indicated as leading to the apparent linearity.
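    The linear relation discussed above, with the effective dielectric constant growing roughly as ε_eff ≈ 4·r (r in Å), implies that the screened pair energy decays as 1/r² rather than 1/r. A minimal sketch of that consequence, with the slope and distances as illustrative assumptions rather than values from the paper:

```python
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C

def coulomb_energy(q1, q2, r, eps_rel):
    """Coulomb interaction energy (J) at separation r (m) in a medium
    with relative dielectric constant eps_rel."""
    return q1 * q2 / (4.0 * math.pi * EPS0 * eps_rel * r)

def screened_energy(q1, q2, r_angstrom, slope=4.0):
    """Energy with a distance-dependent dielectric eps_eff = slope * r
    (r in angstroms) -- the linear screening relation of the abstract."""
    r_m = r_angstrom * 1e-10
    eps_eff = slope * r_angstrom
    return coulomb_energy(q1, q2, r_m, eps_eff)

# With eps_eff proportional to r, doubling the distance quarters the energy.
e5 = screened_energy(E_CHARGE, -E_CHARGE, 5.0)
e10 = screened_energy(E_CHARGE, -E_CHARGE, 10.0)
```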

  4. Optimization of wood plastic composite decks

    NASA Astrophysics Data System (ADS)

    Ravivarman, S.; Venkatesh, G. S.; Karmarkar, A.; Shivkumar N., D.; Abhilash R., M.

    2018-04-01

    Wood Plastic Composite (WPC) is a new class of natural fibre based composite material that contains plastic matrix reinforced with wood fibres or wood flour. In the present work, Wood Plastic Composite was prepared with 70-wt% of wood flour reinforced in polypropylene matrix. Mechanical characterization of the composite was done by carrying out laboratory tests such as tensile test and flexural test as per the American Society for Testing and Materials (ASTM) standards. Computer Aided Design (CAD) model of the laboratory test specimen (tensile test) was created and explicit finite element analysis was carried out on the finite element model in the non-linear explicit FE code LS-DYNA. The piecewise linear plasticity (MAT 24) material model was identified as a suitable model in the LS-DYNA material library, describing the material behavior of the developed composite. The composite structures for decking application in construction industry were then optimized for cross sectional area and distance between two successive supports (span length) by carrying out various numerical experiments in LS-DYNA. The optimized WPC deck (Elliptical channel-2 E10) weighs 45% less than the baseline model (solid cross-section) considered in this study, with the load carrying capacity meeting the acceptance criteria (allowable deflection & stress) for outdoor decking application.
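    The span-length optimization above is governed by a deflection acceptance criterion. For a simply supported deck board under uniform load, the classic beam formula δ = 5wL⁴/(384EI) gives a quick screen of admissible spans. The section dimensions, modulus, load, and the L/360 limit below are hypothetical illustrative numbers, not the paper's values:

```python
def max_deflection_simply_supported(w, L, E, I):
    """Midspan deflection of a simply supported beam under uniform load w (N/m),
    span L (m), elastic modulus E (Pa), second moment of area I (m^4)."""
    return 5.0 * w * L**4 / (384.0 * E * I)

def max_allowable_span(w, E, I, limit_ratio=360.0):
    """Largest span whose midspan deflection stays below L/limit_ratio."""
    # delta <= L/ratio  =>  5 w L^4 / (384 E I) <= L / ratio
    #                  =>  L^3 <= 384 E I / (5 w ratio)
    return (384.0 * E * I / (5.0 * w * limit_ratio)) ** (1.0 / 3.0)

# Hypothetical WPC board: 140 x 25 mm rectangular section, E = 3 GPa.
b, h = 0.140, 0.025
I = b * h**3 / 12.0
E = 3.0e9
w = 1000.0  # N/m distributed load
L = max_allowable_span(w, E, I)
```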

  5. A Brownian dynamics program for the simulation of linear and circular DNA and other wormlike chain polyelectrolytes.

    PubMed Central

    Klenin, K; Merlitz, H; Langowski, J

    1998-01-01

    For the interpretation of solution structural and dynamic data of linear and circular DNA molecules in the kb range, and for the prediction of the effect of local structural changes on the global conformation of such DNAs, we have developed an efficient and easy way to set up a program based on a second-order explicit Brownian dynamics algorithm. The DNA is modeled by a chain of rigid segments interacting through harmonic spring potentials for bending, torsion, and stretching. The electrostatics are handled using precalculated energy tables for the interactions between DNA segments as a function of relative orientation and distance. Hydrodynamic interactions are treated using the Rotne-Prager tensor. While maintaining acceptable precision, the simulation can be accelerated by recalculating this tensor only once in a certain number of steps. PMID:9533691
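    The chain model above uses harmonic spring potentials for bending, torsion, and stretching between rigid segments. A minimal 2-D sketch of the stretching and bending terms (torsion needs 3-D frames and is omitted); the spring constants and rest length are illustrative, not the program's fitted DNA values:

```python
import math

def chain_energy(points, k_stretch=100.0, k_bend=10.0, l0=1.0):
    """Harmonic stretching + bending energy of a bead chain (2-D points).
    k_stretch, k_bend, l0 are illustrative constants."""
    e = 0.0
    # Stretching: each bond penalised for deviating from rest length l0.
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        bond = math.hypot(x1 - x0, y1 - y0)
        e += 0.5 * k_stretch * (bond - l0) ** 2
    # Bending: each interior joint penalised for the angle between bonds.
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        theta = math.acos(max(-1.0, min(1.0, cosang)))
        e += 0.5 * k_bend * theta ** 2
    return e

straight = [(i, 0.0) for i in range(5)]    # unit bonds, no bends
bent = [(0, 0), (1, 0), (1, 1), (2, 1)]    # unit bonds, two right-angle bends
```

In the Brownian dynamics program these energies supply the deterministic forces in each second-order stochastic integration step.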

  6. Encoding of natural and artificial stimuli in the auditory midbrain

    NASA Astrophysics Data System (ADS)

    Lyzwa, Dominika

    How complex acoustic stimuli are encoded in the main center of convergence in the auditory midbrain is not clear. Here, the representation of neural spiking responses to natural and artificial sounds across this subcortical structure is investigated based on neurophysiological recordings from the mammalian midbrain. Neural and stimulus correlations of neuronal pairs are analyzed with respect to the neurons' distance, and responses to different natural communication sounds are discriminated. A model which includes linear and nonlinear neural response properties of this nucleus is presented and employed to predict temporal spiking responses to new sounds. Supported by BMBF Grant 01GQ0811.

  7. Adaptive deformable model for colonic polyp segmentation and measurement on CT colonography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao Jianhua; Summers, Ronald M.

    2007-05-15

    Polyp size is one important biomarker for the malignancy risk of a polyp. This paper presents an improved approach for colonic polyp segmentation and measurement on CT colonography images. The method is based on a combination of knowledge-guided intensity adjustment, fuzzy clustering, and adaptive deformable model. Since polyps on haustral folds are the most difficult to be segmented, we propose a dual-distance algorithm to first identify voxels on the folds, and then introduce a counter-force to control the model evolution. We derive linear and volumetric measurements from the segmentation. The experiment was conducted on 395 patients with 83 polyps, of which 43 polyps were on haustral folds. The results were validated against manual measurement from the optical colonoscopy and the CT colonography. The paired t-test showed no significant difference, and the R² correlation was 0.61 for the linear measurement and 0.98 for the volumetric measurement. The mean Dice coefficient for volume overlap between automatic and manual segmentation was 0.752 (standard deviation 0.154).
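    The Dice coefficient reported above measures volume overlap between the automatic and manual segmentations. A small self-contained sketch of that metric on voxel sets (the coordinate values are made up for illustration):

```python
def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary segmentations given as sets of voxel
    coordinates: 2|A ∩ B| / (|A| + |B|). 1.0 = perfect agreement."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy "automatic" and "manual" polyp segmentations sharing 3 of 4 voxels each.
auto   = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 0)}
manual = {(1, 0, 0), (1, 1, 0), (2, 1, 0), (3, 1, 0)}
overlap = dice_coefficient(auto, manual)
```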

  8. Bin Ratio-Based Histogram Distances and Their Application to Image Classification.

    PubMed

    Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen

    2014-12-01

    Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than bin values' differences which are used in the traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
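    To make the intra-cross-bin idea concrete: comparing ratios h[i]/h[j] rather than differences h[i]-g[i] makes the measure invariant to a global rescaling of one histogram. The sketch below is a deliberately simplified O(n²) illustration of that property, not the paper's exact BRD formula (which achieves linear complexity); the eps guard and example histograms are assumptions:

```python
def ratio_matrix_distance(h, g, eps=1e-12):
    """Illustrative intra-cross-bin distance: L1 difference between the full
    matrices of bin ratios h[i]/h[j] and g[i]/g[j]. A simplified sketch of
    the bin-ratio idea only -- not the published BRD."""
    n = len(h)
    assert len(g) == n
    d = 0.0
    for i in range(n):
        for j in range(n):
            d += abs(h[i] / (h[j] + eps) - g[i] / (g[j] + eps))
    return d

# Ratios are invariant to uniform rescaling (e.g. normalisation), so a
# rescaled copy of a histogram sits at distance ~0.
h = [2.0, 4.0, 8.0, 2.0]
g = [1.0, 2.0, 4.0, 1.0]   # h scaled by 0.5
g_other = [1.0, 2.0, 4.0, 2.0]  # genuinely different shape
```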

  9. Beyond Serial Founder Effects: The Impact of Admixture and Localized Gene Flow on Patterns of Regional Genetic Diversity.

    PubMed

    Hunley, Keith L; Cabana, Graciela S

    2016-07-01

    Geneticists have argued that the linear decay in within-population genetic diversity with increasing geographic distance from East Africa is best explained by a phylogenetic process of repeated founder effects, growth, and isolation. However, this serial founder effect (SFE) process has not yet been adequately vetted against other evolutionary processes that may also affect geospatial patterns of diversity. Additionally, studies of the SFE process have been largely based on a limited 52-population sample. Here, we assess the effects of founder effect, admixture, and localized gene flow processes on patterns of global and regional diversity using a published data set of 645 autosomal microsatellite genotypes from 5,415 individuals in 248 widespread populations. We used a formal tree-fitting approach to explore the role of founder effects. The approach involved fitting global and regional population trees to extant patterns of gene diversity and then systematically examining the deviations in fit. We also informally tested the SFE process using linear models of gene diversity versus waypoint geographic distances from Africa. We tested the role of localized gene flow using partial Mantel correlograms of gene diversity versus geographic distance controlling for the confounding effects of treelike genetic structure. We corroborate previous findings that global patterns of diversity, both within and between populations, are the product of an out-of-Africa SFE process. Within regions, however, diversity within populations is uncorrelated with geographic distance from Africa. Here, patterns of diversity have been largely shaped by recent interregional admixture and secondary range expansions. Our detailed analyses of the pattern of diversity within and between populations reveal that the signatures of different evolutionary processes dominate at different geographic scales. These findings have important implications for recent publications on the biology of race.

  10. Functional connectivity and structural covariance between regions of interest can be measured more accurately using multivariate distance correlation.

    PubMed

    Geerligs, Linda; Cam-Can; Henson, Richard N

    2016-07-15

    Studies of brain-wide functional connectivity or structural covariance typically use measures like the Pearson correlation coefficient, applied to data that have been averaged across voxels within regions of interest (ROIs). However, averaging across voxels may result in biased connectivity estimates when there is inhomogeneity within those ROIs, e.g., sub-regions that exhibit different patterns of functional connectivity or structural covariance. Here, we propose a new measure based on "distance correlation"; a test of multivariate dependence of high dimensional vectors, which allows for both linear and non-linear dependencies. We used simulations to show how distance correlation out-performs Pearson correlation in the face of inhomogeneous ROIs. To evaluate this new measure on real data, we use resting-state fMRI scans and T1 structural scans from 2 sessions on each of 214 participants from the Cambridge Centre for Ageing & Neuroscience (Cam-CAN) project. Pearson correlation and distance correlation showed similar average connectivity patterns, for both functional connectivity and structural covariance. Nevertheless, distance correlation was shown to be 1) more reliable across sessions, 2) more similar across participants, and 3) more robust to different sets of ROIs. Moreover, we found that the similarity between functional connectivity and structural covariance estimates was higher for distance correlation compared to Pearson correlation. We also explored the relative effects of different preprocessing options and motion artefacts on functional connectivity. Because distance correlation is easy to implement and fast to compute, it is a promising alternative to Pearson correlations for investigating ROI-based brain-wide connectivity patterns, for functional as well as structural data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
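    The distance correlation measure above (Székely et al.'s sample statistic) is simple to implement from scratch: double-centre the pairwise Euclidean distance matrices of each variable, then normalise their inner product. A compact NumPy sketch, with synthetic data in place of fMRI ROI time series:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two data sets (observations in
    rows); sensitive to both linear and non-linear dependence."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    if x.shape[0] == 1:
        x = x.T
    if y.shape[0] == 1:
        y = y.T

    def centred_dist(z):
        # Pairwise Euclidean distances, then double-centring.
        d = np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(-1))
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

    A, B = centred_dist(x), centred_dist(y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    if dvar_x * dvar_y == 0:
        return 0.0
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 300)
noise = rng.normal(0, 1, 300)   # independent of t
```

A purely linear relation gives distance correlation 1, while a non-linear dependence (e.g. y = t²) still yields a clearly non-zero value, which a Pearson correlation on the same data would miss.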

  11. A comparison of regression methods for model selection in individual-based landscape genetic analysis.

    PubMed

    Shirk, Andrew J; Landguth, Erin L; Cushman, Samuel A

    2018-01-01

    Anthropogenic migration barriers fragment many populations and limit the ability of species to respond to climate-induced biome shifts. Conservation actions designed to conserve habitat connectivity and mitigate barriers are needed to unite fragmented populations into larger, more viable metapopulations, and to allow species to track their climate envelope over time. Landscape genetic analysis provides an empirical means to infer landscape factors influencing gene flow and thereby inform such conservation actions. However, there are currently many methods available for model selection in landscape genetics, and considerable uncertainty as to which provide the greatest accuracy in identifying the true landscape model influencing gene flow among competing alternative hypotheses. In this study, we used population genetic simulations to evaluate the performance of seven regression-based model selection methods on a broad array of landscapes that varied by the number and type of variables contributing to resistance, the magnitude and cohesion of resistance, as well as the functional relationship between variables and resistance. We also assessed the effect of transformations designed to linearize the relationship between genetic and landscape distances. We found that linear mixed effects models had the highest accuracy in every way we evaluated model performance; however, other methods also performed well in many circumstances, particularly when landscape resistance was high and the correlation among competing hypotheses was limited. Our results provide guidance for which regression-based model selection methods provide the most accurate inferences in landscape genetic analysis and thereby best inform connectivity conservation actions. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  12. The effect of inhomogeneities on the distance to the last scattering surface and the accuracy of the CMB analysis

    NASA Astrophysics Data System (ADS)

    Bolejko, Krzysztof

    2011-02-01

    The standard analysis of the CMB data assumes that the distance to the last scattering surface can be calculated using the distance-redshift relation as in the Friedmann model. However, in the inhomogeneous universe, even if ⟨δρ⟩ = 0, the distance relation is not the same as in the unperturbed universe. This can have serious consequences, as a change of distance affects the mapping of CMB temperature fluctuations into the angular power spectrum Cℓ. In addition, if the change of distance is relatively uniform no new temperature fluctuations are generated. It is therefore a different effect than the lensing or ISW effects which introduce additional CMB anisotropies. This paper shows that the accuracy of the CMB analysis can be impaired by the accuracy of calculation of the distance within the cosmological models. Since this effect has not been fully explored before, to test how the inhomogeneities affect the distance-redshift relation, several methods are examined: the Dyer-Roeder relation, lensing approximation, and non-linear Swiss-Cheese model. In all cases, the distance to the last scattering surface is different than when homogeneity is assumed. The difference can be as low as 1% and as high as 80%. A typical change of the distance is around 20-30%. Since the distance to the last scattering surface is set by the position of the CMB peaks, in order to have a good fit, the distance needs to be adjusted. After correcting the distance, the cosmological parameters change. Therefore, an improperly estimated distance to the last scattering surface can be a major source of systematics. This paper shows that if inhomogeneities are taken into account when calculating the distance then models with positive spatial curvature and with ΩΛ ~ 0.8-0.9 are preferred.
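    For reference, one common form of the Dyer-Roeder relation examined above, written for the angular diameter distance $D_A(z)$ with smoothness parameter $\alpha$ ($\alpha = 1$ recovers the homogeneous FLRW distance; $\alpha < 1$ describes beams propagating through underdense regions):

$$
\frac{d^{2}D_{A}}{dz^{2}}
+ \left(\frac{2}{1+z} + \frac{1}{H}\frac{dH}{dz}\right)\frac{dD_{A}}{dz}
+ \frac{3}{2}\,\alpha\,\Omega_{m}\,\frac{H_{0}^{2}(1+z)}{H^{2}(z)}\,D_{A} = 0,
\qquad
D_{A}(0)=0,\quad \left.\frac{dD_{A}}{dz}\right|_{0} = \frac{c}{H_{0}}.
$$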

  13. On-line Adaptive Radiation Treatment of Prostate Cancer

    DTIC Science & Technology

    2008-01-01

    novel imaging system using a linear x-ray source and a linear detector. This imaging system may significantly improve the quality of online images...yielded the Euclidean voxel distances inside the ROI. The two distance maps were combined with positive distances outside and negative distances inside...is reduced by 1 cm. IMRT is more sensitive to organ motion. Large discrepancies of bladder and rectum doses were observed compared to the actual

  14. DNA viewed as an out-of-equilibrium structure

    NASA Astrophysics Data System (ADS)

    Provata, A.; Nicolis, C.; Nicolis, G.

    2014-05-01

    The complexity of the primary structure of human DNA is explored using methods from nonequilibrium statistical mechanics, dynamical systems theory, and information theory. A collection of statistical analyses is performed on the DNA data and the results are compared with sequences derived from different stochastic processes. The use of χ² tests shows that DNA cannot be described as a low order Markov chain of order up to r =6. Although detailed balance seems to hold at the level of a binary alphabet, it fails when all four base pairs are considered, suggesting spatial asymmetry and irreversibility. Furthermore, the block entropy does not increase linearly with the block size, reflecting the long-range nature of the correlations in the human genomic sequences. To probe locally the spatial structure of the chain, we study the exit distances from a specific symbol, the distribution of recurrence distances, and the Hurst exponent, all of which show power law tails and long-range characteristics. These results suggest that human DNA can be viewed as a nonequilibrium structure maintained in its state through interactions with a constantly changing environment. Based solely on the exit distance distribution accounting for the nonequilibrium statistics and using the Monte Carlo rejection sampling method, we construct a model DNA sequence. This method allows us to keep both long- and short-range statistical characteristics of the native DNA data. The model sequence presents the same characteristic exponents as the natural DNA but fails to capture spatial correlations and point-to-point details.
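    The block-entropy diagnostic used above is easy to reproduce: estimate the Shannon entropy of overlapping length-n substrings and watch how it grows with n. For a memoryless random sequence H(n) grows linearly (H(n) = n·H(1)); long-range structure makes it grow sub-linearly. The sketch below demonstrates the extreme case of a periodic toy "genome", where H(n) saturates; the sequence is an illustration, not the paper's data:

```python
import math
from collections import Counter

def block_entropy(seq, n):
    """Shannon entropy (bits) of the overlapping length-n blocks of seq."""
    blocks = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log2(c / total) for c in blocks.values())

periodic = "ACGT" * 500  # toy sequence with maximal long-range order
h1 = block_entropy(periodic, 1)
h2 = block_entropy(periodic, 2)
h8 = block_entropy(periodic, 8)
```

For this periodic sequence H(1) ≈ 2 bits but H(n) stays at about 2 bits for every n, i.e. far below the linear n·H(1) growth of an i.i.d. sequence.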

  15. DNA viewed as an out-of-equilibrium structure.

    PubMed

    Provata, A; Nicolis, C; Nicolis, G

    2014-05-01

    The complexity of the primary structure of human DNA is explored using methods from nonequilibrium statistical mechanics, dynamical systems theory, and information theory. A collection of statistical analyses is performed on the DNA data and the results are compared with sequences derived from different stochastic processes. The use of χ² tests shows that DNA cannot be described as a low order Markov chain of order up to r=6. Although detailed balance seems to hold at the level of a binary alphabet, it fails when all four base pairs are considered, suggesting spatial asymmetry and irreversibility. Furthermore, the block entropy does not increase linearly with the block size, reflecting the long-range nature of the correlations in the human genomic sequences. To probe locally the spatial structure of the chain, we study the exit distances from a specific symbol, the distribution of recurrence distances, and the Hurst exponent, all of which show power law tails and long-range characteristics. These results suggest that human DNA can be viewed as a nonequilibrium structure maintained in its state through interactions with a constantly changing environment. Based solely on the exit distance distribution accounting for the nonequilibrium statistics and using the Monte Carlo rejection sampling method, we construct a model DNA sequence. This method allows us to keep both long- and short-range statistical characteristics of the native DNA data. The model sequence presents the same characteristic exponents as the natural DNA but fails to capture spatial correlations and point-to-point details.

  16. [A Study of the Relationship Among Genetic Distances, NIR Spectra Distances, and NIR-Based Identification Model Performance of the Seeds of Maize Inbred Lines].

    PubMed

    Liu, Xu; Jia, Shi-qiang; Wang, Chun-ying; Liu, Zhe; Gu, Jian-cheng; Zhai, Wei; Li, Shao-ming; Zhang, Xiao-dong; Zhu, De-hai; Huang, Hua-jun; An, Dong

    2015-09-01

    This paper explored the relationship among genetic distances, NIR spectra distances, and the performance of NIR-based identification models for the seeds of maize inbred lines. Using 3 groups (15 pairs in total) of maize inbred lines with differing genetic distances as experimental materials, we calculated the genetic distance between these seeds with SSR markers and used the Euclidean distance between the distribution centers of the maize NIR spectra in PCA space as the spectral distance. The BPR method was used to build the identification model of the inbred lines, and identification accuracy was used as the measure of model identification performance. The results showed that the correlation between genetic distance and spectral distance is 0.9868, and genetic distance has a correlation of 0.9110 with identification accuracy; both are highly correlated. This means the near-infrared spectra of seeds can reflect the genetic relationships of maize inbred lines. The smaller the genetic distance, the smaller the spectral distance and the poorer the ability of the model to discriminate. In practical applications, near-infrared spectroscopy has the potential to be used to analyze the genetic relationships of maize inbred lines, contributing much to genetic breeding, variety identification, purity sorting, and so on. Moreover, when creating an NIR-based identification model, the impact of maize inbred lines with close genetic relationships should be fully considered.

  17. Finite linear diffusion model for design of overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. The model has been experimentally verified using 1,1-prime-dimethylferrocene as a redox additive. The theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
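    The finite linear diffusion model above bounds the sustainable overcharge current by the steady-state diffusion-limited flux of the redox shuttle across the interelectrode gap, i = nFDc/d. A minimal sketch of that estimate; the concentration, diffusion coefficient, and gap width are hypothetical illustrative numbers, not the paper's measured values for dimethylferrocene:

```python
FARADAY = 96485.33212  # Faraday constant, C/mol

def limiting_current_density(n, D, c_bulk, d):
    """Steady-state diffusion-limited current density (A/m^2) for finite
    linear diffusion across an interelectrode gap d (m): i = n*F*D*c/d."""
    return n * FARADAY * D * c_bulk / d

# Hypothetical redox-shuttle numbers: one-electron couple, 0.01 M additive
# (10 mol/m^3), D = 1e-9 m^2/s, 100 µm interelectrode distance.
i_max = limiting_current_density(n=1, D=1e-9, c_bulk=10.0, d=100e-6)
```

Any overcharge current density above i_max cannot be carried by the shuttle at steady state, which is how the model sets the maximum permissible overcharge rate.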

  18. A spline-based non-linear diffeomorphism for multimodal prostate registration.

    PubMed

    Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice

    2012-08-01

    This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
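    The thin-plate spline machinery underlying the registration above reduces, in the interpolation case, to one linear solve: the kernel U(r) = r² log r plus an affine part, with side conditions that remove the affine ambiguity. A self-contained 2-D sketch (the control points and values are made up; the paper's actual system is the non-linear, regularized extension of this):

```python
import numpy as np

def fit_tps(points, values, reg=0.0):
    """Solve the 2-D thin-plate spline system [[K + reg*I, P], [P^T, 0]] [w; a]
    = [v; 0] with kernel U(r) = r^2 log r. reg > 0 gives the regularised
    (bending-energy-penalised) fit; reg = 0 interpolates exactly."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    K = K + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), pts])         # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([np.asarray(values, float), np.zeros(3)])
    return np.linalg.solve(A, rhs), pts

def eval_tps(model, xy):
    sol, pts = model
    n = len(pts)
    r = np.sqrt(((np.asarray(xy, float) - pts) ** 2).sum(-1))
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.where(r > 0, r**2 * np.log(r), 0.0)
    return float(sol[:n] @ u + sol[n] + sol[n + 1] * xy[0] + sol[n + 2] * xy[1])

ctrl = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
vals = [0.0, 1.0, 1.0, 2.0, 0.3]
model = fit_tps(ctrl, vals)
```

In registration, the same system is solved once per displacement component, and the regularization term plays the role of the bending energy mentioned in the abstract.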

  19. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions on the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. 
The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision of whether to (1) remove/retain the calibration data; (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
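    The contrast above between analytical Cook's distance and numerical case deletion can be shown directly for ordinary least squares, where the two computations agree exactly: Cook's D_i from leverages and residuals equals the (scaled) shift in fitted values when point i is deleted and the model refit. The synthetic data and injected outlier below are illustrative:

```python
import numpy as np

def cooks_distance(x, y):
    """Analytical Cook's distance for simple OLS:
    D_i = (e_i^2 / (p * s^2)) * h_ii / (1 - h_ii)^2."""
    X = np.column_stack([np.ones(len(x)), x])
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
    h = np.diag(H)
    e = y - H @ y                          # residuals
    s2 = (e @ e) / (n - p)
    return (e**2 / (p * s2)) * h / (1 - h) ** 2

def cooks_distance_case_deletion(x, y):
    """'Case deletion' version: refit without each point, measure the shift
    in all fitted values, scaled by p * s^2."""
    X = np.column_stack([np.ones(len(x)), x])
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    s2 = (e @ e) / (n - p)
    out = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        diff = X @ beta - X @ beta_i
        out[i] = (diff @ diff) / (p * s2)
    return out

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 30)
y[0] += 5.0  # inject one highly influential outlier
d_analytic = cooks_distance(x, y)
d_deletion = cooks_distance_case_deletion(x, y)
```

For non-linear hydrological models no such closed form exists, which is exactly why the case-deletion diagnostics in the study are so much more expensive.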

  20. Evaluation of accuracy of complete-arch multiple-unit abutment-level dental implant impressions using different impression and splinting materials.

    PubMed

    Buzayan, Muaiyed; Baig, Mirza Rustum; Yunus, Norsiah

    2013-01-01

    This in vitro study evaluated the accuracy of multiple-unit dental implant casts obtained from splinted or nonsplinted direct impression techniques using various splinting materials by comparing the casts to the reference models. The effect of two different impression materials on the accuracy of the implant casts was also evaluated for abutment-level impressions. A reference model with six internal-connection implant replicas placed in the completely edentulous mandibular arch and connected to multi-base abutments was fabricated from heat-curing acrylic resin. Forty impressions of the reference model were made, 20 each with polyether (PE) and polyvinylsiloxane (PVS) impression materials using the open tray technique. The PE and PVS groups were further subdivided into four subgroups of five each on the bases of splinting type: no splinting, bite registration PE, bite registration addition silicone, or autopolymerizing acrylic resin. The positional accuracy of the implant replica heads was measured on the poured casts using a coordinate measuring machine to assess linear differences in interimplant distances in all three axes. The collected data (linear and three-dimensional [3D] displacement values) were compared with the measurements calculated on the reference resin model and analyzed with nonparametric tests (Kruskal-Wallis and Mann-Whitney). No significant differences were found between the various splinting groups for both PE and PVS impression materials in terms of linear and 3D distortions. However, small but significant differences were found between the two impression materials (PVS, 91 μm; PE, 103 μm) in terms of 3D discrepancies, irrespective of the splinting technique employed. Casts obtained from both impression materials exhibited differences from the reference model. The impression material influenced impression inaccuracy more than the splinting material for multiple-unit abutment-level impressions.

  1. Red Shifts and Existing Speculations

    NASA Astrophysics Data System (ADS)

    Aisenberg, Sol

    2009-03-01

    There are many current flaws, mysteries, and errors in the standard model of the universe - all based upon speculative interpretation of many excellent and verified observations. The most serious cause of some errors is the speculation about the meaning of the redshifts observed in the 1930s by Hubble. He ascribed the redshifts to "an apparent Doppler effect". This led to speculation that the remote stars were receding, and the universe was expanding -- although without observational proof of the actual receding velocity of the stars. The age of the universe, based upon the Hubble constant, is pure speculation because of the lack of a velocity demonstration. The belief in expansion, the big bang, and of inflation should be reexamined. Also, the redshift cannot always be used as a distance measure, particularly for photons from quasars containing massive black holes that can reduce photon energy through gravitational attraction. If the linear Hubble constant is extrapolated to the most remote supernovae and beyond, it would eventually require that the corresponding photon energy go to zero or become negative -- according to the linear Hubble relationship. This should require a reexamination of the meaning of the red shift and the speculative consequences and give a model with fewer mysteries.

  2. Multi-target detection and positioning in crowds using multiple camera surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng

    2018-04-01

    In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. We verified the algorithm in experiments with multiple cameras, which showed that it has high accuracy and robustness.
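    The permutation-matrix correspondence step described above reduces, once a pairwise cost is fixed, to a linear assignment problem whose LP relaxation has an integral optimum. Below is a minimal sketch with made-up cost terms; the line-of-sight distances and grayscale differences are random stand-ins, not the paper's actual constraint values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hedged sketch: correspondence as a linear assignment problem. The cost
# terms (line-of-sight distance, grayscale difference) are illustrative
# stand-ins for the paper's multiple constraints.
rng = np.random.default_rng(0)
n = 4
sight_dist = rng.random((n, n))   # distance between lines of sight (assumed)
gray_diff = rng.random((n, n))    # grayscale difference between pixels (assumed)
cost = 0.7 * sight_dist + 0.3 * gray_diff   # weighted multi-constraint objective

# Minimising trace(P^T C) over permutation matrices P is the linear
# assignment problem; its LP relaxation has an integral optimal solution.
rows, cols = linear_sum_assignment(cost)
P = np.zeros((n, n))
P[rows, cols] = 1.0
print(P.sum(axis=0), P.sum(axis=1))   # a valid permutation: all ones
```

Here `linear_sum_assignment` solves the assignment LP exactly; a general-purpose LP solver over doubly stochastic matrices would return the same permutation.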

  3. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru

    2015-10-28

    An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  4. A model for compression-weakening materials and the elastic fields due to contractile cells

    NASA Astrophysics Data System (ADS)

    Rosakis, Phoebus; Notbohm, Jacob; Ravichandran, Guruswami

    2015-12-01

    We construct a homogeneous, nonlinear elastic constitutive law that models aspects of the mechanical behavior of inhomogeneous fibrin networks. Fibers in such networks buckle when in compression. We model this as a loss of stiffness in compression in the stress-strain relations of the homogeneous constitutive model. Problems that model a contracting biological cell in a finite matrix are solved. It is found that matrix displacements and stresses induced by cell contraction decay slower (with distance from the cell) in a compression weakening material than linear elasticity would predict. This points toward a mechanism for long-range cell mechanosensing. In contrast, an expanding cell would induce displacements that decay faster than in a linear elastic matrix.

  5. The impact of precipitation on land interfacility transport times.

    PubMed

    Giang, Wayne C W; Donmez, Birsen; Ahghari, Mahvareh; MacDonald, Russell D

    2014-12-01

    Timely transfer of patients among facilities within a regionalized critical-care system remains a large obstacle to effective patient care. For medical transport systems where dispatchers are responsible for planning these interfacility transfers, accurate estimates of interfacility transfer times play a large role in planning and resource-allocation decisions. However, the impact of adverse weather conditions on transfer times is not well understood. Precipitation negatively impacts driving conditions and can decrease free-flow speeds and increase travel times. The objective of this research was to quantify and model the effects of different precipitation types on land travel times for interfacility patient transfers. It was hypothesized that the effects of precipitation would accumulate as the distance of the transfer increased, and they would differ based on the type of precipitation. Urgent and emergent interfacility transfers carried out by the medical transport system in Ontario from 2005 through 2011 were linked to Environment Canada's (Gatineau, Quebec, Canada) climate data. Two linear models were built to estimate travel times based on precipitation type and driving distance: one for transfers between cities (intercity) and another for transfers within a city (intracity). Precipitation affected both transfer types. For intercity transfers, the magnitude of the delays increased as driving distance increased. For median-distance intercity transfers (48 km), snow produced delays of approximately 9.1% (3.1 minutes), while rain produced delays of 8.4% (2.9 minutes). For intracity transfers, the magnitude of delays attributed to precipitation did not depend on distance driven. Transfers in rain were 8.6% longer (1.7 minutes) compared to no precipitation, whereas only statistically marginal effects were observed for snow. Precipitation increases the duration of interfacility land ambulance travel times by eight percent to ten percent. 
For transfers between cities, snow is associated with the longest delays (versus rain), but for transfers within a single city, rain is associated with the longest delays.
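    A minimal sketch of the intercity model described above, with a distance-by-precipitation interaction so that delays accumulate with driving distance. All data and coefficients below are synthetic assumptions, not the study's Ontario transfer records:

```python
import numpy as np

# Hedged sketch of an intercity travel-time model: time ~ distance with a
# distance-by-precipitation interaction. Coefficients and data are synthetic.
rng = np.random.default_rng(1)
n = 200
dist_km = rng.uniform(5, 150, n)
snow = rng.integers(0, 2, n)                 # 1 if snowing during transfer
# "true" model: base pace plus a ~9% slowdown in snow (assumed numbers)
time_min = 1.2 * dist_km * (1 + 0.09 * snow) + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), dist_km, dist_km * snow])
beta, *_ = np.linalg.lstsq(X, time_min, rcond=None)
pct_delay = beta[2] / beta[1]                # relative delay per km in snow
print(f"estimated snow delay: {pct_delay:.1%}")
```

The interaction coefficient divided by the base distance coefficient recovers the relative delay, analogous to the roughly 9% snow delay reported for median-distance intercity transfers.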

  6. Estimating exercise capacity from walking tests in elderly individuals with stable coronary artery disease.

    PubMed

    Mandic, Sandra; Walker, Robert; Stevens, Emily; Nye, Edwin R; Body, Dianne; Barclay, Leanne; Williams, Michael J A

    2013-01-01

    Compared with symptom-limited cardiopulmonary exercise test (CPET), timed walking tests are a cheaper, well-tolerated and simpler alternative for assessing exercise capacity in coronary artery disease (CAD) patients. We developed multivariate models for predicting peak oxygen consumption (VO2peak) from 6-minute walk test (6MWT) distance and peak shuttle walk speed for elderly stable CAD patients. Fifty-eight CAD patients (72 SD 6 years, 66% men) completed: (1) CPET with expired gas analysis on a cycle ergometer, (2) incremental 10-meter shuttle walk test, (3) two 6MWTs, (4) anthropometric assessment and (5) 30-second chair stands. Linear regression models were developed for estimating VO2peak from 6MWT distance and peak shuttle walk speed as well as demographic, anthropometric and functional variables. Measured VO2peak was significantly related to 6MWT distance (r = 0.719, p < 0.001) and peak shuttle walk speed (r = 0.717, p < 0.001). The addition of demographic (age, gender), anthropometric (height, weight, body mass index, body composition) and functional characteristics (30-second chair stands) increased the accuracy of predicting VO2peak from both 6MWT distance and peak shuttle walk speed (from 51% to 73% of VO2peak variance explained). Addition of demographic, anthropometric and functional characteristics improves the accuracy of VO2peak estimates based on walking tests in elderly individuals with stable CAD. Implications for Rehabilitation Timed walking tests are a cheaper, well-tolerated and simpler alternative for assessing exercise capacity in cardiac patients. Walking tests could be used to assess an individual's functional capacity and response to therapeutic interventions when symptom-limited cardiopulmonary exercise testing is not practical or not necessary for clinical reasons. 
Addition of demographic, anthropometric and functional characteristics improves the accuracy of peak oxygen consumption estimate based on 6-minute walk test distance and peak shuttle walk speed in elderly patients with coronary artery disease.
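    The prediction step above can be sketched as a multivariate linear regression of VO2peak on 6MWT distance plus demographic and anthropometric covariates. Subjects and coefficients below are invented for illustration; they are not the study's data:

```python
import numpy as np

# Hedged sketch: multivariate linear model predicting VO2peak from walk-test
# distance plus age and body mass. All numbers are synthetic assumptions.
rng = np.random.default_rng(2)
n = 58
walk_m = rng.uniform(300, 650, n)    # 6MWT distance (m)
age = rng.uniform(65, 85, n)         # years
mass = rng.uniform(55, 100, n)       # kg
vo2 = 4.0 + 0.03 * walk_m - 0.05 * age - 0.02 * mass + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), walk_m, age, mass])
beta, *_ = np.linalg.lstsq(X, vo2, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((vo2 - pred) ** 2) / np.sum((vo2 - vo2.mean()) ** 2)
print(f"R^2 = {r2:.2f}")             # variance explained by the model
```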

  7. Normative biometrics for fetal ocular growth using volumetric MRI reconstruction.

    PubMed

    Velasco-Annis, Clemente; Gholipour, Ali; Afacan, Onur; Prabhu, Sanjay P; Estroff, Judy A; Warfield, Simon K

    2015-04-01

    To determine normative ranges for fetal ocular biometrics between 19 and 38 weeks gestational age (GA) using volumetric MRI reconstruction. The 3D images of 114 healthy fetuses between 19 and 38 weeks GA were created using super-resolution volume reconstructions from MRI slice acquisitions. These 3D images were semi-automatically segmented to measure fetal orbit volume, binocular distance (BOD), interocular distance (IOD), and ocular diameter (OD). All biometry correlated with GA (Volume, Pearson's correlation coefficient (CC) = 0.9680; BOD, CC = 0.9552; OD, CC = 0.9445; and IOD, CC = 0.8429), and growth curves were plotted against linear and quadratic growth models. Regression analysis showed quadratic models to best fit BOD, IOD, and OD and a linear model to best fit volume. Orbital volume had the greatest correlation with GA, although BOD and OD also showed strong correlation. The normative data found in this study may be helpful for the detection of congenital fetal anomalies with more consistent measurements than are currently available. © 2015 John Wiley & Sons, Ltd.
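    The model-selection step above (linear versus quadratic growth curves) can be sketched with ordinary least squares and an information criterion. The gestational-age range matches the abstract, but the growth curve and noise level are assumptions:

```python
import numpy as np

# Hedged sketch: choosing between linear and quadratic growth curves by
# least squares and AIC. The "true" quadratic curve below is an assumption.
rng = np.random.default_rng(3)
ga = np.linspace(19, 38, 114)                  # gestational age (weeks)
bod = -10 + 3.0 * ga - 0.03 * ga**2 + rng.normal(0, 0.5, 114)

def aic(y, yhat, k):
    # Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

lin = np.polyval(np.polyfit(ga, bod, 1), ga)
quad = np.polyval(np.polyfit(ga, bod, 2), ga)
print(aic(bod, lin, 2), aic(bod, quad, 3))     # lower AIC wins
```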

  8. Importance of dispersal routes that minimize open-ocean movement to the genetic structure of island populations.

    PubMed

    Harradine, E L; Andrew, M E; Thomas, J W; How, R A; Schmitt, L H; Spencer, P B S

    2015-12-01

    Islands present a unique scenario in conservation biology, offering refuge yet imposing limitations on insular populations. The Kimberley region of northwestern Australia has more than 2500 islands that have recently come into focus as substantial conservation resources. It is therefore of great interest for managers to understand the driving forces of genetic structure of species within these island archipelagos. We used the ubiquitous bar-shouldered skink (Ctenotus inornatus) as a model species to represent the influence of landscape factors on genetic structure across the Kimberley islands. On 41 islands and 4 mainland locations in a remote area of Australia, we genotyped individuals across 18 nuclear (microsatellite) markers. Measures of genetic differentiation and diversity were used in two complementary analyses. We used circuit theory and Mantel tests to examine the influence of the landscape matrix on population connectivity and linear regression and model selection based on Akaike's information criterion to investigate landscape controls on genetic diversity. Genetic differentiation between islands was best predicted with circuit-theory models that accounted for the large difference in resistance to dispersal between land and ocean. In contrast, straight-line distances were unrelated to either resistance distances or genetic differentiation. Instead, connectivity was determined by island-hopping routes that allow organisms to minimize the distance of difficult ocean passages. Island populations of C. inornatus retained varying degrees of genetic diversity (NA = 1.83 - 7.39), but it was greatest on islands closer to the mainland, in terms of resistance-distance units. In contrast, genetic diversity was unrelated to island size. 
Our results highlight the potential for islands to contribute to both theoretical and applied conservation, provide strong evidence of the driving forces of population structure within undisturbed landscapes, and identify the islands most valuable for conservation based on their contributions to gene flow and genetic diversity. © 2015 Society for Conservation Biology.

  9. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. Next, the stochastic problem is formulated, where uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to estimate the collision time.
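    With an L(infinity) norm, the minimum distance between two convex polyhedra {x : A1 x <= b1} and {x : A2 x <= b2} becomes a linear program, as in the paper. A hedged sketch follows; the polyhedra below are toy boxes, and the solver is SciPy's generic `linprog`, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: minimum L-infinity distance between two convex polyhedra
# {x : A1 x <= b1} and {y : A2 y <= b2} as a linear program. Variables are
# z = (x, y, t); we minimise t subject to |x_i - y_i| <= t for every i.
def linf_distance(A1, b1, A2, b2):
    dim = A1.shape[1]
    c = np.zeros(2 * dim + 1)
    c[-1] = 1.0                                  # objective: minimise t
    rows = []
    for i in range(dim):                         # linearise |x_i - y_i| <= t
        r1 = np.zeros(2 * dim + 1); r1[i] = 1.0;  r1[dim + i] = -1.0; r1[-1] = -1.0
        r2 = np.zeros(2 * dim + 1); r2[i] = -1.0; r2[dim + i] = 1.0;  r2[-1] = -1.0
        rows += [r1, r2]
    A_ub = np.vstack([
        np.hstack([A1, np.zeros((A1.shape[0], dim + 1))]),   # x inside set 1
        np.hstack([np.zeros((A2.shape[0], dim)), A2,
                   np.zeros((A2.shape[0], 1))]),             # y inside set 2
        np.array(rows),
    ])
    b_ub = np.concatenate([b1, b2, np.zeros(2 * dim)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (2 * dim + 1))
    return res.fun

# Unit box [0,1]^2 versus the box [3,4]x[0,1]: L-infinity distance is 2.
A_box = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
d = linf_distance(A_box, np.array([1, 0, 1, 0], float),
                  A_box, np.array([4, -3, 1, 0], float))
print(round(d, 6))
```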

  10. Unpacking the public stigma of problem gambling: The process of stigma creation and predictors of social distancing.

    PubMed

    Hing, Nerilee; Russell, Alex M T; Gainsbury, Sally M

    2016-09-01

    Background and aims Public stigma diminishes the health of stigmatized populations, so it is critical to understand how and why stigma occurs to inform stigma reduction measures. This study aimed to examine stigmatizing attitudes held toward people experiencing problem gambling, to examine whether specific elements co-occur to create this public stigma, and to model explanatory variables of this public stigma. Methods An online panel of adults from Victoria, Australia (N = 2,000) was surveyed. Measures were based on a vignette for problem gambling and included demographics, gambling behavior, perceived dimensions of problem gambling, stereotyping, social distancing, emotional reactions, and perceived devaluation and discrimination. A hierarchical linear regression was conducted. Results People with gambling problems attracted substantial negative stereotypes, social distancing, emotional reactions, and status loss/discrimination. These elements were associated with desired social distance, as was the perception that problem gambling is caused by bad character, and is perilous, non-recoverable, and disruptive. Level of contact with problem gambling, gambling involvement, and some demographic variables were significantly associated with social distance, but they explained little additional variance. Discussion and conclusions This study contributes to the understanding of how and why people experiencing gambling problems are stigmatized. Results suggest the need to increase public contact with such people, avoid perpetuation of stereotypes in media and public health communications, and reduce devaluing and discriminating attitudes and behaviors.

  11. Unpacking the public stigma of problem gambling: The process of stigma creation and predictors of social distancing

    PubMed Central

    Hing, Nerilee; Russell, Alex M. T.; Gainsbury, Sally M.

    2016-01-01

    Background and aims Public stigma diminishes the health of stigmatized populations, so it is critical to understand how and why stigma occurs to inform stigma reduction measures. This study aimed to examine stigmatizing attitudes held toward people experiencing problem gambling, to examine whether specific elements co-occur to create this public stigma, and to model explanatory variables of this public stigma. Methods An online panel of adults from Victoria, Australia (N = 2,000) was surveyed. Measures were based on a vignette for problem gambling and included demographics, gambling behavior, perceived dimensions of problem gambling, stereotyping, social distancing, emotional reactions, and perceived devaluation and discrimination. A hierarchical linear regression was conducted. Results People with gambling problems attracted substantial negative stereotypes, social distancing, emotional reactions, and status loss/discrimination. These elements were associated with desired social distance, as was the perception that problem gambling is caused by bad character, and is perilous, non-recoverable, and disruptive. Level of contact with problem gambling, gambling involvement, and some demographic variables were significantly associated with social distance, but they explained little additional variance. Discussion and conclusions This study contributes to the understanding of how and why people experiencing gambling problems are stigmatized. Results suggest the need to increase public contact with such people, avoid perpetuation of stereotypes in media and public health communications, and reduce devaluing and discriminating attitudes and behaviors. PMID:27513611

  12. Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects

    PubMed Central

    Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.

    2015-01-01

    The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve, particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system, holding certain pivotal quantities such as the information matrix constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components, as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation-based procedure is suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
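    For intuition, the ordinary least squares special case admits exact deletion diagnostics with no refitting; the GLMM procedure in the paper generalises this to variance parameters as well. A sketch on synthetic data with one planted influential observation:

```python
import numpy as np

# Hedged sketch: exact deletion diagnostics for ordinary least squares, the
# simplest special case of the paper's GLMM procedure. Cook's distance comes
# from leverages with no refitting; the data are synthetic assumptions.
rng = np.random.default_rng(4)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
i0 = int(np.argmax(np.abs(X[:, 1] - X[:, 1].mean())))   # highest-leverage point
y[i0] += 5.0                                            # plant an influential outlier

H = X @ np.linalg.inv(X.T @ X) @ X.T                    # hat (projection) matrix
h = np.diag(H)                                          # leverages; sum equals p
resid = y - H @ y
s2 = resid @ resid / (n - p)                            # residual variance estimate
cooks = resid ** 2 / (p * s2) * h / (1 - h) ** 2        # Cook's distance per obs
print(int(np.argmax(cooks)) == i0)                      # outlier is flagged
```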

  13. The Baade-Wesselink projection factor of the δ-Scuti stars AI Vel and β Cas

    NASA Astrophysics Data System (ADS)

    Guiglion, G.; Nardetto, N.; Domiciano de Souza, A.; Mathias, P.; Mourard, D.; Poretti, E.

    2012-12-01

    The Baade-Wesselink method of distance determination is based on the oscillations of pulsating stars. After determining the angular diameter and the linear radius variations, the distance is derived from a simple ratio. The linear radius variation is measured by integrating the pulsation velocity (hereafter V_{puls}) over one pulsation cycle. However, observations give access only to the radial velocity (V_{rad}) because of the projection along the line of sight. The projection factor, used to convert the radial velocity into the pulsation velocity, is defined by p = V_{puls} / V_{rad}. We aim to derive the projection factor for two δ-Scuti stars, the high-amplitude pulsator AI Vel and the fast rotator β Cas. The geometric component of the projection factor is derived using a limb-darkening model of the intensity distribution for AI Vel, and a fast-rotator model for β Cas. Then, by comparing the radial velocity curves of several spectral lines forming at different levels in the atmosphere, we directly derive the velocity gradient in a part of the atmosphere of the star, using SOPHIE/OHP data for β Cas and HARPS/ESO data for AI Vel; this gradient is used to derive a dynamical projection factor for both stars. We find p = 1.44 ± 0.05 for AI Vel and p = 1.41 ± 0.25 for β Cas. By comparing Cepheids and δ-Scuti stars, these results bring valuable insights into the dynamical structure of pulsating star atmospheres.
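    Numerically, the Baade-Wesselink chain is: scale the radial velocity by p, integrate to get the linear radius variation, and divide by the angular-diameter variation. The sketch below uses the AI Vel p-factor from the abstract but an entirely synthetic velocity curve and an assumed 100 pc distance, which the ratio then recovers:

```python
import numpy as np

# Hedged numerical sketch of the Baade-Wesselink idea. Only p = 1.44 (AI Vel)
# comes from the abstract; the velocity curve, mean radius, and 100 pc
# distance are invented assumptions.
PC_KM = 3.086e13                       # kilometres per parsec
p = 1.44                               # projection factor
t = np.linspace(0.0, 1.0, 400)         # pulsation phase over one cycle (days)
v_rad = -20.0 * np.sin(2 * np.pi * t)  # synthetic radial velocity (km/s)
v_puls = p * v_rad                     # V_puls = p * V_rad
dt_s = (t[1] - t[0]) * 86400.0         # phase step in seconds
delta_R = np.cumsum(v_puls) * dt_s     # linear radius variation (km), crude integral

d_true = 100.0 * PC_KM                       # pretend the star is at 100 pc
theta = 2.0 * (7e5 + delta_R) / d_true       # angular diameter (radians)
# distance from the ratio of linear to angular variations
d_est = 2.0 * (delta_R.max() - delta_R.min()) / (theta.max() - theta.min())
print(d_est / PC_KM)                         # recovers the assumed 100 pc
```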

  14. Emergent properties of gene evolution: Species as attractors in phenotypic space

    NASA Astrophysics Data System (ADS)

    Reuveni, Eli; Giuliani, Alessandro

    2012-02-01

    The question of how the observed discrete character of the phenotype emerges from a continuous genetic distance metric is the core argument between two contrasting evolutionary theories: punctuated equilibrium (stable evolution scattered with saltations in the phenotype) and phyletic gradualism (smooth and linear evolution of the phenotype). Identifying phenotypic saltation at the molecular level is critical to support the first model of evolution. We used DNA sequences of ∼1300 genes from 6 isolated populations of the budding yeast Saccharomyces cerevisiae. We demonstrate that while the genetic distance shows a continuum between lineages, with no evidence of discrete states, the phenotypic space exhibits only two (discrete) possible states that can be associated with a saltation of the species phenotype. The fact that such a saltation spans a large fraction of the genome while the genetic distance remains continuous is proof of the concept that the genotype-phenotype relation is not univocal, and may have severe implications when looking for disease-related genes and mutations. We used this finding, with analogy to attractor-like dynamics, to show that punctuated equilibrium can be explained in the framework of non-linear dynamical systems.

  15. Study of the observational compatibility of an inhomogeneous cosmology with linear expansion according to SNe Ia

    NASA Astrophysics Data System (ADS)

    Monjo, R.

    2017-11-01

    Most current cosmological theories are built by combining an isotropic and homogeneous manifold with a scale factor that depends on time. If one supposes a hyperconical universe with linear expansion, an inhomogeneous metric can be obtained by an appropriate transformation that preserves the proper time. This model locally tends to a flat Friedmann-Robertson-Walker metric with linear expansion. The objective of this work is to analyze the observational compatibility of the inhomogeneous metric considered. For this purpose, the corresponding luminosity distance was obtained and compared with the observations of 580 SNe Ia taken from the Supernova Cosmology Project. The best fit of the hyperconical model obtains χ_0^2 = 562, the same value as the standard ΛCDM model. Finally, a possible relationship is found between the two theories.

  16. Forecasting stochastic neural network based on financial empirical mode decomposition.

    PubMed

    Wang, Jie; Wang, Jun

    2017-06-01

    In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression is used to assess the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing its predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results on real stock index series, and the empirical results show that the proposed model indeed displays a good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.
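    The evaluation metric named above builds on the complexity-invariant distance (CID); here is a base-form sketch (the paper's q-order multiscale variant adds further structure, which is not reproduced):

```python
import numpy as np

# Hedged sketch of the base complexity-invariant distance: Euclidean
# distance rescaled by the ratio of series "complexities", i.e. the lengths
# of the series when stretched into straight lines.
def complexity(x):
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(x, y):
    ce_x, ce_y = complexity(x), complexity(y)
    ed = np.sqrt(np.sum((x - y) ** 2))            # plain Euclidean distance
    return ed * max(ce_x, ce_y) / min(ce_x, ce_y)

a = np.array([0.0, 2.0, 0.0, 2.0])   # oscillatory ("complex") series
b = np.array([0.0, 1.0, 2.0, 3.0])   # smooth ramp
print(cid(a, b))                      # 2*sqrt(6): twice the Euclidean distance
```

The correction factor penalises matching a jagged series to a smooth one, which plain Euclidean distance ignores.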

  17. CORS BAADE-WESSELINK DISTANCE TO THE LMC NGC 1866 BLUE POPULOUS CLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molinaro, R.; Ripepi, V.; Marconi, M.

    2012-03-20

    We used optical and near-infrared photometry and radial velocity data for a sample of 11 Cepheids belonging to the young LMC blue populous cluster NGC 1866 to estimate their radii and distances on the basis of the CORS Baade-Wesselink method. This technique, based on an accurate calibration of surface brightness as a function of (U - B), (V - K) colors, allows us to estimate, simultaneously, the linear radius and the angular diameter of Cepheid variables, and consequently to derive their distance. A rigorous error estimate on radii and distances was derived by using Monte Carlo simulations. Our analysis gives a distance modulus for NGC 1866 of 18.51 ± 0.03 mag, which is in agreement with several independent results.

  18. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms mainly depend on the spectral information of image objects and fail to effectively mine and fuse the multiple features available in the imagery. Borrowing ideas from object-oriented analysis, this article proposes a change detection algorithm for remote sensing images based on the fusion of multiple features. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient histogram are calculated for each object. The Earth Mover's Distance (EMD) operator is used to measure the color distance and the edge line-feature distance between corresponding objects at different dates, and an adaptive weighting method combines the two distances to construct the object heterogeneity. Finally, change detection results are obtained by analyzing the curvature histogram of the image objects. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of the change detection.
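    The EMD between two object histograms, used above for the color-distance term, can be sketched with SciPy's 1-D Wasserstein distance. The 8-bin grey-level histograms below are toy assumptions:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hedged sketch: Earth Mover's Distance between the colour histograms of an
# image object at two dates. The histograms are invented for illustration.
bins = np.arange(8, dtype=float)
hist_t1 = np.array([0.0, 0.1, 0.4, 0.3, 0.2, 0.0, 0.0, 0.0])
hist_t2 = np.array([0.0, 0.0, 0.0, 0.2, 0.3, 0.4, 0.1, 0.0])

# In 1-D, EMD equals the area between the two cumulative histograms.
emd = wasserstein_distance(bins, bins, hist_t1, hist_t2)
print(round(emd, 3))
```

A large EMD between the two dates signals a likely change in the object; the adaptive weighting in the paper then mixes this with the edge-feature distance.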

  19. A fundamental study revisited: Quantitative evidence for territory quality in oystercatchers (Haematopus ostralegus) using GPS data loggers.

    PubMed

    Schwemmer, Philipp; Weiel, Stefan; Garthe, Stefan

    2017-01-01

    A fundamental study by Ens et al. (1992, Journal of Animal Ecology, 61, 703) developed the concept of two different nest-territory qualities in Eurasian oystercatchers ( Haematopus ostralegus , L.), resulting in different reproductive successes. "Resident" oystercatchers use breeding territories close to the high-tide line and occupy adjacent foraging territories on mudflats. "Leapfrog" oystercatchers breed further away from their foraging territories. In accordance with this concept, we hypothesized that both foraging trip duration and trip distance from the high-tide line to the foraging territory would be linearly related to distance between the nest site and the high tide line. We also expected tidal stage and time of day to affect this relationship. The former study used visual observations of marked oystercatchers, which could not be permanently tracked. This concept model can now be tested using miniaturized GPS devices able to record data at high temporal and spatial resolutions. Twenty-nine oystercatchers from two study sites were equipped with GPS devices during the incubation periods (however, not during chick rearing) over 3 years, providing data for 548 foraging trips. Trip distances from the high-tide line were related to distance between the nest and high-tide line. Tidal stage and time of day were included in a mixed model. Foraging trip distance, but not duration (which was likely more impacted by intake rate), increased with increasing distance between the nest and high-tide line. There was a site-specific effect of tidal stage on both trip parameters. Foraging trip duration, but not distance, was significantly longer during the hours of darkness. Our findings support and additionally quantify the previously developed concept. Furthermore, rather than separating breeding territory quality into two discrete classes, this classification should be extended by the linear relationship between nest-site and foraging location. 
Finally, oystercatchers' foraging territories overlapped strongly in areas of high food abundance.

  20. Analysis for delamination initiation in postbuckled dropped-ply laminates

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Johnson, Eric R.

    1992-01-01

    The compression strength of dropped-ply, graphite-epoxy laminated plates for the delamination mode of failure is studied by analysis and corroborated with experiments. The nonlinear response of the test specimens is modeled by a geometrically nonlinear finite element analysis. The methodology for predicting delamination is based on a quadratic interlaminar stress criterion evaluated at a characteristic distance from the ply drop-off. The compression strength of specimens exhibiting a linear response is greater than the compression strength of specimens with the same layup exhibiting a geometrically nonlinear response. The analyses for both linear and nonlinear response show that severe interlaminar stress gradients occur in the interfaces at the drop-off because of the thickness/stiffness discontinuity. However, these interlaminar stress distributions are altered in the geometrically nonlinear response such that, with increasing load, their growth at the center of the laminate is retarded while their growth near the unloaded supported edge is increased.

  1. A model of urban scaling laws based on distance dependent interactions

    PubMed Central

    Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.

    2017-01-01

    Socio-economic properties of a city grow faster than linearly with its population (a slope greater than one in a log-log plot), the so-called superlinear scaling. Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to a sublinear scaling of these variables. In this work, we propose a simple explanation for those scaling laws in cities based on the interaction range between the citizens and on the fractal properties of the cities. To this purpose, we introduced a measure of social potential which captured the influence of social interaction on economic performance, and the benefits of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We addressed how public policies could take advantage of these properties to improve city development, minimizing negative effects. Furthermore, the model predicts that the scaling exponents of socio-economic and infrastructure variables sum to 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence. PMID:28405381
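    The scaling exponents are estimated as slopes of log-log fits, and the model's prediction that the socio-economic and infrastructure exponents sum to 2 can be checked directly. City data below are synthetic, with assumed exponents 1.15 and 0.85:

```python
import numpy as np

# Hedged sketch: scaling exponents beta from Y = c * N^beta via linear fits
# in log-log space, plus a check that the two exponents sum to 2. Cities and
# exponents are synthetic assumptions.
rng = np.random.default_rng(5)
N = np.logspace(4, 7, 50)                               # city populations
gdp = 2.0 * N ** 1.15 * rng.lognormal(0.0, 0.05, 50)    # superlinear variable
roads = 5.0 * N ** 0.85 * rng.lognormal(0.0, 0.05, 50)  # sublinear variable

beta_gdp = np.polyfit(np.log(N), np.log(gdp), 1)[0]
beta_roads = np.polyfit(np.log(N), np.log(roads), 1)[0]
print(round(beta_gdp + beta_roads, 2))                  # close to 2
```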

  2. Landslide susceptibility mapping for a landslide-prone area (Findikli, NE of Turkey) by likelihood-frequency ratio and weighted linear combination models

    NASA Astrophysics Data System (ADS)

    Akgun, Aykut; Dag, Serhat; Bulut, Fikri

    2008-05-01

    Landslides are a very common natural problem in the Black Sea Region of Turkey owing to its steep topography, improper land use and climatic conditions favourable to landsliding. In the western part of the region, many studies of landslide susceptibility mapping have been carried out, especially in the last decade, using evaluation methods such as deterministic approaches, landslide distribution, and qualitative, statistical and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) in the eastern part of the Black Sea Region of Turkey using a likelihood-frequency ratio (LRM) model and a weighted linear combination (WLC) model, and to compare the results obtained. For this purpose, landslide inventory maps of the area were prepared for the years 1983 and 1995 from detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads and the land cover of the study area were considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and WLC models are relatively minor when broad-based classifications are taken into account. However, the WLC map shows more detail, whereas the LRM map produces weaker results because the majority of its pixels have higher values than those of the WLC-derived susceptibility map. To validate the two susceptibility maps, both were compared with the landslide inventory map. Although no landslides fall in the very high susceptibility class of either map, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map, against 49% for the LRM map. This shows that the WLC model performed better than the LRM model.
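
A weighted linear combination susceptibility index is simply a weighted sum of normalized conditioning-factor scores evaluated per map pixel. A minimal sketch with entirely hypothetical scores and weights (not the paper's values) for the six parameters listed above:

```python
def wlc_susceptibility(scores, weights):
    """Weighted linear combination: sum of weight * normalized factor score (0..1)."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to one
    return sum(w * s for w, s in zip(weights, scores))

# hypothetical normalized scores for one pixel, in the order:
# slope angle, slope aspect, lithology, dist. to drainage, dist. to roads, land cover
scores  = [0.8, 0.4, 0.9, 0.6, 0.3, 0.7]
weights = [0.30, 0.10, 0.25, 0.15, 0.05, 0.15]
index = wlc_susceptibility(scores, weights)
print(round(index, 3))  # 0.715
```

Applying this to every pixel and binning the resulting index (e.g. low / moderate / high / very high) yields a susceptibility map of the kind compared in the study.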

  3. Output from Linear Generator for VIV-driven Buoys

    DTIC Science & Technology

    2014-09-01

    demonstration of VIV-based energy harvesting was accomplished by Bernitsas of Vortex Hydro Energy and their Vortex Induced Vibration Aquatic Clean Energy...the total actuation distance of the force input device was limited to 2.75 inches, a lever arm amplified the stroke input by 4.93X to raise the...magnet plunger ±6.78 inches above and below the horizontal axis (total 13.56-inch stroke distance). The magnet plunger served to drive two 2-inch

  4. A miniature shoe-mounted orientation determination system for accurate indoor heading and trajectory tracking.

    PubMed

    Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Liu, Sheng

    2016-06-01

    Tracking the position of a pedestrian is in strong demand when the commonly used GPS (Global Positioning System) is unavailable. Benefiting from their small size, low power consumption, and relatively high reliability, micro-electro-mechanical system sensors are well suited for GPS-denied indoor pedestrian heading estimation. In this paper, a real-time miniature orientation determination system (MODS) was developed for indoor heading and trajectory tracking based on a novel dual-linear Kalman filter. The proposed filter precludes the impact of geomagnetic distortions on the pitch and roll to which the heading is subjected. A robust calibration approach based on a unified sensor model was designed to improve the accuracy of the sensor measurements. Online tests were performed on the MODS with an improved turntable. The results demonstrate that the average RMSE (root-mean-square error) of heading estimation is less than 1°. Indoor heading experiments were carried out with the MODS mounted on the shoe of a pedestrian. In addition, we integrated the existing MODS into an indoor pedestrian dead-reckoning application as an example of its utility in realistic actions. A human attitude-based walking model was developed to calculate the walking distance. Test results indicate that the mean percentage error of indoor trajectory tracking is 2% of the total walking distance. This paper provides a feasible alternative for accurate indoor heading and trajectory tracking.
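
The trajectory-tracking part of a pedestrian dead-reckoning application reduces to advancing the position by one step length along the estimated heading for each detected step. A generic sketch (not the MODS implementation; the step length and headings are illustrative):

```python
import math

def pdr_step(x, y, heading_deg, step_len_m):
    """Advance the (east, north) position by one detected step along the heading."""
    h = math.radians(heading_deg)  # heading measured clockwise from north
    return x + step_len_m * math.sin(h), y + step_len_m * math.cos(h)

x, y = 0.0, 0.0
# two steps heading north, then one step heading east
for heading, step in [(0, 0.7), (0, 0.7), (90, 0.7)]:
    x, y = pdr_step(x, y, heading, step)
print(round(x, 2), round(y, 2))  # 0.7 1.4
```

In a real system the heading would come from the orientation filter and the step length from a walking model; errors in either accumulate along the track, which is why sub-degree heading accuracy matters.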

  5. Implementing a Learning Model for a Practical Subject in Distance Education.

    ERIC Educational Resources Information Center

    Weller, M. J.; Hopgood, A. A.

    1997-01-01

    Artificial Intelligence for Technology, a distance learning course at the Open University, is based on a learning model that combines conceptualization, construction, and dialog. This allows a practical emphasis which has been difficult to implement in distance education. The course uses commercial software, real-world-based assignments, and a…

  6. Development of Rock Engineering Systems-Based Models for Flyrock Risk Analysis and Prediction of Flyrock Distance in Surface Blasting

    NASA Astrophysics Data System (ADS)

    Faramarzi, Farhad; Mansouri, Hamid; Farsangi, Mohammad Ali Ebrahimi

    2014-07-01

    The environmental effects of blasting must be controlled in order to comply with regulatory limits. Because of safety concerns, the risk of damage to infrastructure, equipment, and property, and the need for good fragmentation, flyrock control is crucial in blasting operations. If measures to decrease flyrock are taken, the flyrock distance is limited and, in return, the risk of damage can be reduced or eliminated. This paper deals with modeling the level of risk associated with flyrock, as well as flyrock distance prediction, based on the rock engineering systems (RES) methodology. In the proposed models, 13 parameters affecting flyrock due to blasting are considered as inputs, and the flyrock distance and associated levels of risk as outputs. In selecting the input data, the ease of measurement was also taken into account. The data for 47 blasts carried out at the Sungun copper mine, western Iran, were used to predict the level of risk and the flyrock distance corresponding to each blast. The results showed that, for these 47 blasts, the estimated levels of risk are mostly in accordance with the measured flyrock distances. Furthermore, a comparison was made between the results of the flyrock distance predictive RES-based model, a multivariate regression analysis model (MVRM), and a dimensional analysis model. For the RES-based model, R² and root-mean-square error (RMSE) are 0.86 and 10.01, respectively, whereas for the MVRM and the dimensional analysis they are (0.84 and 12.20) and (0.76 and 13.75), respectively. These results confirm the better performance of the RES-based model over the other proposed models.
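
The R² and RMSE figures used above to rank the three models can be computed as follows; the measured and predicted flyrock distances here are hypothetical, not the Sungun data:

```python
import math

def r2_rmse(actual, predicted):
    """Coefficient of determination and root-mean-square error of predictions."""
    n = len(actual)
    mean = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

measured  = [55.0, 80.0, 120.0, 95.0, 60.0]   # hypothetical flyrock distances (m)
predicted = [60.0, 75.0, 110.0, 100.0, 58.0]  # hypothetical model output (m)
r2, rmse = r2_rmse(measured, predicted)
print(round(r2, 3), round(rmse, 2))  # 0.937 5.98
```

A higher R² together with a lower RMSE, as reported for the RES-based model, indicates predictions that both track the variance of the data and stay close to it in absolute terms.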

  7. A Novel Marker Based Method to Teeth Alignment in MRI

    NASA Astrophysics Data System (ADS)

    Luukinen, Jean-Marc; Aalto, Daniel; Malinen, Jarmo; Niikuni, Naoko; Saunavaara, Jani; Jääsaari, Päivi; Ojalammi, Antti; Parkkola, Riitta; Soukka, Tero; Happonen, Risto-Pekka

    2018-04-01

    Magnetic resonance imaging (MRI) can precisely capture the anatomy of the vocal tract; however, the crowns of teeth are not visible in standard MRI scans. In this study, a marker-based teeth alignment method is presented and evaluated. Ten patients undergoing orthognathic surgery were enrolled, and their supraglottal airways were imaged preoperatively using structural MRI. MRI-visible markers were developed and attached to the maxillary teeth and to corresponding locations on dental casts. Repeated measurements of intermarker distances in MRI and in a replica model were compared using linear regression analysis. Dental cast MRI and corresponding caliper measurements did not differ significantly. In contrast, the marker locations in vivo differed somewhat from the dental cast measurements, likely owing to marker placement inaccuracies. The markers were clearly visible in MRI and allowed dental models to be aligned to head and neck MRI scans.

  8. A feasibility study for the detection of upper atmospheric winds using a ground based laser Doppler velocimeter

    NASA Technical Reports Server (NTRS)

    Thomson, J. A. L.; Meng, J. C. S.

    1975-01-01

    A possible measurement program designed to obtain the information needed to determine the feasibility of airborne and/or satellite-borne LDV (Laser Doppler Velocimeter) systems is discussed. For the purpose of determining feasibility, measurements made from the ground are favored over airborne measurements. The expected signal strengths for scattering at various altitudes and elevation angles are examined; it appears that both molecular absorption and ambient turbulence degrade the signal at low elevation angles and effectively constrain ground-based measurements to elevation angles exceeding a critical value. The nature of the expected wind shear and turbulence is treated with a linear hydrodynamic model, a mountain lee-wave model. The spatial and temporal correlation distances establish requirements on the range resolution, the maximum detectable range and the allowable integration time.

  9. An independent Cepheid distance scale: Current status

    NASA Technical Reports Server (NTRS)

    Barnes, T. G., III

    1980-01-01

    An independent distance scale for Cepheid variables is discussed. The apparent magnitude and the visual surface brightness, inferred from an appropriate color index, are used to determine the angular diameter variation of the Cepheid. When combined with the linear displacement curve obtained from the integrated radial velocity curve, the distance and linear radius are determined. The attractiveness of the method is its complete independence of all other stellar distance scales, even though a number of practical difficulties currently exist in implementing the technique.

  10. Atmospheric pressure plasma jet for biomedical applications characterised by passive thermal probe

    NASA Astrophysics Data System (ADS)

    Mance, Diana; Wiese, Ruben; Kewitz, Thorben; Kersten, Holger

    2018-05-01

    Atmospheric pressure plasma jets (APPJs) are a promising tool in medicine with extensive possibilities for utilization. For a safe and therapeutically effective application of APPJs, it is necessary to know in detail the physical processes in the plasma as well as the possible hazards. In this paper, we focus on the plasma thermal energy transferred to the substrate, i.e. to a passive thermal probe acting as a substrate dummy. Specifically, we examined the dependence of the transferred energy on the distance from the plasma source outlet, on the gas flow rate, and on the length of the visible plasma plume, i.e. the plasma carried by the gas flow from the outlet of the source into the ambient air. The results show that, of the three examined variables, the distance between the plasma-generating device and the substrate is the most important determinant of the transferred thermal energy. Most importantly for the end-user, the results also show this relation to be non-linear; to describe it, we chose a model based on a Boltzmann-type sigmoid function. Based on the results of our modelling and visual inspection of the plasma, we provide a practical guide for adjusting a suitable energy flux on the (bio)substrate.

  11. Two-Hierarchy Entanglement Swapping for a Linear Optical Quantum Repeater

    NASA Astrophysics Data System (ADS)

    Xu, Ping; Yong, Hai-Lin; Chen, Luo-Kan; Liu, Chang; Xiang, Tong; Yao, Xing-Can; Lu, He; Li, Zheng-Da; Liu, Nai-Le; Li, Li; Yang, Tao; Peng, Cheng-Zhi; Zhao, Bo; Chen, Yu-Ao; Pan, Jian-Wei

    2017-10-01

    Quantum repeaters play a significant role in achieving long-distance quantum communication. In the past decades, tremendous effort has been devoted towards constructing a quantum repeater. As one of the crucial elements, entanglement has been created in different memory systems via entanglement swapping. The realization of j-hierarchy entanglement swapping, i.e., connecting quantum memory and further extending the communication distance, is important for implementing a practical quantum repeater. Here, we report the first demonstration of a fault-tolerant two-hierarchy entanglement swapping with linear optics using parametric down-conversion sources. In the experiment, the dominant or most probable noise terms in the one-hierarchy entanglement swapping, which is on the same order of magnitude as the desired state and prevents further entanglement connections, are automatically washed out by a proper design of the detection setting, and the communication distance can be extended. Given suitable quantum memory, our techniques can be directly applied to implementing an atomic ensemble based quantum repeater, and are of significant importance in the scalable quantum information processing.

  12. Two-Hierarchy Entanglement Swapping for a Linear Optical Quantum Repeater.

    PubMed

    Xu, Ping; Yong, Hai-Lin; Chen, Luo-Kan; Liu, Chang; Xiang, Tong; Yao, Xing-Can; Lu, He; Li, Zheng-Da; Liu, Nai-Le; Li, Li; Yang, Tao; Peng, Cheng-Zhi; Zhao, Bo; Chen, Yu-Ao; Pan, Jian-Wei

    2017-10-27

    Quantum repeaters play a significant role in achieving long-distance quantum communication. In the past decades, tremendous effort has been devoted towards constructing a quantum repeater. As one of the crucial elements, entanglement has been created in different memory systems via entanglement swapping. The realization of j-hierarchy entanglement swapping, i.e., connecting quantum memory and further extending the communication distance, is important for implementing a practical quantum repeater. Here, we report the first demonstration of a fault-tolerant two-hierarchy entanglement swapping with linear optics using parametric down-conversion sources. In the experiment, the dominant or most probable noise terms in the one-hierarchy entanglement swapping, which is on the same order of magnitude as the desired state and prevents further entanglement connections, are automatically washed out by a proper design of the detection setting, and the communication distance can be extended. Given suitable quantum memory, our techniques can be directly applied to implementing an atomic ensemble based quantum repeater, and are of significant importance in the scalable quantum information processing.

  13. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    NASA Astrophysics Data System (ADS)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies involving spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, and to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utility, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial areas, distance to educational areas, distance to residential areas, distance to industrial areas, distance to roads, distance to the highway, distance to the railway, distance to power lines, distance to streams, and land cover. For calibration, the three models were applied to simulate urban growth trends in 2010, and the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Subsequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process improved the simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will give powerful support to planners and decision-makers in the development of future sustainable urban planning.
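
The Markov-chain half of a CA-MC model projects land-cover class fractions forward through a transition probability matrix; the cellular-automata half then allocates the projected change spatially, which is omitted here. A sketch with hypothetical classes and transition probabilities (not the Seremban data):

```python
def markov_project(state, transition, steps=1):
    """Propagate land-cover class fractions through a row-stochastic transition matrix."""
    n = len(state)
    for _ in range(steps):
        state = [sum(state[i] * transition[i][j] for i in range(n)) for j in range(n)]
    return state

# hypothetical classes: urban, agriculture, forest
P = [[0.95, 0.04, 0.01],   # urban mostly remains urban
     [0.10, 0.85, 0.05],   # some agriculture converts to urban
     [0.05, 0.10, 0.85]]   # some forest converts to agriculture or urban
frac_2010 = [0.20, 0.50, 0.30]
frac_2020 = markov_project(frac_2010, P)
print([round(f, 3) for f in frac_2020])  # [0.255, 0.463, 0.282]
```

In the study, the transition probabilities are estimated from observed land-cover change between two dates, and the AHP or FR weights steer where the CA step places the new urban cells.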

  14. The Capacity Gain of Orbital Angular Momentum Based Multiple-Input-Multiple-Output System

    PubMed Central

    Zhang, Zhuofan; Zheng, Shilie; Chen, Yiling; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin

    2016-01-01

    Wireless communication using electromagnetic waves carrying orbital angular momentum (OAM) has attracted increasing interest in recent years, and its potential to increase channel capacity has been explored widely. In this paper, we compare multiplexing using a uniform linear array of circular traveling-wave OAM antennas with the conventional multiple-input-multiple-output (MIMO) communication method, and numerical results show that the OAM-based MIMO system can increase channel capacity when the communication distance is long enough. An equivalent model is proposed to show that the OAM multiplexing system is equivalent to a conventional MIMO system with a larger element spacing, which means OAM waves can decrease the spatial correlation of the MIMO channel. In addition, the effects of system parameters such as the OAM state interval and element spacing on the capacity advantage of OAM-based MIMO are also investigated. Our results reveal that OAM waves are complementary to the MIMO method. OAM-wave multiplexing is suitable for long-distance line-of-sight (LoS) communications or communications in open areas where the multi-path effect is weak, and it can be used in massive MIMO systems as well. PMID:27146453

  15. Einstein-Podolsky-Rosen steering: Its geometric quantification and witness

    NASA Astrophysics Data System (ADS)

    Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco

    2018-02-01

    We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.

  16. Dependency Distance Differences across Interpreting Types: Implications for Cognitive Demand

    PubMed Central

    Liang, Junying; Fang, Yuanyuan; Lv, Qianxi; Liu, Haitao

    2017-01-01

    Interpreting is generally recognized as a particularly demanding language processing task for the cognitive system. Dependency distance, the linear distance between two syntactically related words in a sentence, is an index of sentence complexity and can also reflect the cognitive constraints operating during various tasks. In the current research, we examine differences in dependency distance among three interpreting types, namely simultaneous interpreting, consecutive interpreting and read-out translated speech, based on a dependency-annotated treebank comprising output texts of these types. Results show that the different interpreting renditions yield different dependency distances, with consecutive interpreting texts entailing smaller dependency distances than those of simultaneous interpreting and read-out translated speech, suggesting that consecutive interpreting bears heavier cognitive demands than simultaneous interpreting. The current research suggests for the first time that interpreting is an extremely demanding cognitive task that can further mediate the dependency distance of output sentences. Such findings may be due to the minimization of dependency distance under cognitive constraints. PMID:29312027
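
Mean dependency distance, as defined above, can be computed directly from a dependency-annotated sentence. A minimal sketch using an invented example sentence rather than the treebank in the study:

```python
def mean_dependency_distance(heads):
    """heads[i] is the 1-based position of word i+1's syntactic head (0 = root).
    The dependency distance of a word is |its position - its head's position|;
    the root word is conventionally skipped."""
    dists = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
    return sum(dists) / len(dists)

# "She quickly finished the report": "finished" (word 3) is the root;
# "She" and "quickly" depend on it, "the" on "report", "report" on "finished".
heads = [3, 3, 0, 5, 3]
mdd = mean_dependency_distance(heads)
print(mdd)  # 1.5
```

Averaging this quantity over all sentences of each output type gives the per-rendition figures the study compares.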

  17. Ground Motion Prediction Equations for the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Seber, D.; Graizer, V.

    2015-12-01

    A new ground motion prediction equation (GMPE) model, G15, for the Central and Eastern United States (CEUS) is presented. It is based on the modular filter-based approach developed by Graizer and Kalkan (2007, 2009) for the active tectonic environment of the Western US (WUS). The G15 model is based on the NGA-East database for the horizontal peak ground acceleration and 5%-damped pseudo-spectral acceleration RotD50 component (Goulet et al., 2014). In contrast to active tectonic environments, the database for the CEUS is not sufficient for creating a purely empirical GMPE covering the range of magnitudes and distances required for seismic hazard assessments. Recordings in the NGA-East database are sparse and cover mostly the range M<6.0, with a limited number of near-fault recordings. The functional forms of the G15 GMPEs are derived from filters, each representing a particular physical phenomenon affecting the seismic wave radiation from the source. The main changes in the functional forms for the CEUS relative to the WUS model (Graizer and Kalkan, 2015) are a shift of the maximum frequency of the acceleration response spectrum toward higher frequencies and an increase in the response spectrum amplitudes at high frequencies. The developed site correction is based on multiple runs of representative VS30 profiles through SHAKE-type equivalent-linear programs using time-history and random-vibration-theory approaches. Site amplification functions are calculated for different VS30 values relative to the hard-rock definition used in the nuclear industry (Vs=2800 m/s). The number of model predictors is limited to a few measurable parameters: moment magnitude M, closest distance to the fault rupture plane R, average shear-wave velocity in the upper 30 m of the geological profile VS30, and anelastic attenuation factor Q0. Incorporating the anelastic attenuation Q0 as an input parameter allows adjustments based on regional crustal properties.
    The model covers magnitudes from 4.0 upward; it predicts relatively high amplitudes at high frequencies (>10 Hz) and is within the range of other models for frequencies lower than 2.5 Hz.

  18. A framework for correcting brain retraction based on an eXtended Finite Element Method using a laser range scanner.

    PubMed

    Li, Ping; Wang, Weiwei; Song, Zhijian; An, Yong; Zhang, Chenxi

    2014-07-01

    Brain retraction causes great distortion that limits the accuracy of an image-guided neurosurgery system that uses preoperative images. Brain retraction correction is therefore an important intraoperative clinical application. We used a linear elastic biomechanical model, deformed based on the eXtended Finite Element Method (XFEM), within a framework for brain retraction correction. In particular, a laser range scanner was introduced to obtain a surface point cloud of the exposed surgical field, including the retractors inserted into the brain. A brain retraction surface tracking algorithm converts these point clouds into the boundary conditions applied to the XFEM model that drive brain deformation. To test the framework, we performed a brain phantom experiment involving the retraction of tissue. Pairs of modified Hausdorff distances between Canny edges extracted from model-updated, pre-retraction, and post-retraction CT images were compared to evaluate the morphological alignment achieved by our framework. Furthermore, the measured displacements of beads embedded in the brain phantom were compared with the predicted ones to evaluate numerical performance. The modified Hausdorff distance of 19 pairs of images decreased from 1.10 to 0.76 mm. The prediction error for the 23 stainless steel beads in the phantom was between 0 and 1.73 mm (mean 1.19 mm), and the correction accuracy varied between 52.8 and 100% (mean 81.4%). The results demonstrate that brain retraction compensation can be incorporated intraoperatively into the model-updating process of image-guided neurosurgery systems.

  19. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number exist for intraplate regions, and the existing ones were developed based mostly on scaled recordings of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (significant and bracketed) as functions of earthquake magnitude, hypocentral distance, and site conditions (rock and soil sites) using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero durations) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude, while the significant duration was found to increase with both the magnitude and the hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites, whereas the one for significant duration predicts lower durations up to a certain distance and higher durations thereafter compared to the existing relationships.

  20. Molecular electronegativity distance vector model for the prediction of bioconcentration factors in fish.

    PubMed

    Liu, Shu-Shen; Qin, Li-Tang; Liu, Hai-Ling; Yin, Da-Qiang

    2008-02-01

    The molecular electronegativity distance vector (MEDV), derived directly from molecular topological structure, was used to describe the structures of 122 nonionic organic compounds (NOCs), and a quantitative relationship between the MEDV descriptors and the bioconcentration factors (BCF) of the NOCs in fish was developed using variable selection and modeling based on prediction (VSMP). It was found that the main structural factors influencing the BCFs of NOCs are the substructures expressed by four atomic types (nos. 2, 3, 5, and 13), i.e., the atom groups -CH2- or =CH-, -CH< or =C<, -NH2, and -Cl or -Br, where the first two groups occur in the molecular skeleton of the NOC and the latter three are related closely to the substituting groups on a benzene ring. The best 5-variable model, with a correlation coefficient (r²) of 0.9500 and a leave-one-out cross-validation correlation coefficient (q²) of 0.9428, was built by multiple linear regression and shows good estimation ability and stability. Predictive power for external samples was tested with a model built from a training set of 80 NOCs; the predictive correlation coefficient (u²) for the 42 external samples in the test set was 0.9028.
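
The leave-one-out q² reported above measures how well a regression predicts each sample when that sample is excluded from the fit. A single-descriptor sketch of the procedure (the actual model uses five MEDV descriptors, and the data here are invented):

```python
def fit_ols(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loo_q2(xs, ys):
    """Leave-one-out cross-validated q2: refit without each sample, predict it."""
    my = sum(ys) / len(ys)
    press = 0.0
    for k in range(len(xs)):
        a, b = fit_ols(xs[:k] + xs[k + 1:], ys[:k] + ys[k + 1:])
        press += (ys[k] - (a + b * xs[k])) ** 2
    return 1 - press / sum((y - my) ** 2 for y in ys)

# invented descriptor values vs. log BCF responses
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [0.9, 2.1, 2.9, 4.2, 4.8, 6.1]
q2 = loo_q2(x, y)
print(round(q2, 3))
```

A q² close to the fitted r², as in the 0.9428 vs. 0.9500 reported above, indicates the model is not overfitting its training set.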

  1. Population Structure, Diversity and Trait Association Analysis in Rice (Oryza sativa L.) Germplasm for Early Seedling Vigor (ESV) Using Trait Linked SSR Markers

    PubMed Central

    Anandan, Annamalai; Anumalla, Mahender; Pradhan, Sharat Kumar; Ali, Jauhar

    2016-01-01

    Early seedling vigor (ESV) is an essential trait for direct-seeded rice to dominate and smother weed growth. In this regard, 629 rice genotypes were studied for their morphological and physiological responses in the field under direct-seeded aerobic conditions at 14, 28 and 56 days after sowing (DAS). The observations taken at 14 and 28 DAS were found to be more reliable estimators of ESV than those at 56 DAS. Further, 96 genotypes were selected from the 629 by principal component analysis (PCA) and discriminant function analysis. The selected genotypes were used to decipher the pattern of genetic diversity, both phenotypic and genotypic, using simple sequence repeat (SSR) markers linked to ESV QTLs. To assess the genetic structure, model-based and distance-based approaches were used. Genotyping of the 96 rice lines with 39 polymorphic SSRs produced a total of 128 alleles with a polymorphism information content (PIC) value of 0.24. The model-based population structure approach grouped the accessions into two distinct populations, whereas the unrooted tree grouped the genotypes into three clusters. Both approaches clearly distinguished the early-vigor genotypes from the non-early-vigor genotypes. Association analysis revealed that 16 and 10 SSRs showed significant association with ESV traits by the general linear model (GLM) and mixed linear model (MLM) approaches, respectively. Marker alleles on chromosome 2 were associated with shoot dry weight at 28 DAS and with the vigor index at 14 and 28 DAS. Improving the rate of seedling growth will be useful for identifying rice genotypes amenable to direct-seeded conditions through marker-assisted selection. PMID:27031620

  2. Determination of Critical Achievement Factors in Distance Education by Using Structural Equality Model: A Case Study of E-MBA Program Held in Sakarya University

    ERIC Educational Resources Information Center

    Evirgen, Hayrettin; Cengel, Metin

    2012-01-01

    Nowadays, distance education has begun to gain acceptance alongside the classical face-to-face (F2F) model. Web-based learning is a major part of distance education systems. Web-based distance learning can be defined briefly as a form of education that does not require students and educators to be in the same place. This…

  3. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
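
For graphs beyond a few vertices, the binary linear program described above is the practical route, but for tiny graphs the same unit-cost edit distance can be computed by exhaustive alignment, which makes the definition concrete. A brute-force sketch (not the paper's LP formulation; unit costs for all edit operations are assumed):

```python
from itertools import permutations

def graph_edit_distance(n1, e1, n2, e2):
    """Exact GED for small unweighted, undirected graphs with unit edit costs.
    A graph is (vertex count, set of frozenset edges), vertices numbered 0..n-1.
    The smaller graph is padded with dummy vertices and every alignment is tried."""
    n = max(n1, n2)
    best = float("inf")
    for perm in permutations(range(n)):      # map (padded) G1 vertex i -> perm[i]
        cost = abs(n1 - n2)                  # vertex insertions/deletions
        for u in range(n):
            for v in range(u + 1, n):
                in1 = u < n1 and v < n1 and frozenset((u, v)) in e1
                mu, mv = perm[u], perm[v]
                in2 = mu < n2 and mv < n2 and frozenset((mu, mv)) in e2
                if in1 != in2:
                    cost += 1                # edge insertion or deletion
        best = min(best, cost)
    return best

path = {frozenset((0, 1)), frozenset((1, 2))}                      # path on 3 vertices
tri  = {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}   # triangle
ged = graph_edit_distance(3, path, 3, tri)
print(ged)  # 1  (one edge insertion turns the path into the triangle)
```

The factorial-time enumeration is exactly what the binary LP and its polynomial-time bounds are designed to avoid.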

  4. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers.

    PubMed

    Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P

    2017-01-01

    Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247 cm), 2) the distance from the grid to the patient tracker device (range 20 to 40 cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120 mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. 
Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.
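
The accuracy outcome above is a plain root-mean-square error between navigated and true grid distances; with hypothetical readings it reduces to:

```python
import math

def rms_error(measured, actual):
    """Root mean square (RMS) error between navigation-system distances
    and the true machined-grid distances (same units, e.g. mm)."""
    return math.sqrt(sum((m - a) ** 2 for m, a in zip(measured, actual))
                     / len(measured))

# hypothetical readings against the known 10 mm grid spacings
print(round(rms_error([10.2, 19.8, 30.1, 40.3], [10, 20, 30, 40]), 3))  # -> 0.212
```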

  5. Frequency-modulated laser ranging sensor with closed-loop control

    NASA Astrophysics Data System (ADS)

    Müller, Fabian M.; Böttger, Gunnar; Janeczka, Christian; Arndt-Staufenbiel, Norbert; Schröder, Henning; Schneider-Ramelow, Martin

    2018-02-01

    Advances in autonomous driving and robotics are creating high demand for inexpensive and mass-producible distance sensors. A laser ranging system (Lidar), based on the frequency-modulated continuous-wave (FMCW) method is built in this work. The benefits of an FMCW Lidar system are the low-cost components and the performance in comparison to conventional time-of-flight Lidar systems. The basic system consists of a DFB laser diode (λ = 1308 nm) and an asymmetric fiber-coupled Mach-Zehnder interferometer with a fixed delay line in one arm. Linear tuning of the laser optical frequency via injection current modulation creates a beat signal at the interferometer output. The frequency of the beat signal is proportional to the optical path difference in the interferometer. Since the laser frequency-to-current response is non-linear, a closed-loop feedback system is designed to improve the tuning linearity, and consequently the measurement resolution. For fast active control, an embedded system with FPGA is used, resulting in a nearly linear frequency tuning, realizing a narrow peak in the Fourier spectrum of the beat signal. For free-space measurements, a setup with two distinct interferometers is built. The fully fiber-coupled Mach-Zehnder reference interferometer is part of the feedback loop system, while the other - a Michelson interferometer - has a free-space arm with collimator lens and reflective target. A resolution of 2.0 mm for a 560 mm distance is achieved. The results for varying target distances show high consistency and a linear relation to the measured beat-frequency.
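
The beat-frequency-to-distance relation underlying such a sensor can be sketched directly; the chirp parameters below are assumptions for illustration, not the paper's values:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s):
    """Target distance from the beat frequency of a linearly chirped FMCW
    lidar with a reflective (round-trip) measurement arm:
    f_beat = slope * tau, slope = B/T, tau = 2R/c  =>  R = c*f_beat*T/(2B)."""
    slope = sweep_bw_hz / sweep_time_s
    return C * f_beat_hz / (2.0 * slope)

# assumed example: 1 GHz chirp over 1 ms, 7.47 kHz beat -> ~1.12 m
print(round(fmcw_range(7.47e3, 1e9, 1e-3), 3))
```

This also shows why tuning linearity matters: a drifting slope directly scales the inferred range.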

  6. An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion

    PubMed Central

    Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.

    2017-01-01

    In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735

  7. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    NASA Astrophysics Data System (ADS)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.

  8. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
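
For the special case of axis-aligned boxes (a particularly simple convex polyhedron, chosen here as an assumption for illustration), the L1/L-infinity linear program decouples into per-coordinate interval gaps, which gives a compact way to check a minimum-distance computation:

```python
def interval_gap(lo1, hi1, lo2, hi2):
    """Minimum |x - y| with x in [lo1, hi1] and y in [lo2, hi2]."""
    return max(0.0, lo2 - hi1, lo1 - hi2)

def box_min_distance(box1, box2, norm="L1"):
    """Minimum distance between two axis-aligned boxes under the L1 or
    Linf norm.  For these norms the general linear program decouples into
    per-coordinate gaps: L1 sums them, Linf takes the largest."""
    gaps = [interval_gap(l1, h1, l2, h2)
            for (l1, h1), (l2, h2) in zip(box1, box2)]
    return sum(gaps) if norm == "L1" else max(gaps)

# unit box vs a box shifted 3 units along the first axis
b1 = [(0.0, 1.0), (0.0, 1.0)]
b2 = [(3.0, 4.0), (0.0, 1.0)]
print(box_min_distance(b1, b2, "L1"), box_min_distance(b1, b2, "Linf"))  # -> 2.0 2.0
```

General convex polyhedra require solving the full linear program, e.g. minimizing the sum of auxiliary variables t with x - y <= t and y - x <= t subject to the two membership constraints.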

  9. Bayesian inference in camera trapping studies for a class of spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba

    2009-01-01

    We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
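
The abstract states only that capture probability declines with the distance between home-range center and trap; a common concrete choice in spatial capture-recapture is the half-normal form, sketched here with assumed parameters:

```python
import math

def capture_prob(center, trap, p0=0.8, sigma=1.5):
    """Half-normal detection model often used in spatial capture-recapture.
    This functional form and its parameters are assumptions: the abstract
    says only that detection is a function of center-to-trap distance."""
    d2 = sum((c - t) ** 2 for c, t in zip(center, trap))
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))

print(round(capture_prob((0.0, 0.0), (0.0, 0.0)), 3))  # -> 0.8 at the center
print(round(capture_prob((0.0, 0.0), (3.0, 0.0)), 3))  # declines with distance
```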

  10. Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions.

    PubMed

    Li, Zuojin; Li, Shengbo Eben; Li, Renjie; Cheng, Bo; Shi, Jinliang

    2017-03-02

    This paper presents a drowsiness on-line detection system for monitoring driver fatigue level under real driving conditions, based on the data of steering wheel angles (SWA) collected from sensors mounted on the steering lever. The proposed system firstly extracts approximate entropy (ApEn) features from fixed sliding windows on real-time steering wheel angles time series. After that, this system linearizes the ApEn features series through an adaptive piecewise linear fitting using a given deviation. Then, the detection system calculates the warping distance between the linear features series of the sample data. Finally, this system uses the warping distance to determine the drowsiness state of the driver according to a designed binary decision classifier. The experimental data were collected from 14.68 h driving under real road conditions, including two fatigue levels: "wake" and "drowsy". The results show that the proposed system is capable of working online with an average 78.01% accuracy, 29.35% false detections of the "awake" state, and 15.15% false detections of the "drowsy" state. The results also confirm that the proposed method based on SWA signal is valuable for applications in preventing traffic accidents caused by driver fatigue.
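
The approximate-entropy feature at the front of this pipeline follows a standard definition; a minimal sketch (the window contents and the m and r values below are assumptions, not the paper's settings):

```python
import math

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a time series: the regularity
    statistic extracted here from steering-wheel-angle windows.
    ApEn = phi(m) - phi(m+1), where phi averages log template-match rates."""
    n = len(x)
    def phi(m):
        templates = [x[i:i + m] for i in range(n - m + 1)]
        rates = []
        for a in templates:
            c = sum(1 for b in templates
                    if max(abs(u - v) for u, v in zip(a, b)) <= r)
            rates.append(c / len(templates))     # self-match keeps rate > 0
        return sum(math.log(c) for c in rates) / len(templates)
    return phi(m) - phi(m + 1)

# a perfectly regular signal has (near-)zero approximate entropy
regular = [0.0, 1.0] * 32
print(round(approximate_entropy(regular, m=2, r=0.2), 6))
```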

  12. Three-dimensional head anthropometric analysis

    NASA Astrophysics Data System (ADS)

    Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James

    2003-05-01

    Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors of perspective and projection and lack metric, 3-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods has inherent limitations, and as such no system is in common clinical use. In this paper we will focus on the development of indirect 3-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including the linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
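
The linear distances referred to are straight-line (Euclidean) separations between 3-D landmark points on the mesh; a minimal sketch with hypothetical landmark coordinates:

```python
import math

def landmark_distance(p, q):
    """Linear (Euclidean) distance between two 3-D facial landmarks,
    the basic anthropometric measurement taken on a head mesh."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# hypothetical exocanthion-to-exocanthion width, coordinates in mm
print(round(landmark_distance((42.0, 0.0, 5.0), (-42.0, 0.0, 5.0)), 1))  # -> 84.0
```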

  13. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML by a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  14. A MEMS Micro-Translation Stage with Long Linear Translation

    NASA Technical Reports Server (NTRS)

    Ferguson, Cynthia K.; English, J. M.; Nordin, G. P.; Ashley, P. R.; Abushagur, M. A. G.

    2004-01-01

    A MEMS Micro-Translation Stage (MTS) actuator concept has been developed that is capable of traveling long distances, while maintaining low power, low voltage, and accuracy as required by many applications, including optical coupling. The Micro-Translation Stage (MTS) uses capacitive electrostatic forces in a linear motor application, with stationary stators arranged linearly on both sides of a channel, and matching rotors on a moveable shuttle. This creates a force that allows the shuttle to be pulled along the channel. It is designed to carry 100 micron-sized elements on the top surface, and can travel back and forth in the channel, either in a stepping fashion allowing many interim stops, or it can maintain constant adjustable speeds for a controlled scanning motion. The MTS travel range is limited only by the size of the fabrication wafer. Analytical modeling and simulations were performed based on the fabrication process, to assure the stresses, friction and electrostatic forces were acceptable to allow successful operation of this device. The translation forces were analyzed to be near 0.5 µN, with a 300 µm stop-to-stop time of 11.8 ms.

  15. Dispersal ability and habitat requirements determine landscape-level genetic patterns in desert aquatic insects.

    PubMed

    Phillipsen, Ivan C; Kirk, Emily H; Bogan, Michael T; Mims, Meryl C; Olden, Julian D; Lytle, David A

    2015-01-01

    Species occupying the same geographic range can exhibit remarkably different population structures across the landscape, ranging from highly diversified to panmictic. Given limitations on collecting population-level data for large numbers of species, ecologists seek to identify proximate organismal traits, such as dispersal ability, habitat preference and life history, that are strong predictors of realized population structure. We examined how dispersal ability and habitat structure affect the regional balance of gene flow and genetic drift within three aquatic insects that represent the range of dispersal abilities and habitat requirements observed in desert stream insect communities. For each species, we tested for linear relationships between genetic distances and geographic distances using Euclidean and landscape-based metrics of resistance. We found that the moderate-disperser Mesocapnia arizonensis (Plecoptera: Capniidae) has a strong isolation-by-distance pattern, suggesting migration-drift equilibrium. By contrast, population structure in the flightless Abedus herberti (Hemiptera: Belostomatidae) is influenced by genetic drift, while gene flow is the dominant force in the strong-flying Boreonectes aequinoctialis (Coleoptera: Dytiscidae). The best-fitting landscape model for M. arizonensis was based on Euclidean distance. Analyses also identified a strong spatial scale-dependence, where landscape genetic methods only performed well for species that were intermediate in dispersal ability. Our results highlight the fact that when either gene flow or genetic drift dominates in shaping population structure, no detectable relationship between genetic and geographic distances is expected at certain spatial scales. This study provides insight into how gene flow and drift interact at the regional scale for these insects as well as the organisms that share similar habitats and dispersal abilities. © 2014 John Wiley & Sons Ltd.
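
The genetic-versus-geographic distance relationship tested here is commonly summarized by a Mantel-type correlation between distance matrices; a minimal sketch with toy matrices (the permutation test needed for significance is omitted):

```python
import math

def mantel_r(genetic, geographic):
    """Pearson correlation between the upper triangles of a genetic and a
    geographic distance matrix: the linear isolation-by-distance signal."""
    n = len(genetic)
    xs = [geographic[i][j] for i in range(n) for j in range(i + 1, n)]
    ys = [genetic[i][j] for i in range(n) for j in range(i + 1, n)]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return cov / (sx * sy)

# toy 3-population example where both distances increase together
geo = [[0, 10, 20], [10, 0, 12], [20, 12, 0]]
gen = [[0.0, 0.08, 0.15], [0.08, 0.0, 0.09], [0.15, 0.09, 0.0]]
print(round(mantel_r(gen, geo), 3))  # high positive correlation
```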

  16. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system based on learning a distance/similarity function using linear discriminant analysis (LDA) and the histogram of oriented gradients (HoG) feature. Our method is invariant to the depiction of an image, covering similarity of image to image, sketch to image, and painting to image. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiment arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.

  17. A mathematical programming method for formulating a fuzzy regression model based on distance criterion.

    PubMed

    Chen, Liang-Hsuan; Hsueh, Chan-Ching

    2007-06-01

    Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.

  18. Analytical model for out-of-field dose in photon craniospinal irradiation

    NASA Astrophysics Data System (ADS)

    Taddei, Phillip J.; Jalbout, Wassim; Howell, Rebecca M.; Khater, Nabil; Geara, Fady; Homann, Kenneth; Newhauser, Wayne D.

    2013-11-01

    The prediction of late effects after radiotherapy in organs outside a treatment field requires accurate estimations of out-of-field dose. However, out-of-field dose is not calculated accurately by commercial treatment planning systems (TPSs). The purpose of this study was to develop and test an analytical model for out-of-field dose during craniospinal irradiation (CSI) from photon beams produced by a linear accelerator. In two separate evaluations of the model, we measured absorbed dose for a 6 MV CSI using thermoluminescent dosimeters placed throughout an anthropomorphic phantom and fit the measured data to an analytical model of absorbed dose versus distance outside of the composite field edge. These measurements were performed in two separate clinics—the University of Texas MD Anderson Cancer Center (MD Anderson) and the American University of Beirut Medical Center (AUBMC)—using the same phantom but different linear accelerators and TPSs commissioned for patient treatments. The measurement at AUBMC also included in-field locations. Measured dose values were compared to those predicted by TPSs and parameters were fit to the model in each setting. In each clinic, 95% of the measured data were contained within a factor of 0.2 and one root mean square deviation of the model-based values. The root mean square deviations of the mathematical model were 0.91 cGy Gy-1 and 1.67 cGy Gy-1 in the MD Anderson and AUBMC clinics, respectively. The TPS predictions agreed poorly with measurements in regions of sharp dose gradient, e.g., near the field edge. At distances greater than 1 cm from the field edge, the TPS underestimated the dose by an average of 14% ± 24% and 44% ± 19% in the MD Anderson and AUBMC clinics, respectively. The in-field measured dose values of the measurement at AUBMC matched the dose values calculated by the TPS to within 2%. Dose algorithms in TPSs systematically underestimated the actual out-of-field dose. 
Therefore, it is important to use an improved model based on measurements when estimating out-of-field dose. The model proposed in this study performed well for this purpose in two clinics and may be applicable in other clinics with similar treatment field configurations.
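
One simple way to build such a measurement-based analytical model is to fit a decaying function of distance-from-field-edge to the measured doses. The exponential form and the data below are assumptions for illustration; the abstract does not give the model's exact functional shape:

```python
import math

def fit_exponential(dists_cm, doses):
    """Least-squares fit of dose(d) = D0 * exp(-k*d) on the log scale.
    The exponential form is an assumed stand-in for the paper's model."""
    n = len(dists_cm)
    ys = [math.log(v) for v in doses]
    xbar, ybar = sum(dists_cm) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(dists_cm, ys))
             / sum((x - xbar) ** 2 for x in dists_cm))
    k = -slope                         # ln dose = ln D0 - k*d
    d0 = math.exp(ybar + k * xbar)
    return d0, k

# hypothetical out-of-field doses (cGy per Gy) vs distance from field edge
d0, k = fit_exponential([2.0, 5.0, 10.0, 20.0], [4.0, 2.5, 1.2, 0.3])
print(round(d0, 2), round(k, 3))
```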

  19. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
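
In this framework the perpendicular-distance estimator takes the familiar form D = n·f(0)/(2L), with f the density of right-angle distances and L the transect length. A crude sketch, using a first-histogram-bin estimate of f(0) and hypothetical survey data:

```python
def density_estimate(distances, transect_length, bin_width=1.0):
    """Line-transect density estimator D = n * f(0) / (2L), with f(0)
    approximated nonparametrically by the first histogram bin of the
    perpendicular (right-angle) detection distances."""
    n = len(distances)
    in_first_bin = sum(1 for y in distances if y < bin_width)
    f0 = in_first_bin / (n * bin_width)     # crude density-at-zero estimate
    return n * f0 / (2.0 * transect_length)

# hypothetical survey: 12 detections along a 100 m transect, distances in m
dists = [0.2, 0.5, 0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.5, 4.1, 5.0, 6.2]
print(round(density_estimate(dists, 100.0), 5))  # objects per square meter
```

In practice f(0) would be estimated with a smoother nonparametric method, but the histogram version shows the structure of the estimator.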

  20. Dose-dependent misrejoining of radiation-induced DNA double-strand breaks in human fibroblasts: Experimental and theoretical study for high and low LET radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rydberg, Bjorn; Cooper, Brian; Cooper, Priscilla K.

    2004-11-18

    Misrejoining of DNA double-strand breaks (DSBs) was measured in human primary fibroblasts after exposure to X-rays and high LET particles (He, N and Fe) in the dose range 10-80 Gy. To measure joining of wrong DNA ends, the integrity of a 3.2 Mbp restriction fragment was analyzed directly after exposure and after 16 hr of repair incubation. It was found that the misrejoining frequency for X-rays was non-linearly related to dose, with less probability of misrejoining at low doses than at high doses. The dose dependence for the high LET particles, on the other hand, was closer to being linear, with misrejoining frequencies higher than for X-rays particularly at the lower doses. These experimental results were simulated with a Monte-Carlo approach that includes a cell nucleus model with all 46 chromosomes present, combined with realistic track structure simulations to calculate the geometrical positions of all DSBs induced for each dose. The model assumes that the main determinant for misrejoining probability is the distance between two simultaneously present DSBs. With a Gaussian interaction probability function with distance, it was found that both the low and high LET data could be fitted with an interaction distance (sigma of the Gaussian curve) of 0.25 µm. This is half the distance previously found to best fit chromosomal aberration data in human lymphocytes using the same methods (Holley et al., Radiat. Res. 158, 568-580 (2002)). The discrepancy may indicate inadequacies in the chromosome model, for example insufficient chromosomal overlap, but may also partly be due to differences between fibroblasts and lymphocytes. Although the experimental data were obtained at high doses, the Monte Carlo calculations could be extended to lower doses. It was found that a linear component of misrejoining versus dose dominated for doses below 1 Gy for all radiations, including X-rays. 
The calculated relative biological effectiveness (RBE) for misrejoining at this low dose region was 31 for the He-ions, 28 for the N-ions and 19 for the Fe-ions.

  1. Preventing Road Rage by Modelling the Car-following and the Safety Distance Model

    NASA Astrophysics Data System (ADS)

    Lan, Si; Fang, Ni; Zhao, Huanming; Ye, Shiqi

    2017-11-01

    Starting from drivers' different lane-changing behaviours, a car-following model based on distance and speed and a safety distance model are established in this paper to analyse the impact on traffic flow and safety, helping to address the phenomenon of road rage.
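
The abstract gives no equations, so a conventional safety-distance form (reaction distance plus braking distance) can stand in as a sketch; the model and its parameters below are assumptions:

```python
def safety_distance(speed_mps, reaction_time_s=1.5, decel_mps2=6.0):
    """An assumed safety-distance model: distance covered during the
    driver's reaction time plus braking distance, d = v*t_r + v^2/(2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)

# at 20 m/s (~72 km/h): 30 m reaction + ~33.3 m braking
print(round(safety_distance(20.0), 2))  # -> 63.33
```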

  2. Model-based recognition of 3D articulated target using ladar range data.

    PubMed

    Lv, Dan; Sun, Jian-Feng; Li, Qi; Wang, Qi

    2015-06-10

    Ladar is suitable for 3D target recognition because ladar range images can provide rich 3D geometric surface information of targets. In this paper, we propose a part-based 3D model matching technique to recognize articulated ground military vehicles in ladar range images. The key to this approach is to solve the decomposition and pose estimation of the articulated parts of targets. The articulated components were decomposed into isolated parts based on 3D geometric properties of targets, such as surface point normals, data histogram distribution, and data distance relationships. The corresponding poses of these separate parts were estimated through the linear characteristics of barrels. According to these pose parameters, all parts of the target were roughly aligned to 3D point cloud models in a library and fine matching was finally performed to accomplish 3D articulated target recognition. The recognition performance was evaluated with 1728 ladar range images of eight different articulated military vehicles with various part types and orientations. Experimental results demonstrated that the proposed approach achieved a high recognition rate.

  3. Drag reduction by a linear viscosity profile.

    PubMed

    De Angelis, Elisabetta; Casciola, Carlo M; L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Tiberkevich, Vasil

    2004-11-01

    Drag reduction by polymers in turbulent flows raises an apparent contradiction: the stretching of the polymers must increase the viscosity, so why is the drag reduced? A recent theory proposed that drag reduction, in agreement with experiments, is consistent with the effective viscosity growing linearly with the distance from the wall. With this self-consistent solution the reduction in the Reynolds stress overwhelms the increase in viscous drag. In this Rapid Communication we show, using direct numerical simulations, that a linear viscosity profile indeed reduces the drag in agreement with the theory and in close correspondence with direct simulations of the FENE-P model at the same flow conditions.

  4. Calculating distance by wireless ethernet signal strength for global positioning method

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Yong; Kim, Jeehong; Lee, Chang-goo

    2005-12-01

    This paper investigated mobile robot localization using wireless Ethernet for global localization and INS for relative localization. For relative localization, a low-cost, self-contained INS was adopted; low-cost MEMS-based INS has a short-period response and acceptable performance. A variety of sensors are generally used for mobile robot localization, but in spite of precise sensor modeling, this leads inevitably to the accumulation of errors. The IEEE802.11b wireless Ethernet standard has been deployed in office buildings, museums, hospitals, shopping centers and other indoor environments. Many mobile robots already make use of wireless networking for communication, so location sensing with wireless Ethernet might be very useful for a low-cost robot. This research used a wireless Ethernet card to compensate for the accumulation of errors, so the mobile robot can perform global localization using the many IEEE802.11b wireless Ethernet access points installed in indoor environments. The chief difficulty in localization with wireless Ethernet is predicting signal strength: as a sensor, RF signal strength measured indoors is non-linear with distance. We therefore built profiles of signal strength at known points and used a fitted function relating the signal strength profile to the distance from the wireless Ethernet access point.
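
A textbook log-distance path-loss model illustrates why naive inversion of signal strength is fragile indoors, which is the motivation for calibrating per-point profiles instead; the model form and parameters here are assumptions, not the paper's calibration:

```python
def distance_from_rssi(rssi_dbm, rssi_at_1m_dbm=-40.0, path_loss_exp=3.0):
    """Invert a log-distance path-loss model,
    RSSI(d) = RSSI(1 m) - 10*n*log10(d), to estimate range from signal
    strength.  Indoors, multipath makes the true relation non-linear and
    site-dependent, so these parameters must be calibrated per site."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))

print(round(distance_from_rssi(-70.0), 2))  # -> 10.0 m under these parameters
```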

  5. Evaluation of goal kicking performance in international rugby union matches.

    PubMed

    Quarrie, Kenneth L; Hopkins, Will G

    2015-03-01

    Goal kicking is an important element in rugby but has been the subject of minimal research. To develop and apply a method to describe the on-field pattern of goal-kicking and rank the goal kicking performance of players in international rugby union matches. Longitudinal observational study. A generalized linear mixed model was used to analyze goal-kicking performance in a sample of 582 international rugby matches played from 2002 to 2011. The model adjusted for kick distance, kick angle, a rating of the importance of each kick, and venue-related conditions. Overall, 72% of the 6769 kick attempts were successful. Forty-five percent of points scored during the matches resulted from goal kicks, and in 5.7% of the matches the result of the match hinged on the outcome of a kick attempt. There was an extremely large decrease in success with increasing distance (odds ratio for two SD distance 0.06, 90% confidence interval 0.05-0.07) and a small decrease with increasingly acute angle away from the mid-line of the goal posts (odds ratio for 2 SD angle, 0.44, 0.39-0.49). Differences between players were typically small (odds ratio for 2 between-player SD 0.53, 0.45-0.65). The generalized linear mixed model with its random-effect solutions provides a tool for ranking the performance of goal kickers in rugby. This modelling approach could be applied to other performance indicators in rugby and in other sports in which discrete outcomes are measured repeatedly on players or teams. Copyright © 2015. Published by Elsevier Ltd.

  6. Rise and Shock: Optimal Defibrillator Placement in a High-rise Building.

    PubMed

    Chan, Timothy C Y

    2017-01-01

    Out-of-hospital cardiac arrests (OHCA) in high-rise buildings experience lower survival and longer delays until paramedic arrival. Use of publicly accessible automated external defibrillators (AED) can improve survival, but "vertical" placement has not been studied. We aim to determine whether elevator-based or lobby-based AED placement results in shorter vertical distance travelled ("response distance") to OHCAs in a high-rise building. We developed a model of a single-elevator, n-floor high-rise building. We calculated and compared the average distance from AED to floor of arrest for the two AED locations. We modeled OHCA occurrences using floor-specific Poisson processes, with the risk of OHCA on the ground floor denoted λ₁ and the risk on any above-ground floor denoted λ. The elevator was modeled with an override function enabling direct travel to the target floor. The elevator location upon override was modeled as a discrete uniform random variable. Calculations used the laws of probability. Elevator-based AED placement had shorter average response distance if the number of floors (n) in the building exceeded three quarters of the ratio of ground-floor OHCA risk to above-ground floor risk (λ₁/λ) plus one half (n ≥ 3λ₁/(4λ) + 1/2). Otherwise, a lobby-based AED had shorter average response distance. If OHCA risk on each floor was equal, an elevator-based AED had shorter average response distance. Elevator-based AEDs travel less vertical distance to OHCAs in tall buildings or those with uniform vertical risk, while lobby-based AEDs travel less vertical distance in buildings with substantial lobby, underground, and nearby street-level traffic and OHCA risk.
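    The placement rule above transcribes directly into code; the inequality is the abstract's, while the example rates are made-up illustrations:

```python
def elevator_beats_lobby(n, lam1, lam):
    """Decision rule from the abstract: an elevator-based AED gives the
    shorter average vertical response distance iff n >= 3*lam1/(4*lam) + 0.5,
    where n is the number of floors, lam1 the ground-floor OHCA rate and
    lam the rate on any above-ground floor."""
    return n >= 3 * lam1 / (4 * lam) + 0.5

# Uniform risk (lam1 == lam): the threshold is 1.25, so the elevator wins
# in any building with at least two floors.
print(elevator_beats_lobby(2, 1.0, 1.0))    # True

# Heavy ground-floor risk: the lobby wins unless the building is tall.
print(elevator_beats_lobby(10, 40.0, 1.0))  # False (threshold is 30.5)
print(elevator_beats_lobby(40, 40.0, 1.0))  # True
```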

  7. Effect of electric potential and current on mandibular linear measurements in cone beam CT.

    PubMed

    Panmekiate, S; Apinhasmit, W; Petersson, A

    2012-10-01

    The purpose of this study was to compare mandibular linear distances measured from cone beam CT (CBCT) images produced by different radiographic parameter settings (peak kilovoltage and milliampere value). 20 cadaver hemimandibles with edentulous ridges posterior to the mental foramen were embedded in clear resin blocks and scanned by a CBCT machine (CB MercuRay(TM); Hitachi Medico Technology Corp., Chiba-ken, Japan). The radiographic parameters comprised four peak kilovoltage settings (60 kVp, 80 kVp, 100 kVp and 120 kVp) and two milliampere settings (10 mA and 15 mA). A 102.4 mm field of view was chosen. Each hemimandible was scanned 8 times with 8 different parameter combinations resulting in 160 CBCT data sets. On the cross-sectional images, six linear distances were measured. To assess the intraobserver variation, the 160 data sets were remeasured after 2 weeks. The measurement precision was calculated using Dahlberg's formula. With the same peak kilovoltage, the measurements yielded by different milliampere values were compared using the paired t-test. With the same milliampere value, the measurements yielded by different peak kilovoltage were compared using analysis of variance. A significant difference was considered when p < 0.05. Measurement precision varied from 0.03 mm to 0.28 mm. No significant differences in the distances were found among the different radiographic parameter combinations. Based upon the specific machine in the present study, low peak kilovoltage and milliampere value might be used for linear measurements in the posterior mandible.
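    The precision statistic named above, Dahlberg's formula, computes sqrt(Σd² / 2n) over the differences between duplicate measurements; a minimal sketch with invented sample values:

```python
import math

def dahlberg(first, second):
    """Dahlberg's formula for measurement precision from duplicate
    measurements: sqrt(sum(d_i^2) / (2n)), where d_i is the difference
    between the first and second measurement of item i. The sample
    values below are invented for illustration."""
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# Three distances (mm) measured twice, two weeks apart.
print(round(dahlberg([10.1, 9.8, 10.0], [10.0, 9.9, 10.2]), 3))  # 0.1
```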

  8. Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles

    NASA Astrophysics Data System (ADS)

    Mini, C.; Hogue, T. S.; Pincetl, S.

    2012-04-01

    Modeling water demand is a complex exercise in the choice of the functional form, techniques and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by an inverse-distance weighting method. Remotely-sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscape areas in each zip code as being statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables and estimates of evapotranspiration at very high spatial resolution.
A genetic-algorithm-based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of the prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
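    The inverse-distance weighting step described above (climate values interpolated to each zip-code centroid from surrounding ground stations) can be sketched as follows; the coordinates, values and distance exponent are illustrative assumptions, not the study's data:

```python
import math

def idw(centroid, stations, power=2):
    """Inverse-distance-weighted estimate of a variable at a zip-code
    centroid from surrounding ground stations. `stations` is a list of
    ((x, y), value) pairs; the power-2 exponent is a common default,
    assumed here rather than taken from the paper."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(x - centroid[0], y - centroid[1])
        if d == 0:                 # centroid coincides with a station
            return value
        w = d ** -power
        num += w * value
        den += w
    return num / den

stations = [((0.0, 0.0), 10.0), ((4.0, 0.0), 20.0)]
# Midpoint of two equally distant stations -> simple average.
print(idw((2.0, 0.0), stations))  # 15.0
```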

  9. Leaking Underground Storage Tanks and Environmental Injustice: Is There a Hidden and Unequal Threat to Public Health in South Carolina?

    PubMed Central

    Wilson, Sacoby; Zhang, Hongmei; Burwell, Kristen; Samantapudi, Ashok; Dalemarre, Laura; Jiang, Chengsheng; Rice, LaShanta; Williams, Edith; Naney, Charles

    2014-01-01

    There are approximately 590,000 underground storage tanks (USTs) nationwide that store petroleum or hazardous substances. Many of these tanks are leaking, which may increase the risk of exposure to contaminants that promote health problems in host neighborhoods. Within this study, we assessed disparities in the spatial distribution of leaking underground storage tanks (LUSTs) based on socioeconomic status (SES) and race/ethnicity in South Carolina (SC). Chi-square tests were used to evaluate the difference in the proportion of populations who host a LUST compared to those not hosting a LUST for all sociodemographic factors. Linear regression models were applied to examine the association of distance to the nearest LUST with relevant sociodemographic measures. As percent black increased, the distance (both in kilometers and miles) to the nearest LUST decreased. Similar results were observed for percent poverty, unemployment, persons with less than a high school education, blacks in poverty, and whites in poverty. Furthermore, chi-square tests indicated that blacks or non-whites or people with low SES were more likely to live in LUST host areas than in non-host areas. As buffer distance increased, percent black and non-white decreased. SES variables demonstrated a similar inverse relationship. Overall, burden disparities exist in the distribution of LUSTs based on race/ethnicity and SES in SC. PMID:24729829

  10. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.

  11. A study of infrasound propagation based on high-order finite difference solutions of the Navier-Stokes equations.

    PubMed

    Marsden, O; Bogey, C; Bailly, C

    2014-03-01

    The feasibility of using numerical simulation of fluid dynamics equations for the detailed description of long-range infrasound propagation in the atmosphere is investigated. The two-dimensional (2D) Navier-Stokes equations are solved via high-fidelity spatial finite differences and Runge-Kutta time integration, coupled with a shock-capturing filter procedure allowing large amplitudes to be studied. The accuracy of acoustic prediction over long distances with this approach is first assessed in the linear regime thanks to two test cases featuring an acoustic source placed above a reflective ground in a homogeneous and weakly inhomogeneous medium, solved for a range of grid resolutions. An atmospheric model which can account for realistic features affecting acoustic propagation is then described. A 2D study of the effect of source amplitude on signals recorded at ground level at varying distances from the source is carried out. Modifications both in terms of waveforms and arrival times are described.

  12. Spatial Disparities in the Distribution of Parks and Green Spaces in the USA

    PubMed Central

    Wen, Ming; Zhang, Xingyou; Harris, Carmen D.; Holt, James B.; Croft, Janet B.

    2013-01-01

    Background Little national evidence is available on spatial disparities in distributions of parks and green spaces in the USA. Purpose This study examines ecological associations of spatial access to parks and green spaces with percentages of black, Hispanic, and low-income residents across the urban–rural continuum in the conterminous USA. Methods Census tract-level park and green space data were linked with data from the 2010 U.S. Census and 2006–2010 American Community Surveys. Linear mixed regression models were performed to examine these associations. Results Poverty levels were negatively associated with distances to parks and percentages of green spaces in urban/suburban areas while positively associated in rural areas. Percentages of blacks and Hispanics were in general negatively linked to distances to parks and green space coverage along the urban–rural spectrum. Conclusions Place-based race–ethnicity and poverty are important correlates of spatial access to parks and green spaces, but the associations vary across the urbanization levels. PMID:23334758

  13. Spatial disparities in the distribution of parks and green spaces in the USA.

    PubMed

    Wen, Ming; Zhang, Xingyou; Harris, Carmen D; Holt, James B; Croft, Janet B

    2013-02-01

    Little national evidence is available on spatial disparities in distributions of parks and green spaces in the USA. This study examines ecological associations of spatial access to parks and green spaces with percentages of black, Hispanic, and low-income residents across the urban-rural continuum in the conterminous USA. Census tract-level park and green space data were linked with data from the 2010 U.S. Census and 2006-2010 American Community Surveys. Linear mixed regression models were performed to examine these associations. Poverty levels were negatively associated with distances to parks and percentages of green spaces in urban/suburban areas while positively associated in rural areas. Percentages of blacks and Hispanics were in general negatively linked to distances to parks and green space coverage along the urban-rural spectrum. Place-based race-ethnicity and poverty are important correlates of spatial access to parks and green spaces, but the associations vary across the urbanization levels.

  14. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires we know the transformation of r given by f(0 | r).
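    A standard consequence of this framework (for g(0) = 1) is the estimator D = n·f(0)/(2L), where f is the probability density of the observed right-angle distances and L the transect length. A minimal sketch, with f(0) estimated by a crude rectangular kernel of assumed half-width h (any consistent estimator of f(0) could be substituted):

```python
def density_estimate(distances, line_length, h):
    """Nonparametric line-transect density estimate from perpendicular
    (right-angle) distances: D = n * f(0) / (2 * L). Here f(0) is estimated
    as the fraction of observations within h of the line, divided by h
    (a simple rectangular-kernel choice made for illustration)."""
    n = len(distances)
    f0 = sum(1 for y in distances if y < h) / (n * h)
    return n * f0 / (2.0 * line_length)

# 4 detections along a transect of length 10; 2 lie within h = 0.2 of the line.
print(density_estimate([0.05, 0.1, 0.3, 0.7], 10.0, 0.2))  # 0.5
```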

  15. Impact imaging of aircraft composite structure based on a model-independent spatial-wavenumber filter.

    PubMed

    Qiu, Lei; Liu, Bin; Yuan, Shenfang; Su, Zhongqing

    2016-01-01

    The spatial-wavenumber filtering technique is an effective approach to distinguish the propagating direction and wave mode of Lamb wave in spatial-wavenumber domain. Therefore, it has been gradually studied for damage evaluation in recent years. But for on-line impact monitoring in practical application, the main problem is how to realize the spatial-wavenumber filtering of impact signal when the wavenumber of high spatial resolution cannot be measured or the accurate wavenumber curve cannot be modeled. In this paper, a new model-independent spatial-wavenumber filter based impact imaging method is proposed. In this method, a 2D cross-shaped array constructed by two linear piezoelectric (PZT) sensor arrays is used to acquire impact signal on-line. The continuous complex Shannon wavelet transform is adopted to extract the frequency narrowband signals from the frequency wideband impact response signals of the PZT sensors. A model-independent spatial-wavenumber filter is designed based on the spatial-wavenumber filtering technique. Based on the designed filter, a wavenumber searching and best match mechanism is proposed to implement the spatial-wavenumber filtering of the frequency narrowband signals without modeling, which can be used to obtain a wavenumber-time image of the impact relative to a linear PZT sensor array. By using the two wavenumber-time images of the 2D cross-shaped array, the impact direction can be estimated without blind angle. The impact distance relative to the 2D cross-shaped array can be calculated by using the difference of time-of-flight between the frequency narrowband signals of two different central frequencies and the corresponding group velocities. The validations performed on a carbon fiber composite laminate plate and an aircraft composite oil tank show a good impact localization accuracy of the model-independent spatial-wavenumber filter based impact imaging method. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. N-tuple topological/geometric cutoffs for 3D N-linear algebraic molecular codifications: variability, linear independence and QSAR analysis.

    PubMed

    García-Jacas, C R; Marrero-Ponce, Y; Barigye, S J; Hernández-Ortega, T; Cabrera-Leyva, L; Fernández-Castillo, A

    2016-12-01

    Novel N-tuple topological/geometric cutoffs to consider specific inter-atomic relations in the QuBiLS-MIDAS framework are introduced in this manuscript. These molecular cutoffs permit relations between more than two atoms to be taken into account, by using (dis-)similarity multi-metrics and concepts related to topological and Euclidean-geometric distances. To this end, the kth two-, three- and four-tuple topological and geometric neighbourhood quotient (NQ) total (or local-fragment) spatial-(dis)similarity matrices are defined, to represent 3D information corresponding to the relations between two, three and four atoms of the molecular structures that satisfy certain cutoff criteria. First, an analysis of a diverse chemical space for the most common values of topological/Euclidean-geometric distances, bond/dihedral angles, triangle/quadrilateral perimeters, triangle area and volume was performed in order to determine the intervals to take into account in the cutoff procedures. A variability analysis based on Shannon's entropy reveals that better distribution patterns are attained with the descriptors based on the cutoffs proposed (QuBiLS-MIDAS NQ-MDs) with regard to the results obtained when all inter-atomic relations are considered (QuBiLS-MIDAS KA-MDs - 'Keep All'). A principal component analysis shows that the novel molecular cutoffs codify chemical information captured by the respective QuBiLS-MIDAS KA-MDs, as well as information not captured by the latter. Lastly, a QSAR study to obtain deeper knowledge of the contribution of the proposed methods was carried out, using four molecular datasets (steroids (STER), angiotensin converting enzyme (ACE), thermolysin inhibitors (THER) and thrombin inhibitors (THR)) widely used as benchmarks in the evaluation of several methodologies. One to four variable QSAR models based on multiple linear regression were developed for each compound dataset following the original division into training and test sets.
The results obtained reveal that the novel cutoff procedures yield superior performances relative to those of the QuBiLS-MIDAS KA-MDs in the prediction of the biological activities considered. From the results achieved, it can be suggested that the proposed N-tuple topological/geometric cutoffs constitute a relevant criterion for generating MDs codifying particular atomic relations, ultimately useful in enhancing the modelling capacity of the QuBiLS-MIDAS 3D-MDs.

  17. Evaluation of the 3dMDface system as a tool for soft tissue analysis.

    PubMed

    Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W

    2017-06-01

    To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using 3dMDface system were made at 0, 1 and 24 hours; 1, 2, 3 and 4 weeks. Error magnitude statistics used include mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared to shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Predicting regional variations in mortality from motor vehicle crashes.

    PubMed

    Clark, D E; Cushing, B M

    1999-02-01

    To show that the previously-observed inverse relationship between population density and per-capita mortality from motor vehicle crashes can be derived from a simple mathematical model that can be used for prediction. The authors proposed models in which the number of fatal crashes in an area was directly proportional to the population and also to some power of the mean distance between hospitals. Alternatively, these can be parameterized as Weibull survival models. Using county and state data from the U.S. Census, the authors fitted linear regression equations on a logarithmic scale to test the validity of these models. The southern states conformed to a different model from the other states. If an indicator variable was used to distinguish these groups, the resulting model accounted for 74% of the variation from state to state (Alaska excepted). After controlling for mean inter-hospital distance, the southern states had a per-capita mortality 1.37 times that of the other states. Simply knowing the mean distance between hospitals in a region allows a fairly accurate estimate of its per-capita mortality from vehicle crashes. After controlling for this factor, vehicle crash mortality per capita is higher in the southern states, for reasons yet to be explained.
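    The log-scale regression described can be sketched as follows; the data below are synthetic, generated from the stated functional form, not the Census figures used in the study:

```python
import numpy as np

# The model described: fatal crashes proportional to population times a power
# of mean inter-hospital distance, fitted as a linear regression on a log
# scale: log(deaths/population) = a + b*log(distance). The numbers are
# synthetic, generated from that very model with b = 0.5, purely to show
# the fitting step.
distance = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # mean inter-hospital distance
per_capita = 1e-4 * distance ** 0.5                  # synthetic mortality rates

b, a = np.polyfit(np.log(distance), np.log(per_capita), 1)
print(round(b, 3))  # 0.5 recovered from the synthetic data
```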

  19. Orthogonal Projection in Teaching Regression and Financial Mathematics

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2010-01-01

    Two improvements in teaching linear regression are suggested. The first is to include the population regression model at the beginning of the topic. The second is to use a geometric approach: to interpret the regression estimate as an orthogonal projection and the estimation error as the distance (which is minimized by the projection). Linear…
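    The geometric interpretation suggested here is easy to demonstrate numerically: the OLS fitted values are the orthogonal projection of y onto the column space of X, so the residual is orthogonal to every regressor. The data below are arbitrary illustrative numbers:

```python
import numpy as np

# OLS as orthogonal projection: y_hat = X @ beta_hat lies in col(X), and the
# estimation error (residual) is perpendicular to col(X), i.e. X' e = 0.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=20)])  # intercept + one regressor
y = 2.0 + 3.0 * X[:, 1] + rng.normal(size=20)            # true line plus noise

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta_hat            # projection of y onto col(X)
residual = y - y_hat            # the minimized distance

# Orthogonality: X'e = 0 (up to floating-point error).
print(np.allclose(X.T @ residual, 0.0))  # True
```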

  20. Finite element method modeling to assess Laplacian estimates via novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation has been demonstrated in a range of applications. In our recent work we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts using finite element method modeling. Obtained results suggest that increasing inter-ring distances electrode configurations may decrease the estimation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration the estimation error may be decreased more than two-fold while for the quadripolar configuration more than six-fold decrease is expected.

  1. MMU (Manned Maneuvering Unit) Task Simulator.

    DTIC Science & Technology

    1986-01-15

    motion is obtained by applying the Clohessy-Wiltshire equations for terminal rendezvous/docking with the earth modeled as a uniform sphere (Appendix...quaternions. The Clohessy-Wiltshire equations for terminal rendezvous/docking are used to model orbital drift. These are linearized equations of...system is the Clohessy-Wiltshire system, centered at the target and described in detail in Appendix A. The earth's vector list is scaled at one distance

  2. Instability of cooperative adaptive cruise control traffic flow: A macroscopic approach

    NASA Astrophysics Data System (ADS)

    Ngoduy, D.

    2013-10-01

    This paper proposes a macroscopic model to describe the operations of cooperative adaptive cruise control (CACC) traffic flow, which is an extension of adaptive cruise control (ACC) traffic flow. In CACC traffic flow a vehicle can exchange information with many preceding vehicles through wireless communication. Due to such communication the CACC vehicle can follow its leader at a closer distance than the ACC vehicle. The stability diagrams are constructed from the developed model based on the linear and nonlinear stability method for a certain model parameter set. It is found analytically that CACC vehicles enhance the stabilization of traffic flow with respect to both small and large perturbations compared to ACC vehicles. Numerical simulation is carried out to support our analytical findings. Based on the nonlinear stability analysis, we will show analytically and numerically that the CACC system better improves the dynamic equilibrium capacity over the ACC system. We have argued that in parallel to microscopic models for CACC traffic flow, the newly developed macroscopic model will provide a complete insight into the dynamics of intelligent traffic flow.

  3. Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.

    PubMed

    Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C

    2017-03-01

    Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017-To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at the Δ70% intensity, and at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time model (INV). Five thousand meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000-D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five thousand meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered as moderate. The results of this study have shown the importance of aerobic and anaerobic energy system contribution to predict 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
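    The linear distance-time (LIN) model mentioned above, d = D' + CS·t, can be fitted by ordinary least squares and used to predict 5,000-m time via t = (5,000 − D')/CS; the trial data below are invented for illustration, not the study's measurements:

```python
import numpy as np

# LIN critical-speed model: distance covered d = D' + CS * t, so CS is the
# slope and D' the intercept of a straight line through (time, distance)
# pairs from runs to exhaustion. The trial data are made up for illustration.
t = np.array([180.0, 420.0, 720.0])       # s, times to exhaustion
d = np.array([1100.0, 2060.0, 3260.0])    # m, distances covered
CS, D_prime = np.polyfit(t, d, 1)         # slope = CS, intercept = D'

# Predicted 5,000-m time: t = (5000 - D') / CS.
t_5000 = (5000.0 - D_prime) / CS
print(round(CS, 2), round(D_prime, 1), round(t_5000, 1))  # 4.0 380.0 1155.0
```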

  4. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    ERIC Educational Resources Information Center

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…
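    A minimal sketch of this idea under assumed choices for the distance measure and weight function (the abstract does not specify them, and for brevity the sketch refits once with weights rather than adding points one at a time):

```python
import numpy as np

def robust_fit(x, y, frac=0.5):
    """Sketch of the abstract's idea: (1) pick an initial subsample by a
    distance measure (here, distance of x from its median), (2) fit OLS on
    it, (3) bring in the remaining points with weights that shrink as their
    residuals from the initial fit grow, and refit by weighted least
    squares. Returns [intercept, slope]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones_like(x), x])

    core = np.argsort(np.abs(x - np.median(x)))[: max(2, int(frac * len(x)))]
    b0 = np.linalg.lstsq(X[core], y[core], rcond=None)[0]   # initial coefficients

    resid = np.abs(y - X @ b0)
    scale = np.median(resid) + 1e-12                        # robust residual scale
    sw = np.sqrt(1.0 / (1.0 + (resid / scale) ** 2))        # outliers get tiny weight
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

x = np.arange(10.0)
y = 1.0 + 2.0 * x
y[-1] = 100.0                         # one gross outlier
print(robust_fit(x, y)[1])            # slope stays near 2 despite the outlier
```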

  5. Classification model based on Raman spectra of selected morphological and biochemical tissue constituents for identification of atherosclerosis in human coronary arteries.

    PubMed

    Peres, Marines Bertolo; Silveira, Landulfo; Zângaro, Renato Amaro; Pacheco, Marcos Tadeu Tavares; Pasqualucci, Carlos Augusto

    2011-09-01

    This study presents the results of Raman spectroscopy applied to the classification of arterial tissue based on a simplified model using basal morphological and biochemical information extracted from the Raman spectra of arteries. The Raman spectrograph uses an 830-nm diode laser, imaging spectrograph, and a CCD camera. A total of 111 Raman spectra from arterial fragments were used to develop the model, and those spectra were compared to the spectra of collagen, fat cells, smooth muscle cells, calcification, and cholesterol in a linear fit model. Non-atherosclerotic (NA), fatty and fibrous-fatty atherosclerotic plaques (A) and calcified (C) arteries exhibited different spectral signatures related to different morphological structures presented in each tissue type. Discriminant analysis based on Mahalanobis distance was employed to classify the tissue type with respect to the relative intensity of each compound. This model was subsequently tested prospectively in a set of 55 spectra. The simplified diagnostic model showed that cholesterol, collagen, and adipocytes were the tissue constituents that gave the best classification capability and that those changes were correlated to histopathology. The simplified model, using spectra obtained from a few tissue morphological and biochemical constituents, showed feasibility by using a small amount of variables, easily extracted from gross samples.
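    The discriminant step can be sketched as follows: each spectrum is reduced to relative-intensity scores for a few constituents, then assigned to the class whose mean is closest in Mahalanobis distance. The score vectors, class means and covariance below are invented for illustration:

```python
import numpy as np

def mahalanobis_classify(scores, class_means, pooled_cov):
    """Assign a spectrum, represented by constituent scores (e.g. collagen
    and cholesterol relative intensities from a linear fit), to the class
    whose mean has the smallest squared Mahalanobis distance."""
    inv_cov = np.linalg.inv(pooled_cov)
    dists = {}
    for label, mu in class_means.items():
        diff = scores - mu
        dists[label] = float(diff @ inv_cov @ diff)
    return min(dists, key=dists.get)

# Illustrative 2-D scores: (collagen, cholesterol) relative intensities.
means = {"NA": np.array([0.8, 0.1]),   # non-atherosclerotic: collagen-rich
         "A":  np.array([0.3, 0.6])}   # atherosclerotic: cholesterol-rich
cov = np.array([[0.02, 0.0], [0.0, 0.02]])

print(mahalanobis_classify(np.array([0.35, 0.55]), means, cov))  # A
```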

  6. Estimation of spatially restricted LET using track structure models

    NASA Technical Reports Server (NTRS)

    Kiefer, J.

    1994-01-01

    The spatial distribution of energy deposition is an important determinant in the formation of biologically significant lesions. It has been widely realized that Linear Energy Transfer (LET), being an average quantity, is not sufficient to describe the situation at a submicroscopic scale. To remedy this to some extent, 'energy-cut-off' values are sometimes used, but since they are related to secondary electron energy and only indirectly to their range, they are also not adequate, although they may be easily calculated. 'Range-restricted LET' appears to be better, but its determination is usually quite involved. Xapsos (1992) suggested a semi-empirical approximation based on a modified Bethe formula which contains a number of assumptions that are difficult to verify. A simpler and easier way is to use existing beam models which describe the energy deposition around an ion's path. They all agree that the energy density (i.e., energy deposited per unit mass) decreases with the inverse square of the distance from the track center. This simple dependence can be used to determine the fraction of the total LET which is deposited in a cylinder of a given radius. As an example, in our own beam model the energy density depends on the distance x (measured in m) from the track center according to this inverse-square relation.
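    With the inverse-square radial energy density stated above, the radially restricted fraction of LET follows by integrating over a cylinder; a sketch in which the inner and outer radii r_min and r_max (track core and maximum delta-ray range) are model parameters assumed for illustration:

```python
import math

def restricted_let_fraction(r, r_min, r_max):
    """With energy density proportional to 1/x^2, the energy per unit track
    length inside radius r is the integral of (k/x^2) * 2*pi*x dx from
    r_min to r, i.e. 2*pi*k*ln(r/r_min). The fraction of total LET inside
    a cylinder of radius r is therefore ln(r/r_min) / ln(r_max/r_min).
    r_min and r_max are track-model parameters, not values from the text."""
    return math.log(r / r_min) / math.log(r_max / r_min)

# The restricted fraction grows only logarithmically with radius:
print(round(restricted_let_fraction(10.0, 1.0, 1000.0), 3))  # 0.333
```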

  7. Sensory integration of a light touch reference in human standing balance.

    PubMed

    Assländer, Lorenz; Smith, Craig P; Reynolds, Raymond F

    2018-01-01

    In upright stance, light touch of a space-stationary touch reference reduces spontaneous sway. Moving the reference evokes sway responses which exhibit non-linear behavior that has been attributed to sensory reweighting. Reweighting refers to a change in the relative contribution of sensory cues signaling body sway in space and light touch cues signaling finger position with respect to the body. Here we test the hypothesis that the sensory fusion process involves a transformation of light touch signals into the same reference frame as other sensory inputs encoding body sway in space, or vice versa. Eight subjects lightly gripped a robotic manipulandum which moved in a circular arc around the ankle joint. A pseudo-randomized motion sequence with broad spectral characteristics was applied at three amplitudes. The stimulus was presented at two different heights and therefore different radial distances, which were matched in terms of angular motion. However, the higher stimulus evoked a significantly larger sway response, indicating that the response was not matched to stimulus angular motion. Instead, the body sway response was strongly related to the horizontal translation of the manipulandum. The results suggest that light touch is integrated as the horizontal distance between body COM and the finger. The data were well explained by a model with one feedback loop minimizing changes in horizontal COM-finger distance. The model further includes a second feedback loop estimating the horizontal finger motion and correcting the first loop when the touch reference is moving. The second loop includes the predicted transformation of sensory signals into the same reference frame and a non-linear threshold element that reproduces the non-linear sway responses, thus providing a mechanism that can explain reweighting.

  8. Sensory integration of a light touch reference in human standing balance

    PubMed Central

    Smith, Craig P.; Reynolds, Raymond F.

    2018-01-01

    In upright stance, light touch of a space-stationary touch reference reduces spontaneous sway. Moving the reference evokes sway responses which exhibit non-linear behavior that has been attributed to sensory reweighting. Reweighting refers to a change in the relative contribution of sensory cues signaling body sway in space and light touch cues signaling finger position with respect to the body. Here we test the hypothesis that the sensory fusion process involves a transformation of light touch signals into the same reference frame as other sensory inputs encoding body sway in space, or vice versa. Eight subjects lightly gripped a robotic manipulandum which moved in a circular arc around the ankle joint. A pseudo-randomized motion sequence with broad spectral characteristics was applied at three amplitudes. The stimulus was presented at two different heights and therefore different radial distances, which were matched in terms of angular motion. However, the higher stimulus evoked a significantly larger sway response, indicating that the response was not matched to stimulus angular motion. Instead, the body sway response was strongly related to the horizontal translation of the manipulandum. The results suggest that light touch is integrated as the horizontal distance between body COM and the finger. The data were well explained by a model with one feedback loop minimizing changes in horizontal COM-finger distance. The model further includes a second feedback loop estimating the horizontal finger motion and correcting the first loop when the touch reference is moving. The second loop includes the predicted transformation of sensory signals into the same reference frame and a non-linear threshold element that reproduces the non-linear sway responses, thus providing a mechanism that can explain reweighting. PMID:29874252

  9. Research of Face Recognition with Fisher Linear Discriminant

    NASA Astrophysics Data System (ADS)

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy remains difficult: human faces have diverse expressions and attribute changes such as eyeglasses, mustache, beard, and others. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes while increasing between-class distance relative to intra-class scatter, so as to produce better classification.
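    A minimal two-class FLD sketch, assuming synthetic 2-D feature vectors in place of real face features: the discriminant direction maximizes between-class separation relative to within-class scatter.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher Linear Discriminant: w = Sw^-1 (m1 - m0), i.e. the direction
    maximizing between-class distance relative to within-class scatter Sw."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    return w, (m0 + m1) / 2

rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 0.5, size=(50, 2))   # synthetic "class 0" features
X1 = rng.normal([3, 1], 0.5, size=(50, 2))   # synthetic "class 1" features
w, midpoint = fisher_direction(X0, X1)

def predict(x):
    # Project onto w and threshold at the midpoint of the class means.
    return int((x - midpoint) @ w > 0)
```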

  10. Structural characterization of astaxanthin aggregates as revealed by analysis and simulation of optical spectra

    NASA Astrophysics Data System (ADS)

    Lu, Liping; Hu, Taoping; Xu, Zhigang

    2017-10-01

    Carotenoids can self-assemble in hydrated polar solvents to form J- or H-type aggregates, inducing dramatic changes in photophysical properties. Here, we measured absorption and emission spectra of astaxanthin in ethanol-water solution using ultraviolet-visible and fluorescence spectrometers. Two types of aggregates were distinguished in mixed solutions at different water contents by their absorption spectra. After addition of water, all probed samples immediately formed H-aggregates with a maximum blue shift of 31 nm. In addition, a J-aggregate formed in 1:3 ethanol-water solution measured after an hour. Based on the Frenkel exciton model, we calculated linear absorption and emission spectra of these aggregates to describe the aggregate structures in solution. For astaxanthin, the experimental results agreed well with the fitted spectra of H-aggregate models consisting of tightly packed stacks of individual molecules, including hexamers, trimers, and dimers. The transition moment of a single astaxanthin molecule in ethanol was obtained with the Gaussian 09 program package to estimate the distance between molecules in aggregates. The intermolecular distance in astaxanthin aggregates ranges from 0.45 nm to 0.9 nm. Fluorescence analysis showed that strong exciton coupling between subbands induced rapid relaxation in the H-aggregates. This coupling generated a larger Stokes shift than in monomers and J-aggregates.

  11. Holographic self-tuning of the cosmological constant

    NASA Astrophysics Data System (ADS)

    Charmousis, Christos; Kiritsis, Elias; Nitti, Francesco

    2017-09-01

    We propose a brane-world setup based on gauge/gravity duality in which the four-dimensional cosmological constant is set to zero by a dynamical self-adjustment mechanism. The bulk contains Einstein gravity and a scalar field. We study holographic RG flow solutions, with the standard model brane separating an infinite volume UV region and an IR region of finite volume. For generic values of the brane vacuum energy, regular solutions exist such that the four-dimensional brane is flat. Its position in the bulk is determined dynamically by the junction conditions. Analysis of linear fluctuations shows that a regime of 4-dimensional gravity is possible at large distances, due to the presence of an induced gravity term. The graviton acquires an effective mass, and a five-dimensional regime may exist at large and/or small scales. We show that, for a broad choice of potentials, flat-brane solutions are manifestly stable and free of ghosts. We compute the scalar contribution to the force between brane-localized sources and show that, in certain models, the vDVZ discontinuity is absent and the effective interaction at short distances is mediated by two transverse graviton helicities.

  12. A study on nonlinear estimation of submaximal effort tolerance based on the generalized MET concept and the 6MWT in pulmonary rehabilitation

    PubMed Central

    Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek

    2018-01-01

    Background The six-minute walk test (6MWT) is considered a simple and inexpensive tool for assessing functional tolerance of submaximal effort. The aims of this work were 1) to establish the nonlinear nature of the energy-expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test and those of the 6MWT in pulmonary patients, and 3) to develop nonlinear mathematical models relating the two. Methods The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, least squares and genetic algorithms were employed to estimate parameters of polynomial expansion and piecewise linear models. Results Mathematical analysis enabled the construction of nonlinear models for estimating the MET result of the submaximal exercise test based on average walk velocity (or distance) in the 6MWT. Conclusions Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
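    A hedged sketch of the polynomial-expansion variant: fitting a second-order model of MET score against 6MWT average walk velocity by ordinary least squares. The calibration pairs below are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical calibration pairs: 6MWT average walk velocity (km/h) vs
# submaximal treadmill test score (METs). Values are illustrative only.
velocity = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
mets     = np.array([2.1, 2.6, 3.3, 4.2, 5.4, 6.9, 8.7])

# Least-squares fit of a 2nd-order polynomial expansion, one of the two
# model families considered (piecewise linear being the other).
coeffs = np.polyfit(velocity, mets, deg=2)
estimate_met = np.poly1d(coeffs)

residuals = mets - estimate_met(velocity)
rmse = float(np.sqrt(np.mean(residuals ** 2)))
```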

  13. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    PubMed

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

    We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors large interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns with linear transformations derived from the metric tensor yields a dataset that can be classified more efficiently. We test our method on publicly available datasets with several standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without further preprocessing, our results improve on their performance.
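    The sketch below captures the spirit of such a cost function with a simplified diagonal metric, weighting each feature by its between-class over within-class scatter so that distances shrink along noisy directions and stretch along discriminative ones. It is a toy stand-in, not the analytic tensor derived in the paper.

```python
import numpy as np

def diagonal_metric(X, y):
    """Toy diagonal metric: per-feature between-class variance divided by
    within-class variance (a simplified stand-in for the analytic tensor)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(2)
# Feature 0 separates the classes; feature 1 is pure noise.
X = np.vstack([rng.normal([0, 0], [0.3, 1.0], size=(40, 2)),
               rng.normal([2, 0], [0.3, 1.0], size=(40, 2))])
y = np.array([0] * 40 + [1] * 40)
g = diagonal_metric(X, y)   # metric weight per feature
```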

  14. Optimal design of compact and connected nature reserves for multiple species.

    PubMed

    Wang, Yicheng; Önal, Hayri

    2016-04-01

    When designing a conservation reserve system for multiple species, spatial attributes of the reserves must be taken into account at the species level. The existing optimal reserve design literature considers either one spatial attribute or, when multiple attributes are considered, restricts the analysis to one species. We built a linear integer programming model that incorporates compactness and connectivity of the landscape reserved for multiple species. The model identifies multiple reserves that each serve a subset of target species with a specified coverage probability threshold to ensure the species' long-term survival in the reserve, and each target species is covered (protected) with another probability threshold at the reserve-system level. We modeled compactness by minimizing the total distance between selected sites and central sites, and we modeled connectivity of a selected site to its designated central site by selecting at least one of its adjacent sites that is nearer to the central site. We considered structural distances and functional distances that incorporate site quality between sites. We tested the model using randomly generated data on 2 species, one a ground species that required structural connectivity and the other an avian species that required functional connectivity. We applied the model to 10 bird species listed as endangered by the state of Illinois (U.S.A.). Spatial coherence and selection cost of the reserves differed substantially depending on the weights assigned to these 2 criteria. The model can be used to design a reserve system for multiple species, especially species whose habitats are far apart, in which case multiple disjunct but compact and connected reserves are advantageous. The model can be modified to increase or decrease the distance between reserves to reduce or promote population connectivity. © 2015 Society for Conservation Biology.
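    A brute-force toy version of the compactness objective (choose k sites and a central site among them minimizing the summed distances to that centre), without the paper's connectivity and coverage-probability constraints; the candidate sites and Manhattan metric are invented for illustration.

```python
import itertools

# Toy grid of candidate sites (two tight clusters).
sites = [(0, 0), (0, 1), (1, 0), (1, 1), (4, 4), (4, 5)]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def most_compact(sites, k):
    """Enumerate all k-subsets and candidate central sites; return the
    (cost, subset, centre) triple with minimum total site-to-centre distance."""
    best = None
    for subset in itertools.combinations(sites, k):
        for centre in subset:
            cost = sum(manhattan(s, centre) for s in subset)
            if best is None or cost < best[0]:
                best = (cost, subset, centre)
    return best

cost, reserve, centre = most_compact(sites, 4)   # picks the tight 2x2 cluster
```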

  15. Proteins QSAR with Markov average electrostatic potentials.

    PubMed

    González-Díaz, Humberto; Uriarte, Eugenio

    2005-11-15

    Classic physicochemical and topological indices have been used widely in small-molecule QSAR but less so in protein QSAR. In this study, a Markov model is used to calculate, for the first time, average electrostatic potentials ξk for an indirect interaction between amino acids placed at topological distance k within a given protein backbone. The short-term average stochastic potential ξ1 for 53 Arc repressor mutants was used to model the effect of alanine scanning on thermal stability. The Arc repressor is a model protein of relevance for biochemical studies in bioorganic and medicinal chemistry. A linear discriminant analysis model correctly classified 43 out of 53 (81.1%) proteins according to their thermal stability. More specifically, the model classified 20/28 (71.4%) proteins with near wild-type stability and 23/25 (92.0%) proteins with reduced stability. Moreover, predictability in cross-validation procedures was 81.0%. Expansion of the electrostatic potential in the series ξ0, ξ1, ξ2, and ξ3 justified the abrupt-truncation approach: the overall accuracy was >70.0% for ξ0 and essentially equal for ξ1, ξ2, and ξ3. The ξ1 model compared favorably with others based on D-Fire potential, surface area, volume, partition coefficient, and molar refractivity, which reached less than 77.0% accuracy [Ramos de Armas, R.; González-Díaz, H.; Molina, R.; Uriarte, E. Protein Struct. Funct. Bioinf. 2004, 56, 715]. The ξ1 model also has a more tractable interpretation than others based on Markovian negentropies and stochastic moments, and it is notably simpler than the two models based on quadratic and linear indices reported by Marrero-Ponce et al., which use four to five times more descriptors. Introduction of average stochastic potentials may be useful for QSAR applications, since ξk has an amenable physical interpretation and is very effective.

  16. Augmented Method to Improve Thermal Data for the Figure Drift Thermal Distortion Predictions of the JWST OTIS Cryogenic Vacuum Test

    NASA Technical Reports Server (NTRS)

    Park, Sang C.; Carnahan, Timothy M.; Cohen, Lester M.; Congedo, Cherie B.; Eisenhower, Michael J.; Ousley, Wes; Weaver, Andrew; Yang, Kan

    2017-01-01

    The JWST Optical Telescope Element (OTE) assembly is the largest optically stable infrared-optimized telescope currently being manufactured and assembled, and is scheduled for launch in 2018. The JWST OTE, including the 18-segment primary mirror, the secondary mirror, and the Aft Optics Subsystem (AOS), is designed to be passively cooled and to operate near 45 K. These optical elements are supported by a complex composite backplane structure. As part of the structural distortion model validation efforts, a series of tests is planned during the cryogenic vacuum test of the fully integrated flight hardware at NASA JSC Chamber A. The success of the thermal-distortion test phases depends heavily on accurate knowledge of the temperatures of the OTE structural members. However, the temperature sensor allocations during the cryo-vac test may not have sufficient fidelity to provide accurate knowledge of the temperature distributions within the composite structure. A method based on an inverse-distance relationship among the sensors and thermal model nodes was developed to improve the thermal data provided for the nanometer-scale WaveFront Error (WFE) predictions. The Linear Distance Weighted Interpolation (LDWI) method was developed to augment the thermal model predictions based on the sparse sensor information. This paper covers the development of the LDWI method using test data from the earlier pathfinder cryo-vac tests, and the results of the notional and as-tested WFE predictions from the structural finite element model cases that characterize the accuracy of this LDWI method.
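    A minimal sketch of the inverse-distance idea behind LDWI, assuming hypothetical sensor positions and readings; the flight method augments thermal-model predictions from sparse sensors rather than interpolating raw temperatures as directly as this.

```python
import math

def idw_temperature(node_xyz, sensors, power=1):
    """Distance-weighted interpolation (power=1 gives a linear inverse-distance
    weight): estimate the temperature at an unsensed model node from sensor
    readings, each weighted by 1/distance**power."""
    num = den = 0.0
    for xyz, t in sensors:
        d = math.dist(node_xyz, xyz)
        if d == 0.0:
            return t                    # node coincides with a sensor
        w = 1.0 / d ** power
        num += w * t
        den += w
    return num / den

# Hypothetical sensor locations (m) and temperatures (K) on a structure
sensors = [((0.0, 0.0, 0.0), 40.0), ((1.0, 0.0, 0.0), 50.0)]
t_mid = idw_temperature((0.5, 0.0, 0.0), sensors)   # midpoint estimate
```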

  17. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and laser tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
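    A small sketch of the risk-priority-number bookkeeping that such an analysis rests on; the occurrence and severity scores below are invented placeholders, not the paper's measured values.

```python
# Hypothetical FMEA scores for a few TG-142 QA tests (invented values).
qa_tests = {
    "output":               {"occurrence": 8, "severity": 5},
    "lasers":               {"occurrence": 7, "severity": 5},
    "imaging_vs_isocenter": {"occurrence": 3, "severity": 9},
    "odi_jaws_light":       {"occurrence": 2, "severity": 4},
}

def rpn(test, detectability=1):
    """Risk priority number = occurrence x severity x (lack of) detectability."""
    t = qa_tests[test]
    return t["occurrence"] * t["severity"] * detectability

# Rank tests by RPN; a higher rank suggests more frequent QA checks.
ranked = sorted(qa_tests, key=rpn, reverse=True)
```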

  18. Hierarchy and Scope of Planning in Subject-Verb Agreement Production

    ERIC Educational Resources Information Center

    Gillespie, Maureen; Pearlmutter, Neal J.

    2011-01-01

    Two subject-verb agreement error elicitation studies tested the hierarchical feature-passing account of agreement computation in production and three timing-based alternatives: linear distance to the head noun, semantic integration, and a combined effect of both (a scope of planning account). In Experiment 1, participants completed subject noun…

  19. Modeling spatial accessibility of immigrants to culturally diverse family physicians.

    PubMed

    Wanga, Lu; Roisman, Deborah

    2011-01-01

    This article uses accessibility as an analytical tool to examine health care access among immigrants in a multicultural urban setting. It applies and improves on two widely used accessibility models, the gravity model and the two-step floating catchment area model, in measuring spatial accessibility for Mainland Chinese immigrants in the Toronto Census Metropolitan Area. Empirical data on physician-seeking behaviors were collected through two rounds of questionnaire surveys. Attention is focused on the journey to the physician's location and on utilization of linguistically matched family physicians. Based on the survey data, a two-zone accessibility model is developed by relaxing the travel threshold and distance impedance parameters that are traditionally treated as constants in accessibility models. General linear models are used to identify relationships among spatial accessibility, geography, and socioeconomic characteristics of Mainland Chinese immigrants. The results suggest a spatial mismatch in the supply of and demand for culturally sensitive care, and residential location is the primary factor determining spatial accessibility to family physicians. The article yields important policy implications.
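    A toy two-step floating catchment area (2SFCA) computation, with invented physician and population locations and an assumed travel threshold. Step 1 computes each physician's supply-to-demand ratio within the catchment; step 2 sums reachable ratios at each population site.

```python
threshold = 10.0   # travel cost threshold (e.g., minutes); assumed value

physicians = {"p1": (0.0, 0.0), "p2": (8.0, 0.0)}
# population site -> (location, population count); invented data
population = {"a": ((1.0, 0.0), 100), "b": ((9.0, 0.0), 200), "c": ((30.0, 0.0), 50)}

def dist(u, v):
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5

# Step 1: supply-to-demand ratio for each physician within the threshold.
ratios = {}
for pid, ploc in physicians.items():
    demand = sum(n for loc, n in population.values() if dist(ploc, loc) <= threshold)
    ratios[pid] = 1.0 / demand if demand else 0.0

# Step 2: accessibility at each population site = sum of reachable ratios.
access = {}
for site, (loc, _) in population.items():
    access[site] = sum(r for pid, r in ratios.items()
                       if dist(physicians[pid], loc) <= threshold)
```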

  20. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  1. Patterns of Reproductive Isolation in Eucalyptus-A Phylogenetic Perspective.

    PubMed

    Larcombe, Matthew J; Holland, Barbara; Steane, Dorothy A; Jones, Rebecca C; Nicolle, Dean; Vaillancourt, René E; Potts, Brad M

    2015-07-01

    We assess phylogenetic patterns of hybridization in the speciose, ecologically and economically important genus Eucalyptus, in order to better understand the evolution of reproductive isolation. Eucalyptus globulus pollen was applied to 99 eucalypt species, mainly from the large commercially important subgenus, Symphyomyrtus. In the 64 species that produce seeds, hybrid compatibility was assessed at two stages, hybrid-production (at approximately 1 month) and hybrid-survival (at 9 months), and compared with phylogenies based on 8,350 genome-wide DArT (diversity arrays technology) markers. Model fitting was used to assess the relationship between compatibility and genetic distance, and whether or not the strength of incompatibility "snowballs" with divergence. There was a decline in compatibility with increasing genetic distance between species. Hybridization was common within two closely related clades (one including E. globulus), but rare between E. globulus and species in two phylogenetically distant clades. Of three alternative models tested (linear, slowdown, and snowball), we found consistent support for a snowball model, indicating that the strength of incompatibility accelerates relative to genetic distance. Although we can only speculate about the genetic basis of this pattern, it is consistent with a Dobzhansky-Muller-model prediction that incompatibilities should snowball with divergence due to negative epistasis. Different rates of compatibility decline in the hybrid-production and hybrid-survival measures suggest that early-acting postmating barriers developed first and are stronger than later-acting barriers. We estimated that complete reproductive isolation can take up to 21-31 My in Eucalyptus. Practical implications for hybrid eucalypt breeding and genetic risk assessment in Australia are discussed. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. 

  2. Planning Training Workload in Football Using Small-Sided Games' Density.

    PubMed

    Sangnier, Sebastien; Cotte, Thierry; Brachet, Olivier; Coquart, Jeremy; Tourny, Claire

    2018-05-08

    Sangnier, S, Cotte, T, Brachet, O, Coquart, J, and Tourny, C. Planning training workload in football using small-sided games density. J Strength Cond Res XX(X): 000-000, 2018-The density of small-sided games (SSGs) may be essential for developing physical qualities in soccer. Small-sided games are games in which the pitch size, number of players, and rules differ from those of traditional soccer matches. The purpose was to assess the relation between training workload and SSGs' density. The 33 density values (from 41 practice games and 3 full games) were analyzed through global positioning system (GPS) data collected from 25 professional soccer players (80.7 ± 7.0 kg; 1.83 ± 0.05 m; 26.4 ± 4.9 years). From total distance, metabolic power distance, sprint distance, and acceleration distance, the GPS data were divided into 4 categories: endurance, power, speed, and strength. Statistical analysis compared the relation between GPS values and SSGs' densities, and 3 methods were applied to assess the models (R-squared, root-mean-square error, and Akaike information criterion). The results suggest that all the GPS data match the player's essential athletic skills. They were all correlated with the game's density. Acceleration distance, deceleration distance, metabolic power, and total distance followed a logarithmic regression model, whereas sprint distance and number of sprints followed a linear regression model. The research reveals options to monitor the training workload. Coaches could anticipate the load resulting from the SSGs and adjust the field size to the number of players. Taking into account the field size during SSGs enables coaches to target the most favorable density for developing the expected physical qualities. Calibrating intensity during SSGs would allow coaches to assess each athletic skill in the same conditions of intensity as in competition.
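    A hedged sketch of fitting the logarithmic model reported for total distance versus SSG density, using invented density/distance pairs and ordinary least squares on the log-transformed predictor.

```python
import numpy as np

# Illustrative (invented) pairs of SSG density (m^2 per player) and total
# distance covered (m), shaped to follow a logarithmic trend.
density = np.array([50.0, 75.0, 100.0, 150.0, 200.0, 300.0])
total_distance = np.array([1500.0, 1750.0, 1900.0, 2100.0, 2250.0, 2450.0])

# Fit y = a*ln(x) + b: linear least squares after log-transforming x.
a, b = np.polyfit(np.log(density), total_distance, deg=1)

def predicted_distance(d):
    return a * np.log(d) + b
```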

  3. A Process-Based Transport-Distance Model of Aeolian Transport

    NASA Astrophysics Data System (ADS)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

    We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and on distributions of transport lengths, which are functions of particle size class. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been used successfully for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
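    A minimal sketch of the exponential transport-length assumption, using an invented mean transport distance for one size class. One convenience the abstract alludes to: for an exponential distribution, the median follows directly from the mean as mean * ln 2.

```python
import math
import random

def sample_transport_distance(mean_distance, rng):
    """Draw one saltation transport length from an exponential distribution,
    the form fitted to the simulated hop lengths."""
    return rng.expovariate(1.0 / mean_distance)

rng = random.Random(42)
mean_d = 0.12   # assumed mean transport length (m) for one size class
hops = [sample_transport_distance(mean_d, rng) for _ in range(20000)]
sample_mean = sum(hops) / len(hops)
median_expected = mean_d * math.log(2)   # exponential: median = mean * ln 2
```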

  4. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of mean squared error.

  5. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of mean squared error. PMID:25279263

  6. Regional co-location pattern scoping on a street network considering distance decay effects of spatial interaction

    PubMed Central

    Yu, Wenhao

    2017-01-01

    Regional co-location scoping intends to identify local regions where spatial features of interest are frequently located together. Most previous research in this domain is conducted at a global scale and assumes that spatial objects are embedded in a 2-D space, but movement in urban space is actually constrained by the street network. In this paper we refine the scope of co-location patterns to 1-D paths consisting of nodes and segments. Furthermore, since the relations between spatial events are usually inversely proportional to their separation distance, the proposed method introduces "distance decay effects" to improve the result. Specifically, our approach first subdivides the street edges into continuous small linear segments. Then a value representing the local distribution intensity of events is estimated for each linear segment using a distance-decay function. Each kind of geographic feature leads to a tessellated network with a density attribute, and the multiple networks generated for the pattern of interest are finally combined into a composite network by calculating co-location prevalence measure values, which are based on the density variation between the different features. Our experiments verify that the proposed approach is effective in urban analysis. PMID:28763496
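    A 1-D toy of the distance-decayed intensity estimate per segment and a simple co-location score combining two features; the decay parameter, positions, and feature names are all invented, and a real implementation would use network (shortest-path) distances rather than positions along a single street.

```python
import math

def segment_intensity(seg_midpoints, events, beta=0.5):
    """Local intensity per small street segment: sum of distance-decayed
    contributions exp(-beta * d) from each event, so nearer events count more."""
    return [sum(math.exp(-beta * abs(m - e)) for e in events)
            for m in seg_midpoints]

# 1-D street abstraction: segment midpoints and event positions along it
midpoints = [0.5, 1.5, 2.5, 3.5]
cafes = [1.0, 1.2]         # hypothetical feature-A events
banks = [1.1]              # hypothetical feature-B events

intensity_a = segment_intensity(midpoints, cafes)
intensity_b = segment_intensity(midpoints, banks)
# A simple per-segment co-location score: product of the two intensities
colocation = [a * b for a, b in zip(intensity_a, intensity_b)]
```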

  7. Simulating the performance of a distance-3 surface code in a linear ion trap

    NASA Astrophysics Data System (ADS)

    Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.

    2018-04-01

    We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.

  8. Compensatory selection for roads over natural linear features by wolves in northern Ontario: Implications for caribou conservation

    PubMed Central

    Patterson, Brent R.; Anderson, Morgan L.; Rodgers, Arthur R.; Vander Vennen, Lucas M.; Fryxell, John M.

    2017-01-01

    Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. 
This behavioral mechanism, a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors, has negative consequences for the viability of woodland caribou. PMID:29117234

  10. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was used to project a high-contrast tracking spot for depth calculation by standard triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an Optitrack motion-tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms. In these tests, the QUAV hovered with fairly good accuracy during both static and dynamic short-range collision avoidance, and performed better against obstacles with dull surfaces than with shiny surfaces. The minimum collision avoidance distance achieved was 0.4 m. The approach is suitable for short-range collision avoidance.
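
The depth calculation rests on standard stereo triangulation, Z = f·B/d for focal length f, baseline B, and disparity d. A minimal sketch under the assumption of identical, rectified cameras; the function and parameter names are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of the tracked laser spot by standard stereo triangulation.

    focal_px   -- focal length in pixels (identical, rectified cameras assumed)
    baseline_m -- separation of the two camera centres in metres
    x_*_px     -- horizontal pixel coordinate of the spot in each image
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("expected positive disparity for a spot in front of the rig")
    return focal_px * baseline_m / disparity  # Z = f * B / d
```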

  11. Accessibility and distribution of the Norwegian National Air Emergency Service: 1988-1998.

    PubMed

    Heggestad, Torhild; Børsheim, Knut Yngve

    2002-01-01

    To evaluate the accessibility and distribution of the Norwegian National Air Emergency Service in the 10-year period from 1988 to 1998. The primary material was annual standardized activity data that included all helicopter missions. A multivariate model of determinants for use of the helicopter service was computed by linear regression. Accessibility was measured as the percentage of the population reached within different flying times, and we evaluated the service using a simulation of alternative locations for the helicopter bases. The helicopter emergency medical service (HEMS) has short access times, with a mean reaction time of 8 minutes and a mean response time of 26 minutes for acute missions. Nearly all patients (98%) are reached within 1 hour. A simulation that tested alternative locations of the helicopter bases compared with current locations showed no increase in accessibility. The use of the service shows large regional differences. Multivariate analyses showed that the distances of the patients from the nearest helicopter base and the nearest hospital are significant determinants of the use of HEMS. Establishment of a national service has given the Norwegian population better access to highly qualified prehospital emergency services. Furthermore, the HEMS has a compensating effect in adjusting for differences in traveling distances to a hospital. Safety, cost-containment, and gatekeeper functions remain challenges.

  12. Tractography Verified by Intraoperative Magnetic Resonance Imaging and Subcortical Stimulation During Tumor Resection Near the Corticospinal Tract.

    PubMed

    Münnich, Timo; Klein, Jan; Hattingen, Elke; Noack, Anika; Herrmann, Eva; Seifert, Volker; Senft, Christian; Forster, Marie-Therese

    2018-04-14

    Tractography is a popular tool for visualizing the corticospinal tract (CST). However, results may be influenced by numerous variables, e.g., the selection of seeding regions of interest (ROIs) or the chosen tracking algorithm. To compare different variable sets by correlating tractography results with intraoperative subcortical stimulation of the CST, correcting for intraoperative brain shift by use of intraoperative MRI. Seeding ROIs were created by means of motor cortex segmentation, functional MRI (fMRI), and navigated transcranial magnetic stimulation (nTMS). Based on these ROIs, tractography was run for each patient using a deterministic and a probabilistic algorithm. Tractographies were processed on pre- and postoperatively acquired data. Using a linear mixed-effects statistical model, the best correlation between subcortical stimulation intensity and the distance between tractography and stimulation sites was achieved by using the segmented motor cortex as the seeding ROI and applying the probabilistic algorithm to preoperatively acquired imaging sequences. Tractographies based on fMRI or nTMS results differed very little, but with enlargement of positive nTMS sites, the stimulation-distance correlation of nTMS-based tractography improved. Our results underline that the use of tractography demands careful interpretation of its virtual results, considering all influencing variables.

  13. Effects of multiple concurrent stressors on rectal temperature, blood acid-base status, and longissimus muscle glycolytic potential in market-weight pigs.

    PubMed

    Ritter, M J; Ellis, M; Anderson, D B; Curtis, S E; Keffaber, K K; Killefer, J; McKeith, F K; Murphy, C M; Peterson, B A

    2009-01-01

    Sixty-four market-weight (130.0 +/- 0.65 kg) barrows (n = 16) and gilts (n = 48) were used in a split-plot design with a 2 x 2 x 2 factorial arrangement of treatments: 1) handling intensity (gentle vs. aggressive), 2) transport floor space (0.39 vs. 0.49 m(2)/pig), and 3) distance moved during handling (25 vs. 125 m) to determine the effects of multiple concurrent stressors on metabolic responses. For the handling intensity treatment, pigs were moved individually approximately 50 m through a handling course with either 0 (gentle) or 8 (aggressive) shocks from an electric goad. Pigs were loaded onto a trailer and transported for approximately 1 h at floor spaces of either 0.39 or 0.49 m(2)/pig. After transport, pigs were unloaded, and the distance moved treatment was applied; pigs were moved 25 or 125 m through a handling course using livestock paddles. Rectal temperature was measured, and blood samples (to measure blood acid-base status) were collected 2 h before the handling intensity treatment was applied and immediately after the distance moved treatment was applied. A LM sample to measure glycolytic potential was collected after the distance moved treatments on a subset of 32 pigs. There were handling intensity x distance moved interactions (P < 0.05) for several blood acid-base measurements. In general, there was no effect of distance moved on these traits when pigs were previously handled gently. However, when pigs were previously handled aggressively, pigs moved 125 compared with 25 m had greater (P < 0.05) blood lactate and less (P < 0.05) blood pH, bicarbonate, and base-excess. Pigs transported at 0.39 compared with 0.49 m(2)/pig had a greater (P < 0.01) increase in creatine kinase values; however, transport floor space did not affect any other measurements. Data were analyzed by the number of stressors (the aggressive handling, restricted transport floor space, and 125-m distance moved treatments) experienced by each pig (0, 1, 2, or 3). 
As the number of stressors experienced by the pig increased, rectal temperature, blood lactate, and LM lactate increased linearly (P

  14. A Calculation Method of Electric Distance and Subarea Division Application Based on Transmission Impedance

    NASA Astrophysics Data System (ADS)

    Fang, G. J.; Bao, H.

    2017-12-01

    The most widely used method of calculating electrical distance is the sensitivity method. The sensitivity matrix is the result of linearization and is based on the hypothesis that active and reactive power are decoupled, so it is inaccurate. In addition, it takes the ratio of two partial derivatives as the relationship between two dependent variables, so it has no physical meaning. This paper presents a new method for calculating electrical distance, the transmission impedance method. It forms power supply paths based on power flow tracing, then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is S instead of Q: Q itself has no direction, and the grid delivers complex power, so S contains more electrical information than Q. By describing the power transmission relationship of each branch and drawing block diagrams in both the forward and reverse directions, it can be seen that the numerators of the feedback parts of the two block diagrams are all transmission impedances. To ensure the distance is a scalar, the absolute value of the transmission impedance is defined as the electrical distance. Dividing the network according to these electrical distances and comparing with the results of the sensitivity method shows that the transmission impedance method adapts better to dynamic changes of the system and reaches a reasonable subarea division scheme.
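
Taking the definition above at face value, the electrical distance is the scalar absolute value of a path's transmission impedance. In the toy sketch below, summing complex branch impedances along a traced supply path is an assumption made for illustration; the paper derives the transmission impedance from generalized branches and block diagrams.

```python
def electrical_distance(branch_impedances):
    """Electrical distance of one power-supply path.

    branch_impedances -- complex impedances (R + jX) of the generalized
    branches along a path obtained from power-flow tracing. The distance
    is defined, per the method above, as the absolute value of the
    transmission impedance, so the result is a scalar.
    """
    return abs(sum(branch_impedances))
```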

  15. Nonmetallic electronegativity equalization and point-dipole interaction model including exchange interactions for molecular dipole moments and polarizabilities.

    PubMed

    Smalø, Hans S; Astrand, Per-Olof; Jensen, Lasse

    2009-07-28

    The electronegativity equalization model (EEM) has been combined with a point-dipole interaction model to obtain a molecular mechanics model consisting of atomic charges, atomic dipole moments, and two-atom relay tensors to describe molecular dipole moments and molecular dipole-dipole polarizabilities. The EEM has been phrased as an atom-atom charge-transfer model, allowing the charge-transfer terms to be modified so that the polarizability does not approach infinity for two particles at infinite distance or for long chains. In the present work, these shortcomings have been resolved by adding an energy term for transporting charges through individual atoms. A Gaussian distribution is adopted for the atomic charge distributions, resulting in a damping of the electrostatic interactions at short distances. Assuming that an interatomic exchange term may be described as the overlap between two electronic charge distributions, the EEM has also been extended by a short-range exchange term. The result is a molecular mechanics model in which the difference in charge transfer between insulating and metallic systems is modeled through the difference in bond length between the two types of system. For example, the model is capable of modeling charge transfer in both alkanes and alkenes with alternating double bonds with the same set of carbon parameters, relying only on the difference in bond length between carbon sigma- and pi-bonds. Analytical results have been obtained for the polarizability of a long linear chain. These results show that the model is capable of describing the polarizability scaling both linearly and nonlinearly with the size of the system. Similarly, a linear chain with an end atom of high electronegativity has been analyzed analytically. The dipole moment of this model system can either be independent of the length or increase linearly with the length of the chain.
In addition, the model has been parametrized for alkane and alkene chains with data from density functional theory calculations, where the polarizability behaves differently with the chain length. For the molecular dipole moment, the same two systems have been studied with an aldehyde end group. Both the molecular polarizability and the dipole moment are well described as a function of the chain length for both alkane and alkene chains demonstrating the power of the presented model.

  16. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers

    PubMed Central

    Gundle, Kenneth R.; White, Jedediah K.; Conrad, Ernest U.; Ching, Randal P.

    2017-01-01

    Introduction: Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Materials and Methods: Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point to the mean three-dimensional coordinate of the six points for each cluster. Results: Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. 
Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). Conclusion: In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system. PMID:28694888
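
The two outcome measures are simple to state in code. A sketch of both, with hypothetical function names; the precision measure follows the definition above (standard deviation of each repeated pick's distance to the cluster's mean 3-D coordinate).

```python
import math

def rms_error(measured_mm, actual_mm):
    """Accuracy: root-mean-square error between navigated and true distances."""
    n = len(measured_mm)
    return math.sqrt(sum((m - a) ** 2 for m, a in zip(measured_mm, actual_mm)) / n)

def precision_sd(points_mm):
    """Precision: standard deviation of each repeated pick's distance to the
    mean 3-D coordinate of its cluster."""
    n = len(points_mm)
    centroid = tuple(sum(c) / n for c in zip(*points_mm))
    dists = [math.dist(p, centroid) for p in points_mm]
    mean_d = sum(dists) / n
    return math.sqrt(sum((d - mean_d) ** 2 for d in dists) / n)
```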

  17. Overcoming bias in estimating the volume-outcome relationship.

    PubMed

    Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D

    2006-02-01

    To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
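
The instrumental-variables strategy can be illustrated with a bare two-stage least squares estimator. This sketch omits the risk adjusters, clustering, and inference a real analysis would need; the variable names in the docstring map onto the study only for illustration.

```python
import numpy as np

def iv_2sls(y, x, z):
    """Two-stage least squares estimate of the effect of x on y,
    instrumenting x with z.

    In the study's terms (illustrative only): y is 30-day mortality in a
    linear probability model, x is hospital volume, and z is the
    patient-hospital distance used to construct the instrument.
    """
    Z = np.column_stack([np.ones_like(z), z])
    # first stage: fitted values of the endogenous regressor
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    # second stage: regress the outcome on the fitted values
    return np.linalg.lstsq(X, y, rcond=None)[0]  # [intercept, effect of x]
```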

  18. Application of tripolar concentric electrodes and prefeature selection algorithm for brain-computer interface.

    PubMed

    Besio, Walter G; Cao, Hongbao; Zhou, Peng

    2008-04-01

    For persons with severe disabilities, a brain-computer interface (BCI) may be a viable means of communication. Laplacian electroencephalography (EEG) has been shown to improve classification in EEG recognition. In this work, the effectiveness of signals from tripolar concentric electrodes and disc electrodes was compared for use as a BCI. Two sets of left/right hand motor imagery EEG signals were acquired. An autoregressive (AR) model was developed for feature extraction, with a Mahalanobis distance based linear classifier for classification. An exhaustive selection algorithm was employed to analyze three factors before feature extraction: 1) the length of data in each trial to be used, 2) the start position of the data, and 3) the order of the AR model. The results showed that tripolar concentric electrodes generated significantly higher classification accuracy than disc electrodes.
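
A hedged sketch of the feature-extraction and classification pipeline: AR coefficients fit by ordinary least squares (the record does not specify the fitting method) and a minimum-Mahalanobis-distance rule with a pooled inverse covariance assumed precomputed. All names are illustrative.

```python
import numpy as np

def ar_features(signal, order):
    """AR(order) coefficients of one EEG trial, fit by ordinary least squares.

    Column k of the design matrix holds s[t-k-1], predicting s[t].
    """
    s = np.asarray(signal, dtype=float)
    X = np.column_stack([s[order - k - 1 : -k - 1] for k in range(order)])
    y = s[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def mahalanobis_classify(features, class_means, pooled_cov_inv):
    """Assign a feature vector to the class whose mean is nearest in
    Mahalanobis distance (pooled inverse covariance assumed given)."""
    d2 = [float((features - m) @ pooled_cov_inv @ (features - m))
          for m in class_means]
    return int(np.argmin(d2))
```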

  19. Hydrodynamic Modeling of Free Surface Interactions and Implications for P and Rg Waves Recorded on the Source Physics Experiments

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.

    2013-12-01

    A goal of the Source Physics Experiments (SPE) is to develop explosion source models that expand monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects on signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling the high-strain-rate, non-linear response of the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong shock zone out to the small-strain and ultimately the elastic regime, where a linear code can take over. We developed an interface with the Spectral Element Method code SPECFEM3D, an efficient parallel implementation of a high-order finite element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distances at low cost. We will present CASH-SPECFEM3D results for SPE1, a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping yield fixed, we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of P and Rg waves and analyse them with regard to the impact of free-surface interactions and the rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions developed at LANL to take the damage into account. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure.
Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.

  20. Unmixing Space Object’s Moderate Resolution Spectra

    DTIC Science & Technology

    2013-09-01

    In the visible, the non-resolved spectral signature of a space object is modeled as a linear mixture of spectral reflectance signatures and recovered by spectral unmixing. In the fitting objective, the first term expresses the Euclidean distance (l2) between the observed data and the forward model, and the second is an l1 term.

  1. Assembling programmable FRET-based photonic networks using designer DNA scaffolds

    PubMed Central

    Buckhout-White, Susan; Spillmann, Christopher M; Algar, W. Russ; Khachatrian, Ani; Melinger, Joseph S.; Goldman, Ellen R.; Ancona, Mario G.; Medintz, Igor L.

    2014-01-01

    DNA demonstrates a remarkable capacity for creating designer nanostructures and devices. A growing number of these structures utilize Förster resonance energy transfer (FRET) as part of the device's functionality, readout or characterization, and, as device sophistication increases, so do the concomitant FRET requirements. Here we create multi-dye FRET cascades and assess how well DNA can marshal organic dyes into nanoantennae that focus excitonic energy. We evaluate 36 increasingly complex designs including linear, bifurcated, Holliday junction, 8-arm star and dendrimers involving up to five different dyes engaging in four consecutive FRET steps, while systematically varying fluorophore spacing by Förster distance (R0). Decreasing R0 while augmenting cross-sectional collection area with multiple donors significantly increases terminal exciton delivery efficiency within dendrimers compared with the first linear constructs. Förster modelling confirms that best results are obtained when there are multiple interacting FRET pathways rather than independent channels by which excitons travel from initial donor(s) to final acceptor. PMID:25504073
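
The Förster dependence underlying these designs is the standard single-step efficiency E = 1/(1 + (r/R0)^6). The cascade function below simply multiplies single-step efficiencies, i.e., it assumes independent sequential hops; as the abstract notes, interacting multi-pathway designs can outperform this serial picture. Names are illustrative.

```python
def fret_efficiency(r, r0):
    """Single-step Foerster transfer efficiency: E = 1 / (1 + (r/R0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def cascade_efficiency(spacings, r0s):
    """End-to-end delivery through consecutive FRET steps, under the
    simplifying assumption of independent sequential hops."""
    e = 1.0
    for r, r0 in zip(spacings, r0s):
        e *= fret_efficiency(r, r0)
    return e
```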

  2. Internal resonance and low frequency vibration energy harvesting

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Towfighian, Shahrzad

    2017-09-01

    A nonlinear vibration energy harvester with internal resonance is presented. The proposed harvester consists of two cantilevers, each with a permanent magnet on its tip. One cantilever has a piezoelectric layer at its base. When magnetic force is applied this two degrees-of-freedom nonlinear vibration system shows the internal resonance phenomenon that broadens the frequency bandwidth compared to a linear system. Three coupled partial differential equations are obtained to predict the dynamic behavior of the nonlinear energy harvester. The perturbation method of multiple scales is used to solve equations. Results from experiments done at different vibration levels with varying distances between the magnets validate the mathematical model. Experiments and simulations show the design outperforms the linear system by doubling the frequency bandwidth. Output voltage for frequency response is studied for different system parameters. The optimal load resistance is obtained for the maximum power in the internal resonance case. The results demonstrate that a design combining internal resonance and magnetic nonlinearity improves the efficiency of energy harvesting.

  3. Linear discriminant analysis based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu

    2013-08-01

    Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using the L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
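
The criterion being maximized can be written down directly. The sketch below only evaluates the L1 dispersion ratio for a given projection vector; the paper's contribution is an iterative procedure for maximizing it, which is omitted here. Names are illustrative.

```python
import numpy as np

def l1_lda_objective(w, X, labels):
    """L1-norm LDA criterion: ratio of the L1-norm-based between-class
    dispersion to the L1-norm-based within-class dispersion of X @ w."""
    w = w / np.linalg.norm(w)
    proj = X @ w
    overall_mean = proj.mean()
    between = 0.0
    within = 0.0
    for c in np.unique(labels):
        pc = proj[labels == c]
        between += len(pc) * abs(pc.mean() - overall_mean)
        within += np.abs(pc - pc.mean()).sum()
    return between / within
```

A direction that separates the class means while keeping each class compact scores higher, which is exactly the robust analogue of the classical Fisher ratio.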

  4. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints.
This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
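
A stripped-down Hit-And-Run sketch. Instead of computing exact chord bounds from the linear constraints and slice sampling within the nonlinear ones, this version rejection-samples run lengths against a single feasibility oracle; the step-size cap and all names are illustrative.

```python
import math
import random

def hit_and_run(is_feasible, x0, n_samples, step_max=1.0, tries=50, rng=random):
    """Sample near-optimal alternatives with a simplified Hit-And-Run walk.

    is_feasible -- membership test for the near-optimal region (original
                   constraints plus the objective-tolerance constraint)
    x0          -- a feasible starting point
    step_max    -- cap on the run length (a simplification; the full method
                   derives exact chord bounds from the constraints)
    """
    x = list(x0)
    samples = []
    for _ in range(n_samples):
        # hit: draw a uniformly random direction on the unit sphere
        d = [rng.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]
        # run: rejection-sample a feasible step along the line
        for _ in range(tries):
            t = rng.uniform(-step_max, step_max)
            cand = [xi + t * di for xi, di in zip(x, d)]
            if is_feasible(cand):
                x = cand
                break
        samples.append(list(x))  # stay put if no feasible step was found
    return samples
```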

  5. Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.

    PubMed

    Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima

    2014-10-14

    Molecular mechanics and dynamics simulations use distance-based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, to reduce the overhead of finding atom pairs that are within the distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result they require a significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, the octree is a cache-friendly data structure and hence less prone to cache-miss slowdowns on modern memory hierarchies than nblists. Octrees use almost 2 orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that the octree implementation is approximately 1.5 times faster than nblists in practical use-case scenarios.
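
The octree itself is too large for a short sketch, but a uniform-grid cell list illustrates the same idea of answering cutoff queries with memory linear in the number of atoms. Note one difference: this grid is sized to a single cutoff and would be rebuilt per cutoff, whereas the paper's single octree serves all cutoffs. Names are illustrative.

```python
import math
from collections import defaultdict
from itertools import product

def neighbor_pairs(coords, cutoff):
    """All atom pairs within `cutoff`, found via a uniform grid of cells."""
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(coords):
        cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    pairs = []
    for (cx, cy, cz), members in cells.items():
        # only the 27 neighbouring cells can hold atoms within the cutoff
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j and math.dist(coords[i], coords[j]) <= cutoff:
                        pairs.append((i, j))
    return pairs
```

Each pair is emitted exactly once: same-cell pairs are filtered by `i < j`, and cross-cell pairs survive the filter only in one of the two symmetric cell visits.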

  6. Mapping the Dark Matter with 6dFGS

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.

    2012-05-01

    Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. Likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/sec and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic Mach number, which show a linearly declining bulk flow with increasing scale.

  7. Spatial structure, sampling design and scale in remotely-sensed imagery of a California savanna woodland

    NASA Technical Reports Server (NTRS)

    Mcgwire, K.; Friedl, M.; Estes, J. E.

    1993-01-01

    This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
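
    The idea of enforcing a minimum distance between samples to dampen spatial autocorrelation can be sketched as a greedy thinning pass over candidate locations. This is a generic illustration under assumed names, not the specific sampling scheme tested in the study:

```python
import math
import random

def thin_by_distance(points, min_dist, seed=0):
    """Greedy spatial thinning: keep a randomly ordered subset of candidate
    sample locations such that no two kept points are closer than min_dist.
    This reduces spatial autocorrelation between samples (an assumption of
    ordinary least-squares regression) at the cost of a smaller sample."""
    rng = random.Random(seed)
    candidates = list(points)
    rng.shuffle(candidates)
    kept = []
    for p in candidates:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept
```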

  8. The effects of target distance on pivot hip, trunk, pelvis, and kicking leg kinematics in Taekwondo roundhouse kicks.

    PubMed

    Kim, Jae-Woong; Kwon, Moon-Seok; Yenuga, Sree Sushma; Kwon, Young-Hoo

    2010-06-01

    The purpose of this study was to investigate the effects of target distance on pivot hip, trunk, pelvis, and kicking leg movements in the Taekwondo roundhouse kick. Twelve male black-belt holders executed roundhouse kicks at three target distances (normal, short, and long). Linear displacements of the pivot hip and orientation angles of the pelvis, trunk, right thigh, and right shank were obtained through three-dimensional video motion analysis. Selected displacements, distances, peak orientation angles, and angle ranges were compared among the conditions using one-way repeated-measures ANOVA (p < 0.05). Several orientation angle variables (posterior tilt range, peak right-tilted position, peak right-rotated position, peak left-rotated position, and left rotation range of the pelvis; peak hyperextended position and peak right-flexed position of the trunk; peak flexed position, flexion range and peak internally rotated position of the hip), as well as the linear displacements of the pivot hip and the reach, changed significantly in response to different target distances. It was concluded that the adjustment to different target distances was mainly accomplished through the pivot hip displacements, hip flexion, and pelvis left rotation. Target distance mainly affected the reach control function of the pelvis and the linear balance function of the trunk.

  9. On the inversion-indel distance

    PubMed Central

    2013-01-01

    Background The inversion distance, i.e. the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus handling insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletions (or vice versa), while only a heuristic was provided for the symmetric case, which allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, which can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, which allows the comparison of genomes with unequal contents, considering DCJ operations, insertions and deletions, and can be computed in linear time. Results In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182

  10. First Principles Modeling of the Performance of a Hydrogen-Peroxide-Driven Chem-E-Car

    ERIC Educational Resources Information Center

    Farhadi, Maryam; Azadi, Pooya; Zarinpanjeh, Nima

    2009-01-01

    In this study, the performance of a hydrogen-peroxide-driven car was simulated using basic conservation laws and a few auxiliary equations. A numerical method was implemented to solve sets of highly non-linear ordinary differential equations. Transient pressure and the corresponding traveled distance for three different car weights are…

  11. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
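
    The Gaussian kernel that the study compares against can be written down directly. Below is a minimal sketch of building such a spatial distance-based relationship matrix from 0/1/2-coded SNP genotypes; the bandwidth value is an illustrative assumption, and the diffusion kernel itself is not reproduced here:

```python
import math

def gaussian_kernel(genotypes, bandwidth):
    """Build the n-by-n Gaussian (RBF) kernel from SNP genotype vectors
    coded 0/1/2: K[i][j] = exp(-||x_i - x_j||^2 / bandwidth).  In kernel
    ridge regression this matrix serves as the distance-based relationship
    matrix among individuals."""
    n = len(genotypes)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(genotypes[i], genotypes[j]))
            K[i][j] = math.exp(-d2 / bandwidth)
    return K
```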

  12. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    PubMed

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted light and the light diffracted at each knife edge are superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters (wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography) on sensor outputs was investigated to obtain a novel analytical method for predicting linearity and sensitivity. According to the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used in the experiment. By utilizing a shorter wavelength, a smaller sensor distance, and a higher-quality edge, increased measurement sensitivity can be obtained. The model was experimentally validated, and the results showed good agreement with the theoretical estimates. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for design and performance estimation of knife edge-based sensors.

  13. Refocusing distance of a standard plenoptic camera.

    PubMed

    Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias

    2016-09-19

    Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it is shown in this paper that its solution yields an intersection indicating the distance to the refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% from an optical design software reference. The proposed refocusing estimator assists in predicting object distances, e.g., in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs.
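
    The core geometric step the abstract describes, intersecting two rays expressed as linear functions of the optical-axis coordinate, reduces to a small linear solve. A minimal sketch, with hypothetical slope/intercept inputs rather than the paper's actual ray parameters:

```python
def ray_intersection(m1, c1, m2, c2):
    """Each ray is modeled as a linear function y(z) = m*z + c of the
    optical-axis coordinate z.  Setting m1*z + c1 = m2*z + c2 and solving
    for z gives the point where the rays cross, i.e. the axial distance of
    the refocused object plane from the chosen origin."""
    if m1 == m2:
        raise ValueError("parallel rays never intersect")
    z = (c2 - c1) / (m1 - m2)
    return z, m1 * z + c1
```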

  14. Curriculum Guidelines for a Distance Education Course in Urban Agriculture Based on an Eclectic Model.

    ERIC Educational Resources Information Center

    Gaum, Wilma G.; van Rooyen, Hugo G.

    1997-01-01

    Describes research to develop curriculum guidelines for a distance education course in urban agriculture. The course, designed to train the teacher, is based on an eclectic curriculum design model. The course is aimed at the socioeconomic empowerment of urban farmers and is based on sustainable ecological-agricultural principles, an…

  15. Research in Distance Education: A System Modeling Approach.

    ERIC Educational Resources Information Center

    Saba, Farhad; Twitchell, David

    This demonstration of the use of a computer simulation research method based on the System Dynamics modeling technique for studying distance education reviews research methods in distance education, including the broad categories of conceptual and case studies, and presents a rationale for the application of systems research in this area. The…

  16. The Enigmatic Local Hubble Flow: Probing the Nearby Peculiar Velocity Field with Consistent Distances to Neighboring Galaxies.

    NASA Astrophysics Data System (ADS)

    Mendez, B.; Davis, M.; Newman, J.; Madore, B. F.; Freedman, W. L.; Moustakas, J.

    2002-12-01

    The properties of the velocity field in the local volume (cz < 550 km s-1) have been difficult to constrain due to the lack of a consistent set of galaxy distances. The sparse observations available to date suggest a remarkably quiet flow, with little deviation from a pure Hubble law. However, velocity field models based on the distribution of galaxies in the 1.2 Jy IRAS redshift survey predict a quadrupolar flow pattern locally, with strong infall at the poles of the local Supergalactic plane. In an attempt to resolve this discrepancy, we probe the local velocity field and begin to establish a consistent set of galactic distances. We have obtained images of nearby galaxies in I, V, and B bands from the W.M. Keck Observatory and in F814W and F555W filters from the Hubble Space Telescope. Where these galaxies are well resolved into stars, we can use the Tip of the Red Giant Branch (TRGB) as a distance indicator. Using a maximum likelihood analysis to quantitatively measure the I magnitude of the TRGB, we determine precise distances to several nearby galaxies. We supplement that dataset with published distances to local galaxies measured using Cepheids, Surface Brightness Fluctuations, and the TRGB. With these data we find that the amplitude of the local flow is roughly half that expected in linear theory and N-body simulations; thus the enigma of cold local flows persists. This work was supported in part by NASA through a grant from the Space Telescope Science Institute and a Predoctoral Fellowship for Minorities from the Ford Foundation.

  17. Investigating the impact of the properties of pilot points on calibration of groundwater models: case study of a karst catchment in Rote Island, Indonesia

    NASA Astrophysics Data System (ADS)

    Klaas, Dua K. S. Y.; Imteaz, Monzur Alam

    2017-09-01

    A robust configuration of pilot points in the parameterisation step of a model is crucial to accurately obtain a satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and the head-zonation-based (HZB) method are introduced, to assign pilot points into the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio (d/A) of 0.05, a distance-x-grid length ratio (d/X_grid) of 0.10, and a distance-y-grid length ratio (d/Y_grid) of 0.20.

  18. Evaluation of interpolation methods for TG-43 dosimetric parameters based on comparison with Monte Carlo data for high-energy brachytherapy sources.

    PubMed

    Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark

    2010-03-01

    The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the table entries of the radial dose function g_L(r) and the 2D anisotropy function F(r,θ), for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: 60Co model Co0.A86, 137Cs model CSM-3, 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps) and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation were evaluated for g_L(r). Linear-linear interpolation was used to obtain F(r,θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources on the same grid. Linear interpolation of g_L(r) gave differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
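
    The two interpolation schemes evaluated for g_L(r) can be sketched in a few lines; the mesh and values below are hypothetical, and the bilinear interpolation of F(r,θ) is the analogous construction in two variables:

```python
import math
from bisect import bisect_right

def interp_gl(r_mesh, gl_values, r, log_linear=False):
    """Interpolate the radial dose function g_L(r) between tabulated radii.
    Linear-linear interpolates g_L directly; logarithmic-linear
    interpolates ln(g_L) linearly in r, which suits a near-exponential
    fall-off of g_L with distance."""
    i = bisect_right(r_mesh, r) - 1
    i = max(0, min(i, len(r_mesh) - 2))          # clamp to the table range
    t = (r - r_mesh[i]) / (r_mesh[i + 1] - r_mesh[i])
    g0, g1 = gl_values[i], gl_values[i + 1]
    if log_linear:
        return math.exp((1 - t) * math.log(g0) + t * math.log(g1))
    return (1 - t) * g0 + t * g1
```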

  19. Modeling abundance effects in distance sampling

    USGS Publications Warehouse

    Royle, J. Andrew; Dawson, D.K.; Bates, S.

    2004-01-01

    Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
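
    Because a Poisson abundance thinned by binomial detection is again Poisson, the integrated likelihood described above has a closed form. A minimal sketch assuming a single, distance-averaged detection probability pbar (the paper estimates the detection function jointly with the density parameters, which is not shown here):

```python
import math

def neg_log_likelihood(beta0, beta1, pbar, counts, covariate):
    """Integrated distance-sampling likelihood sketch: local abundance
    N_i ~ Poisson(lambda_i) with log(lambda_i) = beta0 + beta1 * x_i, and
    detections n_i | N_i ~ Binomial(N_i, pbar).  Marginalising N_i gives
    n_i ~ Poisson(pbar * lambda_i) (Poisson thinning), so the integrated
    likelihood is a product of thinned Poisson terms."""
    nll = 0.0
    for n, x in zip(counts, covariate):
        mu = pbar * math.exp(beta0 + beta1 * x)
        nll -= n * math.log(mu) - mu - math.lgamma(n + 1)
    return nll
```

    Minimising this function over (beta0, beta1) with any optimiser recovers the covariate effects on abundance.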

  20. Topological Distances Between Brain Networks

    PubMed Central

    Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.

    2018-01-01

    Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers. A few extreme edge weights may severely affect the distance. Thus it is necessary to develop network distances that recognize topology. In this paper, we introduce Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. GH-distance is often used in persistent homology based brain network models. The superior performance of KS-distance is contrasted against matrix norms and GH-distance in random network simulations with the ground truths. The KS-distance is then applied in characterizing the multimodal MRI and DTI study of maltreated children.
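
    For illustration, a KS statistic can be computed directly between the edge-weight distributions of two networks. Note this is a simplification: the paper applies the KS-distance to topological features derived from the network filtration, not to raw edge weights.

```python
from bisect import bisect_right

def ks_network_distance(weights_a, weights_b):
    """Kolmogorov-Smirnov distance between two edge-weight samples: the
    maximum vertical gap between their empirical CDFs.  Unlike an
    element-wise matrix norm, this compares distributions, so a few
    extreme edge weights shift it only slightly."""
    a, b = sorted(weights_a), sorted(weights_b)
    grid = sorted(set(a) | set(b))
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in grid)
```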

  1. Angular scale expansion theory and the misperception of egocentric distance in locomotor space.

    PubMed

    Durgin, Frank H

    Perception is crucial for the control of action, but perception need not be scaled accurately to produce accurate actions. This paper reviews evidence for an elegant new theory of locomotor space perception that is based on the dense coding of angular declination so that action control may be guided by richer feedback. The theory accounts for why so much direct-estimation data suggests that egocentric distance is underestimated despite the fact that action measures have been interpreted as indicating accurate perception. Actions are calibrated to the perceived scale of space and thus action measures are typically unable to distinguish systematic (e.g., linearly scaled) misperception from accurate perception. Whereas subjective reports of the scaling of linear extent are difficult to evaluate in absolute terms, study of the scaling of perceived angles (which exist in a known scale, delimited by vertical and horizontal) provides new evidence regarding the perceptual scaling of locomotor space.

  2. A Sequential Ensemble Prediction System at Convection Permitting Scales

    NASA Astrophysics Data System (ADS)

    Milan, M.; Simmer, C.

    2012-04-01

    A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasting. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state-space evolution due to convectively driven processes. One way to take full account of nonlinear state developments are particle filter methods; their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, particle filtering with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we substitute the likelihood-based definition of the weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of nudging and resampling that takes account of clustering in the simulated state space. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system-state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking nonlinear system-state developments (e.g. sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions.
    We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated; during the model evolution, one particle of each pair evolves using the forward model, while the second particle is nudged towards the radar and satellite observations.
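
    The resampling core of SIR, before the clustering and nudging modifications the abstract describes, can be sketched as systematic resampling. The single uniform draw u is passed in explicitly here for reproducibility:

```python
def systematic_resample(particles, weights, u):
    """Systematic resampling: duplicate high-weight particles and drop
    low-weight ones while keeping the ensemble size fixed.  A single
    uniform offset u in [0, 1) places n evenly spaced pointers on the
    cumulative weight distribution, keeping copy-count variance low."""
    n = len(particles)
    total = sum(weights)
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w / total
        cumulative.append(acc)
    resampled, j = [], 0
    for i in range(n):
        pointer = (i + u) / n
        while cumulative[j] < pointer:
            j += 1
        resampled.append(particles[j])
    return resampled
```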

  3. Analysis of laser energy characteristics of laser guided weapons based on the hardware-in-the-loop simulation system

    NASA Astrophysics Data System (ADS)

    Zhu, Yawen; Cui, Xiaohong; Wang, Qianqian; Tong, Qiujie; Cui, Xutai; Li, Chenyu; Zhang, Le; Peng, Zhong

    2016-11-01

    The hardware-in-the-loop simulation system, which provides precise, controllable and repeatable test conditions, is an important part of the development of semi-active laser (SAL) guided weapons. In this paper, the characteristics of the laser energy chain were studied, providing a theoretical foundation for SAL guidance technology and the hardware-in-the-loop simulation system. Firstly, a simplified equation was proposed to adjust the radar equation according to the principles of the hardware-in-the-loop simulation system. Secondly, a theoretical model and calculation method were given for the energy chain characteristics of the hardware-in-the-loop simulation system. We then studied the reflection characteristics of the target and the distance between the missile and the target, together with major factors such as the weather. Finally, the accuracy of the model was verified experimentally, as the measured values generally follow the theoretical results from the model. The experimental results also revealed that the attenuation ratio of the laser energy exhibited a non-linear change with pulse number, in accord with the actual conditions.

  4. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

    FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii near the magnetic field line passing through the test point. This localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling rf waves using the FullWave code, including calculation of the nonlocal conductivity kernel in 2D tokamak geometry; interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and results of self-consistent simulations of 2D rf fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  5. A Study of the Effect of the Front-End Styling of Sport Utility Vehicles on Pedestrian Head Injuries

    PubMed Central

    Qin, Qin; Chen, Zheng; Bai, Zhonghao; Cao, Libo

    2018-01-01

    Background The number of sport utility vehicles (SUVs) on the Chinese market is continuously increasing. It is necessary to investigate the relationships between the front-end styling features of SUVs and head injuries at the styling design stage in order to improve pedestrian protection performance and product development efficiency. Methods Styling feature parameters were extracted from the SUV side contour line, and simplified finite element models were established based on the 78 SUV side contour lines. Pedestrian headform impact simulations were performed and validated. The head injury criterion over 15 ms (HIC15) at four wrap-around distances was obtained. A multiple linear regression analysis was employed to describe the relationships between the styling feature parameters and the HIC15 at each impact point. Results The relationships between the selected styling features and the HIC15 showed reasonable correlations, and the regression models and the selected independent variables were statistically significant. Conclusions The regression equations obtained by multiple linear regression can be used to assess the performance of SUV styling in protecting pedestrians' heads and provide styling designers with technical guidance regarding their artistic creations.

  6. Answer Markup Algorithms for Southeast Asian Languages.

    ERIC Educational Resources Information Center

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
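
    The "simple edit distance algorithm" mentioned can be sketched as the standard Levenshtein dynamic program; treating the learner's answer and the stored answer as sequences of glyphs sidesteps the non-linear rendering order:

```python
def edit_distance(a, b):
    """Levenshtein edit distance via dynamic programming: the minimum
    number of insertions, deletions and substitutions turning a into b.
    Only two rows of the DP table are kept at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

    A markup tool can then align the two strings and flag the positions where edits occur as feedback for the learner.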

  7. Organization of Nucleotides in Different Environments and the Formation of Pre-Polymers

    NASA Astrophysics Data System (ADS)

    Himbert, Sebastian; Chapman, Mindy; Deamer, David W.; Rheinstädter, Maikel C.

    2016-08-01

    RNA is a linear polymer of nucleotides linked by a ribose-phosphate backbone. Polymerization of nucleotides occurs in a condensation reaction in which phosphodiester bonds are formed. However, in the absence of enzymes and metabolism there has been no obvious way for RNA-like molecules to be produced and then encapsulated in cellular compartments. We investigated 5′-adenosine monophosphate (AMP) and 5′-uridine monophosphate (UMP) molecules confined in multi-lamellar phospholipid bilayers, nanoscopic films, ammonium chloride salt crystals and Montmorillonite clay, previously proposed to promote polymerization. X-ray diffraction was used to determine whether such conditions imposed a degree of order on the nucleotides. Two nucleotide signals were observed in all matrices, one corresponding to a nearest neighbour distance of 4.6 Å attributed to nucleotides that form a disordered, glassy structure. A second, smaller distance of 3.4 Å agrees well with the distance between stacked base pairs in the RNA backbone, and was assigned to the formation of pre-polymers, i.e., the organization of nucleotides into stacks of about 10 monomers. Such ordering can provide conditions that promote the nonenzymatic polymerization of RNA strands under prebiotic conditions. Experiments were modeled by Monte-Carlo simulations, which provide details of the molecular structure of these pre-polymers.

  8. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging has become a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and its ability to simultaneously capture target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of the imaging depth and distance to achieve different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of 3D point clouds based on the triangle method is analyzed, and a 15 m depth slice of the target 3D point cloud is obtained from two image frames, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  9. Quantitative social dialectology: explaining linguistic variation geographically and socially.

    PubMed

    Wieling, Martijn; Nerbonne, John; Baayen, R Harald

    2011-01-01

    In this study we examine linguistic variation and its dependence on both social and geographic factors. We follow dialectometry in applying a quantitative methodology and focusing on dialect distances, and social dialectology in the choice of factors we examine in building a model to predict word pronunciation distances from the standard Dutch language to 424 Dutch dialects. We combine linear mixed-effects regression modeling with generalized additive modeling to predict the pronunciation distance of 559 words. Although geographical position is the dominant predictor, several other factors emerged as significant. The model predicts a greater distance from the standard for smaller communities, for communities with a higher average age, for nouns (as contrasted with verbs and adjectives), for more frequent words, and for words with relatively many vowels. The impact of the demographic variables, however, varied from word to word. For a majority of words, larger, richer and younger communities are moving towards the standard. For a smaller minority of words, larger, richer and younger communities emerge as driving a change away from the standard. Similarly, the strength of the effects of word frequency and word category varied geographically. The peripheral areas of the Netherlands showed a greater distance from the standard for nouns (as opposed to verbs and adjectives) as well as for high-frequency words, compared to the more central areas. Our findings indicate that changes in pronunciation have been spreading (in particular for low-frequency words) from the Hollandic center of economic power to the peripheral areas of the country, meeting resistance that is stronger wherever, for well-documented historical reasons, the political influence of Holland was reduced. Our results are also consistent with the theory of lexical diffusion, in that distances from the Hollandic norm vary systematically and predictably on a word by word basis.
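
    The fixed-effects core of such a model can be illustrated with ordinary least squares; the full mixed-effects and generalized additive machinery is beyond a short sketch. All numbers below are invented for illustration only.

```python
import numpy as np

# hypothetical data: per-community log population, mean age,
# and mean pronunciation distance from the standard language
log_pop = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
mean_age = np.array([45.0, 50.0, 40.0, 38.0, 35.0])
dist = np.array([0.52, 0.55, 0.44, 0.40, 0.35])

# design matrix with intercept; coefficients estimated by least squares
X = np.column_stack([np.ones_like(log_pop), log_pop, mean_age])
beta, *_ = np.linalg.lstsq(X, dist, rcond=None)
# beta[1] < 0: larger communities sit closer to the standard;
# beta[2] > 0: older communities sit farther from it
```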

  10. Quantitative Social Dialectology: Explaining Linguistic Variation Geographically and Socially

    PubMed Central

    Wieling, Martijn; Nerbonne, John; Baayen, R. Harald

    2011-01-01

    In this study we examine linguistic variation and its dependence on both social and geographic factors. We follow dialectometry in applying a quantitative methodology and focusing on dialect distances, and social dialectology in the choice of factors we examine in building a model to predict word pronunciation distances from the standard Dutch language to 424 Dutch dialects. We combine linear mixed-effects regression modeling with generalized additive modeling to predict the pronunciation distance of 559 words. Although geographical position is the dominant predictor, several other factors emerged as significant. The model predicts a greater distance from the standard for smaller communities, for communities with a higher average age, for nouns (as contrasted with verbs and adjectives), for more frequent words, and for words with relatively many vowels. The impact of the demographic variables, however, varied from word to word. For a majority of words, larger, richer and younger communities are moving towards the standard. For a smaller minority of words, larger, richer and younger communities emerge as driving a change away from the standard. Similarly, the strength of the effects of word frequency and word category varied geographically. The peripheral areas of the Netherlands showed a greater distance from the standard for nouns (as opposed to verbs and adjectives) as well as for high-frequency words, compared to the more central areas. Our findings indicate that changes in pronunciation have been spreading (in particular for low-frequency words) from the Hollandic center of economic power to the peripheral areas of the country, meeting resistance that is stronger wherever, for well-documented historical reasons, the political influence of Holland was reduced. Our results are also consistent with the theory of lexical diffusion, in that distances from the Hollandic norm vary systematically and predictably on a word by word basis. PMID:21912639

  11. Living Near Major Traffic Roads and Risk of Deep Vein Thrombosis

    PubMed Central

    Baccarelli, Andrea; Martinelli, Ida; Pegoraro, Valeria; Melly, Steven; Grillo, Paolo; Zanobetti, Antonella; Hou, Lifang; Bertazzi, Pier Alberto; Mannucci, Pier Mannuccio; Schwartz, Joel

    2010-01-01

    Background: Particulate air pollution has been consistently linked to increased risk of arterial cardiovascular disease. Few data on air pollution exposure and risk of venous thrombosis are available. We investigated whether living near major traffic roads increases the risk of deep vein thrombosis (DVT), using distance from roads as a proxy for traffic exposure. Methods and Results: Between 1995 and 2005, we examined 663 patients with DVT of the lower limbs and 859 age-matched controls from cities with population > 15,000 inhabitants in the Lombardia Region, Italy. We assessed the distance from residential addresses to the nearest major traffic road using geographic information system methodology. The risk of DVT was estimated from logistic regression models adjusting for multiple clinical and environmental covariates. The risk of DVT was increased (odds ratio [OR] = 1.33; 95% CI 1.03-1.71; p = 0.03 in age-adjusted models; OR = 1.47; 95% CI 1.10-1.96; p = 0.008 in models adjusted for multiple covariates) for subjects living near a major traffic road (3 meters, 10th centile of the distance distribution) compared to those living farther away (reference distance of 245 meters, 90th centile). The increase in DVT risk was approximately linear over the observed distance range (from 718 to 0 meters), and was not modified after adjusting for background levels of particulate matter (OR = 1.47; 95% CI 1.11-1.96; p = 0.008 for 10th vs. 90th distance centile in models adjusting for area levels of particulate matter <10 μm in aerodynamic diameter [PM10] in the year before diagnosis). Conclusions: Living near major traffic roads is associated with increased risk of DVT. PMID:19506111
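
    The distance-to-odds-ratio arithmetic in such logistic models is easy to reproduce. The slope below is an invented value chosen only to show how an odds ratio comparing the 10th and 90th distance centiles follows from a fitted coefficient; it is not the coefficient from this study.

```python
import math

# hypothetical log-odds slope per metre of distance from a major road
# (negative: risk falls as distance grows); assumed value, not from the study
beta_per_m = -0.0016

d_near, d_far = 3.0, 245.0   # 10th and 90th centiles of the distance distribution
odds_ratio = math.exp(beta_per_m * (d_near - d_far))
# odds of DVT when living at 3 m relative to 245 m from a major road
```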

  12. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. 
The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.
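
    A minimal sketch of the clearance test at the heart of such a collision model, assuming the machine and patient surfaces are available as point clouds (the arrays below are toy stand-ins for scanned surfaces):

```python
import numpy as np

def min_clearance(gantry_pts, couch_pts):
    """Minimum Euclidean distance between two surface point clouds (cm)."""
    diff = gantry_pts[:, None, :] - couch_pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min()

def is_deliverable(gantry_pts, couch_pts, buffer_cm):
    """A beam orientation is kept only if the modelled clearance
    exceeds the site-specific safety buffer distance."""
    return min_clearance(gantry_pts, couch_pts) > buffer_cm

# toy surfaces: two gantry points 10 cm above two couch points
gantry = np.array([[0.0, 0.0, 10.0], [5.0, 0.0, 10.0]])
couch = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
# clearance is 10 cm; compare against e.g. the head buffer of 1.23 cm
```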

  13. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery.

    PubMed

    Yu, Victoria Y; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A; Sheng, Ke

    2015-11-01

    Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. 
The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  14. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. 
The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries. PMID:26520735

  15. Blood biomarkers in male and female participants after an Ironman-distance triathlon.

    PubMed

    Danielsson, Tom; Carlsson, Jörg; Schreyer, Hendrik; Ahnesjö, Jonas; Ten Siethoff, Lasse; Ragnarsson, Thony; Tugetam, Åsa; Bergman, Patrick

    2017-01-01

    While overall physical activity is clearly associated with better short-term and long-term health, prolonged strenuous physical activity may result in a rise in acute levels of blood biomarkers used in clinical practice for diagnosis of various conditions or diseases. In this study, we explored the acute effects of a full Ironman-distance triathlon on biomarkers related to heart, liver, kidney and skeletal muscle damage immediately post-race and after one week's rest. We also examined whether sex, age, finishing time and body composition influenced the post-race values of the biomarkers. A sample of 30 subjects (50% women) was recruited to the study. The subjects were evaluated for body composition, and blood samples were taken on three occasions: before the race (T1), immediately after (T2) and one week after the race (T3). Linear regression models were fitted to analyse the independent contribution of sex and finishing time, controlled for weight, body fat percentage and age, on the biomarkers at the termination of the race (T2). Linear mixed models were fitted to examine whether the biomarkers differed between the sexes over time (T1-T3). Being male was a significant predictor of higher post-race (T2) levels of myoglobin, CK and creatinine, and body weight was negatively associated with myoglobin. In general, the models were unable to explain the variation of the dependent variables. In the linear mixed models, an interaction between time (T1-T3) and sex was seen for myoglobin and creatinine, in which women had a less pronounced response to the race. Overall, women appear to tolerate the effects of prolonged strenuous physical activity better than men, as illustrated by their lower values of the biomarkers both post-race and during recovery.

  16. Urban Growth Modeling Using AN Artificial Neural Network a Case Study of Sanandaj City, Iran

    NASA Astrophysics Data System (ADS)

    Mohammady, S.; Delavar, M. R.; Pahlavani, P.

    2014-10-01

    Land use activity is a major issue and challenge for town and country planners. Modelling and managing urban growth is a complex problem, and cities are now recognized as complex, non-linear, dynamic systems; designing a system that can handle these complexities is a challenging prospect. Local governments that implement urban growth models need to estimate the amount of urban land required in the future given anticipated growth of housing, business, recreation and other urban uses within the boundary. Inappropriate urban development has many negative implications, such as increased traffic and demand for mobility, reduced landscape attractiveness, land use fragmentation, loss of biodiversity and alteration of the hydrological cycle. The aim of this study is to use an Artificial Neural Network (ANN) to build a powerful tool for simulating urban growth patterns. Our study area is Sanandaj city, located in the west of Iran. Landsat images acquired in 2000 and 2006 were used. The dataset includes distance to principal roads, distance to residential areas, elevation, slope, distance to green spaces and distance to region centers. An appropriate methodology for urban growth modelling using satellite remotely sensed data is presented and evaluated, with Percent Correct Match (PCM) and Figure of Merit used to evaluate the ANN results.
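
    The Percent Correct Match metric used to evaluate the simulated maps is straightforward to compute; a minimal sketch with toy urban/non-urban grids (the arrays are invented for illustration):

```python
import numpy as np

def percent_correct_match(simulated, observed):
    """Percent Correct Match: share of cells where the simulated
    urban/non-urban map agrees with the observed map, in percent."""
    simulated = np.asarray(simulated)
    observed = np.asarray(observed)
    return 100.0 * (simulated == observed).mean()

sim = np.array([[1, 0, 1], [0, 1, 1]])
obs = np.array([[1, 0, 0], [0, 1, 1]])
pcm = percent_correct_match(sim, obs)  # 5 of 6 cells agree
```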

  17. Model for Spiral Galaxy's Rotation Curves

    NASA Astrophysics Data System (ADS)

    Hodge, John

    2003-11-01

    A model of spiral galaxy dynamics is proposed. An expression is developed describing the rotation velocity v of particles in a galaxy as a function of the distance r from the center (the rotation curve, RC). The resulting intrinsic RC of a galaxy is Keplerian in the inner bulge and rising in the disk region, without modifying the Newtonian gravitational potential (as in MOND) and without unknown dark matter. The quantity v^2 is linearly related to r in part of the rapidly rising region of the HI RC (RRRC) and to r^2 in another part of the RRRC. The radius r of discontinuities in the surface-brightness-versus-r curve is related to the 21 cm line width, the measured mass of the central supermassive black hole (SBH), and the maximum v^2 in the RRRC. The distance to spiral galaxies can be calculated from these relationships, and it correlates tightly with the distance calculated using Cepheid variables. Differing results in measuring the mass of the SBH from differing measurement procedures are explained. This model is consistent with previously unexplained data, has predicted new relationships, and suggests a new model of the universe. Full text: http://web.infoave.net/~scjh.
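
    The claimed linear relation between v^2 and r in the rapidly rising region can be checked with a least-squares fit through the origin; the rotation data below are invented purely for illustration:

```python
import numpy as np

# hypothetical HI rotation data in the rapidly rising region
r = np.array([0.5, 1.0, 1.5, 2.0])        # kpc
v = np.array([50.0, 70.0, 86.0, 99.0])    # km/s; v^2 roughly linear in r

# least-squares slope of v^2 against r through the origin
slope = (r * v**2).sum() / (r * r).sum()
v2_pred = slope * r   # fitted v^2 values at each radius
```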

  18. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers.

    PubMed

    Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.
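
    The minimal distance between the gantry and table rotation axes mentioned above is the classic skew-line distance; a sketch with hypothetical axis positions (an ideal machine would give zero separation):

```python
import numpy as np

def axis_separation(p1, d1, p2, d2):
    """Minimal distance between two rotation axes modelled as
    lines p + t*d (e.g. gantry and table rotation axes)."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-12:          # parallel axes
        v = np.asarray(p2, float) - np.asarray(p1, float)
        return np.linalg.norm(v - v.dot(d1) * d1)
    v = np.asarray(p2, float) - np.asarray(p1, float)
    return abs(v.dot(n)) / np.linalg.norm(n)

# gantry axis along x through the origin; table axis along z,
# offset by a hypothetical 0.1 cm misalignment
gap = axis_separation([0, 0, 0], [1, 0, 0], [0, 0.1, 0], [0, 0, 1])
```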

  19. SU-E-J-219: Quantitative Evaluation of Motion Effects On Accuracy of Image-Guided Radiotherapy with Fiducial Markers Using CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Oyewale, S; Ahmad, S

    2014-06-01

    Purpose: To investigate quantitatively patient motion effects on the localization accuracy of image-guided radiation with fiducial markers using axial CT (ACT), helical CT (HCT) and cone-beam CT (CBCT), using modeling and experimental phantom studies. Methods: Markers with different lengths (2.5 mm, 5 mm, 10 mm, and 20 mm) were inserted in a mobile thorax phantom which was imaged using ACT, HCT and CBCT. The phantom moved with sinusoidal motion with amplitudes ranging 0-20 mm and a frequency of 15 cycles per minute. Three parameters were measured in the different CT images of the mobile phantom: apparent marker length, center position, and distance between the centers of the markers. A mathematical motion model was derived to predict the variations in these three parameters and their dependence on the motion in the different imaging modalities. Results: In CBCT, the measured marker lengths increased linearly with increase in motion amplitude. For example, the apparent length of the 10 mm marker was about 20 mm when the phantom moved with an amplitude of 5 mm. Although the markers elongated, the center position and the distance between markers remained at the same position for different motion amplitudes in CBCT. These parameters were not affected by motion frequency and phase in CBCT. In HCT and ACT, the measured marker length, center and distance between markers varied irregularly with motion parameters. The apparent lengths of the markers varied with the inverse of the phantom velocity, which depends on motion frequency and phase. Similarly, the center position and distance between markers varied inversely with phantom speed. Conclusion: Motion may lead to variations in marker length, center position and distance between markers using CT imaging. These effects should be considered in patient setup using image-guided radiation therapy based on fiducial marker matching using 2D radiographs or volumetric CT imaging.
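
    The linear elongation observed in CBCT admits a one-line model: because CBCT averages over many motion cycles, the marker is smeared across its full peak-to-peak excursion. This simple form reproduces the example above (10 mm marker, 5 mm amplitude, ~20 mm apparent length), though the paper's full motion model is more detailed:

```python
def apparent_length_cbct(marker_len_mm, amplitude_mm):
    """Simple CBCT elongation model: a marker undergoing sinusoidal
    motion is smeared over the full peak-to-peak excursion, so its
    apparent length is the true length plus twice the amplitude."""
    return marker_len_mm + 2.0 * amplitude_mm

# the 10 mm marker imaged with 5 mm motion amplitude appears ~20 mm long
length = apparent_length_cbct(10.0, 5.0)
```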

  20. DISCO: Distance and Spectrum Correlation Optimization Alignment for Two Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry-based Metabolomics

    PubMed Central

    Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang

    2010-01-01

    A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746
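
    The two ingredients DISCO combines, Euclidean distance in z-scored retention-time space and Pearson correlation of fragment-ion spectra, can be sketched as follows; the peak values are invented for illustration:

```python
import numpy as np

def zscore(rt):
    """z-score transform of a vector of retention times."""
    rt = np.asarray(rt, float)
    return (rt - rt.mean()) / rt.std()

def disco_similarity(rt_a, rt_b, spec_a, spec_b):
    """Match score in the spirit of DISCO: Euclidean distance in
    z-scored 2D retention time, together with Pearson correlation
    of the fragment-ion spectra."""
    d = np.linalg.norm(np.asarray(rt_a) - np.asarray(rt_b))
    r = np.corrcoef(spec_a, spec_b)[0, 1]
    return d, r

# two peaks with close (already z-scored) retention times and similar spectra
rt_a = np.array([0.1, -0.2])
rt_b = np.array([0.12, -0.18])
spec_a = np.array([100.0, 40.0, 5.0, 0.0])
spec_b = np.array([95.0, 42.0, 6.0, 1.0])
d, r = disco_similarity(rt_a, rt_b, spec_a, spec_b)
# small d and high r mark the pair as a likely match
```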

  1. Machine learning enhanced optical distance sensor

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, N. A.

    2018-01-01

    Presented for the first time is a machine learning enhanced optical distance sensor. The sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot, with varying spot sizes, is viewed by an off-axis camera, and the spot size data is processed to compute the distance. In particular, this paper proposes and demonstrates the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels, which are the actual target distance values, to train a machine learning model. The optimized model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces training-set and testing-set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 improvement over our prior sensor demonstration without the use of machine learning. Applications for the proposed sensor include industrial distance sensing, where target-material-specific training models can be generated to realize distance measurements with errors below 1%.
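
    Regularized polynomial regression of the kind described can be written directly from the ridge normal equations; the calibration data below are synthetic, and the ridge weight is an assumed value:

```python
import numpy as np

def fit_ridge_poly(spot_mm, dist_mm, degree=3, lam=1e-3):
    """Ridge-regularized polynomial regression mapping measured
    laser spot size to target distance (a supervised calibration)."""
    X = np.vander(spot_mm, degree + 1)
    A = X.T @ X + lam * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ dist_mm)

def predict(coeffs, spot_mm):
    return np.vander(spot_mm, len(coeffs)) @ coeffs

# synthetic calibration: distance grows smoothly with spot size
spot = np.linspace(1.0, 5.0, 40)                  # mm
dist = 200.0 + 180.0 * spot + 6.0 * spot**2       # mm
coeffs = fit_ridge_poly(spot, dist)
err = np.abs(predict(coeffs, spot) - dist).max()  # training error
```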

  2. Dissemination of evidence-based practice: can we train therapists from a distance?

    PubMed

    Vismara, Laurie A; Young, Gregory S; Stahmer, Aubyn C; Griffith, Elizabeth McMahon; Rogers, Sally J

    2009-12-01

    Although knowledge about the efficacy of behavioral interventions for children with ASD is increasing, studies of effectiveness and transportability to community settings are needed. The current study conducted an effectiveness trial to compare distance learning vs. live instruction for training community-based therapists to implement the Early Start Denver Model. Findings revealed: (a) distance learning and live instruction were equally effective for teaching therapists to both implement the model and to train parents; (b) didactic workshops and team supervision were required to improve therapists' skill use; (c) significant child gains occurred over time and across teaching modalities; and (d) parents implemented the model more skillfully after coaching. Implications are discussed in relation to the economic and clinical utility of distance learning.

  3. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    PubMed

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme, which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. These preliminary results show that the performance of the proposed LW-PLS-DA approach is comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
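
    The local-modelling idea can be sketched without the PLS-DA machinery itself: select the nearest calibration objects and weight them by inverse distance, so the farthest objects count less. The weighted class vote below is a simplified stand-in for the local PLS-DA model of the paper; data and parameters are invented.

```python
import numpy as np

def lw_classify(x, X_train, y_train, k=5):
    """Locally weighted classification sketch: pick the k training
    samples closest to x, weight them by inverse distance, and take
    a weighted vote per class (a stand-in for a local PLS-DA fit)."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-12)           # closer samples weigh more
    classes = np.unique(y_train[idx])
    scores = [w[y_train[idx] == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]

# two well-separated 2D classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])
y = np.array([0, 0, 0, 1, 1, 1])
label = lw_classify(np.array([0.15, 0.1]), X, y, k=3)
```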

  4. Dynamic contraction behaviour of pneumatic artificial muscle

    NASA Astrophysics Data System (ADS)

    Doumit, Marc D.; Pardoel, Scott

    2017-07-01

    The development of a dynamic model for the Pneumatic Artificial Muscle (PAM) is an imperative undertaking for understanding and analyzing the behaviour of the PAM as a function of time. This paper proposes a Newtonian-based dynamic PAM model that includes the modeling of the muscle geometry, force, inertia, fluid dynamics, static and dynamic friction, heat transfer and valve flow, while ignoring the effect of bladder elasticity. This modeling contribution allows the designer to predict, analyze and optimize PAM performance prior to its development, thus advancing successful implementation of PAM-based powered exoskeletons and medical systems. To date, most muscle dynamic properties are determined experimentally; furthermore, no analytical models that can accurately predict the muscle's dynamic behaviour are found in the literature. Most developed analytical models adequately predict the muscle force in static cases but neglect the behaviour of the system in the transient response. This can be attributed to the highly challenging task of deriving such a dynamic model, given the number of system elements that need to be identified and the system's highly non-linear properties. The proposed dynamic model is successfully simulated through MATLAB programming, and the predicted pressure, contraction distance and muscle temperature are validated against experimental testing conducted with in-house-built prototype PAMs.
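
    A heavily simplified stand-in for such a Newtonian muscle model, reducing the system to a lumped mass driven by a constant pressure force against stiffness and damping; all parameters are invented, and the paper's geometry, friction, heat-transfer and valve-flow terms are omitted:

```python
def simulate_pam(p_force, k, c, m, dt=1e-3, steps=5000):
    """Integrate Newton's second law for a lumped-parameter muscle:
    pressure force against linear stiffness k and damping c,
    using semi-implicit Euler steps of size dt."""
    x, v = 0.0, 0.0          # contraction distance and velocity
    for _ in range(steps):
        a = (p_force - k * x - c * v) / m
        v += a * dt
        x += v * dt
    return x

# with these assumed parameters the contraction settles near F/k = 0.05 m
x_final = simulate_pam(p_force=100.0, k=2000.0, c=50.0, m=0.5)
```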

  5. Shallow, non-pumped wells: a low-energy alternative for cleaning polluted groundwater.

    PubMed

    Hudak, Paul F

    2013-07-01

    This modeling study evaluated the capability of non-pumped wells with filter media for preventing contaminant plumes from migrating offsite. Linear configurations of non-pumped wells were compared to permeable reactive barriers in simulated shallow homogeneous and heterogeneous aquifers. While permeable reactive barriers enabled faster contaminant removal and shorter distances of contaminant travel, non-pumped wells also prevented offsite contaminant migration. Overall, results of this study suggest that discontinuous, linear configurations of non-pumped wells may be a viable alternative to much more costly permeable reactive barriers for preventing offsite contaminant travel in some shallow aquifers.

  6. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    NASA Astrophysics Data System (ADS)

    Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.

    2011-08-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.

  7. Bats Use Path Integration Rather Than Acoustic Flow to Assess Flight Distance along Flyways.

    PubMed

    Aharon, Gal; Sadot, Meshi; Yovel, Yossi

    2017-12-04

    Navigation can be achieved using different strategies from simple beaconing to complex map-based movement [1-4]. Bats display remarkable navigation capabilities, ranging from nightly commutes of several kilometers and up to seasonal migrations over thousands of kilometers [5]. Many bats have been suggested to fly along fixed routes termed "flyways," when flying from their roost to their foraging sites [6]. Flyways commonly stretch along linear landscape elements such as tree lines, hedges, or rivers [7]. When flying along a flyway, bats must estimate the distance they have traveled in order to determine when to turn. This can be especially challenging when moving along a repetitive landscape. Some bats, like Kuhl's pipistrelles, which we studied here, have limited vision [8] and were suggested to rely on bio-sonar for navigation. These bats could therefore estimate distance using three main sensory-navigation strategies, all of which we have examined: acoustic flow, acoustic landmarks, or path integration. We trained bats to fly along a linear flyway and land on a platform. We then tested their behavior when the platform was removed under different manipulations, including changing the acoustic flow, moving the start point, and adding wind. We found that bats do not require acoustic flow, which was hypothesized to be important for their navigation [9-15], and that they can perform the task without landmarks. Our results suggest that Kuhl's pipistrelles use internal self-motion cues-also known as path integration-rather than external information to estimate flight distance for at least dozens of meters when navigating along linear flyways. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Modeling and simulation of magnetic resonance imaging based on intermolecular multiple quantum coherences

    NASA Astrophysics Data System (ADS)

    Cai, Congbo; Dong, Jiyang; Cai, Shuhui; Cheng, En; Chen, Zhong

    2006-11-01

    Intermolecular multiple quantum coherences (iMQCs) have many potential applications since they can provide interaction information between different molecules within the range of the dipolar correlation distance, and can provide new contrast in magnetic resonance imaging (MRI). Because of the non-localized property of the dipolar field, and the non-linear property of the Bloch equations incorporating the dipolar field term, the evolution behavior of iMQCs is difficult to deduce strictly in many cases. In such cases, simulation studies are very important: simulation results can not only guide the optimization of experimental conditions, but also help analyze unexpected experimental results. Based on our product operator matrix and the k-space method for dipolar field calculation, MRI simulation software was constructed, running on the Windows operating system. The non-linear Bloch equations are solved by a fifth-order Cash-Karp Runge-Kutta formalism. Computational time can be efficiently reduced by separating the effects of chemical shifts and the strong gradient field. Using this software, simulations of different kinds of complex MRI sequences can be done conveniently and quickly on general personal computers. Several examples are given and their results discussed.

  9. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    NASA Astrophysics Data System (ADS)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties, including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness, on the TBM rate of penetration (ROP). Four statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict TBM performance: it achieves the highest R-squared value (R² = 0.714), against 0.667 for the runner-up, the multiple-variable nonlinear regression model.

  10. Registration of terrestrial mobile laser data on 2D or 3D geographic database by use of a non-rigid ICP approach.

    NASA Astrophysics Data System (ADS)

    Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.

    2013-10-01

    This article presents a generic and efficient method to register terrestrial mobile data with imperfect location on a geographic database that has better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and markings, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause substantial drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).

  11. Coevolution of dependency distance, hierarchical structure and word order. Comment on "Dependency distance: a new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    NASA Astrophysics Data System (ADS)

    Jing, Yingqi

    2017-07-01

    Exploring the relationship between structural rules and their linearization constraints has been a central issue in formal syntax and linguistic typology [1]. Liu et al. give a historical overview of the investigation of dependency distance minimization (DDM) in various fields, and specify its potential connections with the graphic patterns of syntactic structure and the linear ordering of words and constituents in real sentences [2]. This comment focuses on discussing the relations between dependency distance (DD), hierarchical structure and word order, and advocates further study on the coevolution of these traits in language histories.

  12. Gravitational field of static p-branes in linearized ghost-free gravity

    NASA Astrophysics Data System (ADS)

    Boos, Jens; Frolov, Valeri P.; Zelnikov, Andrei

    2018-04-01

    We study the gravitational field of static p-branes in D-dimensional Minkowski space in the framework of linearized ghost-free (GF) gravity. The concrete models of GF gravity we consider are parametrized by the nonlocal form factors exp(-□/μ²) and exp(□²/μ⁴), where μ⁻¹ is the scale of nonlocality. We show that the singular behavior of the gravitational field of p-branes in general relativity is cured by short-range modifications introduced by the nonlocalities, and we derive exact expressions for the regularized gravitational fields, whose geometry can be written as a warped metric. For distances large compared to the scale of nonlocality, μr → ∞, our solutions approach those found in linearized general relativity.

  13. Overlapping community detection based on link graph using distance dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Jing; Cai, Li-Jun

    2018-01-01

    The distance dynamics model was recently proposed to detect the disjoint community structure of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to ensure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
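
    The first phase, construction of the link graph, can be sketched as follows (a minimal illustration, not the authors' code; the node naming and adjacency representation are assumptions):

    ```python
    def line_graph(edges):
        """Build the link graph: each edge of the original graph becomes a
        node, and two such nodes are adjacent when the original edges share
        an endpoint. An original node thus spreads over several link-graph
        nodes and can end up in more than one community."""
        nodes = [tuple(sorted(e)) for e in edges]
        adj = {n: set() for n in nodes}
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                if set(a) & set(b):    # the two original edges share an endpoint
                    adj[a].add(b)
                    adj[b].add(a)
        return adj

    # a triangle {1, 2, 3} plus a pendant edge (3, 4)
    g = line_graph([(1, 2), (2, 3), (1, 3), (3, 4)])
    print(sorted(g[(3, 4)]))  # [(1, 3), (2, 3)]
    ```

    Running the disjoint distance-dynamics detector on this link graph, then mapping edge communities back to their endpoint nodes, yields the overlapping communities of the original graph.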

  14. Full velocity difference car-following model considering desired inter-vehicle distance

    NASA Astrophysics Data System (ADS)

    Xin, Tong; Yi, Liu; Rongjun, Cheng; Hongxia, Ge

    Based on the full velocity difference car-following model, an improved car-following model is put forward by considering the driver’s desired inter-vehicle distance. The stability conditions are obtained by applying the control method. The results of theoretical analysis are used to demonstrate the advantages of our model. Numerical simulations are used to show that traffic congestion can be improved as the desired inter-vehicle distance is considered in the full velocity difference car-following model.

  15. Correlation between facial morphology and gene polymorphisms in the Uygur youth population.

    PubMed

    He, Huiyu; Mi, Xue; Zhang, Jiayu; Zhang, Qin; Yao, Yuan; Zhang, Xu; Xiao, Feng; Zhao, Chunping; Zheng, Shutao

    2017-04-25

    Human facial morphology varies considerably among individuals and can be influenced by gene polymorphisms. We explored the effects of single nucleotide polymorphisms (SNPs) on facial features in the Uygur youth population of the Kashi area in Xinjiang, China. Saliva samples were collected from 578 volunteers, and 10 SNPs previously associated with variations in facial physiognomy were genotyped. In parallel, 3D images of the subjects' faces were obtained using grating facial scanning technology. After delimitation of 15 salient landmarks, the correlation between SNPs and the distances between facial landmark pairs was assessed. Analysis of variance revealed that ENPP1 rs7754561 polymorphism was significantly associated with RAla-RLipCn and RLipCn-Sbn linear distances (p = 0.044 and p = 0.012, respectively) as well as RLipCn-Stm curve distance (p = 0.042). The GHR rs6180 polymorphism correlated with RLipCn-Stm linear distance (p = 0.04), while the GHR rs6184 polymorphism correlated with RLipCn-ULipP curve distance (p = 0.047). The FGFR1 rs4647905 polymorphism was associated with LLipCn-Nsn linear distance (p = 0.042). These results reveal that ENPP1 and FGFR1 influence lower anterior face height, the distance from the upper lip to the nasal floor, and lip shape. FGFR1 also influences the lower anterior face height, while GHR is associated with the length and width of the lip.

  16. Modeling of metastable phase formation diagrams for sputtered thin films.

    PubMed

    Chang, Keke; Music, Denis; To Baben, Moritz; Lange, Dennis; Bolvardi, Hamid; Schneider, Jochen M

    2016-01-01

    A method to model the metastable phase formation in the Cu-W system, based on the critical surface diffusion distance, has been developed. The driver for the formation of a second phase is the critical diffusion distance, which depends on the solubility of W in Cu and on the solubility of Cu in W. Based on comparative theoretical and experimental data, we can describe the relationship between the solubilities and the critical diffusion distances in order to model the metastable phase formation. Metastable phase formation diagrams for Cu-W and Cu-V thin films are predicted and validated by combinatorial magnetron sputtering experiments. The correlative experimental and theoretical research strategy adopted here enables us to efficiently model the metastable phase formation during magnetron sputtering.

  17. Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics

    USGS Publications Warehouse

    Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi

    2016-01-01

    Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight-line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, like tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Working with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, owing to the assumption that improvements are so small that surface adjustment is unwarranted. For individual pixels inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate hierarchical terrain models. Schneider (2001) creates a ‘plausibility’ metric for DEM-extracted structure lines. d’Oleire-Oltmanns et al. (2014) adopt object-based image processing as an alternative to working with DEMs, acknowledging that the pre-processing involved in converting terrain into an object model is computationally intensive, and likely infeasible for some applications. This paper compares planar distance with surface-adjusted distance, evolving from distance “as the crow flies” to distance “as the horse runs”. Several methods are compared for DEMs spanning a range of resolutions for the study area and validated against a 3 meter (m) lidar benchmark. Error magnitudes vary with pixel size and with the method of surface adjustment. The rate of error increase may also vary with landscape type (terrain roughness, precipitation regimes and land settlement patterns). Cross-scale analysis for a single study area is reported here. Additional areas will be presented at the conference.
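
    The contrast between the two metrics can be made concrete with a toy elevation profile (an illustrative sketch only; the pixel spacing and elevations are invented):

    ```python
    import math

    def planar_distance(profile, dx):
        """'As the crow flies': pixel spacing only, elevation ignored."""
        return dx * (len(profile) - 1)

    def surface_distance(profile, dx):
        """'As the horse runs': add the vertical component pixel by pixel."""
        return sum(math.hypot(dx, profile[i + 1] - profile[i])
                   for i in range(len(profile) - 1))

    # four DEM pixels at 10 m spacing, rising 3 m then dropping 4 m
    profile = [100.0, 103.0, 103.0, 99.0]
    print(planar_distance(profile, 10.0))             # 30.0
    print(round(surface_distance(profile, 10.0), 2))  # 31.21
    ```

    The per-segment difference here is only a few percent, but, as the abstract notes, such errors accumulate over paths spanning many pixels in regional- or global-scale models.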

  18. Categorization of First-Year University Students' Interpretations of Numerical Linear Distance-Time Graphs

    ERIC Educational Resources Information Center

    Wemyss, Thomas; van Kampen, Paul

    2013-01-01

    We have investigated the various approaches taken by first-year university students (n ≈ 550) when asked to determine the direction of motion, the constancy of speed, and a numerical value of the speed of an object at a point on a numerical linear distance-time graph. We investigated the prevalence of various well-known general…

  19. A data-based conservation planning tool for Florida panthers

    USGS Publications Warehouse

    Murrow, Jennifer L.; Thatcher, Cindy A.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    Habitat loss and fragmentation are the greatest threats to the endangered Florida panther (Puma concolor coryi). We developed a data-based habitat model and user-friendly interface so that land managers can objectively evaluate Florida panther habitat. We used a geographic information system (GIS) and the Mahalanobis distance statistic (D2) to develop a model based on broad-scale landscape characteristics associated with panther home ranges. Variables in our model were Euclidean distance to natural land cover, road density, distance to major roads, human density, amount of natural land cover, amount of semi-natural land cover, amount of permanent or semi-permanent flooded area–open water, and a cost–distance variable. We then developed a Florida Panther Habitat Estimator tool, which automates and replicates the GIS processes used to apply the statistical habitat model. The estimator can be used by persons with moderate GIS skills to quantify effects of land-use changes on panther habitat at local and landscape scales. Example applications of the tool are presented.
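
    The core of such a habitat model, the Mahalanobis distance statistic D², measures how far a cell's landscape variables lie from the multivariate mean of known habitat. A minimal sketch follows; the variable set and values are invented for illustration, whereas the real model uses the GIS layers listed above:

    ```python
    import numpy as np

    def mahalanobis_d2(X, reference):
        """Squared Mahalanobis distance D2 of each row of X from the
        multivariate mean of the reference sample; lower D2 means more
        similar to the conditions found in known home ranges."""
        mu = reference.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
        diff = X - mu
        return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

    # hypothetical reference sample: (road density, human density, % natural cover)
    rng = np.random.default_rng(7)
    reference = rng.normal([0.5, 2.0, 80.0], [0.2, 1.0, 10.0], size=(200, 3))
    cells = np.array([[0.5, 2.0, 80.0],    # close to the habitat profile
                      [3.0, 9.0, 20.0]])   # heavily developed cell
    d2 = mahalanobis_d2(cells, reference)
    print(d2[0] < d2[1])  # True
    ```

    Mapping D² over every cell of a landscape grid, as the estimator tool automates, highlights areas whose covariate profile resembles occupied panther range.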

  20. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences.

    PubMed

    Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P

    2016-03-01

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70% of the data to fit the models and the remaining 30% to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the linear-regression (OLS) model performs significantly better than the Gaussian-Matérn model.

  1. Setback distances between small biological wastewater treatment systems and drinking water wells against virus contamination in alluvial aquifers.

    PubMed

    Blaschke, A P; Derx, J; Zessner, M; Kirnbauer, R; Kavka, G; Strelec, H; Farnleitner, A H; Pang, L

    2016-12-15

    Contamination of groundwater by pathogenic viruses from small biological wastewater treatment system discharges in remote areas is a major concern. To protect drinking water wells against virus contamination, safe setback distances are required between wastewater disposal fields and water supply wells. In this study, setback distances are calculated for alluvial sand and gravel aquifers for different vadose zone and aquifer thicknesses and horizontal groundwater gradients. This study applies to individual households and small settlements (1-20 persons) in decentralized locations without access to receiving surface waters but with the legal obligation of biological wastewater treatment. The calculations are based on Monte Carlo simulations using an analytical model that couples vertical unsaturated and horizontal saturated flow with virus transport. Hydraulic conductivities and water retention curves were selected from reported distribution functions depending on the type of subsurface media. The enteric virus concentration in effluent discharge was calculated based on reported ranges of enteric virus concentration in faeces, virus infectivity, suspension factor, and virus reduction by mechanical-biological wastewater treatment. To meet the risk target of <10⁻⁴ infections/person/year, a 12 log₁₀ reduction was required, using a linear dose-response relationship for the total amount of enteric viruses at very low exposure concentrations. The results of this study suggest that the horizontal setback distances vary widely, ranging from 39 to 144 m in sand aquifers, 66-289 m in gravel aquifers and 1-2.5 km in coarse gravel aquifers. They also vary for the same aquifer, depending on the thickness of the vadose zone and the groundwater gradient. For vulnerable fast-flow alluvial aquifers like coarse gravels, the calculated setback distances were too large to achieve practically. Therefore, for this category of aquifer, a high level of treatment is recommended before the effluent is discharged to the ground surface. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  2. A Clinical Comparative Study of 3-Dimensional Accuracy between Digital and Conventional Implant Impression Techniques.

    PubMed

    Alsharbaty, Mohammed Hussein M; Alikhasi, Marzieh; Zarrati, Simindokht; Shamshiri, Ahmed Reza

    2018-02-09

    To evaluate the accuracy of a digital implant impression technique using a TRIOS 3Shape intraoral scanner (IOS) compared to conventional implant impression techniques (pick-up and transfer) in clinical situations. Thirty-six patients who had two implants (Implantium, internal connection) ranging in diameter between 3.8 and 4.8 mm in posterior regions participated in this study after signing a consent form. Thirty-six reference models (RM) were fabricated by attaching two impression copings intraorally, splinted with autopolymerizing acrylic resin, verified by sectioning through the middle of the index, and rejoined again with freshly mixed autopolymerizing acrylic resin pattern (Pattern Resin) with the brush bead method. After that, the splinted assemblies were attached to implant analogs (DANSE) and impressed with type III dental stone (Gypsum Microstone) in standard plastic die lock trays. Thirty-six working casts were fabricated for each conventional impression technique (i.e., pick-up and transfer). Thirty-six digital impressions were made with a TRIOS 3Shape IOS. Eight of the digitally scanned files were damaged; 28 digital scan files were retrieved to STL format. A coordinate-measuring machine (CMM) was used to record linear displacement measurements (x, y, and z-coordinates), interimplant distances, and angular displacements for the RMs and conventionally fabricated working casts. CATIA 3D evaluation software was used to assess the digital STL files for the same variables as the CMM measurements. CMM measurements made on the RMs and conventionally fabricated working casts were compared with 3D software measurements made on the digitally scanned files. Data were statistically analyzed using the generalized estimating equation (GEE) with an exchangeable correlation matrix and linear method, followed by the Bonferroni method for pairwise comparisons (α = 0.05). 
    The results showed significant differences between the pick-up and digital groups in all of the measured variables (p < 0.001). Concerning the transfer and digital groups, the differences were statistically significant in angular displacement (p < 0.001), distance measurements (p = 0.01), and linear displacement (p = 0.03); however, between the pick-up and transfer groups, there was no statistically significant difference in any of the measured variables (interimplant distance deviation, linear displacement, and angular displacement deviations). According to the results of this study, the digital implant impression technique had the least accuracy. Based on the study outcomes, distance and angulation errors associated with the intraoral digital implant impressions were too large to fabricate well-fitting restorations for partially edentulous patients. The pick-up implant impression technique was the most accurate, and the transfer technique revealed comparable accuracy to it. © 2018 by the American College of Prosthodontists.

  3. Optimization methods of pulse-to-pulse alignment using femtosecond pulse laser based on temporal coherence function for practical distance measurement

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui

    2018-02-01

    An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have the potential for practical distance measurement.

  4. The Asian clam Corbicula fluminea as a biomonitor of trace element contamination: Accounting for different sources of variation using an hierarchical linear model

    USGS Publications Warehouse

    Shoults-Wilson, W. A.; Peterson, J.T.; Unrine, J.M.; Rickard, J.; Black, M.C.

    2009-01-01

    In the present study, specimens of the invasive clam, Corbicula fluminea, were collected above and below possible sources of potentially toxic trace elements (As, Cd, Cr, Cu, Hg, Pb, and Zn) in the Altamaha River system (Georgia, USA). Bioaccumulation of these elements was quantified, along with environmental (water and sediment) concentrations. Hierarchical linear models were used to account for variability in tissue concentrations related to environmental (site water chemistry and sediment characteristics) and individual (growth metrics) variables while identifying the strongest relations between these variables and trace element accumulation. The present study found significantly elevated concentrations of Cd, Cu, and Hg downstream of the outfall of kaolin-processing facilities, Zn downstream of a tire cording facility, and Cr downstream of both a nuclear power plant and a paper pulp mill. Models of the present study indicated that variation in trace element accumulation was linked to distance upstream from the estuary, dissolved oxygen, percentage of silt and clay in the sediment, elemental concentrations in sediment, shell length, and bivalve condition index. By explicitly modeling environmental variability, the hierarchical linear modeling procedure allowed the identification of sites showing increased accumulation of trace elements that may have been caused by human activity. Hierarchical linear modeling is a useful tool for accounting for environmental and individual sources of variation in bioaccumulation studies. © 2009 SETAC.

  5. Clustering of financial time series

    NASA Astrophysics Data System (ADS)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning around medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning around medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.
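
    The classical autoregressive metric mentioned here compares two series through the Euclidean distance between their fitted AR coefficient vectors. A minimal sketch, with plain AR(1) simulations standing in for the GARCH representations of the paper (series lengths, orders and coefficients are invented):

    ```python
    import numpy as np

    def ar_coeffs(x, p=2):
        """Least-squares AR(p) coefficients of a demeaned series."""
        x = np.asarray(x, float) - np.mean(x)
        X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
        beta, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return beta

    def ar_distance(x, y, p=2):
        """Autoregressive metric: distance between AR coefficient vectors."""
        return float(np.linalg.norm(ar_coeffs(x, p) - ar_coeffs(y, p)))

    def sim_ar1(phi, n, rng):
        """Simulate a toy AR(1) series."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        return x

    rng = np.random.default_rng(3)
    a = sim_ar1(0.9, 500, rng)
    b = sim_ar1(-0.9, 500, rng)
    c = sim_ar1(0.9, 500, rng)
    print(ar_distance(a, c) < ar_distance(a, b))  # True: shared dynamics are closer
    ```

    Feeding such pairwise distances to a partitioning-around-medoids algorithm, with fuzzy membership degrees, gives the structure of the first clustering model described above.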

  6. Treatment of cells with alkaline borate buffer extends the capability of interphase FISH mapping.

    PubMed

    Yokota, H; van den Engh, G; Mostert, M; Trask, B J

    1995-01-20

    Interphase fluorescence in situ hybridization (FISH) has been shown to be a means to map DNA sequences relative to each other in the 100 kb to 1-2 Mb genomic-separation range. At distances below 0.1 Mb, probe sites are infrequently resolved in interphase chromatin. In the 0.1- to 1-Mb range, interphase chromatin can be modeled as a freely flexible chain. The mean square interphase distance between two probes is proportional to the genomic separation between the probes on the linear DNA molecule. Above 1-2 Mb, the relationship between interphase distance and genomic separation changes abruptly and appears to level off. We have used alkaline-borate treatment to expand the capability of interphase FISH mapping. We show here that alkaline-borate treatment increases nuclear diameter, the interphase distance between probes on homologous chromosomes, and the distance between probes on the same chromosome. We also show that the mean square distance between hybridization sites in borate-treated nuclei is proportional to genomic separation up to 4 Mb. Thus, alkaline-borate treatment enhances the capability of interphase FISH mapping by increasing the absolute distance between probes and extending the range of the simple relationship between interphase distance and genomic separation.
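    The freely-flexible-chain behaviour in the 0.1- to 1-Mb range can be illustrated with a quick Monte Carlo sketch: for an ideal freely jointed chain, the mean square end-to-end distance grows linearly with the number of segments. A toy simulation under that assumption (unit-length segments standing in for chromatin; not a model of the borate treatment itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_sq_distance(n_segments, n_chains=2000):
    """Mean squared end-to-end distance of freely jointed 3-D chains
    with unit-length segments (a toy stand-in for a flexible fibre)."""
    steps = rng.normal(size=(n_chains, n_segments, 3))
    steps /= np.linalg.norm(steps, axis=-1, keepdims=True)  # random unit segments
    ends = steps.sum(axis=1)                                # end-to-end vectors
    return float((ends ** 2).sum(axis=1).mean())

# <d^2> grows linearly with contour length (the analogue of genomic separation)
for n in (10, 20, 40):
    print(n, round(mean_sq_distance(n), 1))
```

    Doubling the segment count roughly doubles the mean square distance, which is the linear relationship the FISH measurements exploit below the 1-2 Mb break.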

  7. Treatment of cells with alkaline borate buffer extends the capability of interphase FISH mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokota, H.; Van Den Engh, G.; Mostert, M.

    1995-01-20

    Interphase fluorescence in situ hybridization (FISH) has been shown to be a means to map DNA sequences relative to each other in the 100 kb to 1-2 Mb genomic-separation range. At distances below 0.1 Mb, probe sites are infrequently resolved in interphase chromatin. In the 0.1- to 1-Mb range, interphase chromatin can be modeled as a freely flexible chain. The mean square interphase distance between two probes is proportional to the genomic separation between the probes on the linear DNA molecule. Above 1-2 Mb, the relationship between interphase distance and genomic separation changes abruptly and appears to level off. We have used alkaline-borate treatment to expand the capability of interphase FISH mapping. We show here that alkaline-borate treatment increases nuclear diameter, the interphase distance between probes on homologous chromosomes, and the distance between probes on the same chromosome. We also show that the mean square distance between hybridization sites in borate-treated nuclei is proportional to genomic separation up to 4 Mb. Thus, alkaline-borate treatment enhances the capability of interphase FISH mapping by increasing the absolute distance between probes and extending the range of the simple relationship between interphase distance and genomic separation. 31 refs., 5 figs.

  8. Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation

    PubMed Central

    Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon

    2005-01-01

    Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725

  9. On the frequency dependence of the otoacoustic emission latency in hypoacoustic and normal ears

    NASA Astrophysics Data System (ADS)

    Sisto, R.; Moleti, A.

    2002-01-01

    Experimental measurements of the otoacoustic emission (OAE) latency of adult subjects have been obtained, as a function of frequency, by means of wavelet time-frequency analysis based on the iterative application of filter banks. The results are in agreement with previous OAE latency measurements by Tognola et al. [Hear. Res. 106, 112-122 (1997)], as regards both the latency values and the frequency dependence, and seem to be incompatible with the steep 1/f law that is predicted by scale-invariant full cochlear models. The latency-frequency relationship has been best fitted to a linear function of the cochlear physical distance, using the Greenwood map, and to an exponential function of the cochlear distance, for comparison with derived band ABR latency measurements. Two sets of ears [94 audiometrically normal and 42 impaired with high-frequency (f>3 kHz) hearing loss] have been separately analyzed. Significantly larger average latencies were found in the impaired ears in the mid-frequency range. Theoretical implications of these findings on the transmission of the traveling wave are discussed.

  10. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced by a simple closed analytic term based on a far-field approximation.
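    As a sketch of one of the compared schemes, the plain Richardson-Lucy iteration (without the total-variation term the authors found best) fits in a few lines. The 1-D toy signal and Gaussian PSF below are illustrative assumptions, not the paper's measured transducer PSF:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Plain Richardson-Lucy deconvolution (1-D, no regularization).
    Iterates est <- est * (psf_flipped * (blurred / (psf * est)))."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(blurred, blurred.mean())   # flat positive initial guess
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # avoid division by zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# Toy example: recover two point reflectors from a blurred A-scan
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
y = np.convolve(x, psf / psf.sum(), mode="same")
rec = richardson_lucy(y, psf)
print(int(np.argmax(rec)))  # strongest recovered peak near index 20
```

    The iteration preserves positivity and total intensity, which is why it behaves well on point-like reflectors; the TV-regularized variant adds a smoothness penalty on top of this update.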

  11. Interactive Physical Simulation of Catheter Motion within Major Vessel Structures and Cavities for ASD/VSD Treatment

    NASA Astrophysics Data System (ADS)

    Becherer, Nico; Hesser, Jürgen; Kornmesser, Ulrike; Schranz, Dietmar; Männer, Reinhard

    2007-03-01

    Simulation systems are becoming increasingly essential in medical education. Capturing the physical behaviour of the real world requires sophisticated modelling of instruments within the virtual environment. Most models currently used are not capable of user-interactive simulation because of the cost of computing the complex underlying analytical equations. Alternatives are often based on simplifying mass-spring systems, which can deliver high update rates but at the cost of less realistic motion. In addition, most techniques are limited to narrow, tubular vessel structures or restrict shape alterations to two degrees of freedom, disallowing instrument deformations such as torsion. In contrast, our approach combines high update rates with highly realistic motion and can additionally be applied to arbitrary structures such as vessels or cavities (e.g. atrium, ventricle) without limiting the degrees of freedom. Based on energy minimization, bending energies and vessel structures are treated as linear elastic elements; energies are evaluated at regularly spaced points on the instrument, with the distance between points fixed, i.e. we simulate an articulated structure of joints with fixed connections between them. Arbitrary tissue structures are modeled through adaptive distance fields and are connected by nodes via an undirected graph system. The instrument points are linked to nodes by a system of rules. Energy minimization uses a quasi-Newton method without preconditioning; gradients are estimated using a combination of analytical and numerical terms. Results show a high quality of motion simulation when compared to a phantom model. The approach is also robust and fast: simulating an instrument with 100 joints runs at 100 Hz on a 3 GHz PC.

  12. The Role of Deep Creep in the Timing of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Sammis, C. G.; Smith, S. W.

    2012-12-01

    The observed temporal clustering of the world's largest earthquakes has been largely discounted for two reasons: a) it is consistent with Poisson clustering, and b) no physical mechanism leading to such clustering has been proposed. This lack of a mechanism arises primarily because the static stress transfer mechanism, commonly used to explain aftershocks and the clustering of large events on localized fault networks, does not work at global distances. However, there is recent observational evidence that the surface waves from large earthquakes trigger non-volcanic tremor at the base of fault zones at global distances. Based on these observations, we develop a simple non-linear coupled oscillator model that shows how the triggering of such tremor can lead to the synchronization of large earthquakes on a global scale. A basic assumption of the model is that induced tremor is a proxy for deep creep that advances the seismic cycle of the fault. We support this hypothesis by demonstrating that the 2010 Maule, Chile and the 2011 Fukushima, Japan earthquakes, which have been shown to induce tremor on the Parkfield segment of the San Andreas Fault, also produce changes in off-fault seismicity that are spatially and temporally consistent with episodes of deep creep on the fault. The observed spatial pattern can be simulated using an Okada dislocation model for deep creep (below 20 km) on the fault plane in which the slip rate decreases from north to south, consistent with surface creep measurements, and deepens south of the "Parkfield asperity" as indicated by recent tremor locations. The model predicts that the off-fault events should have reverse mechanisms, consistent with the observed topography.
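    The abstract does not specify the form of the coupled oscillator model, but the general mechanism it invokes, weak coupling synchronizing cycles with slightly different natural rates, can be illustrated with a standard Kuramoto sketch. All parameters below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def kuramoto(omega, K, dt=0.01, n_steps=20000, seed=2):
    """Kuramoto phase oscillators: d(theta_i)/dt = omega_i
    + (K/N) * sum_j sin(theta_j - theta_i).
    Returns the final order parameter r in [0, 1]; r near 1 means the
    cycles have synchronized."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=omega.size)
    for _ in range(n_steps):
        coupling = (K / omega.size) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    return abs(np.exp(1j * theta).mean())

omega = np.array([0.95, 1.0, 1.05, 0.9, 1.1])  # slightly different cycle rates
print(kuramoto(omega, K=0.0))  # uncoupled: phases drift independently
print(kuramoto(omega, K=2.0))  # coupled: order parameter close to 1
```

    In the paper's analogy, triggered tremor (as a proxy for deep creep) plays the role of the weak coupling term that nudges each fault's seismic cycle.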

  13. Factors influencing the dosimetry for high-intensity focused ultrasound ablation of uterine fibroids: a retrospective study.

    PubMed

    Peng, Song; Zhang, Lian; Hu, Liang; Chen, Jinyun; Ju, Jin; Wang, Xi; Zhang, Rong; Wang, Zhibiao; Chen, Wenzhi

    2015-04-01

    The aim of this article is to analyze factors affecting sonication dose and to build a dosimetry model for high-intensity focused ultrasound (HIFU) ablation of uterine fibroids. Four hundred and three patients with symptomatic uterine fibroids who underwent HIFU were retrospectively analyzed. The energy efficiency factor (EEF) was set as the dependent variable, and the factors possibly affecting sonication dose (age, body mass index, size of uterine fibroid, abdominal wall thickness, distance from the fibroid dorsal side to the sacrum, distance from the fibroid ventral side to the skin, location of uterus, location of uterine fibroids, type of uterine fibroids, abdominal wall scar, signal intensity on T2-weighted imaging (T2WI), and enhancement type on T1-weighted imaging (T1WI)) were set as predictors to build a multiple regression model. The size of uterine fibroid, distance from fibroid ventral side to skin, location of uterus, location of uterine fibroids, type of uterine fibroids, signal intensity on T2WI, and enhancement type on T1WI had a linear correlation with EEF. The distance from fibroid ventral side to skin, enhancement type on T1WI, size of uterine fibroid, and signal intensity on T2WI were eventually incorporated into the dosimetry model and can be used as dosimetric predictors for HIFU ablation of uterine fibroids.

  14. A note on convergence of solutions of total variation regularized linear inverse problems

    NASA Astrophysics Data System (ADS)

    Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar

    2018-05-01

    In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise-free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise-free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.
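    For reference, the Hausdorff distance in which the level-set convergence is stated can be computed directly for finite point sets (the sets below are illustrative):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets:
    max over each set of the distance from its points to the other set."""
    D = np.linalg.norm(A[:, None] - B[None, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, 0.0], [3.0, 0.0]])
print(hausdorff(A, B))  # 2.0: driven by the outlier point (3, 0) in B
```

    A single stray point dominates the metric, which is exactly why Hausdorff convergence of level-sets is a strong statement about the denoised solutions.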

  15. A methodology for design of a linear referencing system for surface transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonderohe, A.; Hepworth, T.

    1997-06-01

    The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.

  16. Coherent Transport in a Linear Triple Quantum Dot Made from a Pure-Phase InAs Nanowire.

    PubMed

    Wang, Ji-Yin; Huang, Shaoyun; Huang, Guang-Yao; Pan, Dong; Zhao, Jianhua; Xu, H Q

    2017-07-12

    A highly tunable linear triple quantum dot (TQD) device is realized in a single-crystalline pure-phase InAs nanowire using a local finger gate technique. The electrical measurements show that the charge stability diagram of the TQD can be represented by three kinds of current lines of different slopes, and a simulation performed based on a capacitance matrix model confirms the experiment. We show that each current line observable in the charge stability diagram is associated with a case where a QD is on resonance with the Fermi level of the source and drain reservoirs. At a triple point where two current lines of different slopes move together but show anticrossing, two QDs are on resonance with the Fermi level of the reservoirs. We demonstrate that an energetically degenerate quadruple point at which all three QDs are on resonance with the Fermi level of the reservoirs can be built by moving two separated triple points together via sophisticated tuning of the energy levels in the three QDs. We also demonstrate the achievement of direct coherent electron transfer between the two remote QDs in the TQD, realizing a long-distance coherent quantum bus operation. Such a long-distance coherent coupling could be used to investigate coherent spin teleportation and superexchange effects and to construct a spin qubit with an improved, long coherence time and with spin state detection solely by sensing the charge states.

  17. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    PubMed

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
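    The test being powered can be sketched directly: PERMANOVA's pseudo-F partitions total and within-group sums of squared pairwise distances, and significance comes from permuting the group labels. A minimal version of the test itself (not the authors' simulation framework or R package; toy Euclidean data):

```python
import numpy as np

def permanova(D, groups, n_perm=999, seed=3):
    """Minimal PERMANOVA: Anderson's pseudo-F on a distance matrix D
    with a permutation p-value."""
    rng = np.random.default_rng(seed)
    D2 = D ** 2
    n = len(groups)

    def pseudo_F(lab):
        sst = D2[np.triu_indices(n, 1)].sum() / n          # total sum of squares
        ssw = 0.0                                          # within-group sum of squares
        for g in np.unique(lab):
            idx = np.flatnonzero(lab == g)
            ssw += D2[np.ix_(idx, idx)][np.triu_indices(len(idx), 1)].sum() / len(idx)
        a = len(np.unique(lab))
        return ((sst - ssw) / (a - 1)) / (ssw / (n - a))

    labels = np.asarray(groups)
    f_obs = pseudo_F(labels)
    hits = sum(pseudo_F(rng.permutation(labels)) >= f_obs for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)

# Two groups whose within-group distances are smaller than between-group distances
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(2, 1, (10, 5))])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
F, p = permanova(D, [0] * 10 + [1] * 10)
print(p < 0.05)  # separation this strong is detected
```

    The power framework in the paper works by simulating many distance matrices like `D` with controlled within-group spread and effect size (ω²), then tabulating how often this permutation test rejects.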

  18. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
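    A two-piece normal detection function can be written in one line: a normal-shaped curve with one scale below the apex and a different scale above it, continuous at the single peak. A sketch with illustrative parameter values (not fitted to the bear survey, and with the peak detection probability scaled to 1 for simplicity):

```python
import numpy as np

def two_piece_normal(x, apex, s1, s2):
    """Unimodal detection function from a two-piece normal: scale s1
    below the apex, s2 above it, continuous at the single apex."""
    x = np.asarray(x, dtype=float)
    s = np.where(x < apex, s1, s2)
    return np.exp(-0.5 * ((x - apex) / s) ** 2)

d = np.linspace(0, 400, 401)              # perpendicular distance (m)
g = two_piece_normal(d, apex=80, s1=40, s2=120)
print(d[np.argmax(g)])                    # detection peaks at the apex, 80 m
```

    Because the apex location is a single explicit parameter, covariates can shift `s1` and `s2` without moving the apex, which is what keeps the model compatible with the point independence assumption.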

  19. Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test

    PubMed Central

    Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.

    2015-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
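    The step of feeding a distance matrix into kernel machine regression typically uses Gower centering, K = -0.5 J D² J with J = I - 11ᵀ/n. A sketch of that conversion, with eigenvalue clipping to enforce positive semi-definiteness as is common for non-Euclidean distances (this is the standard construction, not necessarily MiRKAT's exact implementation):

```python
import numpy as np

def distance_to_kernel(D):
    """Convert a pairwise distance matrix into a kernel via Gower
    centering: K = -0.5 * J D^2 J, with J = I - 11'/n, then clip
    negative eigenvalues so K is positive semi-definite."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    K = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(K)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

# Euclidean distances recover centred inner products exactly
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
K = distance_to_kernel(D)
Xc = X - X.mean(axis=0)
print(np.allclose(K, Xc @ Xc.T))  # True
```

    With `K` in hand, the variance-component score test regresses the outcome on covariates and tests whether the kernel explains residual variation; trying several distances amounts to trying several `K` matrices.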

  20. Estimating rupture distances without a rupture

    USGS Publications Warehouse

    Thompson, Eric M.; Worden, Charles

    2017-01-01

    Most ground motion prediction equations (GMPEs) require distances that are defined relative to a rupture model, such as the distance to the surface projection of the rupture (RJB) or the closest distance to the rupture plane (RRUP). There are a number of situations in which GMPEs are used where it is either necessary or advantageous to derive rupture distances from point-source distance metrics, such as hypocentral (RHYP) or epicentral (REPI) distance. For ShakeMap, it is necessary to provide an estimate of the shaking levels for events without rupture models, and before rupture models are available for events that eventually do have rupture models. In probabilistic seismic hazard analysis, it is often convenient to use point-source distances for gridded seismicity sources, particularly if a preferred orientation is unknown. This avoids the computationally cumbersome task of computing rupture-based distances for virtual rupture planes across all strikes and dips for each source. We derive average rupture distances conditioned on REPI, magnitude, and (optionally) back azimuth, for a variety of assumed seismological constraints. Additionally, we derive adjustment factors for GMPE standard deviations that reflect the added uncertainty in the ground motion estimation when point-source distances are used to estimate rupture distances.
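    The geometry behind these conversions can be sketched for the simplest case of a vertical rupture plane: RJB is the distance to the surface projection of the rupture, RRUP adds the depth to the top of rupture, and REPI is measured to a single point that may sit far along strike. A toy example with hypothetical coordinates (not the authors' conditioning procedure):

```python
import numpy as np

def rupture_distances(site_xy, trace_a, trace_b, z_top):
    """Toy R_JB and R_RUP for a vertical planar rupture whose surface
    trace runs from trace_a to trace_b, with top of rupture at depth
    z_top.  For a site off the trace, the closest rupture point lies
    on the top edge of the plane."""
    p, a, b = map(np.asarray, (site_xy, trace_a, trace_b))
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    r_jb = np.linalg.norm(p - (a + t * (b - a)))  # distance to surface projection
    r_rup = np.hypot(r_jb, z_top)                 # add depth to top of rupture
    return r_jb, r_rup

site = (10.0, 5.0)
epicentre = (0.0, 0.0)                            # hypocentre at the fault's far end
r_epi = np.linalg.norm(np.subtract(site, epicentre))
r_jb, r_rup = rupture_distances(site, (0.0, 0.0), (40.0, 0.0), z_top=2.0)
print(r_epi, r_jb, r_rup)                         # R_EPI exceeds R_RUP and R_JB here
```

    The gap between `r_epi` and `r_rup` for sites near a long rupture is exactly the bias the paper's conditional relations (and the inflated GMPE standard deviations) are designed to absorb.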

  1. Measurement of Initial Conditions at Nozzle Exit of High Speed Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Zaman, K. B. M. Q.; Seasholtz, R. G.

    2004-01-01

    The time averaged and unsteady density fields close to the nozzle exit (0.1 ≤ x/D ≤ 2, x: downstream distance, D: jet diameter) of unheated free jets at Mach numbers of 0.95, 1.4, and 1.8 were measured using a molecular Rayleigh scattering based technique. The initial thickness of shear layer and its linear growth rate were determined from time-averaged density survey and a modeling process, which utilized the Crocco-Busemann equation to relate density profiles to velocity profiles. The model also corrected for the smearing effect caused by a relatively long probe length in the measured density data. The calculated shear layer thickness was further verified from a limited hot-wire measurement. Density fluctuation spectra, measured using a two-Photomultiplier-tube technique, were used to determine evolution of turbulent fluctuations in various Strouhal frequency bands. For this purpose spectra were obtained from a large number of points inside the flow; and at every axial station spectral data from all radial positions were integrated. The radially-integrated fluctuation data show an exponential growth with downstream distance and an eventual saturation in all Strouhal frequency bands. The initial level of density fluctuations was calculated by extrapolation to nozzle exit.

  2. Characterization of Course and Terrain and Their Effect on Skier Speed in World Cup Alpine Ski Racing

    PubMed Central

    Gilgien, Matthias; Crivelli, Philip; Spörri, Jörg; Kröll, Josef; Müller, Erich

    2015-01-01

    World Cup (WC) alpine ski racing consists of four main competition disciplines (slalom, giant slalom, super-G and downhill), each with specific course and terrain characteristics. The International Ski Federation (FIS) has regulated course length, altitude drop from start to finish and course setting in order to specify the characteristics of the respective competition disciplines and to control performance and injury-related aspects. However to date, no detailed data on course setting and its adaptation to terrain is available. It is also unknown how course and terrain characteristics influence skier speed. Therefore, the aim of the study was to characterize course setting, terrain geomorphology and their relationship to speed in male WC giant slalom, super-G and downhill. The study revealed that terrain was flatter in downhill compared to the other disciplines. In all disciplines, variability in horizontal gate distance (gate offset) was larger than in gate distance (linear distance from gate to gate). In giant slalom the horizontal gate distance increased with terrain inclination, while super-G and downhill did not show such a connection. In giant slalom and super-G, there was a slight trend towards shorter gate distances as the steepness of the terrain increased. Gates were usually set close to terrain transitions in all three disciplines. Downhill had a larger proportion of extreme terrain inclination changes along the skier trajectory per unit time skiing than the other disciplines. Skier speed decreased with increasing steepness of terrain in all disciplines except for downhill. In steep terrain, speed was found to be controllable by increased horizontal gate distances in giant slalom and by shorter gate distances in giant slalom and super-G. Across the disciplines skier speed was largely explained by course setting and terrain inclination in a multiple linear model. PMID:25760039

  3. 3D patient-specific models for left atrium characterization to support ablation in atrial fibrillation patients.

    PubMed

    Valinoti, Maddalena; Fabbri, Claudio; Turco, Dario; Mantovan, Roberto; Pasini, Antonio; Corsi, Cristiana

    2018-01-01

    Radiofrequency ablation (RFA) is an important and promising therapy for atrial fibrillation (AF) patients. Optimization of patient selection and the availability of an accurate anatomical guide could improve RFA success rate. In this study we propose a unified, fully automated approach to build a 3D patient-specific left atrium (LA) model including pulmonary veins (PVs) in order to provide an accurate anatomical guide during RFA and without PVs in order to characterize LA volumetry and support patient selection for AF ablation. Magnetic resonance data from twenty-six patients referred for AF RFA were processed applying an edge-based level set approach guided by a phase-based edge detector to obtain the 3D LA model with PVs. An automated technique based on the shape diameter function was designed and applied to remove PVs and compute LA volume. 3D LA models were qualitatively compared with 3D LA surfaces acquired during the ablation procedure. An expert radiologist manually traced the LA on MR images twice. LA surfaces from the automatic approach and manual tracing were compared by mean surface-to-surface distance. In addition, LA volumes were compared with volumes from manual segmentation by linear and Bland-Altman analyses. Qualitative comparison of 3D LA models showed several inaccuracies, in particular PVs reconstruction was not accurate and left atrial appendage was missing in the model obtained during RFA procedure. LA surfaces were very similar (mean surface-to-surface distance: 2.3±0.7mm). LA volumes were in excellent agreement (y=1.03x-1.4, r=0.99, bias=-1.37ml (-1.43%) SD=2.16ml (2.3%), mean percentage difference=1.3%±2.1%). Results showed the proposed 3D patient-specific LA model with PVs is able to better describe LA anatomy compared to models derived from the navigation system, thus potentially improving electrograms and voltage information location and reducing fluoroscopic time during RFA. 
Quantitative assessment of LA volume derived from our 3D LA model without PVs is also accurate and may provide important information for patient selection for RFA. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. The red/infrared evolution in galaxies - Effect of the stars on the asymptotic giant branch

    NASA Technical Reports Server (NTRS)

    Chokshi, Arati; Wright, Edward L.

    1987-01-01

    The effect of including the asymptotic giant branch (AGB) population in a spectral synthesis model of galaxy evolution is examined. Stars on the AGB are luminous enough and also evolve rapidly enough to affect the evolution of red and infrared colors in galaxies. The validity of using infrared colors as distance indicators to galaxies is then investigated in detail. It is found that for z of 1 or less infrared colors of model galaxies behave linearly with redshift.

  5. Computed tomography assessment of peripubertal craniofacial morphology in a sheep model of binge alcohol drinking in the first trimester

    PubMed Central

    Birch, Sharla M.; Lenox, Mark W.; Kornegay, Joe N.; Shen, Li; Ai, Huisi; Ren, Xiaowei; Goodlett, Charles R.; Cudd, Tim A.; Washburn, Shannon E.

    2015-01-01

    Identification of facial dysmorphology is essential for the diagnosis of fetal alcohol syndrome (FAS); however, most children with fetal alcohol spectrum disorders (FASD) do not meet the dysmorphology criterion. Additional objective indicators are needed to help identify the broader spectrum of children affected by prenatal alcohol exposure. Computed tomography (CT) was used in a sheep model of prenatal binge alcohol exposure to test the hypothesis that quantitative measures of craniofacial bone volumes and linear distances could identify alcohol-exposed lambs. Pregnant sheep were randomly assigned to four groups: heavy binge alcohol, 2.5 g/kg/day (HBA); binge alcohol, 1.75 g/kg/day (BA); saline control (SC); and normal control (NC). Intravenous alcohol (BA; HBA) or saline (SC) infusions were given three consecutive days per week from gestation day 4–41, and a CT scan was performed on postnatal day 182. The volumes of eight skull bones, cranial circumference, and 19 linear measures of the face and skull were compared among treatment groups. Lambs from both alcohol groups showed significant reduction in seven of the eight skull bones and total skull bone volume, as well as cranial circumference. Alcohol exposure also decreased four of the 19 craniofacial measures. Discriminant analysis showed that alcohol-exposed and control lambs could be classified with high accuracy based on total skull bone volume, frontal, parietal, or mandibular bone volumes, cranial circumference, or interorbital distance. Total skull volume was significantly more sensitive than cranial circumference in identifying the alcohol-exposed lambs when alcohol-exposed lambs were classified using the typical FAS diagnostic cutoff of ≤10th percentile. 
This first demonstration of the usefulness of CT-derived craniofacial measures in a sheep model of FASD following binge-like alcohol exposure during the first trimester suggests that volumetric measurement of cranial bones may be a novel biomarker for binge alcohol exposure during the first trimester to help identify non-dysmorphic children with FASD. PMID:26496796
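
The ≤10th-percentile screening rule described above can be sketched numerically. The sketch below uses entirely synthetic measurements (all values hypothetical, not the study's data): it takes the 10th percentile of a control group as the diagnostic cutoff and reports the sensitivity and specificity of flagging exposed animals at or below it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical skull-volume measures (arbitrary units): controls vs. alcohol-exposed.
control = rng.normal(100.0, 5.0, size=40)
exposed = rng.normal(88.0, 5.0, size=40)   # exposed group shifted downward

# FAS-style screening rule: flag values at or below the control 10th percentile.
cutoff = np.percentile(control, 10)
sensitivity = np.mean(exposed <= cutoff)   # exposed correctly flagged
specificity = np.mean(control > cutoff)    # controls correctly passed
print(f"cutoff={cutoff:.1f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```

By construction, specificity is close to 0.9 (the cutoff is the control 10th percentile); sensitivity depends on how far the exposed distribution is shifted.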

  6. Predictors of the physical impact of Multiple Sclerosis following a community-based exercise trial.

    PubMed

    Kehoe, M; Saunders, J; Jakeman, P; Coote, S

    2015-04-01

    Studies evaluating exercise interventions in people with multiple sclerosis (PwMS) demonstrate small to medium positive effects and large variability on a number of outcome measures. No study to date has tried to explain this variability. This paper presents a novel exploration of data examining the predictors of outcome for PwMS with minimal gait impairment following a randomised, controlled trial evaluating community-based exercise interventions (N = 242). The primary variable was the physical component of the Multiple Sclerosis Impact Scale-29, version 2 (MSIS-29, v2) after a 10-week, controlled intervention period. Predictors were identified a priori and were measured at baseline. Multiple linear regression was conducted. Four models are presented; lower MSIS-29, v2 scores after the intervention period were best predicted by a lower baseline MSIS-29, v2, a lower baseline Modified Fatigue Impact Score (physical subscale), randomisation to an exercise intervention, a longer baseline walking distance measured by the Six Minute Walk Test, and female gender. This model explained 57.4% of the variance (F (5, 211) = 59.24, p < 0.01). These results suggest that fatigue and walking distance at baseline contribute significantly to predicting the MSIS-29, v2 (physical component) after intervention, and thus should be the focus of intervention and assessment. Exercise is an important contributor to minimising the physical impact of MS, and gender-specific interventions may be warranted. © The Author(s), 2014.


  7. Human access and landscape structure effects on Andean forest bird richness

    NASA Astrophysics Data System (ADS)

    Aubad, Jorge; Aragón, Pedro; Rodríguez, Miguel Á.

    2010-07-01

    We analyzed the influence of human access and landscape structure on forest bird species richness in a fragmented landscape of the Colombian Andes. In Latin America, habitat loss and fragmentation are considered the greatest threats to biodiversity because a large number of countryside villagers complement their food and incomes with the extraction of forest resources. Anthropogenic actions may also affect forest species directly through bird hunting or indirectly through modifying the structure of forest habitats. We surveyed 14 secondary cloud forest remnants to generate bird species richness data for each of them. We also quantified six landscape structure descriptors of forest patch size (patch area and core area), shape (perimeter of each fragment and Patton's shape index) and isolation (nearest neighbor distance and edge contrast), and generated (using principal components analysis) a synthetic human influence variable based on the distance of each fragment to roads and villages, as well as the total slope of the fragments. Species richness was related to these variables using generalized linear models (GLMs) complemented with model selection techniques based on information theory and partial regression analysis. We found that forest patch size and accessibility were key drivers of bird richness, which increased in the largest patches but decreased in those more accessible to humans and their potential disturbances. Both patch area and human access effects on forest bird species richness were complementary and similar in magnitude. Our results provide a basis for biodiversity conservation plans and initiatives of Andean forest diversity.
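
A species-richness model of the kind described above is typically a Poisson GLM with a log link. As a hedged illustration (fully synthetic data; the predictors `log_area` and `access` and all coefficients are stand-ins, not the study's), such a model can be fitted with a few lines of iteratively reweighted least squares:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # mean under the log link
        W = mu                           # Poisson IRLS weights
        z = eta + (y - mu) / mu          # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

rng = np.random.default_rng(1)
n = 300
log_area = rng.normal(0.0, 1.0, n)       # stand-in for log patch area
access = rng.normal(0.0, 1.0, n)         # stand-in for a human-access score
X = np.column_stack([np.ones(n), log_area, access])
true_beta = np.array([2.0, 0.5, -0.3])   # richness up with area, down with access
y = rng.poisson(np.exp(X @ true_beta))
print(poisson_glm_irls(X, y))            # estimates close to true_beta
```

The sign pattern mirrors the abstract's finding: a positive area effect and a negative human-access effect on expected richness.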

  8. Evaluation of the Bitterness of Traditional Chinese Medicines using an E-Tongue Coupled with a Robust Partial Least Squares Regression Method.

    PubMed

    Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin

    2016-01-25

    To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using the robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to other state-of-the-art methods including multivariate linear regression, least squares support vector machine, and the plain partial least squares regression. Both R² and root-mean-square error (RMSE) of cross-validation (CV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916 with bitterness values ranging from 0.63 to 4.78 was obtained for the RPLS model that was constructed based on the dataset including outliers. Meanwhile, the RMSECV, which was calculated using the models constructed by other methods, was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS model constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with the RPLS model constructed using e-tongue data.
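
The Grubbs test used above for data-quality checking can be sketched as follows. This is a minimal two-sided version (the data and significance level are illustrative); it uses SciPy only for the t-distribution quantile in the critical value.

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """Two-sided Grubbs' test: return the index of the most extreme point if it
    exceeds the critical value at significance level alpha, else None."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    G = abs(x[idx] - mean) / sd
    # Critical value from the t distribution (two-sided form).
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx if G > G_crit else None

data = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 6.8]   # last value is a planted outlier
print(grubbs_outlier(data))                   # -> 6 (index of the planted outlier)
```

Note that Grubbs' test targets a single outlier per pass; repeated application (or the residual/score-distance criteria from the abstract) is needed for multiple outliers.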

  9. Comparison of Available Technologies for Fire Spots Detection via Linear Heat Detector

    NASA Astrophysics Data System (ADS)

    Miksa, František; Nemlaha, Eduard

    2016-12-01

    It is very demanding to detect fire spots under difficult conditions with a high occurrence of interfering external factors such as large distances, difficult airflow, high dustiness, and high humidity. Spot fire sensors do not meet the requirements under these conditions, particularly over large distances. Therefore, the detection of a fire spot via linear heat sensing cables is utilized.

  10. Analysis of methods to estimate spring flows in a karst aquifer

    USGS Publications Warehouse

    Sepulveda, N.

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer. © 2008 National Ground Water Association.

  11. Analysis of methods to estimate spring flows in a karst aquifer.

    PubMed

    Sepúlveda, Nicasio

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer.
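
The Darcy-Weisbach step in the abstracts above solves for conduit velocity from head loss and then classifies the flow regime by Reynolds number. A minimal sketch, with an assumed friction factor and illustrative conduit geometry (none of the study's actual parameters):

```python
import math

def darcy_weisbach_flow(head_loss, length, diameter, friction=0.03,
                        g=9.81, kin_visc=1.0e-6):
    """Estimate conduit velocity and discharge from the Darcy-Weisbach
    equation h_f = f * (L/D) * v^2 / (2g), and report the Reynolds number."""
    v = math.sqrt(2 * g * head_loss * diameter / (friction * length))
    area = math.pi * diameter**2 / 4
    Q = v * area                       # discharge (m^3/s)
    Re = v * diameter / kin_visc       # Reynolds number (water at ~20 C)
    return Q, Re

# Illustrative values: 2 m head loss over a 500 m conduit of 1.5 m diameter.
Q, Re = darcy_weisbach_flow(head_loss=2.0, length=500.0, diameter=1.5)
print(f"Q = {Q:.2f} m^3/s, Re = {Re:.2e} ({'turbulent' if Re > 4000 else 'laminar'})")
```

For conduits of this scale the Reynolds number is in the millions, consistent with the abstracts' conclusion that all spring flows were turbulent.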

  12. Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.

    PubMed

    Mazandarani, Mehran; Pariz, Naser

    2018-05-01

    This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative, and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as fuzzy state-variable feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain eigenvalue sets that might be different from the inherent eigenvalue set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in the longitudinal direction with uncertain initial conditions and parameters is obtained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Three-dimensional computer-assisted study model analysis of long-term oral-appliance wear. Part 1: Methodology.

    PubMed

    Chen, Hui; Lowe, Alan A; de Almeida, Fernanda Ribeiro; Wong, Mary; Fleetham, John A; Wang, Bangkang

    2008-09-01

    The aim of this study was to test a 3-dimensional (3D) computer-assisted dental model analysis system that uses selected landmarks to describe tooth movement during treatment with an oral appliance. Dental casts of 70 patients diagnosed with obstructive sleep apnea and treated with oral appliances for a mean time of 7 years 4 months were evaluated with a 3D digitizer (MicroScribe-3DX, Immersion, San Jose, Calif) compatible with the Rhinoceros modeling program (version 3.0 SR3c, Robert McNeel & Associates, Seattle, Wash). A total of 86 landmarks on each model were digitized, and 156 variables were calculated as either the linear distance between points or the distance from points to reference planes. Four study models for each patient (maxillary baseline, mandibular baseline, maxillary follow-up, and mandibular follow-up) were superimposed on 2 sets of reference points: 3 points on the palatal rugae for maxillary model superimposition, and 3 occlusal contact points for the same set of maxillary and mandibular model superimpositions. The patients were divided into 3 evaluation groups by 5 orthodontists based on the changes between baseline and follow-up study models. Digital dental measurements could be analyzed, including arch width, arch length, curve of Spee, overbite, overjet, and the anteroposterior relationship between the maxillary and mandibular arches. A method error within 0.23 mm in 14 selected variables was found for the 3D system. The statistical differences in the 3 evaluation groups verified the division criteria determined by the orthodontists. The system provides a method to record 3D measurements of study models that permits computer visualization of tooth position and movement from various perspectives.
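
The superimposition of follow-up casts onto baseline casts via reference points is a rigid least-squares alignment. A minimal sketch using the Kabsch algorithm (the landmark coordinates below are hypothetical, not digitized cast data):

```python
import numpy as np

def kabsch_superimpose(P, Q):
    """Return rotation R and translation t that best map point set P onto Q
    (least-squares rigid superimposition, Kabsch algorithm)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

# Hypothetical baseline landmarks and the same landmarks after a rigid move.
baseline = np.array([[0.0, 0, 0], [10, 0, 0], [0, 8, 0], [3, 3, 5]])
theta = np.deg2rad(12)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
followup = baseline @ Rz.T + np.array([1.0, -2.0, 0.5])

R, t = kabsch_superimpose(baseline, followup)
aligned = baseline @ R.T + t
print(np.max(np.abs(aligned - followup)))    # ~0: exact superimposition
```

Once models are superimposed on stable references (e.g., palatal rugae points), per-tooth linear distances between baseline and follow-up landmarks quantify tooth movement.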

  14. A Conceptual Model of the Cognitive Processing of Environmental Distance Information

    NASA Astrophysics Data System (ADS)

    Montello, Daniel R.

    I review theories and research on the cognitive processing of environmental distance information by humans, particularly that acquired via direct experience in the environment. The cognitive processes I consider for acquiring and thinking about environmental distance information include working-memory, nonmediated, hybrid, and simple-retrieval processes. Based on my review of the research literature, and additional considerations about the sources of distance information and the situations in which it is used, I propose an integrative conceptual model to explain the cognitive processing of distance information that takes account of the plurality of possible processes and information sources, and describes conditions under which particular processes and sources are likely to operate. The mechanism of summing vista distances is identified as widely important in situations with good visual access to the environment. Heuristics based on time, effort, or other information are likely to play their most important role when sensory access is restricted.

  15. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…
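
The residual-based M-distance referred to above is a Mahalanobis-type distance. A minimal sketch of the classical Mahalanobis distance on synthetic data (the SEM-specific residual construction is not reproduced here):

```python
import numpy as np

def mahalanobis(X):
    """Mahalanobis distance of each row of X from the sample mean."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
X[0] = [6.0, -6.0, 6.0]           # planted outlying observation
d = mahalanobis(X)
print(int(np.argmax(d)))          # the planted outlier has the largest distance
```

Plotting one such distance against another (structural-model residuals vs. factor scores) is what separates outliers from good and bad leverage observations in the article's diagnostic plot.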

  16. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  17. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
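
One plausible reading of the interclass/intraclass distance ratio used above to rank features is the mean between-class pairwise distance divided by the mean within-class pairwise distance. The abstract does not give the exact formula, so the definition in this sketch is an assumption, and the data are synthetic:

```python
import numpy as np

def class_distance_ratio(X, y):
    """Ratio of mean between-class to mean within-class Euclidean distance.
    (One plausible reading of the inter/intra-class distance criterion.)"""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    diff = y[:, None] != y[None, :]
    return D[diff].mean() / D[same].mean()

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, size=(50, 2))
b = rng.normal(4.0, 1.0, size=(50, 2))       # well-separated second class
X = np.vstack([a, b])
y = np.repeat([0, 1], 50)
print(round(class_distance_ratio(X, y), 2))  # > 1: classes well separated
```

A ratio well above 1 indicates features whose classes are separated relative to their internal spread, which is the property the GAFDS features are credited with.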

  18. The performance of approximations of farm contiguity compared to contiguity defined using detailed geographical information in two sample areas in Scotland: implications for foot-and-mouth disease modelling.

    PubMed

    Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J

    2013-10-08

    When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogenous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous. 
Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission into FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events are by contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered since ability to discriminate between map-based CPs and non-CPs is different over different Euclidean distances.
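
The evaluation statistics named above follow directly from a confusion matrix, with the True Skill Statistic defined as TSS = sensitivity + specificity - 1. A sketch with hypothetical counts (not the study's):

```python
def skill_scores(tp, fp, fn, tn):
    """Sensitivity, positive predictive value, and True Skill Statistic
    (TSS = sensitivity + specificity - 1) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    tss = sensitivity + specificity - 1
    return sensitivity, ppv, tss

# Hypothetical counts: predicted CPs vs. map-based ('gold standard') CPs.
sens, ppv, tss = skill_scores(tp=60, fp=15, fn=25, tn=400)
print(f"sensitivity={sens:.3f} ppv={ppv:.3f} TSS={tss:.3f}")
```

TSS ranges from -1 to 1, with 0 meaning no better than chance; values around 0.6-0.7, as reported for the area-weighted tessellation, indicate reasonable discrimination.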

  19. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    PubMed

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S

    2017-06-01

    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum-based dose model. The RBE-LET relationship was investigated by fitting of polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd-based models for a simulated spread out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression analysis favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. 
    As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum- and linear LETd-based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
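
The weighted polynomial-fitting procedure described above can be illustrated as follows. The data here are synthetic stand-ins for the RBE-LET database (the coefficients and uncertainties are invented), and `np.polyfit`'s `w` argument implements the 1/sigma weighting so the reported objective is a weighted chi-square:

```python
import numpy as np

rng = np.random.default_rng(4)
let = np.linspace(1.0, 20.0, 40)                       # LET values (keV/um)
true_rbe = 1.0 + 0.04 * let - 0.0008 * let**2          # mildly non-linear trend
sigma = rng.uniform(0.05, 0.2, let.size)               # per-point uncertainty
rbe = true_rbe + rng.normal(0.0, sigma)

# Weighted fits of degree 1..5; np.polyfit minimizes sum((w * residual)**2),
# so w = 1/sigma gives inverse-variance-style weighting.
for deg in range(1, 6):
    coef = np.polyfit(let, rbe, deg, w=1.0 / sigma)
    chi2 = np.sum(((rbe - np.polyval(coef, let)) / sigma) ** 2)
    dof = let.size - (deg + 1)
    print(f"degree {deg}: chi2/dof = {chi2 / dof:.2f}")
```

Because the polynomials are nested, the raw chi-square can only decrease with degree; the paper's F-type significance testing (omitted here) is what decides whether the decrease justifies the extra parameters.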

  20. Predictions of Experimentally Observed Stochastic Ground Vibrations Induced by Blasting

    PubMed Central

    Kostić, Srđan; Perc, Matjaž; Vasović, Nebojša; Trajković, Slobodan

    2013-01-01

    In the present paper, we investigate the blast induced ground motion recorded at the limestone quarry “Suva Vrela” near Kosjerić, which is located in the western part of Serbia. We examine the recorded signals by means of surrogate data methods and a determinism test, in order to determine whether the recorded ground velocity is stochastic or deterministic in nature. Longitudinal, transversal and the vertical ground motion component are analyzed at three monitoring points that are located at different distances from the blasting source. The analysis reveals that the recordings belong to a class of stationary linear stochastic processes with Gaussian inputs, which could be distorted by a monotonic, instantaneous, time-independent nonlinear function. Low determinism factors obtained with the determinism test further confirm the stochastic nature of the recordings. Guided by the outcome of time series analysis, we propose an improved prediction model for the peak particle velocity based on a neural network. We show that, while conventional predictors fail to provide acceptable prediction accuracy, the neural network model with four main blast parameters as input, namely total charge, maximum charge per delay, distance from the blasting source to the measuring point, and hole depth, delivers significantly more accurate predictions that may be applicable on site. We also perform a sensitivity analysis, which reveals that the distance from the blasting source has the strongest influence on the final value of the peak particle velocity. This is in full agreement with previous observations and theory, thus additionally validating our methodology and main conclusions. PMID:24358140
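
The "conventional predictors" referred to above are typically scaled-distance attenuation laws of the form PPV = K(D/√Q)^(-β), fitted by linear regression in log space. A sketch on synthetic site data (the K and β values, and the charge/distance ranges, are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
distance = rng.uniform(50.0, 500.0, 60)        # m, monitoring-point distance
charge = rng.uniform(20.0, 200.0, 60)          # kg, max charge per delay
sd = distance / np.sqrt(charge)                # square-root scaled distance
ppv = 700.0 * sd ** -1.6 * np.exp(rng.normal(0.0, 0.2, 60))  # synthetic site law

# Fit log(PPV) = log K - beta * log(SD) by ordinary least squares.
A = np.column_stack([np.ones_like(sd), np.log(sd)])
logK, neg_beta = np.linalg.lstsq(A, np.log(ppv), rcond=None)[0]
print(f"K = {np.exp(logK):.0f}, beta = {-neg_beta:.2f}")   # roughly recovers 700 and 1.6
```

The neural-network predictor in the paper generalizes this by taking total charge, charge per delay, distance, and hole depth as separate inputs instead of collapsing them into one scaled-distance term.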

  1. Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Kara, Imdat; Derya, Tusan

    2011-09-01

    The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well-known Traveling Salesman Problem (TSP). DCVRP arises in distribution and logistics problems. Constructing new formulations would be beneficial, which is the main motivation and contribution of this paper. We focus on two-index integer programming formulations for DCVRP. One node-based and one arc (flow)-based formulation for DCVRP are presented. Both formulations have O(n^2) binary variables and O(n^2) constraints, i.e., the number of decision variables and constraints grows as a polynomial function of the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (Water's formulation, as referred to in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of linear programming relaxation.
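
The two constraint families that give DCVRP its name, vehicle capacity and maximum route length, can be illustrated with a toy feasibility check for a single route (the instance data below are hypothetical, and this is a checker, not the paper's integer-programming formulation):

```python
import math

def route_feasible(route, coords, demand, capacity, max_dist, depot=0):
    """Check one vehicle route (depot -> customers -> depot) against the
    DCVRP capacity and route-length constraints."""
    tour = [depot] + list(route) + [depot]
    dist = sum(math.dist(coords[a], coords[b]) for a, b in zip(tour, tour[1:]))
    load = sum(demand[c] for c in route)
    return load <= capacity and dist <= max_dist

coords = {0: (0, 0), 1: (4, 0), 2: (4, 3), 3: (0, 3)}   # depot + 3 customers
demand = {1: 5, 2: 7, 3: 4}
print(route_feasible([1, 2, 3], coords, demand, capacity=20, max_dist=15.0))  # True
print(route_feasible([1, 2, 3], coords, demand, capacity=10, max_dist=15.0))  # False
```

In the integer-programming formulations, these same two restrictions appear as constraints on binary arc variables rather than as an explicit per-route check.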

  2. Testing the limits of long-distance learning: Learning beyond a three-segment window

    PubMed Central

    Finley, Sara

    2012-01-01

    Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones since long-distance dependencies are difficult to capture with these models. Long-distance phonotactic patterns have been observed by linguists in many languages, who have proposed different kinds of models, including feature-based bigram and trigram models, as well as precedence models. Contrary to flat-structured bigram and trigram models, these alternatives capture unbounded dependencies because at an abstract level of representation, the relevant elements are locally dependent, even if they are not adjacent at the observable level. Using an artificial grammar learning paradigm, we provide additional support for these alternative models of phonotactics. Participants in two experiments were exposed to a long-distance consonant harmony pattern in which the first consonant of a five-syllable word was [s] or [ʃ] ('sh') and triggered a suffix that was either [-su] or [-ʃu] depending on the sibilant quality of this first consonant. Participants learned this pattern, despite the large distance between the trigger and the target, suggesting that when participants learn long-distance phonological patterns, that pattern is learned without specific reference to distance. PMID:22303815
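
Precedence-style models of the kind discussed above evaluate agreement between the first sibilant and the suffix sibilant regardless of how much material intervenes. A toy sketch of that constraint (the words are invented, and ASCII 'S' stands in for the IPA 'sh' sound):

```python
def obeys_sibilant_harmony(stem, suffix):
    """Check that a suffix sibilant agrees with the first sibilant of the stem,
    regardless of the distance between them (a precedence-style constraint)."""
    sibilants = {'s', 'S'}          # 'S' stands in for the IPA 'sh' sound
    trigger = next((c for c in stem if c in sibilants), None)
    target = next((c for c in suffix if c in sibilants), None)
    return trigger is None or target is None or trigger == target

print(obeys_sibilant_harmony("Sodapeku", "Su"))   # True: sh...sh agrees
print(obeys_sibilant_harmony("Sodapeku", "su"))   # False: sh...s disagrees
```

A flat trigram model, by contrast, could not enforce this agreement once more than two segments separate trigger and target.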

  3. Modeling the long-term evolution of space debris

    DOEpatents

    Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.

    2017-03-07

    A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
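
The closest-approach step described above, fitting a curve to sampled separations and taking its minimum, can be sketched with a parabola fit on squared distance, which is exactly quadratic for straight-line relative motion (used here as a simple stand-in for orbital propagation):

```python
import numpy as np

def closest_approach(t_samples, d_samples):
    """Fit a parabola d(t) ~ a*t^2 + b*t + c to sampled separations and
    return the vertex time (time of closest approach) and fitted value there."""
    a, b, c = np.polyfit(t_samples, d_samples, 2)
    t_min = -b / (2 * a)
    return t_min, np.polyval([a, b, c], t_min)

# Two objects in straight-line motion (a stand-in for short-arc propagation).
def separation(t):
    p1 = np.array([0.0, 0.0]) + t * np.array([1.0, 0.0])
    p2 = np.array([10.0, 2.0]) + t * np.array([-1.0, 0.0])
    return np.linalg.norm(p1 - p2)

t = np.array([4.0, 5.0, 6.0])                    # simulation times bracketing the minimum
d2 = np.array([separation(x) ** 2 for x in t])   # squared distance is exactly quadratic
t_min, d2_min = closest_approach(t, d2)
print(t_min, np.sqrt(d2_min))                    # -> 5.0 and 2.0
```

In the full system, this refined time of closest approach is then re-propagated to compute the actual positions, and a collision is declared if the local minimum distance satisfies the collision criterion.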

  4. Durango delta: Complications on San Juan basin Cretaceous linear strandline theme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zech, R.S.; Wright, R.

    1989-09-01

    The Upper Cretaceous Point Lookout Sandstone generally conforms to a predictable cyclic shoreface model in which prograding linear strandline lithosomes dominate formation architecture. Multiple transgressive-regressive cycles result in systematic repetition of lithologies deposited in beach to inner shelf environments. Deposits of approximately five cycles are locally grouped into bundles. Such bundles extend at least 20 km along depositional strike and change from foreshore sandstone to offshore, time-equivalent Mancos mud rock in a downdip distance of 17 to 20 km. Excellent hydrocarbon reservoirs exist where well-sorted shoreface sandstone bundles stack and the formation thickens. This depositional model breaks down in the vicinity of Durango, Colorado, where a fluvial-dominated delta front and associated large distributary channels characterize the Point Lookout Sandstone and overlying Menefee Formation.

  5. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplondinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  6. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research of utilizing accurate localization systems, testing different species, validating sound transmission distances, and considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  7. TU-H-BRA-02: The Physics of Magnetic Field Isolation in a Novel Compact Linear Accelerator Based MRI-Guided Radiation Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D; Mutic, S; Shvartsman, S

    Purpose: To develop a method for isolating the MRI magnetic field from field-sensitive linear accelerator components at distances close to isocenter. Methods: An MRI-guided radiation therapy system has been designed that integrates a linear accelerator with simultaneous MR imaging. In order to accomplish this, the magnetron, port circulator, radiofrequency waveguide, gun driver, and linear accelerator needed to be placed in locations with low magnetic fields. The system was also required to be compact, so moving these components far from the main magnetic field and isocenter was not an option. The magnetic field sensitive components (exclusive of the waveguide) were placed in coaxial steel sleeves that were electrically and mechanically isolated and whose thickness and placement were optimized using E&M modeling software. Six sets of sleeves were placed 60° apart, 85 cm from isocenter. The Faraday effect occurs when the direction of propagation is parallel to the magnetic RF field component, rotating the RF polarization and subsequently diminishing RF power. The Faraday effect was avoided by orienting the waveguides such that the magnetic RF field component was parallel to the static magnetic field. Results: The magnetic field within the shields was measured to be less than 40 Gauss, significantly below the amount needed for the magnetron and port circulator. Additional mu-metal was employed to reduce the magnetic field at the linear accelerator to less than 1 Gauss. The orientation of the RF waveguides allowed RF transport with minimal loss and reflection. Conclusion: One of the major challenges in designing a compact linear accelerator based MRI-guided radiation therapy system, that of creating low magnetic field environments for the magnetic-field sensitive components, has been solved. The measured magnetic fields are sufficiently small to enable system integration. This work was supported by ViewRay, Inc.

  8. Accuracy Assessment in Determining the Location of Corners of Building Structures Using a Combination of Various Measurement Methods

    NASA Astrophysics Data System (ADS)

    Krzyżek, Robert; Przewięźlikowska, Anna

    2017-12-01

    When surveys of corners of building structures are carried out, surveyors frequently use a compilation of two surveying methods. The first one involves the determination of several corners with reference to a geodetic control using classical methods of surveying field details. The second method relates to the remaining corner points of a structure, which are determined in sequence from distance-distance intersection, using control linear values of the wall faces of the building, the so-called tie distances. This paper assesses the accuracy of coordinates of corner points of a building structure, determined using the method of distance-distance intersection, based on the corners which had previously been determined by the conducted surveys tied to a geodetic control. It should be noted, however, that such a method of surveying the corners of building structures from linear measures is based on details of the first-order accuracy, while the regulations explicitly allow such measurement only for details of the second- and third-order accuracy. This raises the question of whether this legal provision is unfounded, or whether surveyors who perform such surveys are acting against the applicable standards and without due diligence. This study provides answers to the formulated problem. The main purpose of the study was to verify whether the method actually used in practice for surveying building structures yields the required accuracy of coordinates of the points being determined, or whether it should be strictly forbidden. The results of the conducted studies demonstrate that the problem is considerably more complex.
Ultimately, however, it may be concluded that the assessment of the accuracy in determining the location of corners of a building using a combination of two different surveying methods will meet the requirements of the regulation (MIA, 2011), subject to compliance with relevant baseline criteria, which have been presented in this study. Observance of the proposed boundary conditions would allow surveyors to routinely perform surveys of building structures from tie distances, while maintaining the applicable accuracy criteria. This would allow for the inclusion of surveying documentation in the national geodetic and cartographic documentation center database pursuant to the legal bases.
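    The tie-distance step described above reduces to a circle-circle (distance-distance) intersection: the unknown corner lies at the measured distances r1 and r2 from two previously determined corners. A minimal planar sketch, assuming local rectangular coordinates (not the authors' software):

```python
import math

def distance_distance_intersection(p1, r1, p2, r2):
    """Candidate positions of an unknown corner at taped distances r1, r2
    from two known corners p1, p2 (planar coordinates). Returns the two
    circle-circle intersection points; a field check must pick the right one."""
    x1, y1 = p1
    x2, y2 = p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("tie distances do not intersect")
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half chord length
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    off = (-h * (y2 - y1) / d, h * (x2 - x1) / d)
    return (xm + off[0], ym + off[1]), (xm - off[0], ym - off[1])
```

    The accuracy of the result degrades as the two circles approach tangency, which is one reason baseline criteria on the intersection geometry matter.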

  9. Re-Conceptualizing Intimacy and Distance in Instructional Models

    ERIC Educational Resources Information Center

    Ketterer, John J.

    2006-01-01

    The idea that distance education lacks intimacy and is therefore inferior is based on an embedded metaphor that sustains a restricted and limiting mental model of ideal instruction. The authors analyze alternative conceptualizations of intimacy, space, and place as factors in the development of effective instructional models. They predict that the…

  10. Comparing Habitat Suitability and Connectivity Modeling Methods for Conserving Pronghorn Migrations

    PubMed Central

    Poor, Erin E.; Loucks, Colby; Jakes, Andrew; Urban, Dean L.

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements. PMID:23166656
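    The least-cost modeling (LCM) step above can be sketched as a shortest-path search over a cost surface derived from habitat suitability. A simplified Python illustration with Dijkstra's algorithm on a 4-neighbour grid and hypothetical per-cell costs (not the authors' implementation, which used continuous resistance surfaces):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path across a 2D grid of per-cell traversal
    costs; the cost of a cell is charged on entry (start cell included)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

    A corridor is then typically delineated by keeping all cells whose best start-to-goal cost lies within some percentage of this minimum, which is how corridor width tiers arise.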

  11. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    PubMed

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  12. Micromechanism linear actuator with capillary force sealing

    DOEpatents

    Sniegowski, Jeffry J.

    1997-01-01

    A class of micromachine linear actuators whose function is based on gas driven pistons in which capillary forces are used to seal the gas behind the piston. The capillary forces also increase the amount of force transmitted from the gas pressure to the piston. In a major subclass of such devices, the gas bubble is produced by thermal vaporization of a working fluid. Because of their dependence on capillary forces for sealing, such devices are only practical on the sub-mm size scale, but in that regime they produce very large force times distance (total work) values.
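    The sub-mm scale dependence noted above follows from the Young-Laplace relation: the pressure a meniscus can hold scales inversely with the gap dimension. A rough numerical illustration with a water-like working fluid and an assumed contact angle (all values illustrative, not device parameters):

```python
import math

def capillary_seal_pressure(gamma, theta_deg, radius):
    """Young-Laplace estimate of the pressure difference a capillary
    meniscus can sustain across a gap of effective radius r:
    dP = 2*gamma*cos(theta)/r (gamma: surface tension, theta: contact angle)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / radius

# water-like fluid: gamma ~ 0.07 N/m, contact angle 20 degrees
print(capillary_seal_pressure(0.07, 20.0, 10e-6))  # ~13 kPa for a 10-um gap
print(capillary_seal_pressure(0.07, 20.0, 1e-3))   # ~0.13 kPa for a 1-mm gap
```

    A hundredfold reduction in gap size buys a hundredfold increase in seal pressure, which is why the seal is practical only at micromachine scales.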

  13. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA

    PubMed Central

    Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe

    2015-01-01

    Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
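    For reference, the one-way PERMANOVA partition and an omega-squared effect size can be computed directly from a pairwise distance matrix. A simplified sketch using one common definition of ω² (this is not the micropower package API):

```python
import itertools

def permanova_stats(dist, groups):
    """One-way PERMANOVA pseudo-F and omega-squared from a full pairwise
    distance matrix (list of lists) and a group label per sample."""
    n = len(dist)
    labels = sorted(set(groups))
    a = len(labels)
    # total sum of squares from all pairwise distances
    ss_total = sum(dist[i][j] ** 2
                   for i, j in itertools.combinations(range(n), 2)) / n
    # within-group sum of squares, one term per group
    ss_within = 0.0
    for g in labels:
        idx = [i for i in range(n) if groups[i] == g]
        ss_within += sum(dist[i][j] ** 2
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    ss_among = ss_total - ss_within
    ms_within = ss_within / (n - a)
    pseudo_f = (ss_among / (a - 1)) / ms_within
    omega_sq = (ss_among - (a - 1) * ms_within) / (ss_total + ms_within)
    return pseudo_f, omega_sq
```

    In a full analysis the pseudo-F would be compared against its permutation distribution; here only the observed statistics are computed.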

  14. Building development and roads: implications for the distribution of stone curlews across the Brecks.

    PubMed

    Clarke, Ralph T; Liley, Durwyn; Sharp, Joanna M; Green, Rhys E

    2013-01-01

    Substantial new housing and infrastructure development planned within England has the potential to conflict with the nature conservation interests of protected sites. The Breckland area of eastern England (the Brecks) is designated as a Special Protection Area for a number of bird species, including the stone curlew (for which it holds more than 60% of the UK total population). We explore the effect of buildings and roads on the spatial distribution of stone curlew nests across the Brecks in order to inform strategic development plans to avoid adverse effects on such European protected sites. Using data across all years (and subsets of years) over the period 1988-2006 but restricted to habitat areas of arable land with suitable soils, we assessed nest density in relation to the distances to nearest settlements and to major roads. Measures of the local density of nearby buildings, roads and traffic levels were assessed using normal kernel distance-weighting functions. Quasi-Poisson generalised linear mixed models allowing for spatial auto-correlation were fitted. Significantly lower densities of stone curlew nests were found at distances up to 1500 m from settlements, and distances up to 1000 m or more from major (trunk) roads. The best-fitting models involved optimally distance-weighted variables for the extent of nearby buildings and the trunk road traffic levels. The results and predictions from this study of past data suggest that there is cause for concern that future housing development and associated road infrastructure within the Breckland area could have negative impacts on the nesting stone curlew population. Given the strict legal protection afforded to the SPA, the planning and conservation bodies have subsequently agreed precautionary restrictions on building development within the distances identified and used the modelling predictions to agree mitigation measures for proposed trunk road developments.
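    The normal kernel distance-weighting used for the building and traffic covariates can be illustrated as a Gaussian-weighted sum over nearby features: each feature contributes exp(-d²/(2σ²)), so influence decays smoothly with distance from the nest site. A minimal sketch (the bandwidth σ here is a hypothetical value, not one of the fitted bandwidths):

```python
import math

def kernel_weighted_density(site, features, sigma):
    """Normal-kernel distance-weighted count of features around a site:
    sum over features of exp(-d^2 / (2*sigma^2)), d = Euclidean distance."""
    x0, y0 = site
    total = 0.0
    for x, y in features:
        d2 = (x - x0) ** 2 + (y - y0) ** 2
        total += math.exp(-d2 / (2 * sigma ** 2))
    return total

# a building at the site counts fully; one a bandwidth away counts exp(-0.5)
print(kernel_weighted_density((0.0, 0.0), [(0.0, 0.0), (500.0, 0.0)], 500.0))
```

    Choosing σ "optimally" as in the study amounts to refitting the model over a grid of bandwidths and keeping the best-performing one.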

  15. Optical Coherence Tomography Scan Circle Location and Mean Retinal Nerve Fiber Layer Measurement Variability

    PubMed Central

    Gabriele, Michelle L.; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Townsend, Kelly A.; Kagemann, Larry; Wojtkowski, Maciej; Srinivasan, Vivek J.; Fujimoto, James G.; Duker, Jay S.; Schuman, Joel S.

    2009-01-01

    PURPOSE To investigate the effect on optical coherence tomography (OCT) retinal nerve fiber layer (RNFL) thickness measurements of varying the standard 3.4-mm-diameter circle location. METHODS The optic nerve head (ONH) region of 17 eyes of 17 healthy subjects was imaged with high-speed, ultrahigh-resolution OCT (hsUHR-OCT; 501 × 180 axial scans covering a 6 × 6-mm area; scan time, 3.84 seconds) for a comprehensive sampling. This method allows for systematic simulation of the variable circle placement effect. RNFL thickness was measured on this three-dimensional dataset by using a custom-designed software program. RNFL thickness was resampled along a 3.4-mm-diameter circle centered on the ONH, then along 3.4-mm circles shifted horizontally (x-shift), vertically (y-shift) and diagonally up to ±500 µm (at 100-µm intervals). Linear mixed-effects models were used to determine RNFL thickness as a function of the scan circle shift. A model for the distance between the two thickest measurements along the RNFL thickness circular profile (peak distance) was also calculated. RESULTS RNFL thickness tended to decrease with both positive and negative x- and y-shifts. The range of shifts that caused a decrease greater than the variability inherent to the commercial device was greater in both nasal and temporal quadrants than in the superior and inferior ones. The model for peak distance demonstrated that as the scan moves nasally, the RNFL peak distance increases, and as the circle moves temporally, the distance decreases. Vertical shifts had a minimal effect on peak distance. CONCLUSIONS The location of the OCT scan circle affects RNFL thickness measurements. Accurate registration of OCT scans is essential for measurement reproducibility and longitudinal examination (ClinicalTrials.gov number, NCT00286637). PMID:18515577

  16. Correlation between electrical direct current resistivity and plasmonic properties of CMOS compatible titanium nitride thin films.

    PubMed

    Viarbitskaya, S; Arocas, J; Heintz, O; Colas-Des-Francs, G; Rusakov, D; Koch, U; Leuthold, J; Markey, L; Dereux, A; Weeber, J-C

    2018-04-16

    Damping distances of surface plasmon polariton modes sustained by different thin titanium nitride (TiN) films are measured at the telecom wavelength of 1.55 μm. The damping distances are correlated to the electrical direct current resistivity of the films sustaining the surface plasmon modes. It is found that TiN/Air surface plasmon mode damping distances drop non-linearly from 40 to 16 μm as the resistivity of the layers increases from 28 to 130 μΩ·cm, respectively. The relevance of the direct current (dc) electrical resistivity for the characterization of TiN plasmonic properties is investigated in the framework of the Drude model, on the basis of parameters extracted from spectroscopic ellipsometry experiments. By probing a parametric space of realistic values for parameters of the Drude model, we obtain a nearly univocal dependence of the surface plasmon damping distance on the dc resistivity demonstrating the relevance of dc resistivity for the evaluation of the plasmonic performances of TiN at telecom frequencies. Finally, we show that better plasmonic performances are obtained for TiN films featuring a low content of oxygen. For low oxygen content and corresponding low resistivity, we attribute the increase of the surface plasmon damping distances to a lower confinement of the plasmon field into the metal and not to a decrease of the absorption of TiN.
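    The link between dc resistivity and plasmon damping can be sketched in the Drude framework: the collision rate γ follows from ρ = γ/(ε₀ωₚ²), and the damping distance follows from the imaginary part of the SPP wavevector at a metal/air interface. The ε∞ and ωₚ values below are assumed illustrative numbers, not the paper's ellipsometry fits:

```python
import math
import cmath

EPS0 = 8.854e-12   # vacuum permittivity, F/m
C = 2.998e8        # speed of light, m/s

def spp_damping_distance(rho, wavelength=1.55e-6, eps_inf=5.0, omega_p=7.8e15):
    """SPP intensity damping distance L = 1/(2*Im k_spp) at a metal/air
    interface, with the Drude collision rate tied to the dc resistivity
    via rho = gamma / (EPS0 * omega_p**2)."""
    omega = 2 * math.pi * C / wavelength
    gamma = rho * EPS0 * omega_p ** 2                 # Drude collision rate, 1/s
    eps_m = eps_inf - omega_p ** 2 / (omega * (omega + 1j * gamma))
    k_spp = (omega / C) * cmath.sqrt(eps_m / (eps_m + 1))  # air: eps_d = 1
    return 1.0 / (2.0 * abs(k_spp.imag))              # damping distance, m

# higher resistivity -> larger collision rate -> shorter damping distance
print(spp_damping_distance(28e-8))    # rho = 28 uOhm*cm, tens of microns
print(spp_damping_distance(130e-8))   # rho = 130 uOhm*cm, shorter
```

    With these assumed parameters the model reproduces the reported order of magnitude (tens of microns) and the monotonic decrease of damping distance with resistivity.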

  17. Association Between Peri-implant Bone Morphology and Marginal Bone Loss: A Retrospective Study on Implant-Supported Mandibular Overdentures.

    PubMed

    Ding, Qian; Zhang, Lei; Geraets, Wil; Wu, Wuqing; Zhou, Yongsheng; Wismeijer, Daniel; Xie, Qiufei

    The present study aimed to explore the association between marginal bone loss and type of peri-implant bony defect determined using a new peri-implant bony defect classification system. A total of 110 patients with implant-supported mandibular overdentures were involved. Clinical information was collected, including gender, age, smoking habit, and the overdenture attachment system used. Peri-implant bony defect types and marginal distances (ie, distance between the marginal bone level and the top of the implant shoulder) of all sites were identified on panoramic radiographs by a single experienced observer. The associations between marginal distance and peri-implant bony defect type, gender, age, smoking habit, attachment system, and time after implantation were investigated using marginal generalized linear models and regression analysis. A total of 83 participants were included in the final sample with a total of 224 implants involving 3,124 implant sites. The mean observation time was 10.7 years. All peri-implant bony defect types except Type 5 (slit-like) were significantly related to marginal distance in all models (P < .01). Smoking and time after implantation were significantly related to marginal distance while gender, age, and the overdenture attachment system used were not. The peri-implant bony defect type, determined using the new classification system, is associated with the extent of marginal bone loss.

  18. Landscape resistance and habitat combine to provide an optimal model of genetic structure and connectivity at the range margin of a small mammal.

    PubMed

    Marrotte, R R; Gonzalez, A; Millien, V

    2014-08-01

    We evaluated the effect of habitat and landscape characteristics on the population genetic structure of the white-footed mouse. We developed a new approach that uses numerical optimization to define a model that combines site differences and landscape resistance to explain the genetic differentiation between mouse populations inhabiting forest patches in southern Québec. We used ecological distance computed from resistance surfaces with Circuitscape to infer the effect of the landscape matrix on gene flow. We calculated site differences using a site index of habitat characteristics. A model that combined site differences and resistance distances explained a high proportion of the variance in genetic differentiation and outperformed models that used geographical distance alone. Urban and agriculture-related land uses were, respectively, the most and the least resistant landscape features influencing gene flow. Our method detected the effect of rivers and highways as highly resistant linear barriers. The density of grass and shrubs on the ground best explained the variation in the site index of habitat characteristics. Our model indicates that movement of white-footed mouse in this region is constrained along routes of low resistance. Our approach can generate models that may improve predictions of future northward range expansion of this small mammal. © 2014 John Wiley & Sons Ltd.

  19. MIRO Observation of Comet C/2002 T7 (LINEAR) Water Line Spectrum

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Frerking, Margaret; Hofstadter, Mark; Gulkis, Samuel; von Allmen, Paul; Crovisier, Jaques; Biver, Nicholas; Bockelee-Morvan, Dominique

    2011-01-01

    Comet C/2002 T7 (LINEAR) was observed with the Microwave Instrument for Rosetta Orbiter (MIRO) on April 30, 2004, between 5 hr and 16 hr UT. The comet was 0.63 AU from the Sun and 0.68 AU from the MIRO telescope at the time of the observations. The water line involving the two lowest rotational levels at 556.936 GHz is observed at 557.070 GHz due to a large Doppler frequency shift. The detected water line spectrum is interpreted using a non-local thermal equilibrium (non-LTE) molecular excitation and radiative transfer model. Several synthetic spectra are calculated with various coma profiles that are plausible for the comet at the time of observations. The coma profile is modeled with three characteristic parameters: outgassing rate, a constant expansion velocity, and a constant gas temperature. The model calculation shows that for a distant line observation, where contributions from a large volume of the coma are averaged, the combination of the outgassing rate and the gas expansion velocity determines the line shape while the gas temperature has a negligible effect. The comparison between the calculated spectra and the MIRO measured spectrum suggests that the outgassing rate of the comet was about 2.0×10²⁹ molecules per second and its gas expansion velocity about 1.2 km/s at the time of the observations.

  20. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
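    The light-curve representation described above is a linear expansion: a mean function plus a score-weighted sum of principal component functions, all sampled on a common phase grid. A minimal reconstruction sketch (the names are illustrative, not the paper's code):

```python
def fpca_reconstruct(mean_curve, pc_funcs, scores):
    """Rebuild a light curve from FPCA output: mean function plus
    sum_k score_k * PC_k, everything sampled on the same phase grid."""
    out = list(mean_curve)
    for score, pc in zip(scores, pc_funcs):
        for i, value in enumerate(pc):
            out[i] += score * value
    return out

# two PC functions, two scores: curve = mean + 2*PC1 + 3*PC2
print(fpca_reconstruct([1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0]))
```

    The hard part of sparse FPCA is estimating the mean and PC functions from irregularly sampled curves; once estimated, each supernova is summarized by its handful of scores as above.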

  1. Biomechanical models for radial distance determination by the rat vibrissal system.

    PubMed

    Birdwell, J Alexander; Solomon, Joseph H; Thajchayapong, Montakan; Taylor, Michael A; Cheely, Matthew; Towal, R Blythe; Conradt, Jorg; Hartmann, Mitra J Z

    2007-10-01

    Rats use active, rhythmic movements of their whiskers to acquire tactile information about three-dimensional object features. There are no receptors along the length of the whisker; therefore all tactile information must be mechanically transduced back to receptors at the whisker base. This raises the question: how might the rat determine the radial contact position of an object along the whisker? We developed two complementary biomechanical models that show that the rat could determine radial object distance by monitoring the rate of change of moment (or equivalently, the rate of change of curvature) at the whisker base. The first model is used to explore the effects of taper and inherent whisker curvature on whisker deformation and used to predict the shapes of real rat whiskers during deflections at different radial distances. Predicted shapes closely matched experimental measurements. The second model describes the relationship between radial object distance and the rate of change of moment at the base of a tapered, inherently curved whisker. Together, these models can account for recent recordings showing that some trigeminal ganglion (Vg) neurons encode closer radial distances with increased firing rates. The models also suggest that four and only four physical variables at the whisker base (angular position, angular velocity, moment, and rate of change of moment) are needed to describe the dynamic state of a whisker. We interpret these results in the context of our evolving hypothesis that neural responses in Vg can be represented using a state-encoding scheme that includes combinations of these four variables.
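    The moment-based distance cue has a particularly simple form for an untapered, initially straight whisker in the small-deflection limit: rotating the base by θ against a point object at radial distance d deflects the contact point by y ≈ θd, giving base moment M = 3EIθ/d and hence d = 3EI/(dM/dθ). A sketch under those assumptions (the paper's models additionally handle taper and intrinsic curvature; EI below is an assumed flexural rigidity):

```python
def base_moment(theta, d, EI):
    """Moment at the base of an untapered, straight cantilever whisker of
    flexural rigidity EI rotated by small angle theta against a point object
    at radial distance d: y ~ theta*d, F = 3*EI*y/d**3, M = F*d = 3*EI*theta/d."""
    return 3.0 * EI * theta / d

def radial_distance_from_dM(dM_dtheta, EI):
    """Invert the rate of change of moment to recover radial distance."""
    return 3.0 * EI / dM_dtheta

# assumed flexural rigidity (N*m^2) and contact distance (m), illustrative only
EI, d_true = 1e-9, 0.03
dM = (base_moment(0.02, d_true, EI) - base_moment(0.01, d_true, EI)) / 0.01
print(radial_distance_from_dM(dM, EI))  # recovers d_true ~ 0.03 m
```

    The key point survives the simplification: moment alone confounds force and distance, but its rate of change with angle isolates the radial distance.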

  2. A Skill Score of Trajectory Model Evaluation Using Reinitialized Series of Normalized Cumulative Lagrangian Separation

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Weisberg, R. H.

    2017-12-01

    The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as a continental shelf and its adjacent deep ocean. A skill score is proposed based on the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. The new metric correctly indicates the relative performance of the Global HYCOM in simulating the strong currents of the Gulf of Mexico Loop Current and the weaker currents of the West Florida Shelf in the eastern Gulf of Mexico. In contrast, the Lagrangian separation distance alone gives a misleading result. Also, the observed drifter position series can be used to reinitialize the trajectory model and evaluate its performance along the observed trajectory, not just at the drifter end position. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function may be estimated.
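    The skill score can be sketched as follows: an index c equal to the cumulative separation distances divided by the cumulative observed trajectory lengths, mapped to ss = 1 − c/n and clamped at zero, where n is a no-skill tolerance threshold. A minimal sketch of that construction (a simplified reading of the abstract, not the authors' code):

```python
def skill_score(separations, cum_obs_lengths, n=1.0):
    """Dimensionless trajectory skill score: c = sum of model-observation
    separation distances at each evaluation time divided by the sum of the
    observed trajectory lengths up to those times; ss = 1 - c/n, floored at 0."""
    c = sum(separations) / sum(cum_obs_lengths)
    return max(0.0, 1.0 - c / n)

# perfect model: zero separation everywhere -> ss = 1
print(skill_score([0.0, 0.0, 0.0], [10.0, 20.0, 30.0]))
# separations as large as the observed trajectory itself -> no skill
print(skill_score([10.0, 20.0, 30.0], [10.0, 20.0, 30.0]))
```

    Because the separations are normalized by trajectory length, a 10 km error against a 100 km drift scores the same as a 1 km error against a 10 km drift, which is what makes the metric comparable across weak and strong current regimes.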

  3. Dependency-based long short term memory network for drug-drug interaction extraction.

    PubMed

    Wang, Wei; Yang, Xi; Yang, Canqun; Guo, Xiaowei; Zhang, Xiang; Wu, Chengkun

    2017-12-28

    Drug-drug interaction extraction (DDI) needs assistance from automated methods to cope with the explosive growth of biomedical texts. In recent years, deep neural network based models have been developed to address such needs and they have made significant progress in relation identification. We propose a dependency-based deep neural network model for DDI extraction. By introducing the dependency-based technique to a bi-directional long short term memory network (Bi-LSTM), we build three channels, namely, Linear channel, DFS channel and BFS channel. All of these channels are constructed with three network layers, including embedding layer, LSTM layer and max pooling layer from bottom up. In the embedding layer, we extract two types of features: one is a distance-based feature and the other is a dependency-based feature. In the LSTM layer, a Bi-LSTM is employed in each channel to better capture relation information. Then max pooling is used to extract the most salient features from the entire encoded sequence. At last, we concatenate the outputs of all channels and feed the result to a softmax layer for relation identification. To the best of our knowledge, our model achieves new state-of-the-art performance with the F-score of 72.0% on the DDIExtraction 2013 corpus. Moreover, our approach obtains a much higher Recall value compared to the existing methods. The dependency-based Bi-LSTM model can learn effective relation information with less feature engineering in the task of DDI extraction. Besides, the experimental results show that our model excels at balancing the Precision and Recall values.
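    The three channel orderings described above (linear, DFS, BFS) can be illustrated on a toy dependency tree; the embedding lookup and Bi-LSTM layers are omitted, and the `children` mapping is an assumed input format, not the paper's API:

```python
from collections import deque

def channel_sequences(tokens, children, root):
    """Token-index orderings for the three channels: the raw linear order,
    plus depth-first and breadth-first traversals of the sentence's
    dependency tree (children maps a head index to its child indices)."""
    linear = list(range(len(tokens)))
    dfs, stack = [], [root]
    while stack:
        node = stack.pop()
        dfs.append(node)
        stack.extend(reversed(children.get(node, [])))  # keep left-to-right order
    bfs, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        bfs.append(node)
        queue.extend(children.get(node, []))
    return linear, dfs, bfs
```

    Each ordering would then be mapped to embeddings (word vectors plus the distance- and dependency-based features) and fed to that channel's Bi-LSTM and max-pooling layers.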

  4. Re-evaluating causal modeling with mantel tests in landscape genetics

    Treesearch

    Samuel A. Cushman; Tzeidle N. Wasserman; Erin L. Landguth; Andrew J. Shirk

    2013-01-01

    The predominant analytical approach to associate landscape patterns with gene flow processes is based on the association of cost distances with genetic distances between individuals. Mantel and partial Mantel tests have been the dominant statistical tools used to correlate cost distances and genetic distances in landscape genetics. However, the inherent high...

  5. Distance Learners' Perspective on User-Friendly Instructional Materials at the University of Zambia

    ERIC Educational Resources Information Center

    Simui, F.; Thompson, L. C.; Mundende, K.; Mwewa, G.; Kakana, F.; Chishiba, A.; Namangala, B.

    2017-01-01

    This case study focuses on print-based instructional materials available to distance education learners at the University of Zambia. Using the Visual Paradigm Software, we model distance education learners' voices into sociograms to make a contribution to the ongoing discourse on quality distance learning in poorly resourced communities. Emerging…

  6. Earth fissures and localized differential subsidence

    USGS Publications Warehouse

    Holzer, Thomas L.; Pampeyan, Earl H.

    1981-01-01

    Long linear tension cracks associated with declining groundwater levels at four sites in subsiding areas in south-central Arizona, Fremont Valley, California, and Las Vegas Valley, Nevada, occur near points of maximum convex-upward curvature in subsidence profiles oriented perpendicular to the cracks. Profiles are based on repeated precise vertical control surveys of lines of closely spaced bench marks. Association of these fissures with zones of localized differential subsidence indicates that linear earth fissures are caused by horizontal tensile strains probably resulting from localized differential compaction. Horizontal tensile strains across the fissures at the point of maximum convex-upward curvature, ranging from approximately 100 to 700 microstrains (0.01 to 0.07% per year), were indicated based on measurements with a tape or electronic distance meter.

  7. Controls on the variability of net infiltration to desert sandstone

    USGS Publications Warehouse

    Heilweil, Victor M.; McKinney, Tim S.; Zhdanov, Michael S.; Watt, Dennis E.

    2007-01-01

    As populations grow in arid climates and desert bedrock aquifers are increasingly targeted for future development, understanding and quantifying the spatial variability of net infiltration becomes critically important for accurately inventorying water resources and mapping contamination vulnerability. This paper presents a conceptual model of net infiltration to desert sandstone and then develops an empirical equation for its spatial quantification at the watershed scale using linear least squares inversion methods for evaluating controlling parameters (independent variables) based on estimated net infiltration rates (dependent variables). Net infiltration rates used for this regression analysis were calculated from environmental tracers in boreholes and more than 3000 linear meters of vadose zone excavations in an upland basin in southwestern Utah underlain by Navajo sandstone. Soil coarseness, distance to upgradient outcrop, and topographic slope were shown to be the primary physical parameters controlling the spatial variability of net infiltration. Although the method should be transferable to other desert sandstone settings for determining the relative spatial distribution of net infiltration, further study is needed to evaluate the effects of other potential parameters such as slope aspect, outcrop parameters, and climate on absolute net infiltration rates.
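    The inversion step can be sketched with a linear least-squares fit; all variable values, units, and the "true" coefficients below are invented for the sketch, and only the choice of predictors (soil coarseness, distance to upgradient outcrop, topographic slope) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical controlling parameters (independent variables)
soil_coarseness = rng.uniform(0, 1, n)       # dimensionless index, invented
dist_to_outcrop = rng.uniform(0, 500, n)     # metres, invented
slope = rng.uniform(0, 30, n)                # degrees, invented

# Synthetic net-infiltration rates (dependent variable): an assumed linear
# relationship plus noise, for demonstration only
infiltration = (2.0 * soil_coarseness - 0.004 * dist_to_outcrop
                - 0.05 * slope + 3.0 + rng.normal(0, 0.1, n))

# Linear least-squares inversion for the empirical coefficients
G = np.column_stack([soil_coarseness, dist_to_outcrop, slope, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(G, infiltration, rcond=None)
```

With enough sites, the recovered coefficients approach the assumed ones, illustrating how relative control strengths can be estimated from observed rates.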

  8. Detailed solvent, structural, quantum chemical study and antimicrobial activity of isatin Schiff base

    NASA Astrophysics Data System (ADS)

    Brkić, Dominik R.; Božić, Aleksandra R.; Marinković, Aleksandar D.; Milčić, Miloš K.; Prlainović, Nevena Ž.; Assaleh, Fathi H.; Cvijetić, Ilija N.; Nikolić, Jasmina B.; Drmanić, Saša Ž.

    2018-05-01

    The ratios of E/Z isomers of sixteen synthesized 1,3-dihydro-3-(substituted phenylimino)-2H-indol-2-ones were studied using experimental and theoretical methodology. Linear solvation energy relationships (LSER) rationalized the influence of solvent-solute interactions on the UV-Vis absorption maxima shifts (νmax) of both geometrical isomers using the Kamlet-Taft equation. Linear free energy relationships (LFER) in the form of a single substituent parameter (SSP) equation were used to analyze the substituent effect on pKa, NMR chemical shifts, and νmax values. Electron charge density was obtained using the Quantum Theory of Atoms in Molecules, i.e. Bader's analysis. The substituent and solvent effects on intramolecular charge transfer (ICT) were interpreted with the aid of the time-dependent density functional theory (TD-DFT) method. Additionally, the TD-DFT calculations quantified the efficiency of ICT through the calculated charge-transfer distance (DCT) and amount of transferred charge (QCT). The antimicrobial activity was evaluated using the broth microdilution method, and 3D QSAR modeling was used to demonstrate the influence of substituent effects as well as molecular geometry on antimicrobial activity.

  9. Efficient nonlinear equalizer for intra-channel nonlinearity compensation for next generation agile and dynamically reconfigurable optical networks.

    PubMed

    Malekiha, Mahdi; Tselniker, Igor; Plant, David V

    2016-02-22

    In this work, we propose and experimentally demonstrate a novel low-complexity technique for fiber nonlinearity compensation. We achieved a transmission distance of 2818 km for a 32-GBaud dual-polarization 16QAM signal. For efficient implementation, and to facilitate integration with conventional digital signal processing (DSP) approaches, we compensate fiber nonlinearities independently, after linear impairment equalization. This algorithm can therefore be easily implemented in currently deployed transmission systems after the linear DSP stage. The proposed equalizer operates at one sample per symbol and requires only one computation step. The structure of the algorithm is based on a first-order perturbation model with quantized perturbation coefficients, and it does not require any prior calculation or detailed knowledge of the transmission system. We identified common symmetries between perturbation coefficients to avoid duplicate and unnecessary operations. In addition, we use only a few adaptive filter coefficients by grouping multiple nonlinear terms and dedicating one adaptive nonlinear filter coefficient to each group. Finally, the complexity of the proposed algorithm is more than an order of magnitude lower than that of previously studied nonlinear equalizers.

  10. A Model for the Breakup of Comet Linear (C/1999 S4)

    NASA Technical Reports Server (NTRS)

    Samarasinha, Nalin H.

    2001-01-01

    We propose a mechanism based on the rubble-pile hypothesis of the cometary nucleus (Weissman 1986) to explain the catastrophic breakup of comet LINEAR (C/1999 S4) observed during July-August 2000. We suggest that a solid nucleus made up of 10-100 m "cometesimals" (Weidenschilling 1997) contains a network of interconnected voids in the inter-cometesimal regions. Super-volatile gases (i.e., species more volatile than water) are produced in these voids by the thermal wave propagating through the nucleus and the associated phase transitions of water ice. The network of voids provides an efficient pathway for rapid propagation of these gases within the nucleus, resulting in gas-pressure-induced stresses over a wide region of the nucleus. This provides a mechanism for catastrophic breakups of small cometary nuclei such as comet LINEAR (C/1999 S4) as well as for some observed cometary outbursts, including those that occur at large heliocentric distances (e.g., West et al. 1991). We emphasize the importance of techniques such as radar reflection tomography and radiowave transmission tomography (e.g., Kofman et al. 1998) aboard cometary missions to determine the three-dimensional structure of the nucleus, in particular the extent of large-scale voids.

  11. Two wrongs make a right: linear increase of accuracy of visually-guided manual pointing, reaching, and height-matching with increase in hand-to-body distance.

    PubMed

    Li, Wenxun; Matin, Leonard

    2005-03-01

    Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch, as previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18-degree range. In a fourth experiment, the visual inducing stimulus responsible for the perceptual errors was shown to induce separately measured, distance-dependent errors in the manual setting of the arm to feel horizontal. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height-matching to the visual target: the near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane, their large difference is responsible for the inaccuracies of the midfrontal-plane point.
The results are inconsistent with the widely-held but controversial theory that visual spatial information employed for perception and action are dissociated and different with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.

  12. Influences of roads and development on bird communities in protected Chihuahuan Desert landscapes

    USGS Publications Warehouse

    Gutzwiller, K.J.; Barrow, W.C.

    2003-01-01

    Our objective was to improve knowledge about effects of broad-scale road and development variables on bird communities in protected desert landscapes. Bird species richness and the relative abundance or probability of occurrence of many species were significantly associated with total length of roads within each of two spatial extents (1- and 2-km radii), distance to the nearest road, distance to the nearest development, or the two-way interactions of these variables. Regression models reflected non-linear relations, interaction effects, spatial-extent effects, and interannual variation. Road and development effects warrant special attention in protected areas because such places may be important sources of indigenous bird communities in a region.

  13. Quantifying bushfire penetration into urban areas in Australia

    NASA Astrophysics Data System (ADS)

    Chen, Keping; McAneney, John

    2004-06-01

    The extent and trajectory of bushfire penetration at the bushland-urban interface are quantified using data from major historical fires in Australia. We find that the maximum distance at which homes are destroyed is typically less than 700 m. The probability of home destruction emerges as a simple linear and decreasing function of distance from the bushland-urban boundary but with a variable slope that presumably depends upon fire regime and human intervention. The collective data suggest that the probability of home destruction at the forest edge is around 60%. Spatial patterns of destroyed homes display significant neighbourhood clustering. Our results provide revealing spatial evidence for estimating fire risk to properties and suggest an ember-attack model.
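    The reported relationship can be written as a simple sketch: probability falls linearly from roughly 60% at the forest edge to zero at the maximum observed destruction distance. The linear form and both endpoints come from the abstract; the exact slope varies with fire regime and human intervention, so the defaults below are only illustrative:

```python
def destruction_probability(distance_m, p_edge=0.60, max_dist_m=700.0):
    """Linear, decreasing probability of home destruction with distance
    from the bushland-urban boundary. Assumes the probability falls
    linearly from p_edge at the boundary to zero at max_dist_m.
    """
    if distance_m >= max_dist_m:
        return 0.0
    return p_edge * (1.0 - distance_m / max_dist_m)
```

For example, a home 350 m from the boundary would face roughly half the edge risk under this assumed slope.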

  14. An analytical model of capped turbulent oscillatory bottom boundary layers

    NASA Astrophysics Data System (ADS)

    Shimizu, Kenji

    2010-03-01

    An analytical model of capped turbulent oscillatory bottom boundary layers (BBLs) is proposed using eddy viscosity of a quadratic form. The common definition of friction velocity based on maximum bottom shear stress is found unsatisfactory for BBLs under rotating flows, and a possible extension based on turbulent kinetic energy balance is proposed. The model solutions show that the flow may slip at the top of the boundary layer due to capping by the water surface or stratification, reducing the bottom shear stress, and that the Earth's rotation induces current and bottom shear stress components perpendicular to the interior flow with a phase lag (or lead). Comparisons with field and numerical experiments indicate that the model predicts the essential characteristics of the velocity profiles, although the agreement is rather qualitative due to assumptions of quadratic eddy viscosity with time-independent friction velocity and a well-mixed boundary layer. On the other hand, the predicted linear friction coefficients, phase lead, and veering angle at the bottom agreed with available data with an error of 3%-10%, 5°-10°, and 5°-10°, respectively. As an application of the model, the friction coefficients are used to calculate e-folding decay distances of progressive internal waves with a semidiurnal frequency.

  15. The Impact of Biomass Feedstock Supply Variability on the Delivered Price to a Biorefinery in the Peace River Region of Alberta, Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephen, Jamie; Sokhansanj, Shahabaddine; Bi, X.T.

    2010-01-01

    Agricultural residue feedstock availability in a given region can vary significantly over the 20-25 year lifetime of a biorefinery. Since the delivered price of biomass feedstock to a biorefinery is related to the distance travelled and equipment optimization, and transportation distance increases as productivity decreases, productivity is a primary determinant of feedstock price. Using the Integrated Biomass Supply Analysis and Logistics (IBSAL) modeling environment and a standard round-bale harvest and delivery scenario, harvest and delivery prices were modelled for minimum, average, and maximum yields at four potential biorefinery sites in the Peace River region of Alberta, Canada. Biorefinery capacities ranged from 50,000 to 500,000 tonnes per year. Delivery cost is a linear function of transportation distance and can be combined with a polynomial harvest function to create a generalized delivered-cost function for agricultural residues. The range in delivered cost is substantial and is an important consideration for the operating costs of a biorefinery.

  16. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. The identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It allows Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified with experimental data replayed in simulation.

  17. Portfolio optimization by using linear programing models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. To solve the portfolio optimization problem, it is formulated as a linear programming model, and the optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization by the genetic algorithm approach produces a more efficient portfolio than optimization by a linear programming algorithm alone. Genetic algorithms can therefore be considered an alternative for determining the optimal investment portfolio, particularly with linear programming models.
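    A toy sketch of this kind of approach, assuming an absolute-deviation risk measure with a penalty applied when the investor's risk tolerance is exceeded; the return data, population size, mutation scale, and penalty weight are all invented for illustration and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.01, 0.05, size=(60, 4))   # hypothetical return history, 4 stocks
mu = R.mean(axis=0)                        # expected returns
risk_tol = 0.03                            # assumed tolerance on absolute-deviation risk

def fitness(w):
    port = R @ w
    risk = np.mean(np.abs(port - port.mean()))         # absolute-deviation portfolio risk
    return mu @ w - 100.0 * max(0.0, risk - risk_tol)  # penalized expected return

def normalized(w):
    w = np.abs(w)                          # long-only weights
    return w / w.sum(axis=-1, keepdims=True)

# Genetic algorithm: selection of the best half, then mutated clones
pop = normalized(rng.normal(size=(40, 4)))
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]
    children = parents[rng.integers(0, 20, size=20)]
    children = normalized(children + rng.normal(0, 0.05, (20, 4)))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]       # best feasible-leaning portfolio
```

The penalty term steers the search toward portfolios whose risk stays within the tolerance while maximizing expected return.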

  18. Modeling Laterality of the Globus Pallidus Internus in Patients With Parkinson's Disease.

    PubMed

    Sharim, Justin; Yazdi, Daniel; Baohan, Amy; Behnke, Eric; Pouratian, Nader

    2017-04-01

    Neurosurgical interventions such as deep brain stimulation surgery of the globus pallidus internus (GPi) play an important role in the treatment of medically refractory Parkinson's disease (PD) and require high targeting accuracy. Variability in the laterality of the GPi across patients with PD has not been well characterized. The aim of this report is to identify factors that may contribute to differences in the position of the motor region of the GPi. The charts and operative reports of 101 PD patients who underwent deep brain stimulation surgery (70 males, aged 11-78 years), representing 201 GPi, were retrospectively reviewed. Data extracted for each subject included age, gender, anterior-posterior commissure (AC-PC) distance, and third-ventricular width. Multiple linear regression, stepwise regression, and relative-importance-of-regressors analyses were performed to assess the predictive ability of these variables on GPi laterality. Multiple linear regression of target laterality against third-ventricular width, gender, AC-PC distance, and age yielded significant normalized regression coefficients of 0.333 (p < 0.0001), 0.206 (p = 0.00219), 0.168 (p = 0.0119), and 0.159 (p = 0.0136), respectively. Third-ventricular width, gender, AC-PC distance, and age account for 44.06% (21.38-65.69%, 95% CI), 20.82% (10.51-35.88%), 21.46% (8.28-37.05%), and 13.66% (2.62-28.64%) of the R^2 value, respectively. Effect-size calculations showed a change in GPi laterality of 0.19 mm per mm of ventricular width, 0.11 mm per mm of AC-PC distance, 0.017 mm per year of age, and a 0.54 mm increase for male gender. This variability highlights the limitations of indirect targeting alone and argues for the continued use of MRI as well as intraoperative physiological testing to account for factors that contribute to patient-specific variability in GPi localization. © 2016 International Neuromodulation Society.
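    For illustration, the reported effect sizes can be folded into a small helper that returns the expected laterality offset relative to a reference patient. The function name and reference values are invented, and the regression intercept is not reported in the abstract, so only relative offsets are meaningful here:

```python
def gpi_laterality_offset(vent_width_mm, acpc_mm, age_yr, male,
                          ref_vent=5.0, ref_acpc=25.0, ref_age=60.0):
    """Offset (mm) in GPi laterality relative to a hypothetical reference
    patient, using the effect sizes reported above: 0.19 mm per mm of
    third-ventricular width, 0.11 mm per mm of AC-PC distance, 0.017 mm
    per year of age, and +0.54 mm for male gender.
    """
    return (0.19 * (vent_width_mm - ref_vent)
            + 0.11 * (acpc_mm - ref_acpc)
            + 0.017 * (age_yr - ref_age)
            + (0.54 if male else 0.0))
```

For example, a male patient whose third ventricle is 1 mm wider than the reference would be expected to have a GPi roughly 0.73 mm more lateral.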

  19. Nasolabial Morphology Following Nasoalveolar Molding in Infants With Unilateral Cleft Lip and Palate.

    PubMed

    Nur Yilmaz, Rahime Burcu; Germeç Çakan, Derya

    2018-06-01

    The aim of the present study is to evaluate the effects of nasoalveolar molding (NAM) therapy on nasolabial morphology three dimensionally, and compare the nasolabial linear and surface distance measurements in infants with unilateral cleft lip and palate. Facial plaster casts of 42 infants with unilateral cleft lip and palate taken at the onset (pre-NAM) and finishing stage (post-NAM) of NAM were scanned with 3dMDface stereophotogrammetry system (3dMD, Atlanta, GA). Nineteen nasolabial linear and surface distance measurements were performed on three-dimensional images. In addition to standard descriptive statistical calculations (means and SDs), pre- and post-NAM measurements were evaluated by paired t test. All measurements except lip gap, nostril floor width, and nostril diameter increased between pre-NAM and post-NAM. Nostril and lip height increased significantly on the cleft side (P < 0.05). No differences were present between linear and surface distance measurements except for nasal width measurement. Nasal and lip symmetry improved with NAM. The use of surface distance measurements may be advised particularly for continuous and curved anatomic structures in which circumference differences are expected.

  20. Thin silicon layer SOI power device with linearly-distanced fixed charge islands

    NASA Astrophysics Data System (ADS)

    Yuan, Zuo; Haiou, Li; Jianghui, Zhai; Ning, Tang; Shuxiang, Song; Qi, Li

    2015-05-01

    A new high-voltage LDMOS with linearly-distanced fixed charge islands (LFI LDMOS) is proposed. Numerous linearly-distanced fixed charge islands are introduced by implanting Cs or I ions into the buried oxide layer; dynamic holes are attracted and accumulated, which is crucial to enhancing the electric field of the buried oxide and the vertical breakdown voltage. The surface electric field is improved by increasing the distance between adjacent fixed charge islands from source to drain, which permits a higher drift-region concentration and a lower on-resistance. The numerical results indicate that a breakdown voltage of 500 V with Ld = 45 μm is obtained in the proposed device, in comparison to 209 V for a conventional LDMOS, while maintaining low on-resistance. Project supported by the Guangxi Natural Science Foundation of China (No. 2013GXNSFAA019335), the Guangxi Department of Education Project (No. 201202ZD041), the China Postdoctoral Science Foundation Project (Nos. 2012M521127, 2013T60566), and the National Natural Science Foundation of China (Nos. 61361011, 61274077, 61464003).

  1. Frequency-based redshift for cosmological observation and Hubble diagram from the 4-D spherical model in comparison with observed supernovae

    NASA Astrophysics Data System (ADS)

    Nagao, Shigeto

    2017-08-01

    Based on the formerly reported 4-D spherical model of the universe, factors in Hubble diagrams are discussed. The observed redshift is not an elongation of the wavelength relative to that emitted by the source, but relative to the spectral wavelength of a present-day atom of the same element; it equals the redshift based on the shift of frequency from the time of emission. We demonstrate that the K-correction corresponds to conversion of the light-propagated distance (luminosity distance) to the proper distance at present (present distance). Comparison of the graph of the present distance times 1 + z versus the frequency-based redshift with the reported Hubble diagrams from the Supernova Cosmology Project, which were time-dilated by 1 + z and K-corrected, showed an excellent fit for the Present Time (the radius of the 4-D sphere) being ca. 0.7 of its maximum.

  2. Measurement and reactive burn modeling of the shock to detonation transition for the HMX based explosive LX-14

    NASA Astrophysics Data System (ADS)

    Jones, J. D.; Ma, Xia; Clements, B. E.; Gibson, L. L.; Gustavsen, R. L.

    2017-06-01

    Gas-gun driven plate-impact techniques were used to study the shock to detonation transition in LX-14 (95.5 weight % HMX, 4.5 weight % estane binder). The transition was recorded using embedded electromagnetic particle velocity gauges. Initial shock pressures, P, ranged from 2.5 to 8 GPa and the resulting distances to detonation, xD, were in the range 1.9 to 14 mm. Numerical simulations using the SURF reactive burn scheme coupled with a linear US -up / Mie-Grueneisen equation of state for the reactant and a JWL equation of state for the products, match the experimental data well. Comparison of simulation with experiment as well as the ``best fit'' parameter set for the simulations is presented.

  3. Tritium ((3)H) as a tracer for monitoring the dispersion of conservative radionuclides discharged by the Angra dos Reis nuclear power plants in the Piraquara de Fora Bay, Brazil.

    PubMed

    de Carvalho Gomes, Franciane; Godoy, José Marcus; de Carvalho, Zenildo Lara; de Souza, Elder Magalhães; Rodrigues Silva, José Ivan; Tadeu Lopes, Ricardo

    2014-10-01

    Presently, two nuclear power plants operate in Brazil. Both are located at Itaorna beach, Angra dos Reis, approximately 133 km from Rio de Janeiro city. The reactor cooling circuits require the input of seawater, which is later discharged through a pipeline into the adjacent Piraquara de Fora Cove. The radioactive effluents undergo ion-exchange treatment prior to their release in batches, causing the enrichment of (3)H relative to other radionuclides in the discharged waters. Under steady-state conditions, the (3)H gradient in the Piraquara de Fora waters can be used to determine the dependence of the dilution factor on the distance from the discharge point. The present work describes experiments carried out at the reactor site during batch release episodes, including time-series sampling at the discharge point and surface seawater sampling every 250 m out to a distance of 1250 m. After double distillation, the (3)H concentration was measured by liquid scintillation counting using a Quantulus liquid scintillation spectrometer. The results showed a linear relationship between the (3)H concentration and distance from the discharge point. At 1250 m from the discharge point, a dilution index of 1:15 was measured, which fits the value expected from modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
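    The linear concentration-distance relationship and the 1:15 dilution index at 1250 m can be illustrated numerically. The concentration values below are invented; only the sampling distances and the dilution index come from the abstract:

```python
import numpy as np

# Hypothetical (3)H series consistent with a linear decline and a 1:15
# dilution at 1250 m, i.e. the concentration there is 1/15 of the value
# at the discharge point.
distance_m = np.array([0, 250, 500, 750, 1000, 1250], dtype=float)
c0 = 150.0  # Bq/L at the discharge point (invented value)
concentration = c0 * (1.0 - (1.0 - 1.0 / 15.0) * distance_m / 1250.0)

# Linear fit of concentration vs distance, then the dilution index at 1250 m
slope, intercept = np.polyfit(distance_m, concentration, 1)
dilution_at_1250 = intercept / (slope * 1250.0 + intercept)
```

The fitted intercept recovers the discharge-point concentration, and the ratio of intercept to fitted concentration at 1250 m recovers the 1:15 dilution.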

  4. Path Complexity in Virtual Water Maze Navigation: Differential Associations with Age, Sex, and Regional Brain Volume.

    PubMed

    Daugherty, Ana M; Yuan, Peng; Dahle, Cheryl L; Bender, Andrew R; Yang, Yiqin; Raz, Naftali

    2015-09-01

    Studies of human navigation in virtual maze environments have consistently linked advanced age with greater distance traveled between the start and the goal and longer duration of the search. Observations of search path geometry suggest that routes taken by older adults may be unnecessarily complex and that excessive path complexity may be an indicator of cognitive difficulties experienced by older navigators. In a sample of healthy adults, we quantify search path complexity in a virtual Morris water maze with a novel method based on fractal dimensionality. In a two-level hierarchical linear model, we estimated improvement in navigation performance across trials by a decline in route length, shortening of search time, and reduction in fractal dimensionality of the path. While replicating commonly reported age and sex differences in time and distance indices, a reduction in fractal dimension of the path accounted for improvement across trials, independent of age or sex. The volumes of brain regions associated with the establishment of cognitive maps (parahippocampal gyrus and hippocampus) were related to path dimensionality, but not to the total distance and time. Thus, fractal dimensionality of a navigational path may present a useful complementary method of quantifying performance in navigation. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
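    A generic box-counting estimate of a path's fractal dimension is sketched below. This is one standard estimator, not necessarily the authors' exact method; the function name, scale count, and grid construction are assumptions:

```python
import numpy as np

def path_fractal_dimension(xy, n_scales=6):
    """Estimate the box-counting fractal dimension of a 2-D path: count
    occupied grid cells at several scales and fit log(count) against
    log(1/cell size). A straight path gives ~1; more convoluted search
    paths give higher values.
    """
    xy = np.asarray(xy, dtype=float)
    xy = xy - xy.min(axis=0)                      # shift into positive quadrant
    span = float(xy.max()) or 1.0
    sizes = span / (2 ** np.arange(1, n_scales + 1))
    counts = []
    for s in sizes:
        cells = np.unique(np.floor(xy / s).astype(int), axis=0)
        counts.append(len(cells))                 # occupied boxes at this scale
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```

Applied to recorded search trajectories, this yields a per-trial complexity index that can be entered into a hierarchical model alongside time and distance.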

  5. An Experimental Study on the Iso-Content-Based Angle Similarity Measure.

    ERIC Educational Resources Information Center

    Zhang, Jin; Rasmussen, Edie M.

    2002-01-01

    Retrieval performance of the iso-content-based angle similarity measure within the angle, distance, conjunction, disjunction, and ellipse retrieval models is compared with retrieval performance of the distance similarity measure and the angle similarity measure. Results show the iso-content-based angle similarity measure achieves satisfactory…

  6. Determinants of 6-minute walk distance in patients with idiopathic pulmonary fibrosis undergoing lung transplant evaluation.

    PubMed

    Porteous, Mary K; Rivera-Lebron, Belinda N; Kreider, Maryl; Lee, James; Kawut, Steven M

    2016-03-01

    Little is known about the physiologic determinants of 6-minute walk distance in idiopathic pulmonary fibrosis. We investigated the demographic, pulmonary function, echocardiographic, and hemodynamic determinants of 6-minute walk distance in patients with idiopathic pulmonary fibrosis evaluated for lung transplantation. We performed a cross-sectional analysis of 130 patients with idiopathic pulmonary fibrosis who completed a lung transplantation evaluation at the Hospital of the University of Pennsylvania between 2005 and 2010. Multivariable linear regression analysis was used to generate an explanatory model for 6-minute walk distance. After adjustment for age, sex, race, height, and weight, the presence of right ventricular dilation was associated with a decrease of 50.9 m (95% confidence interval [CI], 8.4-93.3) in 6-minute walk distance ([Formula: see text]). For each 200-mL reduction in forced vital capacity, the walk distance decreased by 15.0 m (95% CI, 9.0-21.1; [Formula: see text]). For every increase of 1 Wood unit in pulmonary vascular resistance, the walk distance decreased by 17.3 m (95% CI, 5.1-29.5; [Formula: see text]). Six-minute walk distance in idiopathic pulmonary fibrosis depends in part on circulatory impairment and the degree of restrictive lung disease. Future trials that target right ventricular morphology, pulmonary vascular resistance, and forced vital capacity may potentially improve exercise capacity in patients with idiopathic pulmonary fibrosis.
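    The adjusted effect sizes reported above can be combined into a small helper that returns the expected change in walk distance relative to a reference patient. The function name and reference choices are invented; the full model also adjusts for age, sex, race, height, and weight, which are omitted here:

```python
def six_minute_walk_adjustment(rv_dilation, fvc_deficit_ml, pvr_wood):
    """Expected change (m) in 6-minute walk distance relative to a patient
    with no RV dilation, no FVC deficit, and zero excess PVR, using the
    reported effect sizes: -50.9 m for right-ventricular dilation,
    -15.0 m per 200 mL reduction in FVC, and -17.3 m per Wood unit of
    pulmonary vascular resistance.
    """
    return (-50.9 * (1 if rv_dilation else 0)
            - 15.0 * (fvc_deficit_ml / 200.0)
            - 17.3 * pvr_wood)
```

For example, RV dilation plus a 400 mL FVC deficit and 1 extra Wood unit of PVR would predict a walk distance roughly 98 m shorter.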

  7. Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose

    NASA Astrophysics Data System (ADS)

    Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Classifying human faces by race and gender is a vital process in face recognition: it contributes to database indexing and eases 3D synthesis of the human face. Identifying race and gender from intrinsic factors is problematic and better suited to a nonlinear estimation model. In this paper, we aim to estimate race and gender across varied head poses. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the facial landmarks, and rotate them to the frontal pose. After the geometric distances are calculated, all distance values are normalized. Implementation is carried out using a Neural Network Model and a Fuzzy Logic Model, combined in an Adaptive Neuro-Fuzzy Model. The experimental results showed that optimizing the fuzzy membership model gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.

  8. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    NASA Astrophysics Data System (ADS)

    Bruns, Tim M.; Wagenaar, Joost B.; Bauman, Matthew J.; Gaunt, Robert A.; Weber, Douglas J.

    2013-04-01

    Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability.

  9. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    PubMed Central

    Bruns, Tim M; Wagenaar, Joost B; Bauman, Matthew J; Gaunt, Robert A; Weber, Douglas J

    2013-01-01

    Objective Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability. PMID:23503062

  10. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented for satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The radial orbit error is modelled using linearized Lagrangian perturbation theory. Secular and second-order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed for a SEASAT orbit, using the differences of two geopotential models as potential coefficient errors. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.
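    The "minimum variance solution with prior information" is, in essence, Bayesian least squares. A toy sketch under assumed dimensions and covariances (not the paper's actual observation model of sea surface heights and crossovers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear observation model l = A x + noise, standing in for observables
# (sea surface heights, crossover discrepancies) as linear functions of the
# parameters; dimensions here are illustrative, not the paper's setup.
n_obs, n_par = 50, 5
A = rng.normal(size=(n_obs, n_par))
x_true = rng.normal(size=n_par)
l = A @ x_true + rng.normal(0.0, 0.1, size=n_obs)

P = np.eye(n_obs) / 0.1**2          # observation weight matrix (1/sigma^2)
C0_inv = np.eye(n_par) / 1.0**2     # inverse prior covariance of parameters
x0 = np.zeros(n_par)                # prior parameter estimate

# Minimum variance estimate with prior information:
# x_hat = (A^T P A + C0^-1)^-1 (A^T P l + C0^-1 x0)
N = A.T @ P @ A + C0_inv
x_hat = np.linalg.solve(N, A.T @ P @ l + C0_inv @ x0)
```

    The prior term regularizes the normal matrix, which is what lets weakly observed parameters (here, long-wavelength gravity corrections) stay bounded.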

  11. Three-dimensional modeling of flexible pavements : research implementation plan.

    DOT National Transportation Integrated Search

    2006-02-14

    Many of the asphalt pavement analysis programs are based on linear elastic models. A linear viscoelastic model would be superior to linear elastic models for analyzing the response of asphalt concrete pavements to loads. There is a need to devel...

  12. Virtual Universities: Current Models and Future Trends.

    ERIC Educational Resources Information Center

    Guri-Rosenblit, Sarah

    2001-01-01

    Describes current models of distance education (single-mode distance teaching universities, dual- and mixed-mode universities, extension services, consortia-type ventures, and new technology-based universities), including their merits and problems. Discusses future trends in potential student constituencies, faculty roles, forms of knowledge…

  13. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
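    The contrast this record draws can be sketched compactly: kernel ridge regression with a linear (GBLUP-like) kernel versus a non-linear RBF kernel on simulated genotypes. The data sizes, bandwidth, and regularization are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy genotype matrix (0/1/2 allele counts) and phenotypes for one line;
# sizes are illustrative, not the study's 1004-1023 training animals.
n_train, n_val, n_snp = 150, 30, 100
G = rng.integers(0, 3, size=(n_train + n_val, n_snp)).astype(float)
beta = rng.normal(0.0, 0.1, size=n_snp)
y = G @ beta + rng.normal(0.0, 1.0, size=n_train + n_val)

Gt, Gv = G[:n_train], G[n_train:]
yt, yv = y[:n_train], y[n_train:]

def kernel_ridge_predict(K_train, y_train, K_val, lam=1.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, predict K_val @ alpha."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y_train)), y_train)
    return K_val @ alpha

# Linear (GBLUP-like) kernel vs non-linear RBF kernel on the same data
K_lin, Kv_lin = Gt @ Gt.T, Gv @ Gt.T
sq_tt = ((Gt[:, None, :] - Gt[None, :, :]) ** 2).sum(-1)
sq_vt = ((Gv[:, None, :] - Gt[None, :, :]) ** 2).sum(-1)
K_rbf, Kv_rbf = np.exp(-sq_tt / n_snp), np.exp(-sq_vt / n_snp)

pred_lin = kernel_ridge_predict(K_lin, yt, Kv_lin)
pred_rbf = kernel_ridge_predict(K_rbf, yt, Kv_rbf)

# Accuracy as in the study: correlation of predictions with observed phenotypes
acc_lin = np.corrcoef(pred_lin, yv)[0, 1]
acc_rbf = np.corrcoef(pred_rbf, yv)[0, 1]
```

    Because the simulated architecture here is purely additive, the two kernels should perform similarly, mirroring the study's finding for within-line prediction.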

  14. Non-Linear Dynamics of Saturn’s Rings

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.

    2015-11-01

    Non-linear processes can explain why Saturn’s rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. We find that stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response, pushing the system across thresholds that lead to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding; this damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like ‘straw’ that can explain the halo structure and spectroscopy. This requires energetic collisions (v ≈ 10 m/s, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing Equation: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn’s rings creates both high velocity dispersion and large aggregates at these distances, explaining both the small and large particles observed there. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power-law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn’s rings a chaotic non-linear driven system?
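    The predator-prey idea can be illustrated with a generic, periodically forced Lotka-Volterra system in which aggregate mass plays the prey and velocity dispersion the predator. These equations and parameters are a loose stand-in for illustration, not the abstract's actual model:

```python
import numpy as np

# M = mean aggregate mass ("prey"), V = velocity dispersion ("predator").
# Illustrative forced Lotka-Volterra dynamics: streamline crowding (the
# periodic moon forcing) lets aggregates grow; stirring by aggregates
# raises V, which in turn breaks aggregates up, closing the limit cycle.
def step(M, V, t, dt, a=1.0, b=1.0, c=1.0, d=1.0, eps=0.3, omega=2 * np.pi):
    forcing = 1.0 + eps * np.cos(omega * t)   # once-per-orbit crowding
    dM = a * M * forcing - b * M * V          # growth minus collisional disruption
    dV = c * M * V - d * V                    # stirring by aggregates minus damping
    return M + dt * dM, V + dt * dV

M, V, dt = 1.0, 1.0, 1e-3
trajectory = []
for i in range(20000):                        # ~20 forcing periods, forward Euler
    M, V = step(M, V, i * dt, dt)
    trajectory.append((M, V))
```

    Plotting `trajectory` in the (M, V) phase plane shows the forced oscillation around the coexistence equilibrium; the cell-to-cell mapping in the abstract would discretize exactly this kind of phase-plane flow into a Markov chain.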

  15. Perceptions of the Impact of Online Learning as a Distance-Based Learning Model on the Professional Practices of Working Nurses in Northern Ontario

    ERIC Educational Resources Information Center

    Carter, Lorraine; Hanna, Mary; Warry, Wayne

    2016-01-01

    Nurses in Canada face diverse challenges to their ongoing educational pursuits. As a result, they have been early adopters of courses and programs based on distance education principles and, in particular, online learning models. In the study described in this paper, nurses studying at two northern universities, in programs involving online…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamanini, Nicola; Wright, Matthew, E-mail: nicola.tamanini@cea.fr, E-mail: matthew.wright.13@ucl.ac.uk

    We investigate the cosmological dynamics of the recently proposed extended chameleon models at both background and linear perturbation levels. Dynamical systems techniques are employed to fully characterize the evolution of the universe at the largest distances, while structure formation is analysed at sub-horizon scales within the quasi-static approximation. The late time dynamical transition from dark matter to dark energy domination can be well described by almost all extended chameleon models considered, with no deviations from ΛCDM results at both background and perturbation levels. The results obtained in this work confirm the cosmological viability of extended chameleons as alternative dark energy models.

  17. Landscape connectivity for bobcat (Lynx rufus) and lynx (Lynx canadensis) in the Northeastern United States

    PubMed Central

    Levy, Daniel M.; Donovan, Therese; Mickey, Ruth; Howard, Alan; Vashon, Jennifer; Freeman, Mark; Royar, Kim; Kilpatrick, C. William

    2018-01-01

    Landscape connectivity is integral to the persistence of metapopulations of wide ranging carnivores and other terrestrial species. The objectives of this research were to investigate the landscape characteristics essential to use of areas by lynx and bobcats in northern New England, map a habitat availability model for each species, and explore connectivity across areas of the region likely to experience future development pressure. A Mahalanobis distance analysis was conducted on location data collected between 2005 and 2010 from 16 bobcats in western Vermont and 31 lynx in northern Maine to determine which variables were most consistent across all locations for each species using three scales based on average 1) local (15 minute) movement, 2) linear distance between daily locations, and 3) female home range size. The bobcat model providing the widest separation between used locations and random study area locations suggests that they cue into landscape features such as edge, availability of cover, and development density at different scales. The lynx model with the widest separation between random and used locations contained five variables including natural habitat, cover, and elevation—all at different scales. Shrub scrub habitat—where lynx’s preferred prey is most abundant—was represented at the daily distance moved scale. Cross validation indicated that outliers had little effect on models for either species. A habitat suitability value was calculated for each 30 m2 pixel across Vermont, New Hampshire, and Maine for each species and used to map connectivity between conserved lands within selected areas across the region. Projections of future landscape change illustrated potential impacts of anthropogenic development on areas lynx and bobcat may use, and indicated where connectivity for bobcats and lynx may be lost. 
These projections provided a guide for conservation of landscape permeability for lynx, bobcat, and species relying on similar habitats in the region. PMID:29590192
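    The Mahalanobis distance analysis at the core of this study can be sketched as follows, with hypothetical landscape covariates standing in for the study's variables; small distances flag pixels whose covariates resemble the mean of used locations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical covariates at used animal locations (e.g. percent cover,
# edge density, elevation) -- stand-ins for the study's variables.
used = rng.normal(loc=[60.0, 2.0, 300.0], scale=[10.0, 0.5, 50.0],
                  size=(200, 3))

mu = used.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(used, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of a pixel's covariates from the mean
    of used locations; small values = more similar to used habitat."""
    d = x - mu
    return float(d @ cov_inv @ d)

typical = mahalanobis_sq(mu)                               # 0 at the mean
atypical = mahalanobis_sq(mu + np.array([30.0, 1.5, 150.0]))
```

    Applied per pixel across a raster, and rescaled, this is the kind of habitat suitability value the study maps at 30 m resolution.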

  18. Landscape connectivity for bobcat (Lynx rufus) and lynx (Lynx canadensis) in the Northeastern United States

    USGS Publications Warehouse

    Farrell, Laura E.; Levy, Daniel M.; Donovan, Therese M.; Mickey, Ruth M.; Howard, Alan; Vashon, Jennifer; Freeman, Mark; Royar, Kim; Kilpatrick, C. William

    2018-01-01

    Landscape connectivity is integral to the persistence of metapopulations of wide ranging carnivores and other terrestrial species. The objectives of this research were to investigate the landscape characteristics essential to use of areas by lynx and bobcats in northern New England, map a habitat availability model for each species, and explore connectivity across areas of the region likely to experience future development pressure. A Mahalanobis distance analysis was conducted on location data collected between 2005 and 2010 from 16 bobcats in western Vermont and 31 lynx in northern Maine to determine which variables were most consistent across all locations for each species using three scales based on average 1) local (15 minute) movement, 2) linear distance between daily locations, and 3) female home range size. The bobcat model providing the widest separation between used locations and random study area locations suggests that they cue into landscape features such as edge, availability of cover, and development density at different scales. The lynx model with the widest separation between random and used locations contained five variables including natural habitat, cover, and elevation—all at different scales. Shrub scrub habitat—where lynx’s preferred prey is most abundant—was represented at the daily distance moved scale. Cross validation indicated that outliers had little effect on models for either species. A habitat suitability value was calculated for each 30 m2 pixel across Vermont, New Hampshire, and Maine for each species and used to map connectivity between conserved lands within selected areas across the region. Projections of future landscape change illustrated potential impacts of anthropogenic development on areas lynx and bobcat may use, and indicated where connectivity for bobcats and lynx may be lost. 
These projections provided a guide for conservation of landscape permeability for lynx, bobcat, and species relying on similar habitats in the region.

  19. A comparison between index of entropy and catastrophe theory methods for mapping groundwater potential in an arid region.

    PubMed

    Al-Abadi, Alaa M; Shahid, Shamsuddin

    2015-09-01

    In this study, index of entropy and catastrophe theory methods were used for demarcating groundwater potential in an arid region using weighted linear combination techniques in a geographical information system (GIS) environment. A case study from the Badra area in the eastern part of central Iraq was analyzed and discussed. Six factors believed to influence groundwater occurrence, namely elevation, slope, aquifer transmissivity and storativity, soil, and distance to fault, were prepared as raster thematic layers to facilitate integration into the GIS environment. The factors were chosen based on the availability of data and the local conditions of the study area. Both techniques were used for computing the weights and assigning the ranks needed to apply the weighted linear combination approach. The results of applying both models indicated that the most influential groundwater occurrence factors were slope and elevation. The other factors have relatively smaller weights, implying that they play a minor role in groundwater occurrence. The groundwater potential index (GPI) values for both models were classified using the natural breaks classification scheme into five categories: very low, low, moderate, high, and very high. For validation of the generated GPI, relative operating characteristic (ROC) curves were used. According to the obtained area under the curve, the catastrophe model, with 78 % prediction accuracy, was found to perform better than the entropy model, with 77 % prediction accuracy. The overall results indicated that both models have good capability for predicting groundwater potential zones.
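    The weighted linear combination step amounts to a weighted sum of normalised raster layers, classified into five categories. The layers, weights, and quantile breaks below are illustrative assumptions (the paper derives weights from the entropy/catastrophe methods and classifies with natural breaks):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy raster layers on one grid, standing in for the study's six factors:
# elevation, slope, transmissivity, storativity, soil, distance to fault.
shape = (50, 50)
layers = [rng.random(shape) for _ in range(6)]
weights = [0.30, 0.25, 0.15, 0.10, 0.10, 0.10]   # illustrative, not the paper's

def minmax(a):
    """Rescale a layer to [0, 1] so layers are comparable before weighting."""
    return (a - a.min()) / (a.max() - a.min())

# Weighted linear combination: GPI = sum_i w_i * normalised layer_i
gpi = sum(w * minmax(layer) for w, layer in zip(weights, layers))

# Five classes (very low .. very high); quantile breaks are a simple
# stand-in here for the paper's natural (Jenks) breaks.
breaks = np.quantile(gpi, [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(gpi, breaks)                # 0..4 per pixel
```

    Because the weights sum to 1 and each normalised layer lies in [0, 1], the resulting GPI is also bounded in [0, 1], which keeps class breaks comparable between the two models.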

  20. Anthropogenic factors and the risk of highly pathogenic avian influenza H5N1: prospects from a spatial-based model.

    PubMed

    Paul, Mathilde; Tavornpanich, Saraya; Abrial, David; Gasqui, Patrick; Charras-Garrido, Myriam; Thanapongtharm, Weerapong; Xiao, Xiangming; Gilbert, Marius; Roger, Francois; Ducrot, Christian

    2010-01-01

    Beginning in 2003, highly pathogenic avian influenza (HPAI) H5N1 virus spread across Southeast Asia, causing unprecedented epidemics. Thailand was massively infected in 2004 and 2005 and continues today to experience sporadic outbreaks. While research findings suggest that the spread of HPAI H5N1 is influenced primarily by trade patterns, identifying the anthropogenic risk factors involved remains a challenge. In this study, we investigated which anthropogenic factors played a role in the risk of HPAI in Thailand using outbreak data from the "second wave" of the epidemic (3 July 2004 to 5 May 2005) in the country. We first performed a spatial analysis of the relative risk of HPAI H5N1 at the subdistrict level based on a hierarchical Bayesian model. We observed a strong spatial heterogeneity of the relative risk. We then tested a set of potential risk factors in a multivariable linear model. The results confirmed the role of free-grazing ducks and rice-cropping intensity but showed a weak association with fighting cock density. The results also revealed a set of anthropogenic factors significantly linked with the risk of HPAI. High risk was associated strongly with densely populated areas, short distances to a highway junction, and short distances to large cities. These findings highlight a new explanatory pattern for the risk of HPAI and indicate that, in addition to agro-environmental factors, anthropogenic factors play an important role in the spread of H5N1. To limit the spread of future outbreaks, efforts to control the movement of poultry products must be sustained. INRA, EDP Sciences, 2010.
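    The multivariable linear model used to test the risk factors can be sketched as an ordinary least-squares regression on synthetic subdistrict covariates; the variables and coefficients are illustrative, with signs merely chosen to mirror the reported associations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic subdistrict data standing in for the study's covariates:
# duck density, rice-cropping intensity, human population density,
# distance to highway junction, distance to large city (standardised).
n = 300
X = rng.normal(size=(n, 5))
true_beta = np.array([0.8, 0.6, 0.5, -0.4, -0.4])   # illustrative signs only
log_rr = X @ true_beta + rng.normal(0.0, 0.5, size=n)

# Multivariable linear model: OLS fit of log relative risk on the factors
Xd = np.hstack([np.ones((n, 1)), X])                # intercept column
beta_hat, *_ = np.linalg.lstsq(Xd, log_rr, rcond=None)
```

    Positive fitted coefficients correspond to risk-increasing factors (densities), negative ones to the protective effect of distance from highways and cities, matching the direction of the reported associations.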
