Sample records for base distance optimization

  1. Towards a hybrid energy efficient multi-tree-based optimized routing protocol for wireless networks.

    PubMed

    Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan

    2012-12-13

    This paper considers the problem of designing power efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy efficient tree-based optimized routing protocol, based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to its neighbor that optimizes ratio of power cost over distance progress with landmark coordinates, among nodes that reduce landmark coordinates and do not increase distance in tree coordinates. If such a node does not exist, then forwarding is made to the neighbor that reduces tree-based distance only and optimizes power cost over tree distance progress ratio. We theoretically prove the packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, and only up to 30% additional power compared to centralized shortest weighted path algorithm.

  2. Towards a Hybrid Energy Efficient Multi-Tree-Based Optimized Routing Protocol for Wireless Networks

    PubMed Central

    Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan

    2012-01-01

    This paper considers the problem of designing power efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy efficient tree-based optimized routing protocol, based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to its neighbor that optimizes ratio of power cost over distance progress with landmark coordinates, among nodes that reduce landmark coordinates and do not increase distance in tree coordinates. If such a node does not exist, then forwarding is made to the neighbor that reduces tree-based distance only and optimizes power cost over tree distance progress ratio. We theoretically prove the packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, and only up to 30% additional power compared to centralized shortest weighted path algorithm. PMID:23443398

  3. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  4. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  5. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  6. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309

  7. Transformational leadership in the local police in Spain: a leader-follower distance approach.

    PubMed

    Álvarez, Octavio; Lila, Marisol; Tomás, Inés; Castillo, Isabel

    2014-01-01

    Based on the transformational leadership theory (Bass, 1985), the aim of the present study was to analyze the differences in leadership styles according to the various leading ranks and the organizational follower-leader distance reported by a representative sample of 975 local police members (828 male and 147 female) from Valencian Community (Spain). Results showed differences by rank (p < .01), and by rank distance (p < .05). The general intendents showed the most optimal profile of leadership in all the variables examined (transformational-leadership behaviors, transactional-leadership behaviors, laissez-faire behaviors, satisfaction with the leader, extra effort by follower, and perceived leadership effectiveness). By contrast, the least optimal profiles were presented by intendents. Finally, the maximum distance (five ranks) generally yielded the most optimal profiles, whereas the 3-rank distance generally produced the least optimal profiles for all variables examined. Outcomes and practical implications for the workforce dimensioning are also discussed.

  8. Measuring the misfit between seismograms using an optimal transport distance: application to full waveform inversion

    NASA Astrophysics Data System (ADS)

    Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.

    2016-04-01

    Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a measure of the misfit computed with an optimal transport distance. This measure allows to account for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is done generally in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to reconstruct accurately the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains accurately the data. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structure is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.

  9. Optimization methods of pulse-to-pulse alignment using femtosecond pulse laser based on temporal coherence function for practical distance measurement

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui

    2018-02-01

    An interferometer technique based on temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringes analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μ m with path length difference of 9 m. The improvement of standard deviation in experimental results shows that these approaches have the potential for practical distance measurement.

  10. Heat transfer optimization for air-mist cooling between a stack of parallel plates

    NASA Astrophysics Data System (ADS)

    Issa, Roy J.

    2010-06-01

    A theoretical model is developed to predict the upper limit heat transfer between a stack of parallel plates subject to multiphase cooling by air-mist flow. The model predicts the optimal separation distance between the plates based on the development of the boundary layers for small and large separation distances, and for dilute mist conditions. Simulation results show the optimal separation distance to be strongly dependent on the liquid-to-air mass flow rate loading ratio, and reach a limit for a critical loading. For these dilute spray conditions, complete evaporation of the droplets takes place. Simulation results also show the optimal separation distance decreases with the increase in the mist flow rate. The proposed theoretical model shall lead to a better understanding of the design of fins spacing in heat exchangers where multiphase spray cooling is used.

  11. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.

  12. An improved real time image detection system for elephant intrusion along the forest border areas.

    PubMed

    Sugumar, S J; Jayaparvathy, R

    2014-01-01

    Human-elephant conflict is a major problem leading to crop damage, human death and injuries caused by elephants, and elephants being killed by humans. In this paper, we propose an automated unsupervised elephant image detection system (EIDS) as a solution to human-elephant conflict in the context of elephant conservation. The elephant's image is captured in the forest border areas and is sent to a base station via an RF network. The received image is decomposed using Haar wavelet to obtain multilevel wavelet coefficients, with which we perform image feature extraction and similarity match between the elephant query image and the database image using image vision algorithms. A GSM message is sent to the forest officials indicating that an elephant has been detected in the forest border and is approaching human habitat. We propose an optimized distance metric to improve the image retrieval time from the database. We compare the optimized distance metric with the popular Euclidean and Manhattan distance methods. The proposed optimized distance metric retrieves more images with lesser retrieval time than the other distance metrics which makes the optimized distance method more efficient and reliable.

  13. Long-distance practical quantum key distribution by entanglement swapping.

    PubMed

    Scherer, Artur; Sanders, Barry C; Tittel, Wolfgang

    2011-02-14

    We develop a model for practical, entanglement-based long-distance quantum key distribution employing entanglement swapping as a key building block. Relying only on existing off-the-shelf technology, we show how to optimize resources so as to maximize secret key distribution rates. The tools comprise lossy transmission links, such as telecom optical fibers or free space, parametric down-conversion sources of entangled photon pairs, and threshold detectors that are inefficient and have dark counts. Our analysis provides the optimal trade-off between detector efficiency and dark counts, which are usually competing, as well as the optimal source brightness that maximizes the secret key rate for specified distances (i.e. loss) between sender and receiver.

  14. DISCO: Distance and Spectrum Correlation Optimization Alignment for Two Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry-based Metabolomics

    PubMed Central

    Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang

    2010-01-01

    A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746

  15. Dynamic Portfolio Strategy Using Clustering Approach

    PubMed Central

    Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333

  16. Dynamic Portfolio Strategy Using Clustering Approach.

    PubMed

    Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market.

  17. Optimal approach to quantum communication using dynamic programming.

    PubMed

    Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D

    2007-10-30

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states.

  18. An approach to multiobjective optimization of rotational therapy. II. Pareto optimal surfaces and linear combinations of modulated blocked arcs for a prostate geometry.

    PubMed

    Pardo-Montero, Juan; Fenwick, John D

    2010-06-01

    The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: One where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter one perhaps being preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. 
Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.

  19. Anomaly detection of flight routes through optimal waypoint

    NASA Astrophysics Data System (ADS)

    Pusadan, M. Y.; Buliali, J. L.; Ginardi, R. V. H.

    2017-01-01

    Deciding factor of flight, one of them is the flight route. Flight route determined by coordinate (latitude and longitude). flight routed is determined by its coordinates (latitude and longitude) as defined is waypoint. anomaly occurs, if the aircraft is flying outside the specified waypoint area. In the case of flight data, anomalies occur by identifying problems of the flight route based on data ADS-B. This study has an aim of to determine the optimal waypoints of the flight route. The proposed methods: i) Agglomerative Hierarchical Clustering (AHC) in several segments based on range area coordinates (latitude and longitude) in every waypoint; ii) The coefficient cophenetics correlation (c) to determine the correlation between the members in each cluster; iii) cubic spline interpolation as a graphic representation of the has connected between the coordinates on every waypoint; and iv). Euclidean distance to measure distances between waypoints with 2 centroid result of clustering AHC. The experiment results are value of coefficient cophenetics correlation (c): 0,691≤ c ≤ 0974, five segments the generated of the range area waypoint coordinates, and the shortest and longest distance between the centroid with waypoint are 0.46 and 2.18. Thus, concluded that the shortest distance is used as the reference coordinates of optimal waypoint, and farthest distance can be indicated potentially detected anomaly.

  20. Optimized mirror shape tuning using beam weightings based on distance, angle of incidence, reflectivity, and power.

    PubMed

    Goldberg, Kenneth A; Yashchuk, Valeriy V

    2016-05-01

    For glancing-incidence optical systems, such as short-wavelength optics used for nano-focusing, incorporating physical factors in the calculations used for shape optimization can improve performance. Wavefront metrology, including the measurement of a mirror's shape or slope, is routinely used as input for mirror figure optimization on mirrors that can be bent, actuated, positioned, or aligned. Modeling shows that when the incident power distribution, distance from focus, angle of incidence, and the spatially varying reflectivity are included in the optimization, higher Strehl ratios can be achieved. Following the works of Maréchal and Mahajan, optimization of the Strehl ratio (for peak intensity with a coherently illuminated system) occurs when the expectation value of the phase error's variance is minimized. We describe an optimization procedure based on regression analysis that incorporates these physical parameters. This approach is suitable for coherently illuminated systems of nearly diffraction-limited quality. Mathematically, this work is an enhancement of the methods commonly applied for ex situ alignment based on uniform weighting of all points on the surface (or a sub-region of the surface). It follows a similar approach to the optimization of apodized and non-uniformly illuminated optical systems. Significantly, it reaches a different conclusion than a more recent approach based on minimization of focal plane ray errors.

  1. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT) electropermeabilization, parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above permeabilizing threshold to achieve effective ECT. In this paper, we present a model-based optimization approach toward determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and electric field intensities computed by finite element model in selected points of tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computer tomography images, representing brain tissue with tumor. Continuous parameter subject to optimization was pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. Optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached preventing the exposure of the entire volume of the tumor to electric field intensities above permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. Model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between permeabilizing threshold and irreversible threshold-the latter causing tissue necrosis. This can be obtained by adding constraints on maximum electric field intensity in optimization procedure.

  2. Multidimensional Optimization of Signal Space Distance Parameters in WLAN Positioning

    PubMed Central

    Brković, Milenko; Simić, Mirjana

    2014-01-01

    Accurate indoor localization of mobile users is one of the challenging problems of the last decade. Besides delivering high speed Internet, Wireless Local Area Network (WLAN) can be used as an effective indoor positioning system, being competitive both in terms of accuracy and cost. Among the localization algorithms, nearest neighbor fingerprinting algorithms based on Received Signal Strength (RSS) parameter have been extensively studied as an inexpensive solution for delivering indoor Location Based Services (LBS). In this paper, we propose the optimization of the signal space distance parameters in order to improve precision of WLAN indoor positioning, based on nearest neighbor fingerprinting algorithms. Experiments in a real WLAN environment indicate that proposed optimization leads to substantial improvements of the localization accuracy. Our approach is conceptually simple, is easy to implement, and does not require any additional hardware. PMID:24757443

  3. Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping

    NASA Astrophysics Data System (ADS)

    Fronita, Mona; Gernowo, Rahmat; Gunawan, Vincencius

    2018-02-01

    Traveling Salesman Problem (TSP) is an optimization to find the shortest path to reach several destinations in one trip without passing through the same city and back again to the early departure city, the process is applied to the delivery systems. This comparison is done using two methods, namely optimization genetic algorithm and hill climbing. Hill Climbing works by directly selecting a new path that is exchanged with the neighbour's to get the track distance smaller than the previous track, without testing. Genetic algorithms depend on the input parameters, they are the number of population, the probability of crossover, mutation probability and the number of generations. To simplify the process of determining the shortest path supported by the development of software that uses the google map API. Tests carried out as much as 20 times with the number of city 8, 16, 24 and 32 to see which method is optimal in terms of distance and time computation. Based on experiments conducted with a number of cities 3, 4, 5 and 6 producing the same value and optimal distance for the genetic algorithm and hill climbing, the value of this distance begins to differ with the number of city 7. The overall results shows that these tests, hill climbing are more optimal to number of small cities and the number of cities over 30 optimized using genetic algorithms.

  4. Investigation of 16 × 10 Gbps DWDM System Based on Optimized Semiconductor Optical Amplifier

    NASA Astrophysics Data System (ADS)

    Rani, Aruna; Dewra, Sanjeev

    2017-08-01

    This paper investigates the performance of an optical system based on optimized semiconductor optical amplifier (SOA) at 160 Gbps with 0.8 nm channel spacing. Transmission distances up to 280 km at -30 dBm input signal power and up to 247 km at -32 dBm input signal power with acceptable bit error rate (BER) and Q-factor are examined. It is also analyzed that the transmission distance up to 292 km has been covered at -28 dBm input signal power using Dispersion Shifted (DS)-Normal fiber without any power compensation methods.

  5. Kernels, Degrees of Freedom, and Power Properties of Quadratic Distance Goodness-of-Fit Tests

    PubMed Central

    Lindsay, Bruce G.; Markatou, Marianthi; Ray, Surajit

    2014-01-01

    In this article, we study the power properties of quadratic-distance-based goodness-of-fit tests. First, we introduce the concept of a root kernel and discuss the considerations that enter the selection of this kernel. We derive an easy to use normal approximation to the power of quadratic distance goodness-of-fit tests and base the construction of a noncentrality index, an analogue of the traditional noncentrality parameter, on it. This leads to a method akin to the Neyman-Pearson lemma for constructing optimal kernels for specific alternatives. We then introduce a midpower analysis as a device for choosing optimal degrees of freedom for a family of alternatives of interest. Finally, we introduce a new diffusion kernel, called the Pearson-normal kernel, and study the extent to which the normal approximation to the power of tests based on this kernel is valid. Supplementary materials for this article are available online. PMID:24764609

  6. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In conventional tool positioning technique, sensors embedded in the motion stages provide the accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of lathe tool from two-dimensional sequential images captured using charge coupled device camera having a resolution of 250 microns has been described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show better capability of AIS over PSO.

  7. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.

    PubMed

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them, the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first. work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on geodesic power Voronoi diagram. Comparing to the conventional methods, our approach solely depends on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.

  8. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

    PubMed Central

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them, the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on geodesic power Voronoi diagram. Comparing to the conventional methods, our approach solely depends on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691

  9. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).

  10. A Highly Functional Decision Paradigm Based on Nonlinear Adaptive Genetic Algorithm

    DTIC Science & Technology

    1997-10-07

    significant speedup. p£ lC <$jALTnimm SCTED & 14. SUBJECT TERMS Network Topology Optimization, Mathlink, Mathematica Plug-In, GA Route Optimizer, DSP...operations per second 2.4 Gbytes/second sustainable on-chip data transfer rate 400 Mb/s off-chip peak transfer rate Layer-to-layer interconnection...SecondHighestDist# = DistanceArray%(IndexList%(ChromeGene%(i%, 1) -1), IndexList%( Chrom eGene%(i%, 2) - 1)) For j% = 1 To StrandLength% - 1 ’If highest distance

  11. Optimal convergence in naming game with geography-based negotiation on small-world networks

    NASA Astrophysics Data System (ADS)

    Liu, Run-Ran; Wang, Wen-Xu; Lai, Ying-Cheng; Chen, Guanrong; Wang, Bing-Hong

    2011-01-01

    We propose a negotiation strategy to address the effect of geography on the dynamics of naming games over small-world networks. Communication and negotiation frequencies between two agents are determined by their geographical distance in terms of a parameter characterizing the correlation between interaction strength and the distance. A finding is that there exists an optimal parameter value leading to fastest convergence to global consensus on naming. Numerical computations and a theoretical analysis are provided to substantiate our findings.

  12. An improved initialization center k-means clustering algorithm based on distance and density

    NASA Astrophysics Data System (ADS)

    Duan, Yanling; Liu, Qun; Xia, Shuyin

    2018-04-01

    Aiming at the problem of the random initial clustering center of k means algorithm that the clustering results are influenced by outlier data sample and are unstable in multiple clustering, a method of central point initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average of distance is used to represent the sample density, and the data sample with the larger distance and the higher density are selected as the initial clustering centers to optimize the clustering results. Then, a clustering evaluation method based on distance and density is designed to verify the feasibility of the algorithm and the practicality, the experimental results on UCI data sets show that the algorithm has a certain stability and practicality.

  13. A tiger beetle’s pursuit of prey depends on distance

    NASA Astrophysics Data System (ADS)

    Noest, R. M.; Wang, Z. Jane

    2017-04-01

    Tiger beetles pursue prey by adjusting their heading according to a time-delayed proportional control law that minimizes the error angle (Haselsteiner et al 2014 J. R. Soc. Interface 11 20140216). This control law can be further interpreted in terms of mechanical actuation: to catch prey, tiger beetles exert a sideways force by biasing their tripod gait in proportion to the error angle measured half a stride earlier. The proportional gain was found to be nearly optimal in the sense that it minimizes the time to point directly toward the prey. For a time-delayed linear proportional controller, the optimal gain, k, is inversely proportional to the time delay, τ, and satisfies kτ =1/e . Here we present evidence that tiger beetles adjust their control gain during their pursuit of prey. Our analysis shows two critical distances: one corresponding to the beetle’s final approach to the prey, and the second, less expected, occurring at a distance around 10 cm for a prey size of 4.5 mm. The beetle initiates its chase using a sub-critical gain and increases the gain to the optimal value once the prey is within this critical distance. Insects use a variety of methods to detect distance, often involving different visual cues. Here we examine two such methods: one based on motion parallax and the other based on the prey’s elevation angle. We show that, in order for the motion parallax method to explain the observed data, the beetle needs to correct for the ratio of the prey’s sideways velocity relative to its own. On the other hand, the simpler method based on the elevation angle can detect both the distance and the prey’s size. Moreover we find that the transition distance corresponds to the accuracy required to distinguish small prey from large predators.

  14. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is solved as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98% while the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms.

  15. Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.

    2013-09-01

    In this paper, two satellite images of Tehran, the capital city of Iran, which were taken by TM and ETM+ for years 1988 and 2010 are used as the base information layers to study the changes in urban patterns of this metropolis. The patterns of urban growth for the city of Tehran are extracted in a period of twelve years using cellular automata setting the logistic regression functions as transition functions. Furthermore, the weighting coefficients of parameters affecting the urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects were selected using PSO. In order to evaluate the results of the prediction, the percent correct match index is calculated. According to the results, by combining optimization techniques with cellular automata model, the urban growth patterns can be predicted with accuracy up to 75 %.

  16. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    PubMed

    Li, Lian-Hui; Mo, Rong

    2015-01-01

    The production task queue has a great significance for manufacturing resource allocation and scheduling decision. Man-made qualitative queue optimization method has a poor effect and makes the application difficult. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, the hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three usual methods. To calculate the subjective indicator weight, BP neural network is used to determine the judge importance degree, and then the trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated base on multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS) improved by replacing Euclidean distance with relative entropy distance is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate its correctness and feasibility.

  17. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop

    PubMed Central

    Li, Lian-hui; Mo, Rong

    2015-01-01

    The production task queue has a great significance for manufacturing resource allocation and scheduling decision. Man-made qualitative queue optimization method has a poor effect and makes the application difficult. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, the hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three usual methods. To calculate the subjective indicator weight, BP neural network is used to determine the judge importance degree, and then the trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated base on multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS) improved by replacing Euclidean distance with relative entropy distance is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate its correctness and feasibility. PMID:26414758

  18. The effect of the distance between acidic site and basic site immobilized on mesoporous solid on the activity in catalyzing aldol condensation

    NASA Astrophysics Data System (ADS)

    Yu, Xiaofang; Yu, Xiaobo; Wu, Shujie; Liu, Bo; Liu, Heng; Guan, Jingqi; Kan, Qiubin

    2011-02-01

    Acid-base bifunctional heterogeneous catalysts containing carboxylic and amine groups, which were immobilized at defined distance from one another on the mesoporous solid were synthesized by immobilizing lysine onto carboxyl-SBA-15. The obtained materials were characterized by X-ray diffraction (XRD), N 2 adsorption, Fourier-transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), scanning electron micrographs (SEM), transmission electron micrographs (TEM), elemental analysis, and back titration. Proximal-C-A-SBA-15 with a proximal acid-base distance was more active than maximum-C-A-SBA-15 with a maximum acid-base distance in aldol condensation reaction between acetone and various aldehydes. It appears that the distance between acidic site and basic site immobilized on mesoporous solid should be an essential factor for catalysis optimization.

  19. Geometric Reasoning for Automated Planning

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Knight, Russell L.; Broderick, Daniel

    2012-01-01

    An important aspect of mission planning for NASA s operation of the International Space Station is the allocation and management of space for supplies and equipment. The Stowage, Configuration Analysis, and Operations Planning teams collaborate to perform the bulk of that planning. A Geometric Reasoning Engine is developed in a way that can be shared by the teams to optimize item placement in the context of crew planning. The ISS crew spends (at the time of this writing) a third or more of their time moving supplies and equipment around. Better logistical support and optimized packing could make a significant impact on operational efficiency of the ISS. Currently, computational geometry and motion planning do not focus specifically on the optimized orientation and placement of 3D objects based on multiple distance and containment preferences and constraints. The software performs reasoning about the manipulation of 3D solid models in order to maximize an objective function based on distance. It optimizes for 3D orientation and placement. Spatial placement optimization is a general problem and can be applied to object packing or asset relocation.

  20. Normalized distance aggregation of discriminative features for person reidentification

    NASA Astrophysics Data System (ADS)

    Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan

    2018-03-01

    We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metric. Finally, the cross-view person matching is computed as the sum of the optimized individual cross-view distance metric through the min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).

  1. Optimal speeds for walking and running, and walking on a moving walkway.

    PubMed

    Srinivasan, Manoj

    2009-06-01

    Many aspects of steady human locomotion are thought to be constrained by a tendency to minimize the expenditure of metabolic cost. This paper has three parts related to the theme of energetic optimality: (1) a brief review of energetic optimality in legged locomotion, (2) an examination of the notion of optimal locomotion speed, and (3) an analysis of walking on moving walkways, such as those found in some airports. First, I describe two possible connotations of the term "optimal locomotion speed:" that which minimizes the total metabolic cost per unit distance and that which minimizes the net cost per unit distance (total minus resting cost). Minimizing the total cost per distance gives the maximum range speed and is a much better predictor of the speeds at which people and horses prefer to walk naturally. Minimizing the net cost per distance is equivalent to minimizing the total daily energy intake given an idealized modern lifestyle that requires one to walk a given distance every day--but it is not a good predictor of animals' walking speeds. Next, I critique the notion that there is no energy-optimal speed for running, making use of some recent experiments and a review of past literature. Finally, I consider the problem of predicting the speeds at which people walk on moving walkways--such as those found in some airports. I present two substantially different theories to make predictions. The first theory, minimizing total energy per distance, predicts that for a range of low walkway speeds, the optimal absolute speed of travel will be greater--but the speed relative to the walkway smaller--than the optimal walking speed on stationary ground. At higher walkway speeds, this theory predicts that the person will stand still. The second theory is based on the assumption that the human optimally reconciles the sensory conflict between the forward speed that the eye sees and the walking speed that the legs feel and tries to equate the best estimate of the forward speed to the naturally preferred speed. This sensory conflict theory also predicts that people would walk slower than usual relative to the walkway yet move faster than usual relative to the ground. These predictions agree qualitatively with available experimental observations, but there are quantitative differences.

  2. Design and Laboratory Implementation of Autonomous Optimal Motion Planning for Non-Holonomic Planetary Rovers

    DTIC Science & Technology

    2012-12-01

    autonomy helped to maximize a Mars day journey, because humans could only plan the first portion of the journey based on images sent from the rover...safe trajectory based on its sensors [1]. The distance between Mars and Earth ranges from 100-200 million miles [1] and at this distance, the time...This feature worked for the pre- planned maneuvers, which were planned by humans the day before based on available sensory and visual inputs. Once the

  3. Incorporation of physical constraints in optimal surface search for renal cortex segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie

    2012-02-01

    In this paper, we propose a novel approach for multiple surfaces segmentation based on the incorporation of physical constraints in optimal surface searching. We apply our new approach to solve the renal cortex segmentation problem, an important but not sufficiently researched issue. In this study, in order to better restrain the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow for varying sampling distance and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex-column is computed according to the sparsity of the local triangular mesh. Then the physical constraint learned from a priori renal cortex thickness is applied to the inter-surface arcs as the separation constraints. Appropriate varying sampling distance and separation constraints were learnt from 6 clinical CT images. After training, the proposed approach was tested on a test set of 10 images. The manual segmentation of renal cortex was used as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy was increased after introducing the varying sampling distance and physical separation constraints (the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, by using varying sampling distance and physical separation constraints compared to 74.10% and 0.18%, respectively, by using fixed sampling distance and numerical separation constraints). The experimental results demonstrated the effectiveness of the proposed approach.

  4. Study of probe-sample distance for biomedical spectra measurement.

    PubMed

    Wang, Bowen; Fan, Shuzhen; Li, Lei; Wang, Cong

    2011-11-02

    Fiber-based optical spectroscopy has been widely used for biomedical applications. However, the effect of probe-sample distance on the collection efficiency has not been well investigated. In this paper, we presented a theoretical model to maximize the illumination and collection efficiency in designing fiber optic probes for biomedical spectra measurement. This model was in general applicable to probes with single or multiple fibers at an arbitrary incident angle. In order to demonstrate the theory, a fluorescence spectrometer was used to measure the fluorescence of human finger skin at various probe-sample distances. The fluorescence spectrum and the total fluorescence intensity were recorded. The theoretical results show that for single fiber probes, contact measurement always provides the best results. While for multi-fiber probes, there is an optimal probe distance. When a 400- μm excitation fiber is used to deliver the light to the skin and another six 400- μm fibers surrounding the excitation fiber are used to collect the fluorescence signal, the experimental results show that human finger skin has very strong fluorescence between 475 nm and 700 nm under 450 nm excitation. The fluorescence intensity is heavily dependent on the probe-sample distance and there is an optimal probe distance. We investigated a number of probe-sample configurations and found that contact measurement could be the primary choice for single-fiber probes, but was very inefficient for multi-fiber probes. There was an optimal probe-sample distance for multi-fiber probes. By carefully choosing the probe-sample distance, the collection efficiency could be enhanced by 5-10 times. Our experiments demonstrated that the experimental results of the probe-sample distance dependence of collection efficiency in multi-fiber probes were in general agreement with our theory.

  5. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.

  6. A Case-Based Reasoning Method with Rank Aggregation

    NASA Astrophysics Data System (ADS)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper addresses a new CBR framework with the basic principle of rank aggregation. First, the ranking methods are put forward in each attribute subspace of case. The ordering relation between cases on each attribute is got between cases. Then, a sorting matrix is got. Second, the similar case retrieval process from ranking matrix is transformed into a rank aggregation optimal problem, which uses the Kemeny optimal. On the basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. The experiment result on UCI data sets shows that case retrieval accuracy of RA-CBR algorithm is higher than euclidean distance CBR and mahalanobis distance CBR testing.So we can get the conclusion that RA-CBR method can increase the performance and efficiency of CBR.

  7. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring spatial concentration of industries have received an increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity since both their memory requirements and running times are in {{O}}(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  8. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linear constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linear constrained quadratic programming. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model was used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding. It can be regarded as an upgrading to the traditional dose sorting technique. The geometry explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure a stable iteration convergence. The new algorithm is tested on four cases including head-neck, a prostate, a lung and an oropharyngeal, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for the cases whose target regions are in non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.

  9. A statistical approach for inferring the 3D structure of the genome.

    PubMed

    Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe

    2014-06-15

    Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distance, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distances between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms-two metric MDS methods using different stress functions, a non-metric version of MDS and ChromSDE, a recently described, advanced MDS method-on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function. On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.

  10. An LFMCW detector with new structure and FRFT based differential distance estimation method.

    PubMed

    Yue, Kai; Hao, Xinhong; Li, Ping

    2016-01-01

    This paper describes a linear frequency modulated continuous wave (LFMCW) detector which is designed for a collision avoidance radar. This detector can estimate distance between the detector and pedestrians or vehicles, thereby it will help to reduce the likelihood of traffic accidents. The detector consists of a transceiver and a signal processor. A novel structure based on the intermediate frequency signal (IFS) is designed for the transceiver which is different from the traditional LFMCW transceiver using the beat frequency signal (BFS) based structure. In the signal processor, a novel fractional Fourier transform (FRFT) based differential distance estimation (DDE) method is used to detect the distance. The new IFS based structure is beneficial for the FRFT based DDE method to reduce the computation complexity, because it does not need the scan of the optimal FRFT order. Low computation complexity ensures the feasibility of practical applications. Simulations are carried out and results demonstrate the efficiency of the detector designed in this paper.

  11. Supertrees Based on the Subtree Prune-and-Regraft Distance

    PubMed Central

    Whidden, Christopher; Zeh, Norbert; Beiko, Robert G.

    2014-01-01

    Supertree methods reconcile a set of phylogenetic trees into a single structure that is often interpreted as a branching history of species. A key challenge is combining conflicting evolutionary histories that are due to artifacts of phylogenetic reconstruction and phenomena such as lateral gene transfer (LGT). Many supertree approaches use optimality criteria that do not reflect underlying processes, have known biases, and may be unduly influenced by LGT. We present the first method to construct supertrees by using the subtree prune-and-regraft (SPR) distance as an optimality criterion. Although calculating the rooted SPR distance between a pair of trees is NP-hard, our new maximum agreement forest-based methods can reconcile trees with hundreds of taxa and > 50 transfers in fractions of a second, which enables repeated calculations during the course of an iterative search. Our approach can accommodate trees in which uncertain relationships have been collapsed to multifurcating nodes. Using a series of benchmark datasets simulated under plausible rates of LGT, we show that SPR supertrees are more similar to correct species histories than supertrees based on parsimony or Robinson–Foulds distance criteria. We successfully constructed an SPR supertree from a phylogenomic dataset of 40,631 gene trees that covered 244 genomes representing several major bacterial phyla. Our SPR-based approach also allowed direct inference of highways of gene transfer between bacterial classes and genera. A Small number of these highways connect genera in different phyla and can highlight specific genes implicated in long-distance LGT. [Lateral gene transfer; matrix representation with parsimony; phylogenomics; prokaryotic phylogeny; Robinson–Foulds; subtree prune-and-regraft; supertrees.] PMID:24695589

  12. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function based as the infimum of path integrals of the entropy function H( p) over all admissible paths p(t), 0 < or = t< or =1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance //b-a//(2). We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendograms for our minimal entropy metric are compared with dendograms based on BLAST and BLAST identity scores.

  13. Active vortex sampling system for remote contactless survey of surfaces by laser-based field asymmetrical ion mobility spectrometer

    NASA Astrophysics Data System (ADS)

    Akmalov, Artem E.; Chistyakov, Alexander A.; Kotkovskii, Gennadii E.; Sychev, Alexei V.

    2017-10-01

    The ways for increasing the distance of non-contact sampling up to 40 cm for a field asymmetric ion mobility (FAIM) spectrometer are formulated and implemented by the use of laser desorption and active shaper of the vortex flow. Numerical modeling of air sampling flows was made and the sampling device for a laser-based FAIM spectrometer on the basis of high speed rotating impeller, located coaxial with the ion source, was designed. The dependence of trinitrotoluene vapors signal on the rotational speed and the optimization of the value of the sampling flow were obtained. The effective distance of sampling is increased up to 28 cm for trinitrotoluene vapors detection by a FAIM spectrometer with a rotating impeller. The distance is raised up to 40 cm using laser irradiation of traces of explosives. It is shown that efficient desorption of low-volatile explosives is achieved at laser intensity 107 W / cm2 , wavelength λ=266 nm, pulse energy about 1mJ and pulse frequency not less than 10 Hz under ambient conditions. The ways of optimization of internal gas flows of a FAIM spectrometer for the work at increased sampling distances are discussed.

  14. Analysis of Optimal Transport Route Determination of Oil Palm Fresh Fruit Bunches from Plantation to Processing Factory

    NASA Astrophysics Data System (ADS)

    Tarigan, U.; Sidabutar, R. F.; Tarigan, U. P. P.; Chen, A.

    2018-04-01

    Manufacturers engaged in the business, producing CPO and kernels whose raw materials are oil palm fresh fruit bunches taken from their own plantation, generally face problems of transporting from plantation to factory where there is often a change of distance traveled by the truck the carrier of FFB is due to non-specific transport instructions. The research was conducted to determine the optimal transportation route in terms of distance, time and route number. The determination of this transportation route is solved using Nearest Neighbours and Clarke & Wright Savings methods. Based on the calculations performed then found in area I with method Nearest Neighbours has a distance of 200.78 Km while Clarke & Wright Savings as with a result of 214.09 Km. As for the harvest area, II obtained results with Nearest Neighbours method of 264.37 Km and Clarke & Wright Savings method with a total distance of 264.33 Km. Based on the calculation of the time to do all the activities of transporting FFB juxtaposed with the work time of the driver got the reduction of conveyance from 8 units to 5 units. There is also improvement of fuel efficiency by 0.8%.

  15. Design and optimization of resonance-based efficient wireless power delivery systems for biomedical implants.

    PubMed

    Ramrakhyani, A K; Mirabbasi, S; Mu Chiao

    2011-02-01

    Resonance-based wireless power delivery is an efficient technique to transfer power over a relatively long distance. This technique typically uses four coils as opposed to two coils used in conventional inductive links. In the four-coil system, the adverse effects of a low coupling coefficient between primary and secondary coils are compensated by using high-quality (Q) factor coils, and the efficiency of the system is improved. Unlike its two-coil counterpart, the efficiency profile of the power transfer is not a monotonically decreasing function of the operating distance and is less sensitive to changes in the distance between the primary and secondary coils. A four-coil energy transfer system can be optimized to provide maximum efficiency at a given operating distance. We have analyzed the four-coil energy transfer systems and outlined the effect of design parameters on power-transfer efficiency. Design steps to obtain the efficient power-transfer system are presented and a design example is provided. A proof-of-concept prototype system is implemented and confirms the validity of the proposed analysis and design techniques. In the prototype system, for a power-link frequency of 700 kHz and a coil distance range of 10 to 20 mm, using a 22-mm diameter implantable coil resonance-based system shows a power-transfer efficiency of more than 80% with an enhanced operating range compared to ~40% efficiency achieved by a conventional two-coil system.

  16. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time of intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems for the optimization of weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined concerning the intelligent vehicle driving characteristics, the distance between intelligent vehicle and obstacle, and distance of intelligent vehicle and target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by particle swarm optimization algorithms. Finally, the simulation results show that the optimization method and its fitness function can improve the perturbations of the vehicle planning path and real-time and reliability.

  17. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time of intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems for the optimization of weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined concerning the intelligent vehicle driving characteristics, the distance between intelligent vehicle and obstacle, and distance of intelligent vehicle and target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by particle swarm optimization algorithms. Finally, the simulation results show that the optimization method and its fitness function can improve the perturbations of the vehicle planning path and real-time and reliability. PMID:26880881

  18. Optimization of Pulsed-DEER Measurements for Gd-Based Labels: Choice of Operational Frequencies, Pulse Durations and Positions, and Temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raitsimring, A.; Astashkin, A. V.; Enemark, J. H.

    2012-12-29

    In this work, the experimental conditions and parameters necessary to optimize the long-distance (≥ 60 Å) Double Electron-Electron Resonance (DEER) measurements of biomacromolecules labeled with Gd(III) tags are analyzed. The specific parameters discussed are the temperature, microwave band, the separation between the pumping and observation frequencies, pulse train repetition rate, pulse durations and pulse positioning in the electron paramagnetic resonance spectrum. It was found that: (i) in optimized DEER measurements, the observation pulses have to be applied at the maximum of the EPR spectrum; (ii) the optimal temperature range for Ka-band measurements is 14-17 K, while in W-band the optimalmore » temperatures are between 6-9 K; (iii) W-band is preferable to Ka-band for DEER measurements. Recent achievements and the conditions necessary for short-distance measurements (<15 Å) are also briefly discussed.« less

  19. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    PubMed Central

    Nguyen, Thu L. N.; Shin, Yoan

    2016-01-01

    Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton’s method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if we use it as an initial guess for an interactive local search of other higher precision localization scheme. Simulation results show the effectiveness of our approach. PMID:27213378

  20. The effect of proximity to hurricanes Katrina and Rita on subsequent hurricane outlook and optimistic bias.

    PubMed

    Trumbo, Craig; Lueck, Michelle; Marlatt, Holly; Peek, Lori

    2011-12-01

    This study evaluated how individuals living on the Gulf Coast perceived hurricane risk after Hurricanes Katrina and Rita. It was hypothesized that hurricane outlook and optimistic bias for hurricane risk would be associated positively with distance from the Katrina-Rita landfall (more optimism at greater distance), controlling for historically based hurricane risk and county population density, demographics, individual hurricane experience, and dispositional optimism. Data were collected in January 2006 through a mail survey sent to 1,375 households in 41 counties on the coast (n = 824, 60% response). The analysis used hierarchal regression to test hypotheses. Hurricane history and population density had no effect on outlook; individuals who were male, older, and with higher household incomes were associated with lower risk perception; individual hurricane experience and personal impacts from Katrina and Rita predicted greater risk perception; greater dispositional optimism predicted more optimistic outlook; distance had a small effect but predicted less optimistic outlook at greater distance (model R(2) = 0.21). The model for optimistic bias had fewer effects: age and community tenure were significant; dispositional optimism had a positive effect on optimistic bias; distance variables were not significant (model R(2) = 0.05). The study shows that an existing measure of hurricane outlook has utility, hurricane outlook appears to be a unique concept from hurricane optimistic bias, and proximity has at most small effects. Future extension of this research will include improved conceptualization and measurement of hurricane risk perception and will bring to focus several concepts involving risk communication. © 2011 Society for Risk Analysis.

  1. The Near-infrared Optimal Distances Method Applied to Galactic Classical Cepheids Tightly Constrains Mid-infrared Period–Luminosity Relations

    NASA Astrophysics Data System (ADS)

    Wang, Shu; Chen, Xiaodian; de Grijs, Richard; Deng, Licai

    2018-01-01

    Classical Cepheids are well-known and widely used distance indicators. As distance and extinction are usually degenerate, it is important to develop suitable methods to robustly anchor the distance scale. Here, we introduce a near-infrared optimal distance method to determine both the extinction values of and distances to a large sample of 288 Galactic classical Cepheids. The overall uncertainty in the derived distances is less than 4.9%. We compare our newly determined distances to the Cepheids in our sample with previously published distances to the same Cepheids with Hubble Space Telescope parallax measurements and distances based on the IR surface brightness method, Wesenheit functions, and the main-sequence fitting method. The systematic deviations in the distances determined here with respect to those of previous publications is less than 1%–2%. Hence, we constructed Galactic mid-IR period–luminosity (PL) relations for classical Cepheids in the four Wide-Field Infrared Survey Explorer (WISE) bands (W1, W2, W3, and W4) and the four Spitzer Space Telescope bands ([3.6], [4.5], [5.8], and [8.0]). Based on our sample of hundreds of Cepheids, the WISE PL relations have been determined for the first time; their dispersion is approximately 0.10 mag. Using the currently most complete sample, our Spitzer PL relations represent a significant improvement in accuracy, especially in the [3.6] band which has the smallest dispersion (0.066 mag). In addition, the average mid-IR extinction curve for Cepheids has been obtained: {A}W1/{A}{K{{s}}}≈ 0.560, {A}W2/{A}{K{{s}}}≈ 0.479, {A}W3/{A}{K{{s}}}≈ 0.507, {A}W4/{A}{K{{s}}}≈ 0.406, {A}[3.6]/{A}{K{{s}}}≈ 0.481, {A}[4.5]/{A}{K{{s}}}≈ 0.469, {A}[5.8]/{A}{K{{s}}}≈ 0.427, and {A}[8.0]/{A}{K{{s}}}≈ 0.427 {mag}.

  2. Comprehensive investigation of noble metal nanoparticles shape, size and material on the optical response of optimal plasmonic Y-splitter waveguides

    NASA Astrophysics Data System (ADS)

    Ahmadivand, Arash; Golmohammadi, Saeed

    2014-01-01

    With the purpose of guiding and splitting of optical power at C-band spectrum, we studied Y-shape splitters based on various shapes of nanoparticles as a plasmon waveguide. We applied different configurations of Gold (Au) and Silver (Ag) nanoparticles including spheres, rods and rings, to optimize the efficiency and losses of two and four-branch splitters. The best performance in light transportation specifically at telecom wavelength (λ≈1550 nm) is achieved by nanorings, due to an extra degree of freedom in their geometrical components. In addition, comparisons of several values for offset distance (doffset) of examined structures shows that Au nanoring splitters with feasible lower doffset have high quality in guiding and splitting of light through the structure. Finally, we studied four-branch Y-splitters based on Au and Ag nanorings with least possible offset distances to optimize the splitter performance. The power transmission as a key element is calculated for examined structures.

  3. Photoluminescence decay dynamics in γ-Ga2O3 nanocrystals: The role of exclusion distance at short time scales

    NASA Astrophysics Data System (ADS)

    Fernandes, Brian; Hegde, Manu; Stanish, Paul C.; Mišković, Zoran L.; Radovanovic, Pavle V.

    2017-09-01

    We developed a comprehensive theoretical model describing the photoluminescence decay dynamics at short and long time scales based on the donor-acceptor defect interactions in γ-Ga2O3 nanocrystals, and quantitatively determined the importance of exclusion distance and spatial distribution of defects. We allowed for donors and acceptors to be adjacent to each other or separated by different exclusion distances. The optimal exclusion distance was found to be comparable to the donor Bohr radius and have a strong effect on the photoluminescence decay curve at short times. The importance of the exclusion distance at short time scales was confirmed by Monte Carlo simulations.

  4. Simulation-Based Design for Wearable Robotic Systems: An Optimization Framework for Enhancing a Standing Long Jump.

    PubMed

    Ong, Carmichael F; Hicks, Jennifer L; Delp, Scott L

    2016-05-01

    Technologies that augment human performance are the focus of intensive research and development, driven by advances in wearable robotic systems. Success has been limited by the challenge of understanding human-robot interaction. To address this challenge, we developed an optimization framework to synthesize a realistic human standing long jump and used the framework to explore how simulated wearable robotic devices might enhance jump performance. A planar, five-segment, seven-degree-of-freedom model with physiological torque actuators, which have variable torque capacity depending on joint position and velocity, was used to represent human musculoskeletal dynamics. An active augmentation device was modeled as a torque actuator that could apply a single pulse of up to 100 Nm of extension torque. A passive design was modeled as rotational springs about each lower limb joint. Dynamic optimization searched for physiological and device actuation patterns to maximize jump distance. Optimization of the nominal case yielded a 2.27 m jump that captured salient kinematic and kinetic features of human jumps. When the active device was added to the ankle, knee, or hip, jump distance increased to between 2.49 and 2.52 m. Active augmentation of all three joints increased the jump distance to 3.10 m. The passive design increased jump distance to 3.32 m by adding torques of 135, 365, and 297 Nm to the ankle, knee, and hip, respectively. Dynamic optimization can be used to simulate a standing long jump and investigate human-robot interaction. Simulation can aid in the design of performance-enhancing technologies.

  5. Design and Optimization of a 3-Coil Inductive Link for Efficient Wireless Power Transmission.

    PubMed

    Kiani, Mehdi; Jow, Uei-Ming; Ghovanloo, Maysam

    2011-07-14

    Inductive power transmission is widely used to energize implantable microelectronic devices (IMDs), recharge batteries, and energy harvesters. Power transfer efficiency (PTE) and power delivered to the load (PDL) are two key parameters in wireless links, which affect the energy source specifications, heat dissipation, power transmission range, and interference with other devices. To improve the PTE, a 4-coil inductive link has been recently proposed. Through a comprehensive circuit based analysis that can guide a design and optimization scheme, we have shown that despite achieving high PTE at larger coil separations, the 4-coil inductive links fail to achieve a high PDL. Instead, we have proposed a 3-coil inductive power transfer link with comparable PTE over its 4-coil counterpart at large coupling distances, which can also achieve high PDL. We have also devised an iterative design methodology that provides the optimal coil geometries in a 3-coil inductive power transfer link. Design examples of 2-, 3-, and 4-coil inductive links have been presented, and optimized for 13.56 MHz carrier frequency and 12 cm coupling distance, showing PTEs of 15%, 37%, and 35%, respectively. At this distance, the PDL of the proposed 3-coil inductive link is 1.5 and 59 times higher than its equivalent 2- and 4-coil links, respectively. For short coupling distances, however, 2-coil links remain the optimal choice when a high PDL is required, while 4-coil links are preferred when the driver has large output resistance or small power is needed. These results have been verified through simulations and measurements.

  6. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

    of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined

  7. Capacitated vehicle-routing problem model for scheduled solid waste collection and route optimization using PSO algorithm.

    PubMed

    Hannan, M A; Akhtar, Mahmuda; Begum, R A; Basri, H; Hussain, A; Scavino, Edgar

    2018-01-01

    Waste collection widely depends on the route optimization problem that involves a large amount of expenditure in terms of capital, labor, and variable operational costs. Thus, the more waste collection route is optimized, the more reduction in different costs and environmental effect will be. This study proposes a modified particle swarm optimization (PSO) algorithm in a capacitated vehicle-routing problem (CVRP) model to determine the best waste collection and route optimization solutions. In this study, threshold waste level (TWL) and scheduling concepts are applied in the PSO-based CVRP model under different datasets. The obtained results from different datasets show that the proposed algorithmic CVRP model provides the best waste collection and route optimization in terms of travel distance, total waste, waste collection efficiency, and tightness at 70-75% of TWL. The obtained results for 1 week scheduling show that 70% of TWL performs better than all node consideration in terms of collected waste, distance, tightness, efficiency, fuel consumption, and cost. The proposed optimized model can serve as a valuable tool for waste collection and route optimization toward reducing socioeconomic and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Fundamental Limits of Delay and Security in Device-to-Device Communication

    DTIC Science & Technology

    2013-01-01

    systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delayreconstruction tradeoff. The erasure MD...file, and a coding scheme based on erasure compression and Slepian-Wolf binning is presented. The coding scheme is shown to provide a Pareto optimal...ble) codes and random binning strategies that achieve a Pareto optimal delay- reconstruction tradeoff. The erasure MD setup is then used to propose a

  9. Analysis and Optimization of Four-Coil Planar Magnetically Coupled Printed Spiral Resonators.

    PubMed

    Khan, Sadeque Reza; Choi, GoangSeog

    2016-08-03

    High-efficiency power transfer at a long distance can be efficiently established using resonance-based wireless techniques. In contrast to the conventional two-coil-based inductive links, this paper presents a magnetically coupled fully planar four-coil printed spiral resonator-based wireless power-transfer system that compensates the adverse effect of low coupling and improves efficiency by using high quality-factor coils. A conformal architecture is adopted to reduce the transmitter and receiver sizes. Both square architecture and circular architectures are analyzed and optimized to provide maximum efficiency at a certain operating distance. Furthermore, their performance is compared on the basis of the power-transfer efficiency and power delivered to the load. Square resonators can produce higher measured power-transfer efficiency (79.8%) than circular resonators (78.43%) when the distance between the transmitter and receiver coils is 10 mm of air medium at a resonant frequency of 13.56 MHz. On the other hand, circular coils can deliver higher power (443.5 mW) to the load than the square coils (396 mW) under the same medium properties. The performance of the proposed structures is investigated by simulation using a three-layer human-tissue medium and by experimentation.

  10. Intelligent fault recognition strategy based on adaptive optimized multiple centers

    NASA Astrophysics Data System (ADS)

    Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong

    2018-06-01

    For the recognition principle based optimized single center, one important issue is that the data with nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes the data sets with nonlinear separatrix by the multiple centers. Meanwhile, the priority levels are introduced into the multi-objective optimization, including recognition accuracy, the quantity of optimized centers, and distance relationship. According to the characteristics of various data, the priority levels are adjusted to ensure the quantity of optimized centers adaptively and to keep the original accuracy. The proposed method is compared with other methods, including support vector machine (SVM), neural network, and Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on different distribution characteristics of data.

  11. Distance-to-Agreement Investigation of Tomotherapy's Bony Anatomy-Based Autoregistration and Planning Target Volume Contour-Based Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Steve, E-mail: ssuh@coh.org; Schultheiss, Timothy E.

    Purpose: To compare Tomotherapy's megavoltage computed tomography bony anatomy autoregistration with the best achievable registration, assuming no deformation and perfect knowledge of planning target volume (PTV) location. Methods and Materials: Distance-to-agreement (DTA) of the PTV was determined by applying a rigid-body shift to the PTV region of interest of the prostate from its reference position, assuming no deformations. Planning target volume region of interest of the prostate was extracted from the patient archives. The reference position was set by the 6 degrees of freedom (dof)—x, y, z, roll, pitch, and yaw—optimization results from the previous study at this institution. Themore » DTA and the compensating parameters were calculated by the shift of the PTV from the reference 6-dof to the 4-dof—x, y, z, and roll—optimization. In this study, the effectiveness of Tomotherapy's 4-dof bony anatomy–based autoregistration was compared with the idealized 4-dof PTV contour-based optimization. Results: The maximum DTA (maxDTA) of the bony anatomy-based autoregistration was 3.2 ± 1.9 mm, with the maximum value of 8.0 mm. The maxDTA of the contour-based optimization was 1.8 ± 1.3 mm, with the maximum value of 5.7 mm. Comparison of Pearson correlation of the compensating parameters between the 2 4-dof optimization algorithms shows that there is a small but statistically significant correlation in y and z (0.236 and 0.300, respectively), whereas there is very weak correlation in x and roll (0.062 and 0.025, respectively). Conclusions: We find that there is an average improvement of approximately 1 mm in terms of maxDTA on the PTV going from 4-dof bony anatomy-based autoregistration to the 4-dof contour-based optimization. Pearson correlation analysis of the 2 4-dof optimizations suggests that uncertainties due to deformation and inadequate resolution account for much of the compensating parameters, but pitch variation also makes a statistically significant contribution.« less

  12. An optimization approach for observation association with systemic uncertainty applied to electro-optical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.

    2018-06-01

    The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method is demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.

  13. A hybrid genetic algorithm for solving bi-objective traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Ma, Mei; Li, Hecheng

    2017-08-01

    The traveling salesman problem (TSP) is a typical combinatorial optimization problem, in a traditional TSP only tour distance is taken as a unique objective to be minimized. When more than one optimization objective arises, the problem is known as a multi-objective TSP. In the present paper, a bi-objective traveling salesman problem (BOTSP) is taken into account, where both the distance and the cost are taken as optimization objectives. In order to efficiently solve the problem, a hybrid genetic algorithm is proposed. Firstly, two satisfaction degree indices are provided for each edge by considering the influences of the distance and the cost weight. The first satisfaction degree is used to select edges in a “rough” way, while the second satisfaction degree is executed for a more “refined” choice. Secondly, two satisfaction degrees are also applied to generate new individuals in the iteration process. Finally, based on genetic algorithm framework as well as 2-opt selection strategy, a hybrid genetic algorithm is proposed. The simulation illustrates the efficiency of the proposed algorithm.

  14. The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates.

    PubMed

    Seethapathi, Nidhi; Srinivasan, Manoj

    2015-09-01

    Humans do not generally walk at constant speed, except perhaps on a treadmill. Normal walking involves starting, stopping and changing speeds, in addition to roughly steady locomotion. Here, we measure the metabolic energy cost of walking when changing speed. Subjects (healthy adults) walked with oscillating speeds on a constant-speed treadmill, alternating between walking slower and faster than the treadmill belt, moving back and forth in the laboratory frame. The metabolic rate for oscillating-speed walking was significantly higher than that for constant-speed walking (6-20% cost increase for ±0.13-0.27 m s(-1) speed fluctuations). The metabolic rate increase was correlated with two models: a model based on kinetic energy fluctuations and an inverted pendulum walking model, optimized for oscillating-speed constraints. The cost of changing speeds may have behavioural implications: we predicted that the energy-optimal walking speed is lower for shorter distances. We measured preferred human walking speeds for different walking distances and found people preferred lower walking speeds for shorter distances as predicted. Further, analysing published daily walking-bout distributions, we estimate that the cost of changing speeds is 4-8% of daily walking energy budget. © 2015 The Author(s).

  15. The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates

    PubMed Central

    Seethapathi, Nidhi; Srinivasan, Manoj

    2015-01-01

    Humans do not generally walk at constant speed, except perhaps on a treadmill. Normal walking involves starting, stopping and changing speeds, in addition to roughly steady locomotion. Here, we measure the metabolic energy cost of walking when changing speed. Subjects (healthy adults) walked with oscillating speeds on a constant-speed treadmill, alternating between walking slower and faster than the treadmill belt, moving back and forth in the laboratory frame. The metabolic rate for oscillating-speed walking was significantly higher than that for constant-speed walking (6–20% cost increase for ±0.13–0.27 m s−1 speed fluctuations). The metabolic rate increase was correlated with two models: a model based on kinetic energy fluctuations and an inverted pendulum walking model, optimized for oscillating-speed constraints. The cost of changing speeds may have behavioural implications: we predicted that the energy-optimal walking speed is lower for shorter distances. We measured preferred human walking speeds for different walking distances and found people preferred lower walking speeds for shorter distances as predicted. Further, analysing published daily walking-bout distributions, we estimate that the cost of changing speeds is 4–8% of daily walking energy budget. PMID:26382072

  16. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it is advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it retains as an open problem to directly preserve the manifold structure by hashing. In particular, it first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimized. Besides, the off-line learning is extremely time and memory consuming, which needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. The DLLH directly reconstructs the manifold structure in the Hamming space, which learns optimal hash codes to maintain the local linear relationship of data points. To learn discrete locally linear embeddingcodes, we further propose a discrete optimization algorithm with an iterative parameters updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over the state-of-the-art approaches.

  17. The Distance to M51

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert

    2016-07-01

    Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance modulus to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in a optimally selected field of view, and a Bayesian Maximum Likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  18. Optimal case-control matching in practice.

    PubMed

    Cologne, J B; Shibata, Y

    1995-05-01

    We illustrate modern matching techniques and discuss practical issues in defining the closeness of matching for retrospective case-control designs (in which the pool of subjects already exists when the study commences). We empirically compare matching on a balancing score, analogous to the propensity score for treated/control matching, with matching on a weighted distance measure. Although both methods in principle produce balance between cases and controls in the marginal distributions of the matching covariates, the weighted distance measure provides better balance in practice because the balancing score can be poorly estimated. We emphasize the use of optimal matching based on efficient network algorithms. An illustration is based on the design of a case-control study of hepatitis B virus infection as a possible confounder and/or effect modifier of radiation-related primary liver cancer in atomic bomb survivors.

  19. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method of damaged buildings rooted in optimal feature space is put forward on the basis of the traditional object-oriented method. In this new method, ESP (estimate of scale parameter) tool is used to optimize the segmentation of image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after earthquake. The overall extraction accuracy reaches 83.1 %, the kappa coefficient 0.813. The new information extraction method greatly improves the extraction accuracy and efficiency, compared with the traditional object-oriented method, and owns a good promotional value in the information extraction of damaged buildings. In addition, the new method can be used for the information extraction of different-resolution images of damaged buildings after earthquake, then to seek the optimal observation scale of damaged buildings through accuracy evaluation. It is supposed that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  20. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including the lung nodule matching in follow up CT studies, semi-quantitative assessment of lung perfusion, and etc. The purpose of this study is to find the most effective reference point and geometric model based on the lung motion analysis from the CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets in normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as the reference point. To evaluate optimal expansion point, non-linear optimization without constraints was employed. The objective function is sum of distances from the line, consist of the corresponding points between In. and Ex. to the optimal point x. By using the nonlinear optimization, the optimal points was evaluated and compared between reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable to explain the lung expansion model. This lung motion analysis based on vector analysis and non-linear optimization shows that balloon model centered on the center of inertia of lung is most effective geometric model to explain lung expansion by breathing.

  1. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In the fields of researches associated with plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connecting equipment under various constraints. However, what is the lacking of considerations in previous researches is to transform various heuristics or safety regulations into mathematical equations. For example, proper safety distances between equipments have to be complied for preventing dangerous accidents on a complex plant. Moreover, most researches have handled single-floor plant. However, many multi-floor plants have been constructed for the last decade. Therefore, the proper algorithm handling various regulations and multi-floor plant should be developed. In this study, the Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is suggested based on mathematical equations. The objective function is a summation of pipeline and pumping costs. Also, various safety and maintenance issues are transformed into inequality or equality constraints. However, it is really hard to solve this problem due to complex nonlinear constraints. Thus, it is impossible to use conventional MINLP solvers using derivatives of equations. In this study, the Particle Swarm Optimization (PSO) technique is employed. The ethylene oxide plant is illustrated to verify the efficacy of this study.

  2. Modelling optimal location for pre-hospital helicopter emergency medical services.

    PubMed

    Schuurman, Nadine; Bell, Nathaniel J; L'Heureux, Randy; Hameed, Syed M

    2009-05-09

    Increasing the range and scope of early activation/auto launch helicopter emergency medical services (HEMS) may alleviate unnecessary injury mortality that disproportionately affects rural populations. To date, attempts to develop a quantitative framework for the optimal location of HEMS facilities have been absent. Our analysis used five years of critical care data from tertiary health care facilities, spatial data on origin of transport and accurate road travel time catchments for tertiary centres. A location optimization model was developed to identify where the expansion of HEMS would cover the greatest population among those currently underserved. The protocol was developed using geographic information systems (GIS) to measure populations, distances and accessibility to services. Our model determined Royal Inland Hospital (RIH) was the optimal site for an expanded HEMS - based on denominator population, distance to services and historical usage patterns. GIS based protocols for location of emergency medical resources can provide supportive evidence for allocation decisions - especially when resources are limited. In this study, we were able to demonstrate conclusively that a logical choice exists for location of additional HEMS. This protocol could be extended to location analysis for other emergency and health services.

  3. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking

    PubMed Central

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-01-01

    Compared with the fixed fusion structure, the flexible fusion structure with mixed fusion methods has better adjustment performance for the complex air task network systems, and it can effectively help the system to achieve the goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative target, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure including sensors and fusion methods in a given adjustment period. Aiming at this, this paper studies the design of a flexible fusion algorithm by using an optimization learning technology. The purpose is to dynamically determine the sensors’ numbers and the associated sensors to take part in the centralized and distributed fusion processes, respectively, herein termed sensor subsets selection. Firstly, two system performance indexes are introduced. Especially, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243

  4. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for controlling diabetes procedure and preventing the complications. At present, the clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopy analytical techniques. Among various parameters of optical fiber probe used in spectrum measuring, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determine the optimal distance between transmitting fiber and detector is using Monte Carlo simulation to find out the point where most photons come out. But there is a problem. In the epidermal layer there is no artery, vein or capillary vessel. Thus, when photons propagate and interactive with tissue in epidermal layer, no information is given to the photons. A new criterion is proposed to determine the optimal distance, which is named effective path length in this paper. The path length of each photons travelling in dermis is recorded when running Monte-Carlo simulation, which is the effective path length defined above. The sum of effective path length of every photon at each point is calculated. The detector should be place on the point which has most effective path length. Then the optimal measuring distance between transmitting fiber and detector is determined.

  5. Simulation-Based Design for Wearable Robotic Systems: An Optimization Framework for Enhancing a Standing Long Jump

    PubMed Central

    Ong, Carmichael F.; Hicks, Jennifer L.; Delp, Scott L.

    2017-01-01

    Goal Technologies that augment human performance are the focus of intensive research and development, driven by advances in wearable robotic systems. Success has been limited by the challenge of understanding human–robot interaction. To address this challenge, we developed an optimization framework to synthesize a realistic human standing long jump and used the framework to explore how simulated wearable robotic devices might enhance jump performance. Methods A planar, five-segment, seven-degree-of-freedom model with physiological torque actuators, which have variable torque capacity depending on joint position and velocity, was used to represent human musculoskeletal dynamics. An active augmentation device was modeled as a torque actuator that could apply a single pulse of up to 100 Nm of extension torque. A passive design was modeled as rotational springs about each lower limb joint. Dynamic optimization searched for physiological and device actuation patterns to maximize jump distance. Results Optimization of the nominal case yielded a 2.27 m jump that captured salient kinematic and kinetic features of human jumps. When the active device was added to the ankle, knee, or hip, jump distance increased to between 2.49 and 2.52 m. Active augmentation of all three joints increased the jump distance to 3.10 m. The passive design increased jump distance to 3.32 m by adding torques of 135 Nm, 365 Nm, and 297 Nm to the ankle, knee, and hip, respectively. Conclusion Dynamic optimization can be used to simulate a standing long jump and investigate human-robot interaction. Significance Simulation can aid in the design of performance-enhancing technologies. PMID:26258930

  6. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.

  7. On Consistency Test Method of Expert Opinion in Ecological Security Assessment

    PubMed Central

    Wang, Lihong

    2017-01-01

    To reflect the initiative design and initiative of human security management and safety warning, ecological safety assessment is of great value. In the comprehensive evaluation of regional ecological security with the participation of experts, the expert’s individual judgment level, ability and the consistency of the expert’s overall opinion will have a very important influence on the evaluation result. This paper studies the consistency measure and consensus measure based on the multiplicative and additive consistency property of fuzzy preference relation (FPR). We firstly propose the optimization methods to obtain the optimal multiplicative consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure by computing the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure by computing the distance between the original collective judgment and the optimal collective estimation. In the end, we make a case study on ecological security for five cities. Result shows that the optimal FPRs are helpful in measuring the consistency degree of individual judgment and the consensus degree of collective judgment. PMID:28869570

  8. On Consistency Test Method of Expert Opinion in Ecological Security Assessment.

    PubMed

    Gong, Zaiwu; Wang, Lihong

    2017-09-04

    To reflect the initiative design and initiative of human security management and safety warning, ecological safety assessment is of great value. In the comprehensive evaluation of regional ecological security with the participation of experts, the expert's individual judgment level, ability and the consistency of the expert's overall opinion will have a very important influence on the evaluation result. This paper studies the consistency measure and consensus measure based on the multiplicative and additive consistency property of fuzzy preference relation (FPR). We firstly propose the optimization methods to obtain the optimal multiplicative consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure by computing the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure by computing the distance between the original collective judgment and the optimal collective estimation. In the end, we make a case study on ecological security for five cities. Result shows that the optimal FPRs are helpful in measuring the consistency degree of individual judgment and the consensus degree of collective judgment.

  9. Machine learning enhanced optical distance sensor

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, N. A.

    2018-01-01

    Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot with varying spot sizes is viewed by an off-axis camera and the spot size data is processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels that are the actual target distance values to train a machine learning model. The optimized training model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces a training set and testing set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 improvement over our prior sensor demonstration without the use of machine learning. Applications for the proposed sensor include industrial scenario distance sensing where target material specific training models can be generated to realize low <1% measurement error distance measurements.

  10. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform

    PubMed Central

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-01-01

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lays on the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform’s mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument’s working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform. PMID:27869722

  11. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform.

    PubMed

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-11-18

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lays on the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin; Matsumoto, Keiji; Quantum Computation and Information Project, JST, Hongo 5-28-3, Bunkyo-ku, Tokyo 113-0033

    We construct the optimal one to two cloning transformation for the family of displaced thermal equilibrium states of a harmonic oscillator, with a fixed and known temperature. The transformation is Gaussian and it is optimal with respect to the figure of merit based on the joint output state and norm distance. The proof of the result is based on the equivalence between the optimal cloning problem and that of optimal amplification of Gaussian states which is then reduced to an optimization problem for diagonal states of a quantum oscillator. A key concept in finding the optimum is that of stochasticmore » ordering which plays a similar role in the purely classical problem of Gaussian cloning. The result is then extended to the case of n to m cloning of mixed Gaussian states.« less

  13. Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks

    PubMed Central

    Ahmed, Khandakar; Gregory, Mark A.

    2015-01-01

    The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most of the state-of-the-art distributed data centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric based similarity searching (DCSMSS) is proposed. DCSMSS takes motivation from vector distance index, called iDistance, in order to transform the issue of similarity searching into the problem of an interval search in one dimension. In addition, a sector based distance routing algorithm is used to efficiently route messages. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. PMID:25751081

  14. Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.

    PubMed

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P

    2009-07-29

    Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.

  15. Molecular Taxonomy of Phytopathogenic Fungi: A Case Study in Peronospora

    PubMed Central

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M. Teresa; Martín, María P.

    2009-01-01

    Background Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Methodology Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. Conclusions A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence. PMID:19641601

  16. Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.

    PubMed

    Khoo, Y; Singer, A; Cowburn, D

    2017-07-01

    We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well-known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, often the NOE distance restraints are too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and axes of the principal-axis-frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we proposed two algorithms-RDC-SOS and RDC-NOE-SOS, that have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge this is the first time SOS relaxation is introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.

  17. Fabrication of polymer microlenses on single mode optical fibers for light coupling

    NASA Astrophysics Data System (ADS)

    Zaboub, Monsef; Guessoum, Assia; Demagh, Nacer-Eddine; Guermat, Abdelhak

    2016-05-01

    In this paper, we present a technique for producing fibers optics micro-collimators composed of polydimethylsiloxane PDMS microlenses of different radii of curvature. The waist and working distance values obtained enable the optimization of optical coupling between optical fibers, fibers and optical sources, and fibers and detectors. The principal is based on the injection of polydimethylsiloxane (PDMS) into a conical micro-cavity chemically etched at the end of optical fibers. A spherical microlens is then formed that is self-centered with respect to the axis of the fiber. Typically, an optimal radius of curvature of 10.08 μm is obtained. This optimized micro-collimator is characterized by a working distance of 19.27 μm and a waist equal to 2.28 μm for an SMF 9/125 μm fiber. The simulation and experimental results reveal an optical coupling efficiency that can reach a value of 99.75%.

  18. On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification

    NASA Astrophysics Data System (ADS)

    Aygün, Eser; Oommen, B. John; Cataltepe, Zehra

    Syntactic methods in pattern recognition have been used extensively in bioinformatics, and in particular, in the analysis of gene and protein expressions, and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.

  19. An automated method for modeling proteins on known templates using distance geometry.

    PubMed

    Srinivasan, S; March, C J; Sudarsanam, S

    1993-02-01

    We present an automated method incorporated into a software package, FOLDER, to fold a protein sequence on a given three-dimensional (3D) template. Starting with the sequence alignment of a family of homologous proteins, tertiary structures are modeled using the known 3D structure of one member of the family as a template. Homologous interatomic distances from the template are used as constraints. For nonhomologous regions in the model protein, the lower and the upper bounds for the interatomic distances are imposed by steric constraints and the globular dimensions of the template, respectively. Distance geometry is used to embed an ensemble of structures consistent with these distance bounds. Structures are selected from this ensemble based on minimal distance error criteria, after a penalty function optimization step. These structures are then refined using energy optimization methods. The method is tested by simulating the alpha-chain of horse hemoglobin using the alpha-chain of human hemoglobin as the template and by comparing the generated models with the crystal structure of the alpha-chain of horse hemoglobin. We also test the packing efficiency of this method by reconstructing the atomic positions of the interior side chains beyond C beta atoms of a protein domain from a known 3D structure. In both test cases, models retain the template constraints and any additionally imposed constraints while the packing of the interior residues is optimized with no short contacts or bond deformations. To demonstrate the use of this method in simulating structures of proteins with nonhomologous disulfides, we construct a model of murine interleukin (IL)-4 using the NMR structure of human IL-4 as the template. The resulting geometry of the nonhomologous disulfide in the model structure for murine IL-4 is consistent with standard disulfide geometry.

  20. Rise and Shock: Optimal Defibrillator Placement in a High-rise Building.

    PubMed

    Chan, Timothy C Y

    2017-01-01

    Out-of-hospital cardiac arrests (OHCA) in high-rise buildings experience lower survival and longer delays until paramedic arrival. Use of publicly accessible automated external defibrillators (AED) can improve survival, but "vertical" placement has not been studied. We aim to determine whether elevator-based or lobby-based AED placement results in shorter vertical distance travelled ("response distance") to OHCAs in a high-rise building. We developed a model of a single-elevator, n-floor high-rise building. We calculated and compared the average distance from AED to floor of arrest for the two AED locations. We modeled OHCA occurrences using floor-specific Poisson processes, the risk of OHCA on the ground floor (λ 1 ) and the risk on any above-ground floor (λ). The elevator was modeled with an override function enabling direct travel to the target floor. The elevator location upon override was modeled as a discrete uniform random variable. Calculations used the laws of probability. Elevator-based AED placement had shorter average response distance if the number of floors (n) in the building exceeded three quarters of the ratio of ground-floor OHCA risk to above-ground floor risk (λ 1 /λ) plus one half (n ≥ 3λ 1 /4λ + 0.5). Otherwise, a lobby-based AED had shorter average response distance. If OHCA risk on each floor was equal, an elevator-based AED had shorter average response distance. Elevator-based AEDs travel less vertical distance to OHCAs in tall buildings or those with uniform vertical risk, while lobby-based AEDs travel less vertical distance in buildings with substantial lobby, underground, and nearby street-level traffic and OHCA risk.

  1. Web page sorting algorithm based on query keyword distance relation

    NASA Astrophysics Data System (ADS)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize the problem of page sorting, according to the search keywords in the web page in the relationship between the characteristics of the proposed query keywords clustering ideas. And it is converted into the degree of aggregation of the search keywords in the web page. Based on the PageRank algorithm, the clustering degree factor of the query keyword is added to make it possible to participate in the quantitative calculation. This paper proposes an improved algorithm for PageRank based on the distance relation between search keywords. The experimental results show the feasibility and effectiveness of the method.

  2. Ant Navigation: Fractional Use of the Home Vector

    PubMed Central

    Cheung, Allen; Hiby, Lex; Narendra, Ajay

    2012-01-01

    Home is a special location for many animals, offering shelter from the elements, protection from predation, and a common place for gathering of the same species. Not surprisingly, many species have evolved efficient, robust homing strategies, which are used as part of each and every foraging journey. A basic strategy used by most animals is to take the shortest possible route home by accruing the net distances and directions travelled during foraging, a strategy well known as path integration. This strategy is part of the navigation toolbox of ants occupying different landscapes. However, when there is a visual discrepancy between test and training conditions, the distance travelled by animals relying on the path integrator varies dramatically between species: from 90% of the home vector to an absolute distance of only 50 cm. We here ask what the theoretically optimal balance between PI-driven and landmark-driven navigation should be. In combination with well-established results from optimal search theory, we show analytically that this fractional use of the home vector is an optimal homing strategy under a variety of circumstances. Assuming there is a familiar route that an ant recognizes, theoretically optimal search should always begin at some fraction of the home vector, depending on the region of familiarity. These results are shown to be largely independent of the search algorithm used. Ant species from different habitats appear to have optimized their navigation strategy based on the availability and nature of navigational information content in their environment. PMID:23209744

  3. Femtosecond frequency comb based distance measurement in air.

    PubMed

    Balling, Petr; Kren, Petr; Masika, Pavel; van den Berg, S A

    2009-05-25

    Interferometric measurement of distance using a femtosecond frequency comb is demonstrated and compared with a counting interferometer displacement measurement. A numerical model of pulse propagation in air is developed and the results are compared with experimental data for short distances. The relative agreement for distance measurement in known laboratory conditions is better than 10(-7). According to the model, similar precision seems feasible even for long-distance measurement in air if conditions are sufficiently known. It is demonstrated that the relative width of the interferogram envelope even decreases with the measured length, and a fringe contrast higher than 90% could be obtained for kilometer distances in air, if optimal spectral width for that length and wavelength is used. The possibility of comb radiation delivery to the interferometer by an optical fiber is shown by model and experiment, which is important from a practical point of view.

  4. Optimal installation locations for automated external defibrillators in Taipei 7-Eleven stores: using GIS and a genetic algorithm with a new stirring operator.

    PubMed

    Huang, Chung-Yuan; Wen, Tzai-Hung

    2014-01-01

    Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.

  5. Optimization of tribological behaviour on Al- coconut shell ash composite at elevated temperature

    NASA Astrophysics Data System (ADS)

    Siva Sankara Raju, R.; Panigrahi, M. K.; Ganguly, R. I.; Srinivasa Rao, G.

    2018-02-01

    In this study, determine the tribological behaviour of composite at elevated temperature i.e. 50 - 150 °C. The aluminium matrix composite (AMC) are prepared with compo casting route by volume of reinforcement of coconut shell ash (CSA) such as 5, 10 and 15%. Mechanical properties of composite has enhances with increasing volume of CSA. This study details to optimization of wear behaviour of composite at elevated temperatures. The influencing parameters such as temperature, sliding velocity and sliding distance are considered. The outcome response is wear rate (mm3/m) and coefficient of friction. The experiments are designed based on Taguchi [L9] array. All the experiments are considered as constant load of 10N. Analysis of variance (ANOVA) revealed that temperature is highest influencing factor followed by sliding velocity and sliding distance. Similarly, sliding velocity is most influencing factor followed by temperature and distance on coefficient of friction (COF). Finally, corroborates analytical and regression equation values by confirmation test.

  6. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    PubMed

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Neural Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of accuracy localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.

  7. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    PubMed Central

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Neural Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of accuracy localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495

  8. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining andmore » labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.« less

  9. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE PAGES

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke; ...

    2017-06-29

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining andmore » labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.« less

  10. Influence of geometry and material of insulating posts on particle trapping using positive dielectrophoresis.

    PubMed

    Pesch, Georg R; Du, Fei; Baune, Michael; Thöming, Jorg

    2017-02-03

    Insulator-based dielectrophoresis (iDEP) is a powerful particle analysis technique based on electric field scattering at material boundaries which can be used, for example, for particle filtration or to achieve chromatographic separation. Typical devices consist of microchannels containing an array of posts but large scale application was also successfully tested. Distribution and magnitude of the generated field gradients and thus the possibility to trap particles depends apart from the applied field strength on the material combination between post and surrounding medium and on the boundary shape. In this study we simulate trajectories of singe particles under the influence of positive DEP that are flowing past one single post due to an external fluid flow. We analyze the influence of key parameters (excitatory field strength, fluid flow velocity, particle size, distance from the post, post size, and cross-sectional geometry) on two benchmark criteria, i.e., a critical initial distance from the post so that trapping still occurs (at fixed particle size) and a critical minimum particle size necessary for trapping (at fixed initial distance). Our approach is fundamental and not based on finding an optimal geometry of insulating structures but rather aims to understand the underlying phenomena of particle trapping. A sensitivity analysis reveals that electric field strength and particle size have the same impact, as have fluid flow velocity and post dimension. Compared to these parameters the geometry of the post's cross-section (i.e. rhomboidal or elliptical with varying width-to-height or aspect ratio) has a rather small influence but can be used to optimize the trapping efficiency at a specific distance. We hence found an ideal aspect ratio for trapping for each base geometry and initial distance to the tip which is independent of the other parameters. As a result we present design criteria which we believe to be a valuable addition to the existing literature. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Junhwan; Hwang, Sungui; Park, Kyihwan, E-mail: khpark@gist.ac.kr

    To utilize a time-of-flight-based laser scanner as a distance measurement sensor, the measurable distance and accuracy are the most important performance parameters to consider. For these purposes, the optical system and electronic signal processing of the laser scanner should be optimally designed in order to reduce a distance error caused by the optical crosstalk and wide dynamic range input. Optical system design for removing optical crosstalk problem is proposed in this work. Intensity control is also considered to solve the problem of a phase-shift variation in the signal processing circuit caused by object reflectivity. The experimental results for optical systemmore » and signal processing design are performed using 3D measurements.« less

  12. Optimal design of dampers within seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Qian, Hui; Song, Wali; Wang, Liqiang

    2009-07-01

    An improved multi-objective genetic algorithm for structural passive control system optimization is proposed. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. For a constrained problem, the dominance-based penalty function method is advanced, containing information on an individual's status (feasible or infeasible), position in a search space, and distance from a Pareto optimal set. The proposed approach is used for the optimal designs of a six-storey building with shape memory alloy dampers subjected to earthquake. The number and position of dampers are chosen as the design variables. The number of dampers and peak relative inter-storey drift are considered as the objective functions. Numerical results generate a set of non-dominated solutions.

  13. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509

  14. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  15. Optimizing a desirable fare structure for a bus-subway corridor

    PubMed Central

    Liu, Bing-Zheng; Ge, Ying-En; Cao, Kai; Jiang, Xi; Meng, Lingyun; Liu, Ding; Gao, Yunfeng

    2017-01-01

    This paper aims to optimize a desirable fare structure for the public transit service along a bus-subway corridor with the consideration of those factors related to equity in trip, including travel distance and comfort level. The travel distance factor is represented by the distance-based fare strategy, which is an existing differential strategy. The comfort level one is considered in the area-based fare strategy which is a new differential strategy defined in this paper. Both factors are referred to by the combined fare strategy which is composed of distance-based and area-based fare strategies. The flat fare strategy is applied to determine a reference level of social welfare and obtain the general passenger flow along transit lines, which is used to divide areas or zones along the corridor. This problem is formulated as a bi-level program, of which the upper level maximizes the social welfare and the lower level capturing traveler choice behavior is a variable-demand stochastic user equilibrium assignment model. A genetic algorithm is applied to solve the bi-level program while the method of successive averages is adopted to solve the lower-level model. A series of numerical experiments are carried out to illustrate the performance of the models and solution methods. Numerical results indicate that all three differential fare strategies play a better role in enhancing the social welfare than the flat fare strategy and that the fare structure under the combined fare strategy generates the highest social welfare and the largest resulting passenger demand, which implies that the more equity factors a differential fare strategy involves the more desirable fare structure the strategy has. PMID:28981508

  16. Optimizing a desirable fare structure for a bus-subway corridor.

    PubMed

    Liu, Bing-Zheng; Ge, Ying-En; Cao, Kai; Jiang, Xi; Meng, Lingyun; Liu, Ding; Gao, Yunfeng

    2017-01-01

    This paper aims to optimize a desirable fare structure for the public transit service along a bus-subway corridor with the consideration of those factors related to equity in trip, including travel distance and comfort level. The travel distance factor is represented by the distance-based fare strategy, which is an existing differential strategy. The comfort level one is considered in the area-based fare strategy which is a new differential strategy defined in this paper. Both factors are referred to by the combined fare strategy which is composed of distance-based and area-based fare strategies. The flat fare strategy is applied to determine a reference level of social welfare and obtain the general passenger flow along transit lines, which is used to divide areas or zones along the corridor. This problem is formulated as a bi-level program, of which the upper level maximizes the social welfare and the lower level capturing traveler choice behavior is a variable-demand stochastic user equilibrium assignment model. A genetic algorithm is applied to solve the bi-level program while the method of successive averages is adopted to solve the lower-level model. A series of numerical experiments are carried out to illustrate the performance of the models and solution methods. Numerical results indicate that all three differential fare strategies play a better role in enhancing the social welfare than the flat fare strategy and that the fare structure under the combined fare strategy generates the highest social welfare and the largest resulting passenger demand, which implies that the more equity factors a differential fare strategy involves the more desirable fare structure the strategy has.

  17. Physics-based method to validate and repair flaws in protein structures

    PubMed Central

    Martin, Osvaldo A.; Arnautova, Yelena A.; Icazatti, Alejandro A.; Scheraga, Harold A.; Vila, Jorge A.

    2013-01-01

    A method that makes use of information provided by the combination of 13Cα and 13Cβ chemical shifts, computed at the density functional level of theory, enables one to (i) validate, at the residue level, conformations of proteins and detect backbone or side-chain flaws by taking into account an ensemble average of chemical shifts over all of the conformations used to represent a protein, with a sensitivity of ∼90%; and (ii) provide a set of (χ1/χ2) torsional angles that leads to optimal agreement between the observed and computed 13Cα and 13Cβ chemical shifts. The method has been incorporated into the CheShift-2 protein validation Web server. To test the reliability of the provided set of (χ1/χ2) torsional angles, the side chains of all reported conformations of five NMR-determined protein models were refined by a simple routine, without using NOE-based distance restraints. The refinement of each of these five proteins leads to optimal agreement between the observed and computed 13Cα and 13Cβ chemical shifts for ∼94% of the flaws, on average, without introducing a significantly large number of violations of the NOE-based distance restraints for a distance range ≤ 0.5 Ǻ, in which the largest number of distance violations occurs. The results of this work suggest that use of the provided set of (χ1/χ2) torsional angles together with other observables, such as NOEs, should lead to a fast and accurate refinement of the side-chain conformations of protein models. PMID:24082119

  18. Physics-based method to validate and repair flaws in protein structures.

    PubMed

    Martin, Osvaldo A; Arnautova, Yelena A; Icazatti, Alejandro A; Scheraga, Harold A; Vila, Jorge A

    2013-10-15

    A method that makes use of information provided by the combination of (13)C(α) and (13)C(β) chemical shifts, computed at the density functional level of theory, enables one to (i) validate, at the residue level, conformations of proteins and detect backbone or side-chain flaws by taking into account an ensemble average of chemical shifts over all of the conformations used to represent a protein, with a sensitivity of ∼90%; and (ii) provide a set of (χ1/χ2) torsional angles that leads to optimal agreement between the observed and computed (13)C(α) and (13)C(β) chemical shifts. The method has been incorporated into the CheShift-2 protein validation Web server. To test the reliability of the provided set of (χ1/χ2) torsional angles, the side chains of all reported conformations of five NMR-determined protein models were refined by a simple routine, without using NOE-based distance restraints. The refinement of each of these five proteins leads to optimal agreement between the observed and computed (13)C(α) and (13)C(β) chemical shifts for ∼94% of the flaws, on average, without introducing a significantly large number of violations of the NOE-based distance restraints for a distance range ≤ 0.5 , in which the largest number of distance violations occurs. The results of this work suggest that use of the provided set of (χ1/χ2) torsional angles together with other observables, such as NOEs, should lead to a fast and accurate refinement of the side-chain conformations of protein models.

  19. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    Virtual network mapping problem, maps different virtual networks onto the substrate network is an extremely challenging work. This paper proposes a constraint optimization based mapping method for solving virtual network mapping problem. This method divides the problem into two phases, node mapping phase and link mapping phase, which are all NP-hard problems. Node mapping algorithm and link mapping algorithm are proposed for solving node mapping phase and link mapping phase, respectively. Node mapping algorithm adopts the thinking of greedy algorithm, mainly considers two factors, available resources which are supplied by the nodes and distance between the nodes. Link mapping algorithm is based on the result of node mapping phase, adopts the thinking of distributed constraint optimization method, which can guarantee to obtain the optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and results show that the method performs very well.

  20. Research on vehicle routing optimization for the terminal distribution of B2C E-commerce firms

    NASA Astrophysics Data System (ADS)

    Zhang, Shiyun; Lu, Yapei; Li, Shasha

    2018-05-01

    In this paper, we established a half open multi-objective optimization model for the vehicle routing problem of B2C (business-to-customer) E-Commerce firms. To minimize the current transport distance as well as the disparity between the excepted shipments and the transport capacity in the next distribution, we applied the concept of dominated solution and Pareto solutions to the standard particle swarm optimization and proposed a MOPSO (multi-objective particle swarm optimization) algorithm to support the model. Besides, we also obtained the optimization solution of MOPSO algorithm based on data randomly generated through the system, which verified the validity of the model.

  1. The median problems on linear multichromosomal genomes: graph representation and fast exact solutions.

    PubMed

    Xu, Andrew Wei

    2010-09-01

    In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allow us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case, and this difficulty has been underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu .

  2. Content Based Image Retrieval based on Wavelet Transform coefficients distribution

    PubMed Central

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features by using distribution of coefficients obtained by building signatures from the distribution of wavelet transform. The research is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013

  3. Efficient distribution of toy products using ant colony optimization algorithm

    NASA Astrophysics Data System (ADS)

    Hidayat, S.; Nurpraja, C. A.

    2017-12-01

    CV Atham Toys (CVAT) produces wooden toys and furniture, comprises 13 small and medium industries. CVAT always attempt to deliver customer orders on time but delivery costs are high. This is because of inadequate infrastructure such that delivery routes are long, car maintenance costs are high, while fuel subsidy by the government is still temporary. This study seeks to minimize the cost of product distribution based on the shortest route using one of five Ant Colony Optimization (ACO) algorithms to solve the Vehicle Routing Problem (VRP). This study concludes that the best of the five is the Ant Colony System (ACS) algorithm. The best route in 1st week gave a total distance of 124.11 km at a cost of Rp 66,703.75. The 2nd week route gave a total distance of 132.27 km at a cost of Rp 71,095.13. The 3rd week best route gave a total distance of 122.70 km with a cost of Rp 65,951.25. While the 4th week gave a total distance of 132.27 km at a cost of Rp 74,083.63. Prior to this study there was no effort to calculate these figures.

  4. Utilizing Diffuse Reflection to Increase the Efficiency of Luminescent Solar Concentrators

    NASA Astrophysics Data System (ADS)

    Bowser, Seth; Weible, Seth; Solomon, Joel; Schrecengost, Jonathan; Wittmershaus, Bruce

    A luminescent solar concentrator (LSC) consists of a high index solid plate containing a fluorescent material that converts sunlight into fluorescence. Utilizing total internal reflection, the LSC collects and concentrates the fluorescence at the plate's edges where it is converted into electricity via photovoltaic solar cells. The lower production costs of LSCs make them an attractive alternative to photovoltaic solar cells. To optimize an LSC's efficiency, a white diffusive surface (background) is positioned behind it. The background allows sunlight transmitted in the first pass to be reflected back through the LSC providing a second chance for absorption. Our research examines how the LSC's performance is affected by changing the distance between the white background and the LSC. An automated linear motion apparatus was engineered to precisely measure this distance and the LSC's electrical current, simultaneously. LSC plates, with and without the presence of fluorescent material and in an isolated environment, showed a maximum current at a distance greater than zero. Further experimentation has proved that the optimal distance results from the background's optical properties and how the reflected light enters the LSC. This material is based upon work supported by the National Science Foundation under Grant Number NSF-ECCS-1306157.

  5. Design optimization of Cassegrain telescope for remote explosive trace detection

    NASA Astrophysics Data System (ADS)

    Bhavsar, Kaushalkumar; Eseller, K. E.; Prabhu, Radhakrishna

    2017-10-01

    The past three years have seen a global increase in explosive-based terror attacks. The widespread use of improvised explosives and anti-personnel landmines have caused thousands of civilian casualties across the world. Current scenario of globalized civilization threat from terror drives the need to improve the performance and capabilities of standoff explosive trace detection devices to be able to anticipate the threat from a safe distance to prevent explosions and save human lives. In recent years, laser-induced breakdown spectroscopy (LIBS) is an emerging approach for material or elemental investigations. All the principle elements on the surface are detectable in a single measurement using LIBS and hence, a standoff LIBS based method has been used to remotely detect explosive traces from several to tens of metres distance. The most important component of LIBS based standoff explosive trace detection system is the telescope which enables remote identification of chemical constituents of the explosives. However, in a compact LIBS system where Cassegrain telescope serves the purpose of laser beam delivery and light collection, need a design optimization of the telescope system. This paper reports design optimization of a Cassegrain telescope to detect explosives remotely for LIBS system. A design optimization of Schmidt corrector plate was carried out for Nd:YAG laser. Effect of different design parameters was investigated to eliminate spherical aberration in the system. Effect of different laser wavelengths on the Schmidt corrector design was also investigated for the standoff LIBS system.

  6. Frustration in protein elastic network models

    NASA Astrophysics Data System (ADS)

    Lezon, Timothy; Bahar, Ivet

    2010-03-01

    Elastic network models (ENMs) are widely used for studying the equilibrium dynamics of proteins. The most common approach in ENM analysis is to adopt a uniform force constant or a non-specific distance dependent function to represent the force constant strength. Here we discuss the influence of sequence and structure in determining the effective force constants between residues in ENMs. Using a novel method based on entropy maximization, we optimize the force constants such that they exactly reporduce a subset of experimentally determined pair covariances for a set of proteins. We analyze the optimized force constants in terms of amino acid types, distances, contact order and secondary structure, and we demonstrate that including frustrated interactions in the ENM is essential for accurately reproducing the global modes in the middle of the frequency spectrum.

  7. Adaptive density trajectory cluster based on time and space distance

    NASA Astrophysics Data System (ADS)

    Liu, Fagui; Zhang, Zhijie

    2017-10-01

    There are some hotspot problems remaining in trajectory cluster for discovering mobile behavior regularity, such as the computation of distance between sub trajectories, the setting of parameter values in cluster algorithm and the uncertainty/boundary problem of data set. As a result, based on the time and space, this paper tries to define the calculation method of distance between sub trajectories. The significance of distance calculation for sub trajectories is to clearly reveal the differences in moving trajectories and to promote the accuracy of cluster algorithm. Besides, a novel adaptive density trajectory cluster algorithm is proposed, in which cluster radius is computed through using the density of data distribution. In addition, cluster centers and number are selected by a certain strategy automatically, and uncertainty/boundary problem of data set is solved by designed weighted rough c-means. Experimental results demonstrate that the proposed algorithm can perform the fuzzy trajectory cluster effectively on the basis of the time and space distance, and obtain the optimal cluster centers and rich cluster results information adaptably for excavating the features of mobile behavior in mobile and sociology network.

  8. Optimizing wind farm layout via LES-calibrated geometric models inclusive of wind direction and atmospheric stability effects

    NASA Astrophysics Data System (ADS)

    Archer, Cristina; Ghaisas, Niranjan

    2015-04-01

    The energy generation at a wind farm is controlled primarily by the average wind speed at hub height. However, two other factors impact wind farm performance: 1) the layout of the wind turbines, in terms of spacing between turbines along and across the prevailing wind direction; staggering or aligning consecutive rows; angles between rows, columns, and prevailing wind direction); and 2) atmospheric stability, which is a measure of whether vertical motion is enhanced (unstable), suppressed (stable), or neither (neutral). Studying both factors and their complex interplay with Large-Eddy Simulation (LES) is a valid approach because it produces high-resolution, 3D, turbulent fields, such as wind velocity, temperature, and momentum and heat fluxes, and it properly accounts for the interactions between wind turbine blades and the surrounding atmospheric and near-surface properties. However, LES are computationally expensive and simulating all the possible combinations of wind directions, atmospheric stabilities, and turbine layouts to identify the optimal wind farm configuration is practically unfeasible today. A new, geometry-based method is proposed that is computationally inexpensive and that combines simple geometric quantities with a minimal number of LES simulations to identify the optimal wind turbine layout, taking into account not only the actual frequency distribution of wind directions (i.e., wind rose) at the site of interest, but also atmospheric stability. The geometry-based method is calibrated with LES of the Lillgrund wind farm conducted with the Software for Offshore/onshore Wind Farm Applications (SOWFA), based on the open-access OpenFOAM libraries. The geometric quantities that offer the best correlations (>0.93) with the LES results are the blockage ratio, defined as the fraction of the swept area of a wind turbine that is blocked by an upstream turbine, and the blockage distance, the weighted distance from a given turbine to all upstream turbines that can potentially block it. Based on blockage ratio and distance, an optimization procedure is proposed that explores many different layout variables and identifies, given actual wind direction and stability distributions, the optimal wind farm layout, i.e., the one with the highest wind energy production. The optimization procedure is applied to both the calibration wind farm (Lillgrund) and a test wind farm (Horns Rev) and a number of layouts more efficient than the existing ones are identified. The optimization procedure based on geometric models proposed here can be applied very quickly (within a few hours) to any proposed wind farm, once enough information on wind direction frequency and, if available, atmospheric stability frequency has been gathered and once the number of turbines and/or the areal extent of the wind farm have been identified.

  9. Identifying protein complexes based on brainstorming strategy.

    PubMed

    Shen, Xianjun; Zhou, Jin; Yi, Li; Hu, Xiaohua; He, Tingting; Yang, Jincai

    2016-11-01

    Protein complexes comprising of interacting proteins in protein-protein interaction network (PPI network) play a central role in driving biological processes within cells. Recently, more and more swarm intelligence based algorithms to detect protein complexes have been emerging, which have become the research hotspot in proteomics field. In this paper, we propose a novel algorithm for identifying protein complexes based on brainstorming strategy (IPC-BSS), which is integrated into the main idea of swarm intelligence optimization and the improved K-means algorithm. Distance between the nodes in PPI network is defined by combining the network topology and gene ontology (GO) information. Inspired by human brainstorming process, IPC-BSS algorithm firstly selects the clustering center nodes, and then they are separately consolidated with the other nodes with short distance to form initial clusters. Finally, we put forward two ways of updating the initial clusters to search optimal results. Experimental results show that our IPC-BSS algorithm outperforms the other classic algorithms on yeast and human PPI networks, and it obtains many predicted protein complexes with biological significance. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Research on droplet size measurement of impulse antiriots water cannon based on sheet laser

    NASA Astrophysics Data System (ADS)

    Fa-dong, Zhao; Hong-wei, Zhuang; Ren-jun, Zhan

    2014-04-01

    As a new-style counter-personnel non-lethal weapon, it is the non-steady characteristic and large water mist field that increase the difficulty of measuring the droplet size distribution of impulse anti-riots water cannon which is the most important index to examine its tactical and technology performance. A method based on the technologies of particle scattering, sheet laser imaging and high speed handling was proposed and an universal droplet size measuring algorithm was designed and verified. According to this method, the droplet size distribution was measured. The measuring results of the size distribution under the same position with different timescale, the same axial distance with different radial distance, the same radial distance with different axial distance were analyzed qualitatively and some rational cause was presented. The droplet size measuring method proposed in this article provides a scientific and effective experiment method to ascertain the technical and tactical performance and optimize the relative system performance.

  11. Directional coupler based on an elliptic cylindrical nanowire hybrid plasmonic waveguide.

    PubMed

    Zeng, Dezheng; Zhang, Li; Xiong, Qiulin; Ma, Junxian

    2018-06-01

    We present what we believe is a novel directional coupler based on an elliptic cylindrical nanowire hybrid plasmonic waveguide. Using the finite element method, the electric field distributions of y-polarized symmetric and antisymmetric modes of the coupler are compared, and the coupling and transmission characteristics are analyzed; then the optimized separation distance between the two parallel waveguides, 100 nm, is obtained. This optimized architecture fits in the weak coupling regime. Furthermore, the energy transfer is studied, and the performances of the directional coupler are evaluated, including excess loss, coupling degree, and directionality. The results show that when the separation distance is set to 100 nm, the coupling length reaches the shorter value of 1.646 μm, and the propagation loss is as low as 0.076 dB/μm, and the maximum energy transfer can reach 80%. The proposed directional coupler features good energy confinement, ultracompact and low propagation loss, which has potential application in dense photonic-integrated circuits and other photonic devices.

  12. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10(-4) on the available data sets.

  13. Applying genetic algorithms to set the optimal combination of forest fire related variables and model forest fire susceptibility based on data mining models. The case of Dayu County, China.

    PubMed

    Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong

    2018-07-15

    The main objective of the present study was to utilize Genetic Algorithms (GA) in order to obtain the optimal combination of forest fire related variables and apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) was used to produce a forest fire susceptibility map for the Dayu County which is located in southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest fire related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network and distance to road network. The Natural Break and the Certainty Factor method were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide about their usability. The optimal set of variables, determined by the GA limited the number of variables into eight excluding from the analysis, aspect, land use, heat load index, distance to river network and mean annual rainfall. The performance of the forest fire models was evaluated by using the area under the Receiver Operating Characteristic curve (ROC-AUC) based on the validation dataset. Overall, the RF models gave higher AUC values. Also the results showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, however higher than the original SVM (0.7148) model. The study highlights the significance of feature selection techniques in forest fire susceptibility, whereas data mining methods could be considered as a valid approach for forest fire susceptibility modeling. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.

    PubMed

    Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar

    2017-03-01

    Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO 2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO 2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiment, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and analytic hierarchy process (AHP). Optimized geometric layout and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.

  16. Ground-to-satellite quantum teleportation.

    PubMed

    Ren, Ji-Gang; Xu, Ping; Yong, Hai-Lin; Zhang, Liang; Liao, Sheng-Kai; Yin, Juan; Liu, Wei-Yue; Cai, Wen-Qi; Yang, Meng; Li, Li; Yang, Kui-Xing; Han, Xuan; Yao, Yong-Qiang; Li, Ji; Wu, Hai-Yan; Wan, Song; Liu, Lei; Liu, Ding-Quan; Kuang, Yao-Wu; He, Zhi-Ping; Shang, Peng; Guo, Cheng; Zheng, Ru-Hua; Tian, Kai; Zhu, Zhen-Cai; Liu, Nai-Le; Lu, Chao-Yang; Shu, Rong; Chen, Yu-Ao; Peng, Cheng-Zhi; Wang, Jian-Yu; Pan, Jian-Wei

    2017-09-07

    An arbitrary unknown quantum state cannot be measured precisely or replicated perfectly. However, quantum teleportation enables unknown quantum states to be transferred reliably from one object to another over long distances, without physical travelling of the object itself. Long-distance teleportation is a fundamental element of protocols such as large-scale quantum networks and distributed quantum computation. But the distances over which transmission was achieved in previous teleportation experiments, which used optical fibres and terrestrial free-space channels, were limited to about 100 kilometres, owing to the photon loss of these channels. To realize a global-scale 'quantum internet' the range of quantum teleportation needs to be greatly extended. A promising way of doing so involves using satellite platforms and space-based links, which can connect two remote points on Earth with greatly reduced channel loss because most of the propagation path of the photons is in empty space. Here we report quantum teleportation of independent single-photon qubits from a ground observatory to a low-Earth-orbit satellite, through an uplink channel, over distances of up to 1,400 kilometres. To optimize the efficiency of the link and to counter the atmospheric turbulence in the uplink, we use a compact ultra-bright source of entangled photons, a narrow beam divergence and high-bandwidth and high-accuracy acquiring, pointing and tracking. We demonstrate successful quantum teleportation of six input states in mutually unbiased bases with an average fidelity of 0.80 ± 0.01, well above the optimal state-estimation fidelity on a single copy of a qubit (the classical limit). Our demonstration of a ground-to-satellite uplink for reliable and ultra-long-distance quantum teleportation is an essential step towards a global-scale quantum internet.

  17. Ground-to-satellite quantum teleportation

    NASA Astrophysics Data System (ADS)

    Ren, Ji-Gang; Xu, Ping; Yong, Hai-Lin; Zhang, Liang; Liao, Sheng-Kai; Yin, Juan; Liu, Wei-Yue; Cai, Wen-Qi; Yang, Meng; Li, Li; Yang, Kui-Xing; Han, Xuan; Yao, Yong-Qiang; Li, Ji; Wu, Hai-Yan; Wan, Song; Liu, Lei; Liu, Ding-Quan; Kuang, Yao-Wu; He, Zhi-Ping; Shang, Peng; Guo, Cheng; Zheng, Ru-Hua; Tian, Kai; Zhu, Zhen-Cai; Liu, Nai-Le; Lu, Chao-Yang; Shu, Rong; Chen, Yu-Ao; Peng, Cheng-Zhi; Wang, Jian-Yu; Pan, Jian-Wei

    2017-09-01

    An arbitrary unknown quantum state cannot be measured precisely or replicated perfectly. However, quantum teleportation enables unknown quantum states to be transferred reliably from one object to another over long distances, without physical travelling of the object itself. Long-distance teleportation is a fundamental element of protocols such as large-scale quantum networks and distributed quantum computation. But the distances over which transmission was achieved in previous teleportation experiments, which used optical fibres and terrestrial free-space channels, were limited to about 100 kilometres, owing to the photon loss of these channels. To realize a global-scale ‘quantum internet’ the range of quantum teleportation needs to be greatly extended. A promising way of doing so involves using satellite platforms and space-based links, which can connect two remote points on Earth with greatly reduced channel loss because most of the propagation path of the photons is in empty space. Here we report quantum teleportation of independent single-photon qubits from a ground observatory to a low-Earth-orbit satellite, through an uplink channel, over distances of up to 1,400 kilometres. To optimize the efficiency of the link and to counter the atmospheric turbulence in the uplink, we use a compact ultra-bright source of entangled photons, a narrow beam divergence and high-bandwidth and high-accuracy acquiring, pointing and tracking. We demonstrate successful quantum teleportation of six input states in mutually unbiased bases with an average fidelity of 0.80 ± 0.01, well above the optimal state-estimation fidelity on a single copy of a qubit (the classical limit). Our demonstration of a ground-to-satellite uplink for reliable and ultra-long-distance quantum teleportation is an essential step towards a global-scale quantum internet.

  18. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was 134% in 2-dimensional space, 143% in 3-dimensional space.

  19. Gender classification in children based on speech characteristics: using fundamental and formant frequencies of Malay vowels.

    PubMed

    Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa

    2013-03-01

    Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification. The Euclidean distance method obtained 84.17% based on the optimal classification accuracy for all age groups. The accuracy was further increased to 99.81% using multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  20. Optical characterization of nonimaging dish concentrator for the application of dense-array concentrator photovoltaic system.

    PubMed

    Tan, Ming-Hui; Chong, Kok-Keong; Wong, Chee-Woon

    2014-01-20

    Optimization of the design of a nonimaging dish concentrator (NIDC) for a dense-array concentrator photovoltaic system is presented. A new algorithm has been developed to determine configuration of facet mirrors in a NIDC. Analytical formulas were derived to analyze the optical performance of a NIDC and then compared with a simulated result obtained from a numerical method. Comprehensive analysis of optical performance via analytical method has been carried out based on facet dimension and focal distance of the concentrator with a total reflective area of 120 m2. The result shows that a facet dimension of 49.8 cm, focal distance of 8 m, and solar concentration ratio of 411.8 suns is the most optimized design for the lowest cost-per-output power, which is US$1.93 per watt.

  1. Optimal Installation Locations for Automated External Defibrillators in Taipei 7-Eleven Stores: Using GIS and a Genetic Algorithm with a New Stirring Operator

    PubMed Central

    Wen, Tzai-Hung

    2014-01-01

    Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei. PMID:25045396

  2. The use of geographic information system and 1860s cadastral data to model agricultural suitability before heavy mechanization. A case study from Malta.

    PubMed

    Alberti, Gianmarco; Grima, Reuben; Vella, Nicholas C

    2018-01-01

    The present study seeks to understand the determinants of land agricultural suitability in Malta before heavy mechanization. A GIS-based Logistic Regression model is built on the basis of the data from mid-1800s cadastral maps (cabreo). This is the first time that such data are being used for the purpose of building a predictive model. The maps record the agricultural quality of parcels (ranging from good to lowest), which is represented by different colours. The study treats the agricultural quality as a depended variable with two levels: optimal (corresponding to the good class) vs. non-optimal quality (mediocre, bad, low, and lowest classes). Seventeen predictors are isolated on the basis of literature review and data availability. Logistic Regression is used to isolate the predictors that can be considered determinants of the agricultural quality. Our model has an optimal discriminatory power (AUC: 0.92). The positive effect on land agricultural quality of the following predictors is considered and discussed: sine of the aspect (odds ratio 1.42), coast distance (2.46), Brown Rendzinas (2.31), Carbonate Raw (2.62) and Xerorendzinas (9.23) soils, distance to minor roads (4.88). Predictors resulting having a negative effect are: terrain elevation (0.96), slope (0.97), distance to the nearest geological fault lines (0.09), Terra Rossa soil (0.46), distance to secondary roads (0.19) and footpaths (0.41). The model isolates a host of topographic and cultural variables, the latter related to human mobility and landscape accessibility, which differentially contributed to the agricultural suitability, providing the bases for the creation of the fragmented and extremely variegated agricultural landscape that is the hallmark of the Maltese Islands. Our findings are also useful to suggest new questions that may be posed to the more meagre evidence from earlier periods.

  3. The use of geographic information system and 1860s cadastral data to model agricultural suitability before heavy mechanization. A case study from Malta

    PubMed Central

    Grima, Reuben; Vella, Nicholas C.

    2018-01-01

    The present study seeks to understand the determinants of land agricultural suitability in Malta before heavy mechanization. A GIS-based Logistic Regression model is built on the basis of the data from mid-1800s cadastral maps (cabreo). This is the first time that such data are being used for the purpose of building a predictive model. The maps record the agricultural quality of parcels (ranging from good to lowest), which is represented by different colours. The study treats the agricultural quality as a depended variable with two levels: optimal (corresponding to the good class) vs. non-optimal quality (mediocre, bad, low, and lowest classes). Seventeen predictors are isolated on the basis of literature review and data availability. Logistic Regression is used to isolate the predictors that can be considered determinants of the agricultural quality. Our model has an optimal discriminatory power (AUC: 0.92). The positive effect on land agricultural quality of the following predictors is considered and discussed: sine of the aspect (odds ratio 1.42), coast distance (2.46), Brown Rendzinas (2.31), Carbonate Raw (2.62) and Xerorendzinas (9.23) soils, distance to minor roads (4.88). Predictors resulting having a negative effect are: terrain elevation (0.96), slope (0.97), distance to the nearest geological fault lines (0.09), Terra Rossa soil (0.46), distance to secondary roads (0.19) and footpaths (0.41). The model isolates a host of topographic and cultural variables, the latter related to human mobility and landscape accessibility, which differentially contributed to the agricultural suitability, providing the bases for the creation of the fragmented and extremely variegated agricultural landscape that is the hallmark of the Maltese Islands. Our findings are also useful to suggest new questions that may be posed to the more meagre evidence from earlier periods. PMID:29415059

  4. Optimal energy-utilization ratio for long-distance cruising of a model fish

    NASA Astrophysics Data System (ADS)

    Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang

    2012-07-01

    The efficiency of total energy utilization and its optimization for long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research, clarifying the above well-known classic questions. Here, we specify the energy-utilization ratio (fη) as a scale of cruising efficiency, which consists of the swimming speed over the sum of the standard metabolic rate and the energy consumption rate of muscle activities per unit mass. Theoretical formulation of the function fη is made and it is shown that based on a basic dimensional analysis, the main dimensionless parameters for our simplified model are the Reynolds number (Re) and the dimensionless quantity of the standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output in various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. Again, the energy consumption rate of muscle activities can be estimated by the quotient of dividing the hydrodynamic power by the muscle efficiency studied by previous researchers. The present results show the following: (1) When the value of fη attains a maximum, the dimensionless parameter Rpm keeps almost constant for the same fish species in different sizes. (2) In the above cases, the tail beat period is an exponential function of the fish body length when cruising is optimal, e.g., the optimal tail beat period of Sockeye salmon is approximately proportional to the body length to the power of 0.78. Again, the larger fish's ability of long-distance cruising is more excellent than that of smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers’ estimations.

  5. A model of optimal voluntary muscular control.

    PubMed

    FitzHugh, R

    1977-07-19

    In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with series elastic element omitted, and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation to accelerate the mass to the optimal velocity as quickly as possible, (2) an intermediate level of stimulation to hold the velocity at its optimal value, once reached, and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal energy control whenever phase (2) is absent from the latter. Generalization of this model to include viscous loads and a series elastic element are discussed.

  6. Extended shortest path selection for package routing of complex networks

    NASA Astrophysics Data System (ADS)

    Ye, Fan; Zhang, Lei; Wang, Bing-Hong; Liu, Lu; Zhang, Xing-Yi

    The routing strategy plays a very important role in complex networks such as Internet system and Peer-to-Peer networks. However, most of the previous work concentrates only on the path selection, e.g. Flooding and Random Walk, or finding the shortest path (SP) and rarely considering the local load information such as SP and Distance Vector Routing. Flow-based Routing mainly considers load balance and still cannot achieve best optimization. Thus, in this paper, we propose a novel dynamic routing strategy on complex network by incorporating the local load information into SP algorithm to enhance the traffic flow routing optimization. It was found that the flow in a network is greatly affected by the waiting time of the network, so we should not consider only choosing optimized path for package transformation but also consider node congestion. As a result, the packages should be transmitted with a global optimized path with smaller congestion and relatively short distance. Analysis work and simulation experiments show that the proposed algorithm can largely enhance the network flow with the maximum throughput within an acceptable calculating time. The detailed analysis of the algorithm will also be provided for explaining the efficiency.

  7. Probabilistic determination of probe locations from distance data

    PubMed Central

    Xu, Xiao-Ping; Slaughter, Brian D.; Volkmann, Niels

    2013-01-01

    Distance constraints, in principle, can be employed to determine information about the location of probes within a three-dimensional volume. Traditional methods for locating probes from distance constraints involve optimization of scoring functions that measure how well the probe location fits the distance data, exploring only a small subset of the scoring function landscape in the process. These methods are not guaranteed to find the global optimum and provide no means to relate the identified optimum to all other optima in scoring space. Here, we introduce a method for the location of probes from distance information that is based on probability calculus. This method allows exploration of the entire scoring space by directly combining probability functions representing the distance data and information about attachment sites. The approach is guaranteed to identify the global optimum and enables the derivation of confidence intervals for the probe location as well as statistical quantification of ambiguities. We apply the method to determine the location of a fluorescence probe using distances derived by FRET and show that the resulting location matches that independently derived by electron microscopy. PMID:23770585

  8. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania’s South Esk Hydrology model developed by CSIRO. Root mean squared error statistical methods were performed for performance evaluations. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497

  9. A noncontact force sensor based on a fiber Bragg grating and its application for corrosion measurement.

    PubMed

    Pacheco, Clara J; Bruno, Antonio C

    2013-08-29

    A simple noncontact force sensor based on an optical fiber Bragg grating attached to a small magnet has been proposed and built. The sensor measures the force between the magnet and any ferromagnetic material placed within a few millimeters of the sensor. Maintaining the sensor at a constant standoff distance, material loss due to corrosion increases the distance between the magnet and the corroded surface, which decreases the magnetic force. This will decrease the strain in the optical fiber shifting the reflected Bragg wavelength. The measured shift for the optical fiber used was 1.36 nm per Newton. Models were developed to optimize the magnet geometry for a specific sensor standoff distance and for particular corrosion pit depths. The sensor was able to detect corrosion pits on a fuel storage tank bottom with depths in the sub-millimeter range.

  10. A Noncontact Force Sensor Based on a Fiber Bragg Grating and Its Application for Corrosion Measurement

    PubMed Central

    Pacheco, Clara J.; Bruno, Antonio C.

    2013-01-01

    A simple noncontact force sensor based on an optical fiber Bragg grating attached to a small magnet has been proposed and built. The sensor measures the force between the magnet and any ferromagnetic material placed within a few millimeters of the sensor. Maintaining the sensor at a constant standoff distance, material loss due to corrosion increases the distance between the magnet and the corroded surface, which decreases the magnetic force. This will decrease the strain in the optical fiber shifting the reflected Bragg wavelength. The measured shift for the optical fiber used was 1.36 nm per Newton. Models were developed to optimize the magnet geometry for a specific sensor standoff distance and for particular corrosion pit depths. The sensor was able to detect corrosion pits on a fuel storage tank bottom with depths in the sub-millimeter range. PMID:23995095

  11. Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose

    NASA Astrophysics Data System (ADS)

    Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Classifying human face based on race and gender is a vital process in face recognition. It contributes to an index database and eases 3D synthesis of the human face. Identifying race and gender based on intrinsic factor is problematic, which is more fitting to utilizing nonlinear model for estimating process. In this paper, we aim to estimate race and gender in varied head pose. For this purpose, we collect dataset from PICS and CAS-PEAL databases, detect the landmarks and rotate them to the frontal pose. After geometric distances are calculated, all of distance values will be normalized. Implementation is carried out by using Neural Network Model and Fuzzy Logic Model. These models are combined by using Adaptive Neuro-Fuzzy Model. The experimental results showed that the optimization of address fuzzy membership. Model gives a better assessment rate and found that estimating race contributing to a more accurate gender assessment.

  12. Survey results of Internet and computer usage in veterans with epilepsy.

    PubMed

    Pramuka, Michael; Hendrickson, Rick; Van Cott, Anne C

    2010-03-01

    After our study of a self-management intervention for epilepsy, we gathered data on Internet use and computer availability to assess the feasibility of computer-based interventions in a veteran population. Veterans were asked to complete an anonymous questionnaire that gathered information regarding seizures/epilepsy in addition to demographic data, Internet use, computer availability, and interest in distance education regarding epilepsy. Three hundred twenty-four VA neurology clinic patients completed the survey. One hundred twenty-six self-reported a medical diagnosis of epilepsy and constituted the epilepsy/seizure group. For this group of veterans, the need for remote/distance-based interventions was validated given the majority of veterans traveled long distances (>2 hours). Only 51% of the epilepsy/seizure group had access to the Internet, and less than half (42%) expressed an interest in getting information on epilepsy self-management on their computer, suggesting that Web-based interventions may not be an optimal method for a self-management intervention in this population. Published by Elsevier Inc.

  13. Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.

    ERIC Educational Resources Information Center

    van Buuren, Stef; Heiser, Willem J.

    1989-01-01

    A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)

  14. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    PubMed

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced due to abnormal excitability of neurons in the brain. The research reveals that brain activity is monitored through electroencephalogram (EEG) of patients suffered from seizure to detect the epileptic seizure. The performance of EEG detection based epilepsy require feature extracting strategies. In this research, we have extracted varying features extracting strategies based on time and frequency domain characteristics, nonlinear, wavelet based entropy and few statistical features. A deeper study was undertaken using novel machine learning classifiers by considering multiple factors. The support vector machine kernels are evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we computed the different distance metrics, Neighbor weights and Neighbors. Similarly, the decision trees we tuned the paramours based on maximum splits and split criteria and ensemble classifiers are evaluated based on different ensemble methods and learning rate. For training/testing tenfold Cross validation was employed and performance was evaluated in form of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis approach was performed using diverse features extracting strategies using robust machine learning classifiers with more advanced optimal options. Support Vector Machine linear kernel and KNN with City block distance metric give the overall highest accuracy of 99.5% which was higher than using the default parameters for these classifiers. Moreover, highest separation (AUC = 0.9991, 0.9990) were obtained at different kernel scales using SVM. Additionally, the K-nearest neighbors with inverse squared distance weight give higher performance at different Neighbors. Moreover, to distinguish the postictal heart rate oscillations from epileptic ictal subjects, and highest performance of 100% was obtained using different machine learning classifiers.

  15. Multi-objective decoupling algorithm for active distance control of intelligent hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Luo, Yugong; Chen, Tao; Li, Keqiang

    2015-12-01

    The paper presents a novel active distance control strategy for intelligent hybrid electric vehicles (IHEV) with the purpose of guaranteeing an optimal performance in view of the driving functions, optimum safety, fuel economy and ride comfort. Considering the complexity of driving situations, the objects of safety and ride comfort are decoupled from that of fuel economy, and a hierarchical control architecture is adopted to improve the real-time performance and the adaptability. The hierarchical control structure consists of four layers: active distance control object determination, comprehensive driving and braking torque calculation, comprehensive torque distribution and torque coordination. The safety distance control and the emergency stop algorithms are designed to achieve the safety and ride comfort goals. The optimal rule-based energy management algorithm of the hybrid electric system is developed to improve the fuel economy. The torque coordination control strategy is proposed to regulate engine torque, motor torque and hydraulic braking torque to improve the ride comfort. This strategy is verified by simulation and experiment using a forward simulation platform and a prototype vehicle. The results show that the novel control strategy can achieve the integrated and coordinated control of its multiple subsystems, which guarantees top performance of the driving functions and optimum safety, fuel economy and ride comfort.

  16. Optimal flight initiation distance.

    PubMed

    Cooper, William E; Frederick, William G

    2007-01-07

    Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.

  17. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithms, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve the problem, this paper designs a novel fuzzy C-mean clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization. The parameter λ can adjust the weights of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering cent. Two different experimental results show that the novel fuzzy C-means approach has an efficient performance and computational time while segmenting images by different type of noises.

  18. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.

  19. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  20. An extended car-following model considering random safety distance with different probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi

    2018-02-01

    Because of the difference in vehicle type or driving skill, the driving strategy is not exactly the same. The driving speeds of the different vehicles may be different for the same headway. Since the optimal velocity function is just determined by the safety distance besides the maximum velocity and headway, an extended car-following model accounting for random safety distance with different probabilities is proposed in this paper. The linear stable condition for this extended traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomenon resulting from multiple safety distance in the optimal velocity function. The cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that the traffic flow with multiple safety distances with different probabilities will be more unstable than that with single type of safety distance, and will result in more stop-and-go phenomena.

  1. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    Abstract In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  2. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  3. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks.

    PubMed

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-10-09

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms.

  4. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks

    PubMed Central

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-01-01

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms. PMID:28991200

  5. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. In the direction of dealing with this problem in real time, the ALOPEX stochastic optimization method is used, to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results, that simulate a known real aquifer case, are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its abilities to manage freshwater pumping in real aquifer environments. PMID:27689362

  6. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    The research study proposes to examine a three-dimensional visualization program, emphasizing on improving genetic algorithms through the optimization of a layout design-based standard and discrete shipbuilding workshop. By utilizing a steel processing workshop as an example, the principle of minimum logistic costs will be implemented to obtain an ideological equipment layout, and a mathematical model. The objectiveness is to minimize the total necessary distance traveled between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm, and yield relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and parametric model of the steel processing workshop. Based on the results of optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments then are applied to the model. The method for evaluating the results of the layout is subsequently established through the utilization of AHP. In order to provide a mode of reference to the optimization and layout of the digitalized production workshop, the optimized discrete production workshop will possess a certain level of practical significance.

  7. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons

    PubMed Central

    Cemgil, Ali Taylan

    2017-01-01

    We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking. PMID:29109375

  8. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.

    PubMed

    Daniş, F Serhan; Cemgil, Ali Taylan

    2017-10-29

    We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.

  9. Considerations for the Optimal Design of a Two-Way Interactive Distance Education Classroom.

    ERIC Educational Resources Information Center

    Gregg, Joe; Persichitte, Kay

    To make effective use of a two-way interactive distance education system, classroom design should be a primary consideration. A properly designed classroom will enhance content objectives and increase acceptance of this type of instructional delivery. This paper describes key considerations for optimal design. Construction considerations include…

  10. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along withmore » Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. Effect of hologram distance on the performance of various algorithms is discussed in detail. The study also compares the computational time required by each algorithm to complete the comparison. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.« less

  11. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along withmore » Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. Effect of hologram distance on the performance of various algorithms is discussed in detail. The study also compares the computational time required by each algorithm to complete the comparison. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.« less

  12. Reliable femoral frame construction based on MRI dedicated to muscles position follow-up.

    PubMed

    Dubois, G; Bonneau, D; Lafage, V; Rouch, P; Skalli, W

    2015-10-01

    In vivo follow-up of muscle shape variation represents a challenge when evaluating muscle development due to disease or treatment. Recent developments in muscles reconstruction techniques indicate MRI as a clinical tool for the follow-up of the thigh muscles. The comparison of 3D muscles shape from two different sequences is not easy because there is no common frame. This study proposes an innovative method for the reconstruction of a reliable femoral frame based on the femoral head and both condyles centers. In order to robustify the definition of condylar spheres, an original method was developed to combine the estimation of diameters of both condyles from the lateral antero-posterior distance and the estimation of the spheres center from an optimization process. The influence of spacing between MR slices and of origin positions was studied. For all axes, the proposed method presented an angular error lower than 1° with spacing between slice of 10 mm and the optimal position of the origin was identified at 56 % of the distance between the femoral head center and the barycenter of both condyles. The high reliability of this method provides a robust frame for clinical follow-up based on MRI .

  13. Einstein-Podolsky-Rosen steering: Its geometric quantification and witness

    NASA Astrophysics Data System (ADS)

    Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco

    2018-02-01

    We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.

  14. One-year eye-to-eye comparison of wavefront-guided versus wavefront-optimized laser in situ keratomileusis in hyperopes

    PubMed Central

    Sáles, Christopher S; Manche, Edward E

    2014-01-01

    Background To compare wavefront (WF)-guided and WF-optimized laser in situ keratomileusis (LASIK) in hyperopes with respect to the parameters of safety, efficacy, predictability, refractive error, uncorrected distance visual acuity, corrected distance visual acuity, contrast sensitivity, and higher order aberrations. Methods Twenty-two eyes of eleven participants with hyperopia with or without astigmatism were prospectively randomized to receive WF-guided LASIK with the VISX CustomVue S4 IR or WF-optimized LASIK with the WaveLight Allegretto Eye-Q 400 Hz. LASIK flaps were created using the 150-kHz IntraLase iFS. Evaluations included measurement of uncorrected distance visual acuity, corrected distance visual acuity, <5% and <25% contrast sensitivity, and WF aberrometry. Patients also completed a questionnaire detailing symptoms on a quantitative grading scale. Results There were no statistically significant differences between the groups for any of the variables studied after 12 months of follow-up (all P>0.05). Conclusion This comparative case series of 11 subjects with hyperopia showed that WF-guided and WF-optimized LASIK had similar clinical outcomes at 12 months. PMID:25419115

  15. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.

  16. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106

  17. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation plan. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) and the background is estimated by the k-nearest neighborhood classifier under the learned optimal distance metrics. A cost function for segmentation is constructed through these probabilities and is optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray level distribution of the tumors and even can get satisfactory results if the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust to brain tumor segmentation.

  18. Rigorous force field optimization principles based on statistical distance minimization

    DOE PAGES

    Vlcek, Lukas; Chialvo, Ariel A.

    2015-10-12

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. Here we exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of themore » approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.« less

  19. Combining geostatistics with Moran's I analysis for mapping soil heavy metals in Beijing, China.

    PubMed

    Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo

    2012-03-01

    Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation characteristics information obtained from Moran's I analysis was used to supplement the traditional geostatistics. According to Moran's I analysis, four characteristics distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of semivariance demonstrated that using the two distances where the Moran's I and the standardized Moran's I, Z(I) reached a maximum as the active lag distance can improve the fitting accuracy of semivariance. Then, spatial interpolation was produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and the measured and predicted pollution status showed that the method combining geostatistics with Moran's I analysis was better than traditional geostatistics. Thus, Moran's I analysis is a useful complement for geostatistics to improve the spatial interpolation accuracy of heavy metals.

  20. Combining Geostatistics with Moran’s I Analysis for Mapping Soil Heavy Metals in Beijing, China

    PubMed Central

    Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo

    2012-01-01

    Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation characteristics information obtained from Moran’s I analysis was used to supplement the traditional geostatistics. According to Moran’s I analysis, four characteristics distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of semivariance demonstrated that using the two distances where the Moran’s I and the standardized Moran’s I, Z(I) reached a maximum as the active lag distance can improve the fitting accuracy of semivariance. Then, spatial interpolation was produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and the measured and predicted pollution status showed that the method combining geostatistics with Moran’s I analysis was better than traditional geostatistics. Thus, Moran’s I analysis is a useful complement for geostatistics to improve the spatial interpolation accuracy of heavy metals. PMID:22690179

  1. Estimation of optimal nasotracheal tube depth in adult patients.

    PubMed

    Ji, Sung-Mi

    2017-12-01

    The aim of this study was to estimate the optimal depth of nasotracheal tube placement. We enrolled 110 patients scheduled to undergo oral and maxillofacial surgery, requiring nasotracheal intubation. After intubation, the depth of tube insertion was measured. The neck circumference and distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch were measured. To estimate optimal tube depth, correlation and regression analyses were performed using clinical and anthropometric parameters. The mean tube depth was 28.9 ± 1.3 cm in men (n = 62), and 26.6 ± 1.5 cm in women (n = 48). Tube depth significantly correlated with height (r = 0.735, P < 0.001). Distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch correlated with depth of the endotracheal tube (r = 0.363, r = 0.362, and r = 0.546, P < 0.05). The tube depth also correlated with the sum of these distances (r = 0.646, P < 0.001). We devised the following formula for estimating tube depth: 19.856 + 0.267 × sum of the three distances (R 2 = 0.432, P < 0.001). The optimal tube depth for nasotracheally intubated adult patients correlated with height and sum of the distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch. The proposed equation would be a useful guide to determine optimal nasotracheal tube placement.

  2. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep in pace with 10 nm technology node and beyond. With feature size keeps shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: Layout features (polygons) are represented by vertices in a graph G and there is an edge between two vertices if and only if the distance between the two corresponding features are less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which impose a geometric distance as a minimum constraint to simply decompose polygons within the distance into different masks. It is not desired in practice because this criteria cannot completely capture the behavior of the optics. For example, it lacks of sufficient information such as the optical source characteristics and the effects between the polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach to make the decomposition criteria base on simulation results was first introduced at SPIE 2013.1 However, the algorithm1 is based on simplified assumption on the optical simulation model and therefore its usage on real layouts is limited. Recently AMSL2 also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resource and may lead to sub-optimal solutions. The approach2 also potentially generates too many stiches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to the rule based decomposition.

  3. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and allocating future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA highly relies on the sample mean and pooled sample covariance matrix which are known to be sensitive to outliers. To alleviate these conflicts, a new robust LDA using distance based estimators known as minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data study were conducted to examine on the performance of the proposed RLDR measured in terms of misclassification error rates. The computational result showed that the proposed RLDR is better than the classical LDR and was comparable with the existing robust LDR.

  4. Multi-Patches IRIS Based Person Authentication System Using Particle Swarm Optimization and Fuzzy C-Means Clustering

    NASA Astrophysics Data System (ADS)

    Shekar, B. H.; Bhat, S. S.

    2017-05-01

    Locating the boundary parameters of pupil and iris and segmenting the noise free iris portion are the most challenging phases of an automated iris recognition system. In this paper, we have presented person authentication frame work which uses particle swarm optimization (PSO) to locate iris region and circular hough transform (CHT) to device the boundary parameters. To undermine the effect of the noise presented in the segmented iris region we have divided the candidate region into N patches and used Fuzzy c-means clustering (FCM) to classify the patches into best iris region and not so best iris region (noisy region) based on the probability density function of each patch. Weighted mean Hammimng distance is adopted to find the dissimilarity score between the two candidate irises. We have used Log-Gabor, Riesz and Taylor's series expansion (TSE) filters and combinations of these three for iris feature extraction. To justify the feasibility of the proposed method, we experimented on the three publicly available data sets IITD, MMU v-2 and CASIA v-4 distance.

  5. Accelerating atomic structure search with cluster regularization

    NASA Astrophysics Data System (ADS)

    Sørensen, K. H.; Jørgensen, M. S.; Bruix, A.; Hammer, B.

    2018-06-01

    We present a method for accelerating the global structure optimization of atomic compounds. The method is demonstrated to speed up the finding of the anatase TiO2(001)-(1 × 4) surface reconstruction within a density functional tight-binding theory framework using an evolutionary algorithm. As a key element of the method, we use unsupervised machine learning techniques to categorize atoms present in a diverse set of partially disordered surface structures into clusters of atoms having similar local atomic environments. Analysis of more than 1000 different structures shows that the total energy of the structures correlates with the summed distances of the atomic environments to their respective cluster centers in feature space, where the sum runs over all atoms in each structure. Our method is formulated as a gradient based minimization of this summed cluster distance for a given structure and alternates with a standard gradient based energy minimization. While the latter minimization ensures local relaxation within a given energy basin, the former enables escapes from meta-stable basins and hence increases the overall performance of the global optimization.

  6. Cross-layer cluster-based energy-efficient protocol for wireless sensor networks.

    PubMed

    Mammu, Aboobeker Sidhik Koyamparambil; Hernandez-Jayo, Unai; Sainz, Nekane; de la Iglesia, Idoia

    2015-04-09

    Recent developments in electronics and wireless communications have enabled the improvement of low-power and low-cost wireless sensors networks (WSNs). One of the most important challenges in WSNs is to increase the network lifetime due to the limited energy capacity of the network nodes. Another major challenge in WSNs is the hot spots that emerge as locations under heavy traffic load. Nodes in such areas quickly drain energy resources, leading to disconnection in network services. In such an environment, cross-layer cluster-based energy-efficient algorithms (CCBE) can prolong the network lifetime and energy efficiency. CCBE is based on clustering the nodes to different hexagonal structures. A hexagonal cluster consists of cluster members (CMs) and a cluster head (CH). The CHs are selected from the CMs based on nodes near the optimal CH distance and the residual energy of the nodes. Additionally, the optimal CH distance that links to optimal energy consumption is derived. To balance the energy consumption and the traffic load in the network, the CHs are rotated among all CMs. In WSNs, energy is mostly consumed during transmission and reception. Transmission collisions can further decrease the energy efficiency. These collisions can be avoided by using a contention-free protocol during the transmission period. Additionally, the CH allocates slots to the CMs based on their residual energy to increase sleep time. Furthermore, the energy consumption of CH can be further reduced by data aggregation. In this paper, we propose a data aggregation level based on the residual energy of CH and a cost-aware decision scheme for the fusion of data. Performance results show that the CCBE scheme performs better in terms of network lifetime, energy consumption and throughput compared to low-energy adaptive clustering hierarchy (LEACH) and hybrid energy-efficient distributed clustering (HEED).

  7. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators are solved by a method of sequential gradient restoration algorithm. Numerical examples of a two degree-of-freedom (DOF) robotic manipulator are demonstrated to show the excellence of the optimization technique and obstacle avoidance scheme. The obstacle is put on the midway, or even further inward on purpose, of the previous no-obstacle optimal trajectory. For the minimum-time purpose, the trajectory grazes by the obstacle and the minimum-time motion successfully avoids the obstacle. The minimum-time is longer for the obstacle avoidance cases than the one without obstacle. The obstacle avoidance scheme can deal with multiple obstacles in any ellipsoid forms by using artificial potential fields as penalty functions via distance functions. The method is promising in solving collision-free optimal control problems for robotics and can be applied to any DOF robotic manipulators with any performance indices and mobile robots as well. Since this method generates optimum solution based on Pontryagin Extremum Principle, rather than based on assumptions, the results provide a benchmark against which any optimization techniques can be measured.

  8. The CAFADIS camera: a new tomographic wavefront sensor for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. M.; Femenía, B.; Montilla, I.; Rodríguez-Ramos, L. F.; Marichal-Hernández, J. G.; Lüke, J. P.; López, R.; Díaz, J. J.; Martín, Y.

    The CAFADIS camera is a new wavefront sensor (WFS) patented by the Universidad de La Laguna. CAFADIS is a system based on the concept of plenoptic camera originally proposed by Adelson and Wang [Single lens stereo with a plenoptic camera, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992)] and its most salient feature is its ability to simultaneously measuring wavefront maps and distances to objects [Wavefront and distance measurements using the CAFADIS camera, in Astronomical telescopes, Marseille (2008)]. This makes of CAFADIS an interesting alternative for LGS-based AO systems as it is capable of measuring from an LGS-beacon the atmospheric turbulence wavefront and simultaneously the distance to the LGS beacon thus removing the need of a NGS defocus sensor to probe changes in distance to the LGS beacon due to drifts of the mesospheric Na layer. In principle, the concept can also be employed to recover 3D profiles of the Na Layer allowing for optimizations of the measurement of the distance to the LGS-beacon. Currently we are investigating the possibility of extending the plenoptic WFS into a tomographic wavefront sensor. Simulations will be shown of a plenoptic WFS when operated within an LGS-based AO system for the recovery of wavefront maps at different heights. The preliminary results presented here show the tomographic ability of CAFADIS.

  9. Pixel-based OPC optimization based on conjugate gradients.

    PubMed

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.

  10. Squeezed-state quantum key distribution with a Rindler observer

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Shi, Ronghua; Guo, Ying

    2018-03-01

    Lengthening the maximum transmission distance of quantum key distribution plays a vital role in quantum information processing. In this paper, we propose a directional squeezed-state protocol with signals detected by a Rindler observer in the relativistic quantum field framework. We derive an analytical solution to the transmission problem of squeezed states from the inertial sender to the accelerated receiver. The variance of the involved signal mode is closer to optimality than that of the coherent-state-based protocol. Simulation results show that the proposed protocol has better performance than the coherent-state counterpart especially in terms of the maximal transmission distance.

  11. An enhanced performance through agent-based secure approach for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Bisen, Dhananjay; Sharma, Sanjeev

    2018-01-01

    This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc network. In this approach, agent nodes are selected through optimal node reliability as a factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval of node. After selection of agent nodes, a procedure of malicious behaviour detection is performed using fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, comparative analysis is done with conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.

  12. Comparison of optimized algorithms in facility location allocation problems with different distance measures

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Chandrawat, Rajesh Kumar; Garg, B. P.; Joshi, Varun

    2017-07-01

    Opening the new firm or branch with desired execution is very relevant to facility location problem. Along the lines to locate the new ambulances and firehouses, the government desires to minimize average response time for emergencies from all residents of cities. So finding the best location is biggest challenge in day to day life. These type of problems were named as facility location problems. A lot of algorithms have been developed to handle these problems. In this paper, we review five algorithms that were applied to facility location problems. The significance of clustering in facility location problems is also presented. First we compare Fuzzy c-means clustering (FCM) algorithm with alternating heuristic (AH) algorithm, then with Particle Swarm Optimization (PSO) algorithms using different type of distance function. The data was clustered with the help of FCM and then we apply median model and min-max problem model on that data. After finding optimized locations using these algorithms we find the distance from optimized location point to the demanded point with different distance techniques and compare the results. At last, we design a general example to validate the feasibility of the five algorithms for facilities location optimization, and authenticate the advantages and drawbacks of them.

  13. Large Eddy Simulation of the fuel transport and mixing process in a scramjet combustor with rearwall-expansion cavity

    NASA Astrophysics Data System (ADS)

    Cai, Zun; Liu, Xiao; Gong, Cheng; Sun, Mingbo; Wang, Zhenguo; Bai, Xue-Song

    2016-09-01

    Large Eddy Simulation (LES) was employed to investigate the fuel/oxidizer mixing process in an ethylene fueled scramjet combustor with a rearwall-expansion cavity. The numerical solver was first validated for an experimental flow, the DLR strut-based scramjet combustor case. Shock wave structures and wall-pressure distribution from the numerical simulations were compared with experimental data and the numerical results were shown in good agreement with the available experimental data. Effects of the injection location on the flow and mixing process were then studied. It was found that with a long injection distance upstream the cavity, the fuel is transported much further into the main flow and a smaller subsonic zone is formed inside the cavity. Conversely, with a short injection distance, the fuel is entrained more into the cavity and a larger subsonic zone is formed inside the cavity, which is favorable for ignition in the cavity. For the rearwall-expansion cavity, it is suggested that the optimized ignition location with a long upstream injection distance should be in the bottom wall in the middle part of the cavity, while the optimized ignition location with a short upstream injection distance should be in the bottom wall in the front side of the cavity. By employing a cavity direct injection on the rear wall, the fuel mass fraction inside the cavity and the local turbulent intensity will both be increased due to this fueling, and it will also enhance the mixing process which will also lead to increased mixing efficiency. For the rearwall-expansion cavity, the combined injection scheme is expected to be an optimized injection scheme.

  14. Optimizing light delivery for a photoacoustic surgical system

    NASA Astrophysics Data System (ADS)

    Eddins, Blackberrie; Lediju Bell, Muyinatu A.

    2017-03-01

    This work explores light delivery optimization for a photoacoustic surgical system previously proposed to provide real-time, intraoperative visualization of the internal carotid arteries hidden by bone during minimally invasive neurosurgeries. Monte Carlo simulations were employed to study 3D light propagation in tissue. For a 2.4 mm diameter drill shaft and 2.9 mm spherical drill tip, the optimal fiber distance from the drill shaft was 2 mm, determined from the maximum normalized fluence seen by the artery. A single fiber was insufficient to deliver light to arteries separated by a minimum of 8 mm. Using similar drill geometry and the optimal 2 mm fiber-to-drill shaft distance, Zemax ray tracing simulations were employed to propagate a 950 nm wavelength Gaussian beam through one or more 600 μm core diameter optical fibers, and the resulting optical beam profile was detected on the representative bone surface. For equally spaced fibers, a single merged optical profile formed with 7 or more fibers, determined by thresholding the resulting light profile images at 1/e times the maximum intensity. The corresponding spot size was larger than that of a single fiber transmitting the same input energy, thus reducing the fluence delivered to the sphenoid bone and enabling higher energies within safety limits. A prototype was designed and built based on these optimization parameters. The methodology we used to optimize our light delivery system to surround surgical tools is generalizable to multiple interventional photoacoustic applications.

  15. Particle physics and polyedra proximity calculation for hazard simulations in large-scale industrial plants

    NASA Astrophysics Data System (ADS)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphic software, Blender, with the aim of analyzing in virtual reality scenarios of hazards in large-scale industrial plants. The advantages of Blender are of rendering at high resolution the very complex structure of large industrial plants, and of embedding a physical engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distance, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refining industry are presented.

  16. Electronic neural network for solving traveling salesman and similar global optimization problems

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Moopenn, Alexander W. (Inventor); Duong, Tuan A. (Inventor); Eberhardt, Silvio P. (Inventor)

    1993-01-01

    This invention is a novel high-speed neural network based processor for solving the 'traveling salesman' and other global optimization problems. It comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. The array is prompted by analog voltages representing variables such as distances. The processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically.

  17. Chemical graphs, molecular matrices and topological indices in chemoinformatics and quantitative structure-activity relationships.

    PubMed

    Ivanciuc, Ovidiu

    2013-06-01

    Chemical and molecular graphs have fundamental applications in chemoinformatics, quantitative structureproperty relationships (QSPR), quantitative structure-activity relationships (QSAR), virtual screening of chemical libraries, and computational drug design. Chemoinformatics applications of graphs include chemical structure representation and coding, database search and retrieval, and physicochemical property prediction. QSPR, QSAR and virtual screening are based on the structure-property principle, which states that the physicochemical and biological properties of chemical compounds can be predicted from their chemical structure. Such structure-property correlations are usually developed from topological indices and fingerprints computed from the molecular graph and from molecular descriptors computed from the three-dimensional chemical structure. We present here a selection of the most important graph descriptors and topological indices, including molecular matrices, graph spectra, spectral moments, graph polynomials, and vertex topological indices. These graph descriptors are used to define several topological indices based on molecular connectivity, graph distance, reciprocal distance, distance-degree, distance-valency, spectra, polynomials, and information theory concepts. The molecular descriptors and topological indices can be developed with a more general approach, based on molecular graph operators, which define a family of graph indices related by a common formula. Graph descriptors and topological indices for molecules containing heteroatoms and multiple bonds are computed with weighting schemes based on atomic properties, such as the atomic number, covalent radius, or electronegativity. The correlation in QSPR and QSAR models can be improved by optimizing some parameters in the formula of topological indices, as demonstrated for structural descriptors based on atomic connectivity and graph distance.

  18. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    PubMed

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in the previous work, is considered. For a homogeneous WSN, monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed by constraining on coverage, connectivity and success transmission rate. Based on the data transmission analysis in a data gathering cycle, the WSN lifetime in the model can be obtained through quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in the lifetime optimization. In particular, our investigations indicate that, with the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.

  19. Process Mining-Based Method of Designing and Optimizing the Layouts of Emergency Departments in Hospitals.

    PubMed

    Rismanchian, Farhood; Lee, Young Hoon

    2017-07-01

    This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously. These objectives include minimizing the distances traveled by patients, maximizing design preferences, and minimizing the relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts. However, high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address the problem of complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that would simultaneously satisfy several objectives. The layout identified using the proposed method improved the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, and minimized the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.

  20. DD-HDS: A method for visualization and exploration of high-dimensional data.

    PubMed

    Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard

    2007-09-01

    Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.

  1. The optimization of needle electrode number and placement for irreversible electroporation of hepatocellular carcinoma

    PubMed Central

    Adeyanju, Oyinlolu O.; Al-Angari, Haitham M.; Sahakian, Alan V.

    2012-01-01

    Background Irreversible electroporation (IRE) is a novel ablation tool that uses brief high-voltage pulses to treat cancer. The efficacy of the therapy depends upon the distribution of the electric field, which in turn depends upon the configuration of electrodes used. Methods We sought to optimize the electrode configuration in terms of the distance between electrodes, the depth of electrode insertion, and the number of electrodes. We employed a 3D Finite Element Model and systematically varied the distance between the electrodes and the depth of electrode insertion, monitoring the lowest voltage sufficient to ablate the tumor, VIRE. We also measured the amount of normal (non-cancerous) tissue ablated. Measurements were performed for two electrodes, three electrodes, and four electrodes. The optimal electrode configuration was determined to be the one with the lowest VIRE, as that minimized damage to normal tissue. Results The optimal electrode configuration to ablate a 2.5 cm spheroidal tumor used two electrodes with a distance of 2 cm between the electrodes and a depth of insertion of 1 cm below the halfway point in the spherical tumor, as measured from the bottom of the electrode. This produced a VIRE of 3700 V. We found that it was generally best to have a small distance between the electrodes and for the center of the electrodes to be inserted at a depth equal to or deeper than the center of the tumor. We also found the distance between electrodes was far more important in influencing the outcome measures when compared with the depth of electrode insertion. Conclusions Overall, the distribution of electric field is highly dependent upon the electrode configuration, but the optimal configuration can be determined using numerical modeling. Our findings can help guide the clinical application of IRE as well as the selection of the best optimization algorithm to use in finding the optimal electrode configuration. PMID:23077449

  2. A Segment-Based Trajectory Similarity Measure in the Urban Transportation Systems.

    PubMed

    Mao, Yingchi; Zhong, Haishi; Xiao, Xianjian; Li, Xiaofang

    2017-03-06

    With the rapid spread of built-in GPS handheld smart devices, the trajectory data from GPS sensors has grown explosively. Trajectory data has spatio-temporal characteristics and rich information. Using trajectory data processing techniques can mine the patterns of human activities and the moving patterns of vehicles in the intelligent transportation systems. A trajectory similarity measure is one of the most important issues in trajectory data mining (clustering, classification, frequent pattern mining, etc.). Unfortunately, the main similarity measure algorithms with the trajectory data have been found to be inaccurate, highly sensitive of sampling methods, and have low robustness for the noise data. To solve the above problems, three distances and their corresponding computation methods are proposed in this paper. The point-segment distance can decrease the sensitivity of the point sampling methods. The prediction distance optimizes the temporal distance with the features of trajectory data. The segment-segment distance introduces the trajectory shape factor into the similarity measurement to improve the accuracy. The three kinds of distance are integrated with the traditional dynamic time warping algorithm (DTW) algorithm to propose a new segment-based dynamic time warping algorithm (SDTW). The experimental results show that the SDTW algorithm can exhibit about 57%, 86%, and 31% better accuracy than the longest common subsequence algorithm (LCSS), and edit distance on real sequence algorithm (EDR) , and DTW, respectively, and that the sensitivity to the noise data is lower than that those algorithms.

  3. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Worked performed during the reporting period is summarized. Construction of robustly good trellis codes for use with sequential decoding was developed. The robustly good trellis codes provide a much better trade off between free distance and distance profile. The unequal error protection capabilities of convolutional codes was studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications is investigated. A formula for computing the free distance of 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  4. [Problem based learning by distance education and analysis of a training system].

    PubMed

    Dury, Cécile

    2004-12-01

    This article presents and analyses a training system aiming at acquiring skills in nursing cares. The aims followed are the development: --of an active pedagogic method: learning through problems (LTP); --of the interdisciplinary and intercultural approach, the same problems being solves by students from different disciplines and cultures; --of the use of the new technologies of information and communication (NTIC) so as to enable a maximal "distance" cooperation between the various partners of the project. The analysis of the system shows that the pedagogic aims followed by LTP are reached. The pluridisciplinary and pluricultural approach, to be optimal, requires great coordination between the partners, balance between the groups of students from different countries and disciplines, training and support from the tutors in the use of the distance teaching platform.

  5. Near-Optimal Re-Entry Trajectories for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Chou, H.-C.; Ardema, M. D.; Bowles, J. V.

    1997-01-01

    A near-optimal guidance law for the descent trajectory for earth orbit re-entry of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. A methodology is developed to investigate using both bank angle and altitude as control variables and selecting parameters that maximize various performance functions. The method is based on the energy-state model of the aircraft equations of motion. The major task of this paper is to obtain optimal re-entry trajectories under a variety of performance goals: minimum time, minimum surface temperature, minimum heating, and maximum heading change; four classes of trajectories were investigated: no banking, optimal left turn banking, optimal right turn banking, and optimal bank chattering. The cost function is in general a weighted sum of all performance goals. In particular, the trade-off between minimizing heat load into the vehicle and maximizing cross range distance is investigated. The results show that the optimization methodology can be used to derive a wide variety of near-optimal trajectories.

  6. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information),more » four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the range of 1.985–2.156 mm and 1.966–2.234 mm, for NLP and ILD affected regions, respectively, excluding schemes with statistically significant lower performance (Wilcoxon signed-ranks test, p < 0.05), resulting in 13 finally selected registration schemes. Conclusions: Selected registration schemes in case of ILD CT follow-up analysis indicate the significance of adaptive stochastic gradient descent optimizer, as well as the importance of combined rigid and nonrigid schemes providing high accuracy and time efficiency. The selected optimal deformable registration schemes are equivalent in terms of their accuracy and thus compatible in terms of their clinical outcome.« less

  7. The performance of approximations of farm contiguity compared to contiguity defined using detailed geographical information in two sample areas in Scotland: implications for foot-and-mouth disease modelling.

    PubMed

    Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J

    2013-10-08

    When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogenous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous. Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission into FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events are by contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered since ability to discriminate between map-based CPs and non-CPs is different over different Euclidean distances.

  8. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.

  9. Limitations to mapping habitat-use areas in changing landscapes using the Mahalanobis distance statistic

    USGS Publications Warehouse

    Knick, Steven T.; Rotenberry, J.T.

    1998-01-01

    We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or those undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits. We then ranked the similarity of all cells in the GIS coverage from their Mahalanobis distance to the mean habitat vector. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were classified as lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, creates larger Malanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well-sampled to determine the mean habitat vector, and distributions of the habitat variables does not change.

  10. Real-Time Pathogen Detection in the Era of Whole-Genome Sequencing and Big Data: Comparison of k-mer and Site-Based Methods for Inferring the Genetic Distances among Tens of Thousands of Salmonella Samples.

    PubMed

    Pettengill, James B; Pightling, Arthur W; Baugher, Joseph D; Rand, Hugh; Strain, Errol

    2016-01-01

    The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging due to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). When analyzing empirical data (whole-genome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.

  11. Real-Time Pathogen Detection in the Era of Whole-Genome Sequencing and Big Data: Comparison of k-mer and Site-Based Methods for Inferring the Genetic Distances among Tens of Thousands of Salmonella Samples

    DOE PAGES

    Pettengill, James B.; Pightling, Arthur W.; Baugher, Joseph D.; ...

    2016-11-10

    The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging duemore » to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). Finally, when analyzing empirical data (wholegenome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.« less

  12. Real-Time Pathogen Detection in the Era of Whole-Genome Sequencing and Big Data: Comparison of k-mer and Site-Based Methods for Inferring the Genetic Distances among Tens of Thousands of Salmonella Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettengill, James B.; Pightling, Arthur W.; Baugher, Joseph D.

    The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging duemore » to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). Finally, when analyzing empirical data (wholegenome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.« less

  13. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    which EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization. The new minimization problem is very similar to problems which...which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing, computer vision

  14. Development of an in-situ multi-component reinforced Al-based metal matrix composite by direct metal laser sintering technique — Optimization of process parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Subrata Kumar, E-mail: subratagh82@gmail.com; Bandyopadhyay, Kaushik; Saha, Partha

    2014-07-01

    In the present investigation, an in-situ multi-component reinforced aluminum based metal matrix composite was fabricated by the combination of self-propagating high-temperature synthesis and direct metal laser sintering process. The different mixtures of Al, TiO{sub 2} and B{sub 4}C powders were used to initiate and maintain the self-propagating high-temperature synthesis by laser during the sintering process. It was found from the X-ray diffraction analysis and scanning electron microscopy that the reinforcements like Al{sub 2}O{sub 3}, TiC, and TiB{sub 2} were formed in the composite. The scanning electron microscopy revealed the distribution of the reinforcement phases in the composite and phase identities.more » The variable parameters such as powder layer thickness, laser power, scanning speed, hatching distance and composition of the powder mixture were optimized for higher density, lower porosity and higher microhardness using Taguchi method. Experimental investigation shows that the density of the specimen mainly depends upon the hatching distance, composition and layer thickness. On the other hand, hatching distance, layer thickness and laser power are the significant parameters which influence the porosity. The composition, laser power and layer thickness are the key influencing parameters for microhardness. - Highlights: • The reinforcements such as Al{sub 2}O{sub 3}, TiC, and TiB{sub 2} were produced in Al-MMC through SHS. • The density is mainly influenced by the material composition and hatching distance. • Hatching distance is the major influencing parameter on porosity. • The material composition is the significant parameter to enhance the microhardness. • The SEM micrographs reveal the distribution of TiC, TiB{sub 2} and Al{sub 2}O{sub 3} in the composite.« less

  15. The Hubble Space Telescope Extragalactic Distance Scale Key Project. 1: The discovery of Cepheids and a new distance to M81

    NASA Technical Reports Server (NTRS)

    Freedman, Wendy L.; Hughes, Shaun M.; Madore, Barry F.; Mould, Jeremy R.; Lee, Myung Gyoon; Stetson, Peter; Kennicutt, Robert C.; Turner, Anne; Ferrarese, Laura; Ford, Holland

    1994-01-01

    We report on the discovery of 30 new Cepheids in the nearby galaxy M81 based on observations using the Hubble Space Telescope (HST). The periods of these Cepheids lie in the range of 10-55 days, based on 18 independent epochs using the HST wide-band F555W filter. The HST F555W and F785LP data have been transformed to the Cousins standard V and I magnitude system using a ground-based calibration. Apparent period-luminosity relations at V and I were constructed, from which apparent distance moduli were measured with respect to assumed values of mu(sub 0) = 18.50 mag and E(B - V) = 0.10 mag for the Large Magellanic Cloud. The difference in the apparent V and I moduli yields a measure of the difference in the total mean extinction between the M81 and the LMC Cepheid samples. A low total mean extinction to the M81 sample of E(B - V) = 0.03 +/- 0.05 mag is obtained. The true distance modulus to M81 is determined to be 27.80 +/- 0.20 mag, corresponding to a distance of 3.63 +/- 0.34 Mpc. These data illustrate that with an optimal (power-law) sampling strategy, the HST provides a powerful tool for the discovery of extragalactic Cepheids and their application to the distance scale. M81 is the first calibrating galaxy in the target sample of the HST Key Project on the Extragalactic Distance Scale, the ultimate aim of which is to provide a value of the Hubble constant to 10% accuracy.

  16. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, a FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of the mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGAs used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.

  17. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    PubMed

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.

  18. An improved hierarchical A * algorithm in the optimization of parking lots

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    In the parking lot parking path optimization, the traditional evaluation index is the shortest distance as the best index and it does not consider the actual road conditions. Now, the introduction of a more practical evaluation index can not only simplify the hardware design of the boot system but also save the software overhead. Firstly, we establish the parking lot network graph RPCDV mathematical model and all nodes in the network is divided into two layers which were constructed using different evaluation function base on the improved hierarchical A * algorithm which improves the time optimal path search efficiency and search precision of the evaluation index. The final results show that for different sections of the program attribute parameter algorithm always faster the time to find the optimal path.

  19. Optical design of soft multifocal contact lens with uniform optical power in center-distance zone with optimized NURBS.

    PubMed

    Vu, Lien T; Chen, Chao-Chang A; Yu, Chia-Wei

    2018-02-05

    This study aims to develop a new optical design method of soft multifocal contact lens (CLs) to obtain uniform optical power in large center-distance zone with optimized Non-Uniform Rational B-spline (NURBS). For the anterior surface profiles of CLs, the NURBS design curves are optimized to match given optical power distributions. Then, the NURBS in the center-distance zones are fitted in the corresponding spherical/aspheric curves for both data points and their centers of curvature to achieve the uniform power. Four cases of soft CLs have been manufactured by casting in shell molds by injection molding and then measured to verify the design specifications. Results of power profiles of these CLs are concord with the given clinical requirements of uniform powers in larger center-distance zone. The developed optical design method has been verified for multifocal CLs design and can be further applied for production of soft multifocal CLs.

  20. Individualized optimal release angles in discus throwing.

    PubMed

    Leigh, Steve; Liu, Hui; Hubbard, Mont; Yu, Bing

    2010-02-10

    The purpose of this study was to determine individualized optimal release angles for elite discus throwers. Three-dimensional coordinate data were obtained for at least 10 competitive trials for each subject. Regression relationships between release speed and release angle, and between aerodynamic distance and release angle were determined for each subject. These relationships were linear with subject-specific characteristics. The subject-specific relationships between release speed and release angle may be due to subjects' technical and physical characteristics. The subject-specific relationships between aerodynamic distance and release angle may be due to interactions between the release angle, the angle of attack, and the aerodynamic distance. Optimal release angles were estimated for each subject using the regression relationships and equations of projectile motion. The estimated optimal release angle was different for different subjects, and ranged from 35 degrees to 44 degrees . The results of this study demonstrate that the optimal release angle for discus throwing is thrower-specific. The release angles used by elite discus throwers in competition are not necessarily optimal for all discus throwers, or even themselves. The results of this study provide significant information for understanding the biomechanics of discus throwing techniques. Copyright 2009 Elsevier Ltd. All rights reserved.

  1. Optimization of wearable microwave antenna with simplified electromagnetic model of the human body

    NASA Astrophysics Data System (ADS)

    Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir

    2017-12-01

    In this paper the problem of optimization design of a microwave wearable antenna is investigated. Reference is made to a specific antenna design that is a wideband Vee antenna the geometry of which is characterized by 6 parameters. These parameters were automatically adjusted with an evolution strategy based algorithm EStra to obtain the impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band which covers the frequency range of 2.4 GHz up to 2.5 GHz. The optimization procedure used the finite-difference time-domain method based full-wave simulator with a simplified human body model. In the optimization procedure small movements of antenna towards or away of the human body that are likely to happen during real use were considered. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization procedure allowed obtaining good impedance matching for a given range of antenna distances with respect to the human body.

  2. Numerical Optimization of the Position in Femoral Head of Proximal Locking Screws of Proximal Femoral Nail System; Biomechanical Study.

    PubMed

    Konya, Mehmet Nuri; Verim, Özgür

    2017-09-29

    Proximal femoral fracture rates are increasing due to osteoporosis and traffic accidents. Proximal femoral nails are routinely used in the treatment of these fractures in the proximal femur. To compare various combinations and to determine the ideal proximal lag screw position in pertrochanteric fractures (Arbeitsgemeinschaft für Osteosynthesefragen classification 31-A1) of the femur by using optimized finite element analysis. Biomechanical study. Computed tomography images of patients' right femurs were processed with Mimics. Afterwards a solid femur model was created with SolidWorks 2015 and transferred to ANSYS Workbench 16.0 for response surface optimization analysis which was carried out according to anterior-posterior (-10°0) and posterior-anterior directions of the femur neck significantly increased these stresses. The most suitable position of the proximal lag screw was confirmed as the middle of the femoral neck by using optimized finite element analysis.

  3. Research and Infrastructure Development Center for Nanomaterials Research

    DTIC Science & Technology

    2009-05-01

    scale, this technique may prove highly valuable for optimizing the distance dependent energy transfer effects for maximum sensitivity to target...this technique may prove highly valuable for optimizing the distance dependent energy transfer effects for maximum sensitivity 0 20000 40000 60000... Pulsed laser deposition of carbon films on quartz and silicon simply did not work due to their poor conductivity. We found that pyrolized photoresist

  4. Graph-based optimization of epitope coverage for vaccine antigen design

    DOE PAGES

    Theiler, James Patrick; Korber, Bette Tina Marie

    2017-01-29

    Epigraph is a recently developed algorithm that enables the computationally efficient design of single or multi-antigen vaccines to maximize the potential epitope coverage for a diverse pathogen population. Potential epitopes are defined as short contiguous stretches of proteins, comparable in length to T-cell epitopes. This optimal coverage problem can be formulated in terms of a directed graph, with candidate antigens represented as paths that traverse this graph. Epigraph protein sequences can also be used as the basis for designing peptides for experimental evaluation of immune responses in natural infections to highly variable proteins. The epigraph tool suite also enables rapidmore » characterization of populations of diverse sequences from an immunological perspective. Fundamental distance measures are based on immunologically relevant shared potential epitope frequencies, rather than simple Hamming or phylogenetic distances. Here, we provide a mathematical description of the epigraph algorithm, include a comparison of different heuristics that can be used when graphs are not acyclic, and we describe an additional tool we have added to the web-based epigraph tool suite that provides frequency summaries of all distinct potential epitopes in a population. Lastly, we also show examples of the graphical output and summary tables that can be generated using the epigraph tool suite and explain their content and applications.« less

  5. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers

    PubMed Central

    2018-01-01

    Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publically available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with the small amount of training data as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which equates favorably with the state-of-the-art methods. PMID:29304512

  6. Graph-based optimization of epitope coverage for vaccine antigen design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, James Patrick; Korber, Bette Tina Marie

    Epigraph is a recently developed algorithm that enables the computationally efficient design of single or multi-antigen vaccines to maximize the potential epitope coverage for a diverse pathogen population. Potential epitopes are defined as short contiguous stretches of proteins, comparable in length to T-cell epitopes. This optimal coverage problem can be formulated in terms of a directed graph, with candidate antigens represented as paths that traverse this graph. Epigraph protein sequences can also be used as the basis for designing peptides for experimental evaluation of immune responses in natural infections to highly variable proteins. The epigraph tool suite also enables rapidmore » characterization of populations of diverse sequences from an immunological perspective. Fundamental distance measures are based on immunologically relevant shared potential epitope frequencies, rather than simple Hamming or phylogenetic distances. Here, we provide a mathematical description of the epigraph algorithm, include a comparison of different heuristics that can be used when graphs are not acyclic, and we describe an additional tool we have added to the web-based epigraph tool suite that provides frequency summaries of all distinct potential epitopes in a population. Lastly, we also show examples of the graphical output and summary tables that can be generated using the epigraph tool suite and explain their content and applications.« less

  7. Polynomial Supertree Methods Revisited

    PubMed Central

    Brinkmeyer, Malte; Griebel, Thasso; Böcker, Sebastian

    2011-01-01

    Supertree methods allow to reconstruct large phylogenetic trees by combining smaller trees with overlapping leaf sets into one, more comprehensive supertree. The most commonly used supertree method, matrix representation with parsimony (MRP), produces accurate supertrees but is rather slow due to the underlying hard optimization problem. In this paper, we present an extensive simulation study comparing the performance of MRP and the polynomial supertree methods MinCut Supertree, Modified MinCut Supertree, Build-with-distances, PhySIC, PhySIC_IST, and super distance matrix. We consider both quality and resolution of the reconstructed supertrees. Our findings illustrate the tradeoff between accuracy and running time in supertree construction, as well as the pros and cons of voting- and veto-based supertree approaches. Based on our results, we make some general suggestions for supertree methods yet to come. PMID:22229028

  8. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter setting turning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. However, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution in both the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in a SVM to perform both parameter setting turning for the SVM and feature selection to solve real-world classification problems. This method is called chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.

  9. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter setting turning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. However, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution in both the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm’s performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in a SVM to perform both parameter setting turning for the SVM and feature selection to solve real-world classification problems. This method is called chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096

  10. The effect of increasing strength and approach velocity on triple jump performance.

    PubMed

    Allen, Sam J; Yeadon, M R Fred; King, Mark A

    2016-12-08

    The triple jump is an athletic event comprising three phases in which the optimal phase ratio (the proportion of each phase to the total distance jumped) is unknown. This study used a planar whole body torque-driven computer simulation model of the ground contact parts of all three phases of the triple jump to investigate the effect of strength and approach velocity on optimal performance. The strength and approach velocity of the simulation model were each increased by up to 30% in 10% increments from baseline data collected from a national standard triple jumper. Increasing strength always resulted in an increased overall jump distance. Increasing approach velocity also typically resulted in an increased overall jump distance but there was a point past which increasing approach velocity without increasing strength did not lead to an increase in overall jump distance. Increasing both strength and approach velocity by 10%, 20%, and 30% led to roughly equivalent increases in overall jump distances. Distances ranged from 14.05m with baseline strength and approach velocity, up to 18.49m with 30% increases in both. Optimal phase ratios were either hop-dominated or balanced, and typically became more balanced when the strength of the model was increased by a greater percentage than its approach velocity. The range of triple jump distances that resulted from the optimisation process suggests that strength and approach velocity are of great importance for triple jump performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Interlead distance and left ventricular lead electrical delay predict reverse remodeling during cardiac resynchronization therapy.

    PubMed

    Merchant, Faisal M; Heist, E Kevin; Nandigam, K Veena; Mulligan, Lawrence J; Blendea, Dan; Riedl, Lindsay; McCarty, David; Orencole, Mary; Picard, Michael H; Ruskin, Jeremy N; Singh, Jagmeet P

    2010-05-01

    Both anatomic interlead separation and left ventricle lead electrical delay (LVLED) have been associated with outcomes following cardiac resynchronization therapy (CRT). However, the relationship between interlead distance and electrical delay in predicting CRT outcomes has not been defined. We studied 61 consecutive patients undergoing CRT for standard clinical indications. All patients underwent intraprocedural measurement of LVLED. Interlead distances in the horizontal (HD), vertical (VD), and direct (DD) dimensions were measured from postprocedure chest radiographs (CXR). Remodeling indices [percent change in left ventricle (LV) ejection fraction, end-diastolic, end-systolic dimensions] were assessed by transthoracic echocardiogram. There was a positive correlation between corrected LVLED and HD on lateral CXR (r = 0.361, P = 0.004) and a negative correlation between LVLED and VD on posteroanterior (PA) CXR (r =-0.281, P = 0.028). To account for this inverse relationship, we developed a composite anatomic distance (defined as: lateral HD-PA VD), which correlated most closely with LVLED (r = 0.404, P = 0.001). Follow-up was available for 48 patients. At a mean of 4.1 +/- 3.2 months, patients with optimal values for both corrected LVLED (>or=75%) and composite anatomic distance (>or=15 cm) demonstrated greater reverse LV remodeling than patients with either one or neither of these optimized values. We identified a significant correlation between LV-right ventricular interlead distance and LVLED; additionally, both parameters act synergistically in predicting LV anatomic reverse remodeling. Efforts to optimize both interlead distance and electrical delay may improve CRT outcomes.

  12. Influence of the distance between target surface and focal point on the expansion dynamics of a laser-induced silicon plasma with spatial confinement

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Chen, Anmin; Wang, Xiaowei; Wang, Ying; Sui, Laizhi; Ke, Da; Li, Suyu; Jiang, Yuanfei; Jin, Mingxing

    2018-05-01

    Expansion dynamics of a laser-induced plasma plume, with spatial confinement, for various distances between the target surface and focal point were studied by the fast photography technique. A silicon wafer was ablated to induce the plasma with a Nd:YAG laser in an atmospheric environment. The expansion dynamics of the plasma plume depended on the distance between the target surface and focal point. In addition, spatially confined time-resolved images showed the different structures of the plasma plumes at different distances between the target surface and focal point. By analyzing the plume images, the optimal distance for emission enhancement was found to be approximately 6 mm away from the geometrical focus using a 10 cm focal length lens. This optimized distance resulted in the strongest compression ratio of the plasma plume by the reflected shock wave. Furthermore, the duration of the interaction between the reflected shock wave and the plasma plume was also prolonged.

  13. More rapid climate change promotes evolutionary rescue through selection for increased dispersal distance.

    PubMed

    Boeye, Jeroen; Travis, Justin M J; Stoks, Robby; Bonte, Dries

    2013-02-01

    Species can either adapt to new conditions induced by climate change or shift their range in an attempt to track optimal environmental conditions. During current range shifts, species are simultaneously confronted with a second major anthropogenic disturbance, landscape fragmentation. Using individual-based models with a shifting climate window, we examine the effect of different rates of climate change on the evolution of dispersal distances through changes in the genetically determined dispersal kernel. Our results demonstrate that the rate of climate change is positively correlated to the evolved dispersal distances although too fast climate change causes the population to crash. When faced with realistic rates of climate change, greater dispersal distances evolve than those required for the population to keep track of the climate, thereby maximizing population size. Importantly, the greater dispersal distances that evolve when climate change is more rapid, induce evolutionary rescue by facilitating the population in crossing large gaps in the landscape. This could ensure population persistence in case of range shifting in fragmented landscapes. Furthermore, we highlight problems in using invasion speed as a proxy for potential range shifting abilities under climate change.

  14. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  15. Solution for a bipartite Euclidean traveling-salesman problem in one dimension.

    PubMed

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  16. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    PubMed

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

    The inherent volatility and unpredictable nature of renewable generations and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In so doing, three various risk-based strategies are distinguished by using conditional value at risk (CVaR) approach. The proposed model is specified as a two-distinct objective function. The first function minimizes the operation and maintenance costs, cost of power transaction between upstream network and MGs as well as power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, the stochastic scenario-based approach is incorporated into the approach in order to handle the uncertainty. Also, Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, non-dominated sorting genetic algorithm (NSGAII) is applied to minimize the objective functions simultaneously and the best solution is extracted by fuzzy satisfying method with respect to risk-based strategies. To indicate the performance of the proposed model, it is performed on the modified IEEE 33-bus distribution system and the obtained results show that the presented approach can be considered as an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.

  18. Clinical predictors of effective continuous positive airway pressure in patients with obstructive sleep apnea/hypopnea syndrome.

    PubMed

    Lai, Chi-Chih; Friedman, Michael; Lin, Hsin-Ching; Wang, Pa-Chun; Hwang, Michelle S; Hsu, Cheng-Ming; Lin, Meng-Chih; Chin, Chien-Hung

    2015-08-01

    To identify standard clinical parameters that may predict the optimal level of continuous positive airway pressure (CPAP) in adult patients with obstructive sleep apnea/hypopnea syndrome (OSAHS). This is a retrospective study in a tertiary academic medical center that included 129 adult patients (117 males and 12 females) with OSAHS confirmed by diagnostic polysomnography (PSG). All OSAHS patients underwent successful full-night manual titration to determine the optimal CPAP pressure level for OSAHS treatment. The PSG parameters and completed physical examination, including body mass index, tonsil size grading, modified Mallampati grade (also known as updated Friedman's tongue position [uFTP]), uvular length, neck circumference, waist circumference, hip circumference, thyroid-mental distance, and hyoid-mental distance (HMD) were recorded. When the physical examination variables and OSAHS disease were correlated singly with the optimal CPAP pressure, we found that uFTP, HMD, and apnea/hypopnea index (AHI) were reliable predictors of CPAP pressures (P = .013, P = .002, and P < .001, respectively, by multiple regression). When all important factors were considered in a stepwise multiple linear regression analysis, a significant correlation with optimal CPAP pressure was formulated by factoring the uFTP, HMD, and AHI (optimal CPAP pressure = 1.01 uFTP + 0.74 HMD + 0.059 AHI - 1.603). This study distinguished the correlation between uFTP, HMD, and AHI with the optimal CPAP pressure. The structure of the upper airway (especially tongue base obstruction) and disease severity may predict the effective level of CPAP pressure. 4. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  19. A novel approach to find and optimize bin locations and collection routes using a geographic information system.

    PubMed

    Erfani, Seyed Mohammad Hassan; Danesh, Shahnaz; Karrabi, Seyed Mohsen; Shad, Rouzbeh

    2017-07-01

    One of the major challenges in big cities is planning and implementation of an optimized, integrated solid waste management system. This optimization is crucial if environmental problems are to be prevented and the expenses to be reduced. A solid waste management system consists of many stages including collection, transfer and disposal. In this research, an integrated model was proposed and used to optimize two functional elements of municipal solid waste management (storage and collection systems) in the Ahmadabad neighbourhood located in the City of Mashhad - Iran. The integrated model was performed by modelling and solving the location allocation problem and capacitated vehicle routing problem (CVRP) through Geographic Information Systems (GIS). The results showed that the current collection system is not efficient owing to its incompatibility with the existing urban structure and population distribution. Application of the proposed model could significantly improve the storage and collection system. Based on the results of minimizing facilities analyses, scenarios with 100, 150 and 180 m walking distance were considered to find optimal bin locations for Alamdasht, C-metri and Koohsangi. The total number of daily collection tours was reduced to seven as compared to the eight tours carried out in the current system (12.50% reduction). In addition, the total number of required crews was minimized and reduced by 41.70% (24 crews in the current collection system vs 14 in the system provided by the model). The total collection vehicle routing was also optimized such that the total travelled distances during night and day working shifts was cut back by 53%.

  20. Automated working distance adjustment for a handheld OCT-Laryngoscope

    NASA Astrophysics Data System (ADS)

    Donner, Sabine; Bleeker, Sebastian; Ripken, Tammo; Krueger, Alexander

    2014-03-01

    Optical coherence tomography (OCT) is an imaging technique which enables diagnosis of vocal cord tissue structure by non-contact optical biopsies rather than invasive tissue biopsies. For diagnosis on awake patients OCT was adapted to a rigid indirect laryngoscope. The working distance must match the probe-sample distance, which varies from patient to patient. Therefore the endoscopic OCT sample arm has a variable working distance of 40 mm to 80 mm. The current axial position is identified by automated working distance adjustments based on image processing. The OCT reference plane and the focal plane of the sample arm are moved according to position errors. Repeated position adjustment during the whole diagnostic procedure keeps the tissue sample at the optimal axial position. The auto focus identifies and adjusts the working distance within the range of 50 mm within a maximum time of 2.7 s. Continuous image stabilisation reduces axial sample movement within the sampling depth for handheld OCT scanning. Rapid autofocus reduces the duration of the diagnostic procedure and axial position stabilisation eases the use of the OCT laryngoscope. Therefore this work is an important step towards the integration of OCT into indirect laryngoscopes.

  1. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and wildly used in many fields of application. In contrast to many successful applications, the theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of algorithm in finite time, that is, how to evaluate the solution quality got by algorithm for practical problems. It greatly limits the application in practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of search space and characteristic of algorithm itself. Instead of "value performance," the "ordinal performance" is used as evaluation criteria in this method. The feasible solutions were clustered according to distance to divide solution samples into several parts. Then, solution space and "good enough" set can be decomposed based on the clustering results. Last, using relative knowledge of statistics, the evaluation result can be got. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and artificial fish swarm algorithm (AFS) were taken to solve traveling salesman problem. Computational results indicate the feasibility of proposed method.

  2. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in order to obtain an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method from the point of view of robot kinematics validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, so achieving a constant nozzle speed. Thus, it is possible optimize robot performance and to economize robot energy.

  3. Calibrating the orientation between a microlens array and a sensor based on projective geometry

    NASA Astrophysics Data System (ADS)

    Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan

    2016-07-01

    We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate the calibration process can be performed with a commercial prime lens and the proposed method can be used to quantitatively evaluate whether a MLA and a sensor is assembled properly for plenoptic systems.

  4. Is multiple-sequence alignment required for accurate inference of phylogeny?

    PubMed

    Höhl, Michael; Ragan, Mark A

    2007-04-01

    The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.

  5. A segmentation method for lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise

    PubMed Central

    Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian

    2017-01-01

    The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodules positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916

  6. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    PubMed Central

    Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou

    2013-01-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491

  7. Optimizing the positional relationships between instruments used in laparoscopic simulation using a simple trigonometric method.

    PubMed

    Lorias Espinoza, Daniel; Ordorica Flores, Ricardo; Minor Martínez, Arturo; Gutiérrez Gnecchi, José Antonio

    2014-06-01

    Various methods for evaluating laparoscopic skill have been reported, but without detailed information on the configuration used they are difficult to reproduce. Here we present a method based on the trigonometric relationships between the instruments used in a laparoscopic training platform in order to provide a tool to aid in the reproducible assessment of surgical laparoscopic technique. The positions of the instruments were represented using triangles. Basic trigonometry was used to objectively establish the distances among the working ports RL, the placement of the optical port h', and the placement of the surgical target OT. The optimal configuration of a training platform depends on the selected working angles, the intracorporeal/extracorporeal lengths of the instrument, and the depth of the surgical target. We demonstrate that some distances, angles, and positions of the instruments are inappropriate for satisfactory laparoscopy. By applying basic trigonometric principles we can determine the ideal placement of the working ports and the optics in a simple, precise, and objective way. In addition, because the method is based on parameters known to be important in both the performance and quantitative quality of laparoscopy, the results are generalizable to different training platforms and types of laparoscopic surgery.

  8. On Global Optimal Sailplane Flight Strategy

    NASA Technical Reports Server (NTRS)

    Sander, G. J.; Litt, F. X.

    1979-01-01

    The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed is considered. Simple rules are obtained for two specific meteorological models. The first one uses concentrated lifts of various strengths and unequal distance. The second one takes into account finite, nonuniform space amplitudes for the lifts and allows, therefore, for dolphin style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.

  9. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.

  10. Optimization of biostimulant for bioremediation of contaminated coastal sediment by response surface methodology (RSM) and evaluation of microbial diversity by pyrosequencing.

    PubMed

    Subha, Bakthavachallam; Song, Young Chae; Woo, Jung Hui

    2015-09-15

    The present study aims to optimize the slow release biostimulant ball (BSB) for bioremediation of contaminated coastal sediment using response surface methodology (RSM). Different bacterial communities were evaluated using a pyrosequencing-based approach in contaminated coastal sediments. The effects of BSB size (1-5cm), distance (1-10cm) and time (1-4months) on changes in chemical oxygen demand (COD) and volatile solid (VS) reduction were determined. Maximum reductions of COD and VS, 89.7% and 78.8%, respectively, were observed at a 3cm ball size, 5.5cm distance and 4months; these values are the optimum conditions for effective treatment of contaminated coastal sediment. Most of the variance in COD and VS (0.9291 and 0.9369, respectively) was explained in our chosen models. BSB is a promising method for COD and VS reduction and enhancement of SRB diversity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.

  12. Optimal regionalization of extreme value distributions for flood estimation

    NASA Astrophysics Data System (ADS)

    Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.

    2018-01-01

    Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.

  13. Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation

    DTIC Science & Technology

    2016-08-12

    error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both...spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross- correlation can be used, whereas distances between spike...with parameters that describe their associated temporal dynamics and relations to the observed output. A description of the model follows, but we

  14. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C trace experiment is crucial for quantifying the intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with penalty function to solve 13C-based metabolic flux estimation problem. The stoichiometric constraints are transformed to an unconstrained one, by penalizing the constraints and building a single objective function, which in turn is minimized using PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. From simulation results, it is shown that the proposed algorithm has superior performance and fast convergence ability when compared to other existing algorithms.

  15. Optimizing electrode configuration for electrical impedance measurements of muscle via the finite element method.

    PubMed

    Jafarpoor, Mina; Li, Jia; White, Jacob K; Rutkove, Seward B

    2013-05-01

    Electrical impedance myography (EIM) is a technique for the evaluation of neuromuscular diseases, including amyotrophic lateral sclerosis and muscular dystrophy. In this study, we evaluated how alterations in the size and conductivity of muscle and thickness of subcutaneous fat impact the EIM data, with the aim of identifying an optimized electrode configuration for EIM measurements. Finite element models were developed for the human upper arm based on anatomic data; material properties of the tissues were obtained from rat and published sources. The developed model matched the frequency-dependent character of the data. Of the three major EIM parameters, resistance, reactance, and phase, the reactance was least susceptible to alterations in the subcutaneous fat thickness, regardless of electrode arrangement. For example, a quadrupling of fat thickness resulted in a 375% increase in resistance at 35 kHz but only a 29% reduction in reactance. By further optimizing the electrode configuration, the change in reactance could be reduced to just 0.25%. For a fixed 30 mm distance between the sense electrodes centered between the excitation electrodes, an 80 mm distance between the excitation electrodes was found to provide the best balance, with a less than 1% change in reactance despite a doubling of subcutaneous fat thickness or halving of muscle size. These analyses describe a basic approach for further electrode configuration optimization for EIM.

  16. Exploring Cloud Computing for Distance Learning

    ERIC Educational Resources Information Center

    He, Wu; Cernusca, Dan; Abdous, M'hammed

    2011-01-01

    The use of distance courses in learning is growing exponentially. To better support faculty and students for teaching and learning, distance learning programs need to constantly innovate and optimize their IT infrastructures. The new IT paradigm called "cloud computing" has the potential to transform the way that IT resources are utilized and…

  17. Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization

    NASA Astrophysics Data System (ADS)

    Adhikari, Sam

    2007-11-01

    Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produce shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical coordinate based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provide a model that relates several parameters with shock cell patterns, screech frequency and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedrons provides the optimal result. Various industry standard methods like regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second order cone programming is used for Quadratic Optimization.

  18. Clinical predictors of the optimal spectacle correction for comfort performing desktop tasks.

    PubMed

    Leffler, Christopher T; Davenport, Byrd; Rentz, Jodi; Miller, Amy; Benson, William

    2008-11-01

    The best strategy for spectacle correction of presbyopia for near tasks has not been determined. Thirty volunteers over the age of 40 years were tested for subjective accommodative amplitude, pupillary size, fusional vergence, interpupillary distance, arm length, preferred working distance, near and far visual acuity and preferred reading correction in the phoropter and trial frames. Subjects performed near tasks (reading, writing and counting change) using various spectacle correction strengths. Predictors of the correction maximising near task comfort were determined by multivariable linear regression. The mean age was 54.9 years (range 43 to 71) and 40 per cent had diabetes. Significant predictors of the most comfortable addition in univariate analyses were age (p<0.001), interpupillary distance (p=0.02), fusional vergence amplitude (p=0.02), distance visual acuity in the worse eye (p=0.01), vision at 40 cm in the worse eye with distance correction (p=0.01), duration of diabetes (p=0.01), and the preferred correction to read at 40 cm with the phoropter (p=0.002) or trial frames (p<0.001). Target distance selected wearing trial frames (in dioptres), arm length, and accommodative amplitude were not significant predictors (p>0.15). The preferred addition wearing trial frames holding a reading target at a distance selected by the patient was the only independent predictor. Excluding this variable, distance visual acuity was predictive independent of age or near vision wearing distance correction. The distance selected for task performance was predicted by vision wearing distance correction at near and at distance. Multivariable linear regression can be used to generate tables based on distance visual acuity and age or near vision wearing distance correction to determine tentative near spectacle addition. Final spectacle correction for desktop tasks can be estimated by subjective refraction with trial frames.

  19. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    Quadratic Assignment Problems (QAP) is classified as the NP hard problem. It has been used to model a lot of problem in several areas such as operational research, combinatorial data analysis and also parallel and distributed computing, optimization problem such as graph portioning and Travel Salesman Problem (TSP). In the literature, researcher use exact algorithm, heuristics algorithm and metaheuristic approaches to solve QAP problem. QAP is largely applied in facility layout problem (FLP). In this paper we used QAP to model university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we have modeled a QAP problem with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained. Flow is the movement from one to another facility, whereas distance is the distance between one locations of a facility to other facilities locations. The objective of the QAP is to obtain minimum total walking (flow) of lecturers from one destination to another (distance).

  20. Classification and recognition of dynamical models: the role of phase, independent components, kernels and optimal transport.

    PubMed

    Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano

    2007-11-01

    We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.

  1. Multislice CT perfusion imaging of the lung in detection of pulmonary embolism

    NASA Astrophysics Data System (ADS)

    Hong, Helen; Lee, Jeongjin

    2006-03-01

    We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, optimal segmentation technique is performed for extracting same volume of the lungs, major airway and vascular structures from pre- and post-contrast images with different lung density. Second, initial registration based on apex, hilar point and center of inertia (COI) of each unilateral lung is proposed to correct the gross translational mismatch. Third, initial alignment is refined by iterative surface registration. For fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by the narrow-band distance propagation. Fourth, 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting registered pre-contrast images from post-contrast images. To facilitate visualization of parenchyma enhancement, color-coded mapping and image fusion is used. Our method has been successfully applied to ten patients of pre- and post-contrast images in chest MDCT angiography. Experimental results show that the performance of our method is very promising compared with conventional methods with the aspects of its visual inspection, accuracy and processing time.

  2. Performance of an optical equalizer in a 10 G wavelength converting optical access network.

    PubMed

    Mendinueta, José Manuel D; Cao, Bowen; Thomsen, Benn C; Mitchell, John E

    2011-12-12

    A centralized optical processing unit (COPU) that functions both as a wavelength converter (WC) and optical burst equaliser in a 10 Gb/s wavelength-converting optical access network is proposed and experimentally characterized. This COPU is designed to consolidate drifting wavelengths generated with an uncooled laser in the upstream direction into a stable wavelength channel for WDM backhaul transmission and to equalize the optical loud/soft burst power in order to relax the burst-mode receiver dynamic range requirement. The COPU consists of an optical power equaliser composed of two cascaded SOAs followed by a WC. Using an optical packet generator and a DC-coupled PIN-based digital burst-mode receiver, the COPU is characterized in terms of payload-BER for back-to-back and backhaul transmission distances of 22, 40, and 62 km. We show that there is a compromise between the receiver sensitivity and overload points that can be optimized tuning the WC operating point for a particular backhaul fiber transmission distance. Using the optimized settings, sensitivities of -30.94, -30.17, and -27.26 dBm with overloads of -9.3, -5, and >-5 dBm were demonstrated for backhaul transmission distances of 22, 40 and 62 km, respectively. © 2011 Optical Society of America

  3. RED: a set of molecular descriptors based on Renyi entropy.

    PubMed

    Delgado-Soler, Laura; Toral, Raul; Tomás, M Santos; Rubio-Martinez, Jaime

    2009-11-01

    New molecular descriptors, RED (Renyi entropy descriptors), based on the generalized entropies introduced by Renyi are presented. Topological descriptors based on molecular features have proven to be useful for describing molecular profiles. Renyi entropy is used as a variability measure to contract a feature-pair distribution composing the descriptor vector. The performance of RED descriptors was tested for the analysis of different sets of molecular distances, virtual screening, and pharmacological profiling. A free parameter of the Renyi entropy has been optimized for all the considered applications.

  4. Influence of a Locomotor Training Approach on Walking Speed and Distance in People With Chronic Spinal Cord Injury: A Randomized Clinical Trial

    PubMed Central

    Roach, Kathryn E.

    2011-01-01

    Background Impaired walking limits function after spinal cord injury (SCI), but training-related improvements are possible even in people with chronic motor incomplete SCI. Objective The objective of this study was to compare changes in walking speed and distance associated with 4 locomotor training approaches. Design This study was a single-blind, randomized clinical trial. Setting This study was conducted in a rehabilitation research laboratory. Participants Participants were people with minimal walking function due to chronic SCI. Intervention Participants (n=74) trained 5 days per week for 12 weeks with the following approaches: treadmill-based training with manual assistance (TM), treadmill-based training with stimulation (TS), overground training with stimulation (OG), and treadmill-based training with robotic assistance (LR). Measurements Overground walking speed and distance were the primary outcome measures. Results In participants who completed the training (n=64), there were overall effects for speed (effect size index [d]=0.33) and distance (d=0.35). For speed, there were no significant between-group differences; however, distance gains were greatest with OG. Effect sizes for speed and distance were largest with OG (d=0.43 and d=0.40, respectively). Effect sizes for speed were the same for TM and TS (d=0.28); there was no effect for LR. The effect size for distance was greater with TS (d=0.16) than with TM or LR, for which there was no effect. Ten participants who improved with training were retested at least 6 months after training; walking speed at this time was slower than that at the conclusion of training but remained faster than before training. Limitations It is unknown whether the training dosage and the emphasis on training speed were optimal. Robotic training that requires active participation would likely yield different results. Conclusions In people with chronic motor incomplete SCI, walking speed improved with both overground training and treadmill-based training; however, walking distance improved to a greater extent with overground training. PMID:21051593

  5. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo-techniques. The warping function is a distance weighted exponential function with a landmark- specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.

  6. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    NASA Astrophysics Data System (ADS)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we proposed a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP). It can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, this method needs to calculate the optimal values of each objective function value in advance. Moreover, it does not consider the constraint conditions except for the objective functions. Therefore, it cannot apply to DDP which has many constraint conditions. To solve these issues, we proposed "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution named "provisional-ideal point" to search for the preferred solution for a decision maker. In this way, we can eliminate the preliminary calculations and its limited application scope. The results of the benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to DDP. As a result, the delivery path when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.

  7. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a silicon graphics crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.

  8. [Modeling and analysis of volume conduction based on field-circuit coupling].

    PubMed

    Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming

    2012-08-01

    Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of some physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method, and developed A 3D coupled field-circuit model of volume conduction basing on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross section area, electrode distance and circuit parameters on the performance of volume conduction system were obtained, which provided a basis for optimized design of energy transfer efficiency.

  9. The optimal distance between two electrode tips during recording of compound nerve action potentials in the rat median nerve

    PubMed Central

    Li, Yongping; Lao, Jie; Zhao, Xin; Tian, Dong; Zhu, Yi; Wei, Xiaochun

    2014-01-01

    The distance between the two electrode tips can greatly influence the parameters used for recording compound nerve action potentials. To investigate the optimal parameters for these recordings in the rat median nerve, we dissociated the nerve using different methods and compound nerve action potentials were orthodromically or antidromically recorded with different electrode spacings. Compound nerve action potentials could be consistently recorded using a method in which the middle part of the median nerve was intact, with both ends dissociated from the surrounding fascia and a ground wire inserted into the muscle close to the intact part. When the distance between two stimulating electrode tips was increased, the threshold and supramaximal stimulating intensity of compound nerve action potentials were gradually decreased, but the amplitude was not changed significantly. When the distance between two recording electrode tips was increased, the amplitude was gradually increased, but the threshold and supramaximal stimulating intensity exhibited no significant change. Different distances between recording and stimulating sites did not produce significant effects on the aforementioned parameters. A distance of 5 mm between recording and stimulating electrodes and a distance of 10 mm between recording and stimulating sites were found to be optimal for compound nerve action potential recording in the rat median nerve. In addition, the orthodromic compound action potential, with a biphasic waveform that was more stable and displayed less interference (however also required a higher threshold and higher supramaximal stimulus), was found to be superior to the antidromic compound action potential. PMID:25206798

  10. Laser Brazing Characteristics of Al to Brass with Zn-Based Filler

    NASA Astrophysics Data System (ADS)

    Tan, Caiwang; Liu, Fuyun; Sun, Yiming; Chen, Bo; Song, Xiaoguo; Li, Liqun; Zhao, Hongyun; Feng, Jicai

    2018-05-01

    Laser brazing of Al to brass in lap configuration with Zn-based filler was performed in this work. The process parameters including laser power, defocused distance were found to have a significant influence on appearance, microstructure and mechanical properties. The process parameters were optimized to be laser power of 2700 W and defocusing distance of + 40 mm from brass surface. In addition, preheating exerted great influence on wetting and spreading ability of Zn filler on brass surface. The microstructure observation showed the thickness of reaction layer (CuZn phase) at the interface of the brass side would grow with the increase in laser power and the decrease in the laser defocusing distance. Moreover, preheating could increase the spreading area of the filler metal and induced the growth of the reaction layer. The highest tensile-shear load of the joint could reach 2100 N, which was 80% of that of Al alloy base metal. All the joints fractured along the CuZn reaction layer and brass interface. The fracture morphology displayed the characteristics of the cleavage fracture when without preheating before welding, while it displayed the characteristics of the quasi-cleavage fracture with preheating before welding.

  11. Effects of Power on Mental Rotation and Emotion Recognition in Women.

    PubMed

    Nissan, Tali; Shapira, Oren; Liberman, Nira

    2015-10-01

    Based on construal-level theory (CLT) and its view of power as an instance of social distance, we predicted that high, relative to low power would enhance women's mental-rotation performance and impede their emotion-recognition performance. The predicted effects of power emerged both when it was manipulated via a recall priming task (Study 1) and environmental cues (Studies 2 and 3). Studies 3 and 4 found evidence for mediation by construal level of the effect of power on emotion recognition but not on mental rotation. We discuss potential mediating mechanisms for these effects based on both the social distance/construal level and the approach/inhibition views of power. We also discuss implications for optimizing performance on mental rotation and emotion recognition in everyday life. © 2015 by the Society for Personality and Social Psychology, Inc.

  12. Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis

    NASA Astrophysics Data System (ADS)

    Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.

    2014-04-01

    A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.

  13. "Catch the Pendulum": The Problem of Asymmetric Data Delivery in Electromagnetic Nanonetworks.

    PubMed

    Islam, Nabiul; Misra, Sudip

    2016-09-01

    The network of novel nano-material based nanodevices, known as nanoscale communication networks or nanonetworks has ushered a new communication paradigm in the terahertz band (0.1-10 THz). In this work, first we envisage an architecture of nanonetworks-based Coronary Heart Disease (CHD) monitoring, consisting of nano-macro interface (NM) and nanodevice-embedded Drug Eluting Stents (DESs), termed as nanoDESs. Next, we study the problem of asymmetric data delivery in such nanonetworks-based systems and propose a simple distance-aware power allocation algorithm, named catch-the-pendulum, which optimizes the energy consumption of nanoDESs for communicating data from the underlying nanonetworks to radio frequency (RF) based macro-scale communication networks. The algorithm exploits the periodic change in mean distance between a nanoDES, inserted inside the affected coronary artery, and the NM, fitted in the intercostal space of the rib cage of a patient suffering from a CHD. Extensive simulations confirm superior performance of the proposed algorithm with respect to energy consumption, packet delivery, and shutdown phase.

  14. An analysis of region-of-influence methods for flood regionalization in the Gulf-Atlantic Rolling Plains

    USGS Publications Warehouse

    Eng, K.; Tasker, Gary D.; Milly, P.C.D.

    2005-01-01

    Region-of-influence (RoI) approaches for estimating streamflow characteristics at ungaged sites were applied and evaluated in a case study of the 50-year peak discharge in the Gulf-Atlantic Rolling Plains of the southeastern United States. Linear regression against basin characteristics was performed for each ungaged site considered based on data from a region of influence containing the n closest gages in predictor variable (PRoI) or geographic (GRoI) space. Augmentation of this count based cutoff by a distance based cutoff also was considered. Prediction errors were evaluated for an independent (split-sampled) dataset. For the dataset and metrics considered here: (1) for either PRoI or GRoI, optimal results were found when the simpler count based cutoff, rather than the distance augmented cutoff, was used; (2) GRoI produced lower error than PRoI when applied indiscriminately over the entire study region; (3) PRoI performance improved considerably when RoI was restricted to predefined geographic subregions.

  15. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    PubMed

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, and Hun River was taken as a method validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month during January 2009 to December 2010. The results showed that the main monitoring items in the surface water of Hun River were ammonia nitrogen (NH 4 + -N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between non-optimized and optimized monitoring networks, and the optimized monitoring networks could correctly represent the original monitoring network. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimal method was identified as feasible, efficient, and economic.

  16. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.

  17. Mathematical Modeling and Optimization of Gaseous Fuel Processing as a Basic Technology for Long-distance Energy Transportation: The Use of Methanol and Dimethyl Ether as Energy Carriers.

    NASA Astrophysics Data System (ADS)

    Tyurina, E. A.; Mednikov, A. S.

    2017-11-01

    The paper presents the results of studies on the perspective technologies of natural gas conversion to synthetic liquid fuel (SLF) at energy-technology installations for combined production of SLF and electricity based on their detailed mathematical models. The technologies of the long-distance transport of energy of natural gas from large fields to final consumers are compared in terms of their efficiency. This work was carried out at Melentiev Energy Systems Institute of Siberian Branch of the Russian Academy of Sciences and supported by Russian Science Foundation via grant No 16-19-10174

  18. Robust iterative closest point algorithm based on global reference point for rotation invariant registration.

    PubMed

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration but it needs the good initial parameters. It is easily failed when the rotation angle between two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. After that, this optimization problem is solved by a variant of ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally this new algorithm accomplishes the registration by a coarse-to-fine way whatever the initial rotation angle is, which is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust compared with the original ICP algorithm.

  19. Robust iterative closest point algorithm based on global reference point for rotation invariant registration

    PubMed Central

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration but it needs the good initial parameters. It is easily failed when the rotation angle between two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. After that, this optimization problem is solved by a variant of ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally this new algorithm accomplishes the registration by a coarse-to-fine way whatever the initial rotation angle is, which is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust compared with the original ICP algorithm. PMID:29176780

  20. MASTtreedist: visualization of tree space based on maximum agreement subtree.

    PubMed

    Huang, Hong; Li, Yongji

    2013-01-01

    Phylogenetic tree construction process might produce many candidate trees as the "best estimates." As the number of constructed phylogenetic trees grows, the need to efficiently compare their topological or physical structures arises. One of the tree comparison's software tools, the Mesquite's Tree Set Viz module, allows the rapid and efficient visualization of the tree comparison distances using multidimensional scaling (MDS). Tree-distance measures, such as Robinson-Foulds (RF), for the topological distance among different trees have been implemented in Tree Set Viz. New and sophisticated measures such as Maximum Agreement Subtree (MAST) can be continuously built upon Tree Set Viz. MAST can detect the common substructures among trees and provide more precise information on the similarity of the trees, but it is NP-hard and difficult to implement. In this article, we present a practical tree-distance metric: MASTtreedist, a MAST-based comparison metric in Mesquite's Tree Set Viz module. In this metric, the efficient optimizations for the maximum weight clique problem are applied. The results suggest that the proposed method can efficiently compute the MAST distances among trees, and such tree topological differences can be translated as a scatter of points in two-dimensional (2D) space. We also provide statistical evaluation of provided measures with respect to RF-using experimental data sets. This new comparison module provides a new tree-tree pairwise comparison metric based on the differences of the number of MAST leaves among constructed phylogenetic trees. Such a new phylogenetic tree comparison metric improves the visualization of taxa differences by discriminating small divergences of subtree structures for phylogenetic tree reconstruction.

  1. Optimal architectures for long distance quantum communication.

    PubMed

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D; Jiang, Liang

    2016-02-15

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥ 1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speedup the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.

  2. Optimal architectures for long distance quantum communication

    PubMed Central

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-01-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speedup the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances. PMID:26876670

  3. Optimal architectures for long distance quantum communication

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-02-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speedup the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.

  4. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method, decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenge natural image annotation tasks demonstrate the effectiveness of the proposed method.

  5. Estimating Origin-Destination Matrices Using AN Efficient Moth Flame-Based Spatial Clustering Approach

    NASA Astrophysics Data System (ADS)

    Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali

    2017-09-01

    Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analysis and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired from the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve to a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.

  6. Heuristic lipophilicity potential for computer-aided rational drug design: optimizations of screening functions and parameters.

    PubMed

    Du, Q; Mezey, P G

    1998-09-01

    In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power distance-dependent function, bi/[formula: see text] Ri-r [formula: see text] gamma, screening function 2 is an exponential distance-dependent function, bi exp(-[formula: see text] Ri-r [formula: see text]/d0), and screening function 3 is a weighted distance-dependent function, sign(bi) exp[-xi [formula: see text] Ri-r [formula: see text]/magnitude of bi)]. For every screening function, the parameters (gamma, d0, and xi) are optimized using 41 common organic molecules of 4 types of compounds: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The results of calculations show that screening function 3 cannot give chemically reasonable results, however, both the power screening function and the exponential screening function give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function has larger values in the short distance than the power screening function, therefore more influence from the nearest neighbors is involved using screening function 2 than screening function 1. Second, the power screening function has larger values in the long distance than the exponential screening function, therefore screening function 1 is effected by atoms at long distance more than screening function 2. For screening function 1, the suitable range of parameter gamma is 1.0 < gamma < 3.0, gamma = 2.3 is recommended, and gamma = 2.0 is the nearest integral value. For screening function 2, the suitable range of parameter d0 is 1.5 < d0 < 3.0, and d0 = 2.0 is recommended. HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.

  7. Anharmonic Normal Mode Analysis of Elastic Network Model Improves the Modeling of Atomic Fluctuations in Protein Crystal Structures

    PubMed Central

    Zheng, Wenjun

    2010-01-01

    Abstract Protein conformational dynamics, despite its significant anharmonicity, has been widely explored by normal mode analysis (NMA) based on atomic or coarse-grained potential functions. To account for the anharmonic aspects of protein dynamics, this study proposes, and has performed, an anharmonic NMA (ANMA) based on the Cα-only elastic network models, which assume elastic interactions between pairs of residues whose Cα atoms or heavy atoms are within a cutoff distance. The key step of ANMA is to sample an anharmonic potential function along the directions of eigenvectors of the lowest normal modes to determine the mean-squared fluctuations along these directions. ANMA was evaluated based on the modeling of anisotropic displacement parameters (ADPs) from a list of 83 high-resolution protein crystal structures. Significant improvement was found in the modeling of ADPs by ANMA compared with standard NMA. Further improvement in the modeling of ADPs is attained if the interactions between a protein and its crystalline environment are taken into account. In addition, this study has determined the optimal cutoff distances for ADP modeling based on elastic network models, and these agree well with the peaks of the statistical distributions of distances between Cα atoms or heavy atoms derived from a large set of protein crystal structures. PMID:20550915

  8. Robustness analysis of superpixel algorithms to image blur, additive Gaussian noise, and impulse noise

    NASA Astrophysics Data System (ADS)

    Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming

    2017-11-01

    Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to it. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixel demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. Conclusively, to solve real-world problems effectively, more robust superpixel algorithms must be developed.

  9. Development of GEM detector for plasma diagnostics application: simulations addressing optimization of its performance

    NASA Astrophysics Data System (ADS)

    Chernyshova, M.; Malinowski, K.; Kowalska-Strzęciwilk, E.; Czarski, T.; Linczuk, P.; Wojeński, A.; Krawczyk, R. D.

    2017-12-01

    The advanced Soft X-ray (SXR) diagnostics setup devoted to studies of the SXR plasma emissivity is at the moment a highly relevant and important for ITER/DEMO application. Especially focusing on the energy range of tungsten emission lines, as plasma contamination by W and its transport in the plasma must be understood and monitored for W plasma-facing material. The Gas Electron Multiplier, with a spatial and energy-resolved photon detecting chamber, based SXR radiation detection system under development by our group may become such a diagnostic setup considering and solving many physical, technical and technological aspects. This work presents the results of simulations aimed to optimize a design of the detector's internal chamber and its performance. The study of the effect of electrodes alignment allowed choosing the gap distances which maximizes electron transmission and choosing the optimal magnitudes of the applied electric fields. Finally, the optimal readout structure design was identified suitable to collect a total formed charge effectively, basing on the range of the simulated electron cloud at the readout plane which was in the order of ~ 2 mm.

  10. Optimizing Travel Time to Outpatient Interventional Radiology Procedures in a Multi-Site Hospital System Using a Google Maps Application.

    PubMed

    Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P

    2018-02-20

    The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each a nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour time or rush hour time. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03), and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.

  11. Accuracy in identifying the elbow rotation axis on simulated fluoroscopic images using a new anatomical landmark.

    PubMed

    Wiggers, J K; Snijders, R M; Dobbe, J G G; Streekstra, G J; den Hartog, D; Schep, N W L

    2017-11-01

    External fixation of the elbow requires identification of the elbow rotation axis, but the accuracy of traditional landmarks (capitellum and trochlea) on fluoroscopy is limited. The relative distance (RD) of the humerus may be helpful as additional landmark. The first aim of this study was to determine the optimal RD that corresponds to an on-axis lateral image of the elbow. The second aim was to assess whether the use of the optimal RD improves the surgical accuracy to identify the elbow rotation axis on fluoroscopy. CT scans of elbows from five volunteers were used to simulate fluoroscopy; the actual rotation axis was calculated with CT-based flexion-extension analysis. First, three observers measured the optimal RD on simulated fluoroscopy. The RD is defined as the distance between the dorsal part of the humerus and the projection of the posteromedial cortex of the distal humerus, divided by the anteroposterior diameter of the humerus. Second, eight trauma surgeons assessed the elbow rotation axis on simulated fluoroscopy. In a preteaching session, surgeons used traditional landmarks. The surgeons were then instructed how to use the optimal RD as additional landmark in a postteaching session. The deviation from the actual rotation axis was expressed as rotational and translational error (±SD). Measurement of the RD was robust and easily reproducible; the optimal RD was 45%. The surgeons identified the elbow rotation axis with a mean rotational error decreasing from 7.6° ± 3.4° to 6.7° ± 3.3° after teaching how to use the RD. The mean translational error decreased from 4.2 ± 2.0 to 3.7 ± 2.0 mm after teaching. The humeral RD as additional landmark yielded small but relevant improvements. Although fluoroscopy-based external fixator alignment to the elbow remains prone to error, it is recommended to use the RD as additional landmark.

  12. Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks

    PubMed Central

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on Optimized Link State Routing Protocol (OLSR) and the ad hoc on demand distance vector (AODV) routing protocols as well as using the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase the throughput of network, reduces delay transmission and network load, decreases the traffic of HELLO messages according to network’s scalability. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of OLSR and AODV protocols in comparison to the baseline protocols analyzed. PMID:22346584

  13. Self-configuration and self-optimization process in heterogeneous wireless networks.

    PubMed

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on Optimized Link State Routing Protocol (OLSR) and the ad hoc on demand distance vector (AODV) routing protocols as well as using the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase the throughput of network, reduces delay transmission and network load, decreases the traffic of HELLO messages according to network's scalability. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of OLSR and AODV protocols in comparison to the baseline protocols analyzed.

  14. Time optimized path-choice in the termite hunting ant Megaponera analis.

    PubMed

    Frank, Erik T; Hönle, Philipp O; Linsenmair, K Eduard

    2018-05-10

    Trail network systems among ants have received a lot of scientific attention due to their various applications in problem solving of networks. Recent studies have shown that ants select the fastest available path when facing different velocities on different substrates, rather than the shortest distance. The progress of decision-making by these ants is determined by pheromone-based maintenance of paths, which is a collective decision. However, path optimization through individual decision-making remains mostly unexplored. Here we present the first study of time-optimized path selection via individual decision-making by scout ants. Megaponera analis scouts search for termite foraging sites and lead highly organized raid columns to them. The path of the scout determines the path of the column. Through installation of artificial roads around M. analis nests we were able to influence the pathway choice of the raids. After road installation 59% of all recorded raids took place completely or partly on the road, instead of the direct, i.e. distance-optimized, path through grass from the nest to the termites. The raid velocity on the road was more than double the grass velocity, the detour thus saved 34.77±23.01% of the travel time compared to a hypothetical direct path. The pathway choice of the ants was similar to a mathematical model of least time allowing us to hypothesize the underlying mechanisms regulating the behavior. Our results highlight the importance of individual decision-making in the foraging behavior of ants and show a new procedure of pathway optimization. © 2018. Published by The Company of Biologists Ltd.

  15. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes and the sum of weights of all its edges is minimum among all such possible spanning trees of the same network. In this study, we have developed a new GIS tool using most commonly known rudimentary algorithm called Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problem easily, efficiently and effectively. The selection of the appropriate algorithm is very essential otherwise it will be very hard to get an optimal result. In case of Road Transportation Network, it is very essential to find the optimal results by considering all the necessary points based on cost factor (time or distance). This paper is based on solving the Minimum Spanning Tree (MST) problem of a road network by finding it's minimum span by considering all the important network junction point. GIS technology is usually used to solve the network related problems like the optimal path problem, travelling salesman problem, vehicle routing problems, location-allocation problems etc. Therefore, in this study we have developed a customized GIS tool using Python script in ArcGIS software for the solution of MST problem for a Road Transportation Network of Dehradun city by considering distance and time as the impedance (cost) factors. It has a number of advantages like the users do not need a greater knowledge of the subject as the tool is user-friendly and that allows to access information varied and adapted the needs of the users. This GIS tool for MST can be applied for a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several cities optimally or connecting all cities with minimum total road length.

  16. Some aspects of hybrid-zeppelins. [optimization of delta wings for airships

    NASA Technical Reports Server (NTRS)

    Mackrodt, P. A.

    1975-01-01

    To increase an airship's maneuverability and payload capacity as well as to save bouyant gas it is proposed to outfit it with a slender delta-wing, which carries about one half of the total take-off weight of the vehicle. An optimization calculation based on the data of LZ 129 (the last airship, which saw passenger-service) leads to a Hybrid-Zeppelin with a wing of aspect-ratio 1.5 and 105 m span. The vehicle carries a payload of 40% of it's total take-off weight and consumes 0.8 t fuel per ton payload over a distance of 10000 km.

  17. A Comparison of Two Path Planners for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on fuzzy description of the terrain, and genetic algorithm to find a traversable path in a rugged terrain. The second planner uses a global optimization method with a cost function that is the path distance divided by the velocity limit obtained from the consideration of the rover static and dynamic stability. A description of both methods is provided, and the results of paths produced are given which show the effectiveness of the path planners in finding near optimal paths. The features of the methods and their suitability and application for rover path planning are compared

  18. Time-distance domain transformation for Acoustic Emission source localization in thin metallic plates.

    PubMed

    Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel

    2016-05-01

    Acoustic Emission used in Non-Destructive Testing is focused on analysis of elastic waves propagating in mechanical structures. Then any information carried by generated acoustic waves, further recorded by a set of transducers, allow to determine integrity of these structures. It is clear that material properties and geometry strongly impacts the result. In this paper a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, that is a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through a phase-shifting of investigated waveforms in order to acquire the most accurate output, allowing for source-sensor distance estimation using a single transducer. The accuracy and robustness of the above process are also investigated. This includes the study of Young's modulus value and numerical parameters influence on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally. The latter involves both laboratory and large scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Advanced two-layer level set with a soft distance constraint for dual surfaces segmentation in medical images

    NASA Astrophysics Data System (ADS)

    Ji, Yuanbo; van der Geest, Rob J.; Nazarian, Saman; Lelieveldt, Boudewijn P. F.; Tao, Qian

    2018-03-01

    Anatomical objects in medical images very often have dual contours or surfaces that are highly correlated. Manually segmenting both of them by following local image details is tedious and subjective. In this study, we proposed a two-layer region-based level set method with a soft distance constraint, which not only regularizes the level set evolution at two levels, but also imposes prior information on wall thickness in an effective manner. By updating the level set function and distance constraint functions alternatingly, the method simultaneously optimizes both contours while regularizing their distance. The method was applied to segment the inner and outer wall of both left atrium (LA) and left ventricle (LV) from MR images, using a rough initialization from inside the blood pool. Compared to manual annotation from experience observers, the proposed method achieved an average perpendicular distance (APD) of less than 1mm for the LA segmentation, and less than 1.5mm for the LV segmentation, at both inner and outer contours. The method can be used as a practical tool for fast and accurate dual wall annotations given proper initialization.

  20. More rapid climate change promotes evolutionary rescue through selection for increased dispersal distance

    PubMed Central

    Boeye, Jeroen; Travis, Justin M J; Stoks, Robby; Bonte, Dries

    2013-01-01

    Species can either adapt to new conditions induced by climate change or shift their range in an attempt to track optimal environmental conditions. During current range shifts, species are simultaneously confronted with a second major anthropogenic disturbance, landscape fragmentation. Using individual-based models with a shifting climate window, we examine the effect of different rates of climate change on the evolution of dispersal distances through changes in the genetically determined dispersal kernel. Our results demonstrate that the rate of climate change is positively correlated to the evolved dispersal distances although too fast climate change causes the population to crash. When faced with realistic rates of climate change, greater dispersal distances evolve than those required for the population to keep track of the climate, thereby maximizing population size. Importantly, the greater dispersal distances that evolve when climate change is more rapid, induce evolutionary rescue by facilitating the population in crossing large gaps in the landscape. This could ensure population persistence in case of range shifting in fragmented landscapes. Furthermore, we highlight problems in using invasion speed as a proxy for potential range shifting abilities under climate change. PMID:23467649

  1. Human-tracking strategies for a six-legged rescue robot based on distance and view

    NASA Astrophysics Data System (ADS)

    Pan, Yang; Gao, Feng; Qi, Chenkun; Chai, Xun

    2016-03-01

    Human tracking is an important issue for intelligent robotic control and can be used in many scenarios, such as robotic services and human-robot cooperation. Most of current human-tracking methods are targeted for mobile/tracked robots, but few of them can be used for legged robots. Two novel human-tracking strategies, view priority strategy and distance priority strategy, are proposed specially for legged robots, which enable them to track humans in various complex terrains. View priority strategy focuses on keeping humans in its view angle arrange with priority, while its counterpart, distance priority strategy, focuses on keeping human at a reasonable distance with priority. To evaluate these strategies, two indexes(average and minimum tracking capability) are defined. With the help of these indexes, the view priority strategy shows advantages compared with distance priority strategy. The optimization is done in terms of these indexes, which let the robot has maximum tracking capability. The simulation results show that the robot can track humans with different curves like square, circular, sine and screw paths. Two novel control strategies are proposed which specially concerning legged robot characteristics to solve human tracking problems more efficiently in rescue circumstances.

  2. Synthesis and Process Optimization of Electrospun PEEK-Sulfonated Nanofibers by Response Surface Methodology

    PubMed Central

    Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele

    2015-01-01

    In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers. PMID:28793427

  3. Synthesis and Process Optimization of Electrospun PEEK-Sulfonated Nanofibers by Response Surface Methodology.

    PubMed

    Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele

    2015-07-07

    In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers.

  4. Generating subtour elimination constraints for the TSP from pure integer solutions.

    PubMed

    Pferschy, Ulrich; Staněk, Rostislav

    2017-01-01

    The traveling salesman problem ( TSP ) is one of the most prominent combinatorial optimization problems. Given a complete graph [Formula: see text] and non-negative distances d for every edge, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is a branch and cut approach . Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are done on fractional solutions . In our approach we try to exploit the impressive performance of current ILP-solvers and work only with integer solutions without ever interfering with fractional solutions. We stick to a very simple ILP-model and relax the subtour elimination constraints only. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several attempts to find as many relevant subtours as possible. These attempts are based on the clustering of vertices with additional insights gained from empirical observations and random graph theory. Computational results are performed on test instances taken from the TSPLIB95 and on random Euclidean graphs .

  5. 3D prostate MR-TRUS non-rigid registration using dual optimization with volume-preserving constraint

    NASA Astrophysics Data System (ADS)

    Qiu, Wu; Yuan, Jing; Fenster, Aaron

    2016-03-01

    We introduce an efficient and novel convex optimization-based approach to the challenging non-rigid registration of 3D prostate magnetic resonance (MR) and transrectal ultrasound (TRUS) images, which incorporates a new volume preserving constraint to essentially improve the accuracy of targeting suspicious regions during the 3D TRUS guided prostate biopsy. Especially, we propose a fast sequential convex optimization scheme to efficiently minimize the employed highly nonlinear image fidelity function using the robust multi-channel modality independent neighborhood descriptor (MIND) across the two modalities of MR and TRUS. The registration accuracy was evaluated using 10 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials in the whole prostate gland. We also compared the MR and TRUS manually segmented prostate surfaces in the registered images in terms of the Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experimental results showed that the proposed method with the introduced volume-preserving prior significantly improves the registration accuracy comparing to the method without the volume-preserving constraint, by yielding an overall mean TRE of 2:0+/-0:7 mm, and an average DSC of 86:5+/-3:5%, MAD of 1:4+/-0:6 mm and MAXD of 6:5+/-3:5 mm.

  6. Discrepancies between conformational distributions of a polyalanine peptide in solution obtained from molecular dynamics force fields and amide I' band profiles.

    PubMed

    Verbaro, Daniel; Ghosh, Indrajit; Nau, Werner M; Schweitzer-Stenner, Reinhard

    2010-12-30

    Structural preferences in the unfolded state of peptides determined by molecular dynamics still contradict experimental data. A remedy in this regard has been suggested by MD simulations with an optimized Amber force field ff03* ( Best, R. Hummer, G. J. Phys. Chem. B 2009 , 113 , 9004 - 9015 ). The simulations yielded a statistical coil distribution for alanine which is at variance with recent experimental results. To check the validity of this distribution, we investigated the peptide H-A(5)W-OH, which with the exception of the additional terminal tryptophan is analogous to the peptide used to optimize the force fields ff03*. Electronic circular dichroism, vibrational circular dichroism, and infrared spectroscopy as well as J-coupling constants obtained from NMR experiments were used to derive the peptide's conformational ensemble. Additionally, Förster resonance energy transfer between the terminal chromophores of the fluorescently labeled peptide analogue H-Dbo-A(5)W-OH was used to determine its average length, from which the end-to-end distance of the unlabeled peptide was estimated. Qualitatively, the experimental (3)J(H(N),C(α)), VCD, and ECD indicated a preference of alanine for polyproline II-like conformations. The experimental (3)J(H(N),C(α)) for A(5)W closely resembles the constants obtained for A(5). In order to quantitatively relate the conformational distribution of A(5) obtained with the optimized AMBER ff03* force field to experimental data, the former was used to derive a distribution function which expressed the conformational ensemble as a mixture of polyproline II, β-strand, helical, and turn conformations. This model was found to satisfactorily reproduce all experimental J-coupling constants. We employed the model to calculate the amide I' profiles of the IR and vibrational circular dichroism spectrum of A(5)W, as well as the distance between the two terminal peptide carbonyls. This led to an underestimated negative VCD couplet and an overestimated distance between terminal carbonyl groups. In order to more accurately account for the experimental data, we changed the distribution parameters based on results recently obtained for the alanine-based tripeptides. The final model, which satisfactorily reproduced amide I' profiles, J-coupling constant, and the end-to-end distance of A(5)W, reinforces alanine's high structural preference for polyproline II. Our results suggest that distributions obtained from MD simulations suggesting a statistical coil-like distribution for alanine are still based on insufficiently accurate force fields.

  7. An Investigation of Generalized Differential Evolution Metaheuristic for Multiobjective Optimal Crop-Mix Planning Decision

    PubMed Central

    Olugbara, Oludayo

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369

  8. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms-being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.

  9. A general theory of interference fringes in x-ray phase grating imaging.

    PubMed

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2015-06-01

    The authors note that the concept of the Talbot self-image distance in x-ray phase grating interferometry is indeed not well defined for polychromatic x-rays, because both the grating phase shift and the fractional Talbot distances are all x-ray wavelength-dependent. For x-ray interferometry optimization, there is a need for a quantitative theory that is able to predict if a good intensity modulation is attainable at a given grating-to-detector distance. In this work, the authors set out to meet this need. In order to apply Fourier analysis directly to the intensity fringe patterns of two-dimensional and one-dimensional phase grating interferometers, the authors start their derivation from a general phase space theory of x-ray phase-contrast imaging. Unlike previous Fourier analyses, the authors evolved the Wigner distribution to obtain closed-form expressions of the Fourier coefficients of the intensity fringes for any grating-to-detector distance, even if it is not a fractional Talbot distance. The developed theory determines the visibility of any diffraction order as a function of the grating-to-detector distance, the phase shift of the grating, and the x-ray spectrum. The authors demonstrate that the visibilities of diffraction orders can serve as the indicators of the underlying interference intensity modulation. Applying the theory to the conventional and inverse geometry configurations of single-grating interferometers, the authors demonstrated that the proposed theory provides a quantitative tool for the grating interferometer optimization with or without the Talbot-distance constraints. In this work, the authors developed a novel theory of the interference intensity fringes in phase grating x-ray interferometry. This theory provides a quantitative tool in design optimization of phase grating x-ray interferometers.

  10. Double quantum coherence ESR spectroscopy and quantum chemical calculations on a BDPA biradical.

    PubMed

    Haeri, Haleh Hashemi; Spindler, Philipp; Plackmeyer, Jörn; Prisner, Thomas

    2016-10-26

    Carbon-centered radicals are interesting alternatives to otherwise commonly used nitroxide spin labels for dipolar spectroscopy techniques because of their narrow ESR linewidth. Herein, we present a novel BDPA biradical, where two BDPA (α,α,γ,γ-bisdiphenylene-β-phenylallyl) radicals are covalently tethered by a saturated biphenyl acetylene linker. The inter-spin distance between the two spin carrier fragments was measured using double quantum coherence (DQC) ESR methodology. The DQC experiment revealed a mean distance of only 1.8 nm between the two unpaired electron spins. This distance is shorter than the predictions based on a simple modelling of the biradical geometry with the electron spins located at the central carbon atoms. Therefore, DFT (density functional theory) calculations were performed to obtain a picture of the spin delocalization, which may give rise to a modified dipolar interaction tensor, and to find those conformations that correspond best to the experimentally observed inter-spin distance. Quantum chemical calculations showed that the attachment of the biphenyl acetylene linker at the second position of the fluorenyl ring of BDPA did not affect the spin population or geometry of the BDPA radical. Therefore, spin delocalization and geometry optimization of each BDPA moiety could be performed on the monomeric unit alone. The allylic dihedral angle θ 1 between the fluorenyl rings in the monomer subunit was determined to be 30° or 150° using quantum chemical calculations. The proton hyperfine coupling constant calculated from both energy minima was in very good agreement with literature values. Based on the optimal monomer geometries and spin density distributions, the dipolar coupling interaction between both BDPA units could be calculated for several dimer geometries. It was shown that the rotation of the BDPA units around the linker axis (θ 2 ) does not significantly influence the dipolar coupling strength when compared to the allylic dihedral angle θ 1 . A good agreement between the experimental and calculated dipolar coupling was found for θ 1 = 30°.

  11. TU-D-201-07: Severity Indication in High Dose Rate Brachytherapy Emergency Response Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, K; Rustad, F

    Purpose: Understanding the corresponding dose to different staff during the High Dose Rate (HDR) Brachytherapy emergency response procedure could help to develop a strategy in efficiency and effective action. In this study, the variation and risk analysis methodology was developed to simulation the HDR emergency response procedure based on severity indicator. Methods: A GammaMedplus iX HDR unit from Varian Medical System was used for this simulation. The emergency response procedure was decomposed based on risk management methods. Severity indexes were used to identify the impact of a risk occurrence on the step including dose to patient and dose to operationmore » staff by varying the time, HDR source activity, distance from the source to patient and staff and the actions. These actions in 7 steps were to press the interrupt button, press emergency shutoff switch, press emergency button on the afterloader keypad, turn emergency hand-crank, remove applicator from the patient, disconnect transfer tube and move afterloader from the patient, and execute emergency surgical recovery. Results: Given the accumulated time in second at the assumed 7 steps were 15, 5, 30, 15, 180, 120, 1800, and the dose rate of HDR source is 10 Ci, the accumulated dose in cGy to patient at 1cm distance were 188, 250, 625, 813, 3063, 4563 and 27063, and the accumulated exposure in rem to operator at outside the vault, 1m and 10cm distance were 0.0, 0.0, 0.1, 0.1, 22.6, 37.6 and 262.6. The variation was determined by the operators in action at different time and distance from the HDR source. Conclusion: The time and dose were estimated for a HDR unit emergency response procedure. It provided information in making optimal decision during the emergency procedure. Further investigation would be to optimize and standardize the responses for other emergency procedure by time-spatial-dose severity function.« less

  12. Left ventricle segmentation via graph cut distribution matching.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron

    2009-01-01

    We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.

  13. Optimisation of phase ratio in the triple jump using computer simulation.

    PubMed

    Allen, Sam J; King, Mark A; Yeadon, M R Fred

    2016-04-01

    The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Optimal graph based segmentation using flow lines with application to airway wall segmentation.

    PubMed

    Petersen, Jens; Nielsen, Mads; Lo, Pechin; Saghir, Zaigham; Dirksen, Asger; de Bruijne, Marleen

    2011-01-01

    This paper introduces a novel optimal graph construction method that is applicable to multi-dimensional, multi-surface segmentation problems. Such problems are often solved by refining an initial coarse surface within the space given by graph columns. Conventional columns are not well suited for surfaces with high curvature or complex shapes but the proposed columns, based on properly generated flow lines, which are non-intersecting, guarantee solutions that do not self-intersect and are better able to handle such surfaces. The method is applied to segment human airway walls in computed tomography images. Comparison with manual annotations on 649 cross-sectional images from 15 different subjects shows significantly smaller contour distances and larger area of overlap than are obtained with recently published graph based methods. Airway abnormality measurements obtained with the method on 480 scan pairs from a lung cancer screening trial are reproducible and correlate significantly with lung function.

  15. Probabilistic Swarm Guidance using Optimal Transport

    DTIC Science & Technology

    2014-10-10

    controlled to collectively exhibit useful emergent behavior [2]–[5]. Similarly, swarms of hundreds to thousands of femtosatellites (100-gram-class...algorithm using inhomo- geneous Markov chains (PSG– IMC ), each agent chooses the tuning parameter (ξjk) based on the Hellinger distance (HD) between the...PGA and PSG– IMC in the next section. B. Simulation Results We now present the setup of this simulation example. The swarm containing m = 5000 agents is

  16. Be-safe travel, a web-based geographic application to explore safe-route in an area

    NASA Astrophysics Data System (ADS)

    Utamima, Amalia; Djunaidy, Arif

    2017-08-01

    In large cities in developing countries, the various forms of criminality are often found. For instance, the most prominent crimes in Surabaya, Indonesia is 3C, that is theft with violence (curas), theft by weighting (curat), and motor vehicle theft (curanmor). 3C case most often occurs on the highway and residential areas. Therefore, new entrants in an area should be aware of these kind of crimes. Route Planners System or route planning system such as Google Maps only consider the shortest distance in the calculation of the optimal route. The selection of the optimal path in this study not only consider the shortest distance, but also involves other factors, namely the security level. This research considers at the need for an application to recommend the safest road to be passed by the vehicle passengers while drive an area. This research propose Be-Safe Travel, a web-based application using Google API that can be accessed by people who like to drive in an area, but still lack of knowledge of the pathways which are safe from crime. Be-Safe Travel is not only useful for the new entrants, but also useful for delivery courier of valuables goods to go through the safest streets.

  17. Optimization model of conventional missile maneuvering route based on improved Floyd algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Runping; Liu, Weidong

    2018-04-01

    Missile combat plays a crucial role in the victory of war under high-tech conditions. According to the characteristics of maneuver tasks of conventional missile units in combat operations, the factors influencing road maneuvering are analyzed. Based on road distance, road conflicts, launching device speed, position requirements, launch device deployment, Concealment and so on. The shortest time optimization model was built to discuss the situation of road conflict and the strategy of conflict resolution. The results suggest that in the process of solving road conflict, the effect of node waiting is better than detour to another way. In this study, we analyzed the deficiency of the traditional Floyd algorithm which may limit the optimal way of solving road conflict, and put forward the improved Floyd algorithm, meanwhile, we designed the algorithm flow which would be better than traditional Floyd algorithm. Finally, throgh a numerical example, the model and the algorithm were proved to be reliable and effective.

  18. Innovative design of closing loops producing an optimal force system applicable in the 0.022-in bracket slot system.

    PubMed

    Sumi, Mayumi; Koga, Yoshiyuki; Tominaga, Jun-Ya; Hamanaka, Ryo; Ozaki, Hiroya; Chiang, Pao-Chang; Yoshida, Noriaki

    2016-12-01

    Most closing loops designed for producing higher moment-to-force (M/F) ratios require complex wire bending and are likely to cause hygiene problems and discomfort because of their complicated configurations. We aimed to develop a simple loop design that can produce optimal force and M/F ratio. A loop design that can generate a high M/F ratio and the ideal force level was investigated by varying the portion and length of the cross-sectional reduction of a teardrop loop and the loop position. The forces and moments acting on closing loops were calculated using structural analysis based on the tangent stiffness method. An M/F ratio of 9.3 (high enough to achieve controlled movement of the anterior teeth) and an optimal force level of approximately 250 g of force can be generated by activation of a 10-mm-high teardrop loop whose cross-section of 0.019 × 0.025 or 0.021 × 0.025 in was reduced in thickness by 50% for a distance of 3 mm from the apex, located between a quarter and a third of the interbracket distance from the canine bracket. The simple loop design that we developed delivers an optimal force and an M/F ratio for the retraction of anterior teeth, and is applicable in a 0.022-in slot system. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  19. Optimal partial mass transportation and obstacle Monge-Kantorovich equation

    NASA Astrophysics Data System (ADS)

    Igbida, Noureddine; Nguyen, Van Thanh

    2018-05-01

    Optimal partial mass transport, which is a variant of the optimal transport problem, consists in transporting effectively a prescribed amount of mass from a source to a target. The problem was first studied by Caffarelli and McCann (2010) [6] and Figalli (2010) [12] with a particular attention to the quadratic cost. Our aim here is to study the optimal partial mass transport problem with Finsler distance costs including the Monge cost given by the Euclidian distance. Our approach is different and our results do not follow from previous works. Among our results, we introduce a PDE of Monge-Kantorovich type with a double obstacle to characterize active submeasures, Kantorovich potential and optimal flow for the optimal partial transport problem. This new PDE enables us to study the uniqueness and monotonicity results for the active submeasures. Another interesting issue of our approach is its convenience for numerical analysis and computations that we develop in a separate paper [14] (Igbida and Nguyen, 2018).

  20. Numerical simulation of the optimal two-mode attacks for two-way continuous-variable quantum cryptography in reverse reconciliation

    NASA Astrophysics Data System (ADS)

    Zhang, Yichen; Li, Zhengyu; Zhao, Yijia; Yu, Song; Guo, Hong

    2017-02-01

    We analyze the security of the two-way continuous-variable quantum key distribution protocol in reverse reconciliation against general two-mode attacks, which represent all accessible attacks at fixed channel parameters. Rather than against one specific attack model, the expression of secret key rates of the two-way protocol are derived against all accessible attack models. It is found that there is an optimal two-mode attack to minimize the performance of the protocol in terms of both secret key rates and maximal transmission distances. We identify the optimal two-mode attack, give the specific attack model of the optimal two-mode attack and show the performance of the two-way protocol against the optimal two-mode attack. Even under the optimal two-mode attack, the performances of two-way protocol are still better than the corresponding one-way protocol, which shows the advantage of making double use of the quantum channel and the potential of long-distance secure communication using a two-way protocol.

  1. Phylogenetics beyond biology.

    PubMed

    Retzlaff, Nancy; Stadler, Peter F

    2018-06-21

    Evolutionary processes have been described not only in biology but also for a wide range of human cultural activities including languages and law. In contrast to the evolution of DNA or protein sequences, the detailed mechanisms giving rise to the observed evolution-like processes are not or only partially known. The absence of a mechanistic model of evolution implies that it remains unknown how the distances between different taxa have to be quantified. Considering distortions of metric distances, we first show that poor choices of the distance measure can lead to incorrect phylogenetic trees. Based on the well-known fact that phylogenetic inference requires additive metrics, we then show that the correct phylogeny can be computed from a distance matrix [Formula: see text] if there is a monotonic, subadditive function [Formula: see text] such that [Formula: see text] is additive. The required metric-preserving transformation [Formula: see text] can be computed as the solution of an optimization problem. This result shows that the problem of phylogeny reconstruction is well defined even if a detailed mechanistic model of the evolutionary process remains elusive.

  2. Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces

    NASA Astrophysics Data System (ADS)

    Aichinger, Julia; Schwieger, Volker

    2018-04-01

    This contribution deals with the influence of scanning parameters like scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the position of control points from B-spline surfaces which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by the Monte Carlo based variance analysis. The samples were generated for non-correlated and correlated data, leading to the samples generated by Latin hypercube and replicated Latin hypercube sampling algorithms. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters can be found in the smallest possible object distance at an angle of incidence close to 0° in the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.

  3. Optimization and Analysis of Laser Beam Machining Parameters for Al7075-TiB2 In-situ Composite

    NASA Astrophysics Data System (ADS)

    Manjoth, S.; Keshavamurthy, R.; Pradeep Kumar, G. S.

    2016-09-01

    The paper focuses on laser beam machining (LBM) of In-situ synthesized Al7075-TiB2 metal matrix composite. Optimization and influence of laser machining process parameters on surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy of composites were studied. Al7075-TiB2 metal matrix composite was synthesized by in-situ reaction technique using stir casting process. Taguchi's L9 orthogonal array was used to design experimental trials. Standoff distance (SOD) (0.3 - 0.5mm), Cutting Speed (1000 - 1200 m/hr) and Gas pressure (0.5 - 0.7 bar) were considered as variable input parameters at three different levels, while power and nozzle diameter were maintained constant with air as assisting gas. Optimized process parameters for surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy were calculated by generating the main effects plot for signal noise ratio (S/N ratio) for surface roughness, VMRR and dimensional error using Minitab software (version 16). The Significant of standoff distance (SOD), cutting speed and gas pressure on surface roughness, volumetric material removal rate (VMRR) and dimensional error were calculated using analysis of variance (ANOVA) method. Results indicate that, for surface roughness, cutting speed (56.38%) is most significant parameter followed by standoff distance (41.03%) and gas pressure (2.6%). For volumetric material removal (VMRR), gas pressure (42.32%) is most significant parameter followed by cutting speed (33.60%) and standoff distance (24.06%). For dimensional error, Standoff distance (53.34%) is most significant parameter followed by cutting speed (34.12%) and gas pressure (12.53%). Further, verification experiments were carried out to confirm performance of optimized process parameters.

  4. Comparison of global positioning and computer-based tracking systems for measuring player movement distance during Australian football.

    PubMed

    Edgecomb, S J; Norton, K I

    2006-05-01

    Sports scientists require a thorough understanding of the energy demands of sports and physical activities so that optimal training strategies and game simulations can be constructed. A range of techniques has been used to both directly assess and estimate the physiological and biochemical changes during competition. A fundamental approach to understanding the contribution of the energy systems in physical activity has involved the use of time-motion studies. A number of tools have been used from simple pen and paper methods, the use of video recordings, to sophisticated electronic tracking devices. Depending on the sport, there may be difficulties in using electronic tracking devices because of concerns of player safety. This paper assesses two methods currently used to measure player movement patterns during competition: (1) global positioning technology (GPS) and (2) a computer-based tracking (CBT) system that relies on a calibrated miniaturised playing field and mechanical movements of the tracker. A range of ways was used to determine the validity and reliability of these methods for tracking Australian footballers for distance covered during games. Comparisons were also made between these methods. The results indicate distances measured using CBT overestimated the actual values (measured with a calibrated trundle wheel) by an average of about 5.8%. The GPS system overestimated the actual values by about 4.8%. Distances measured using CBT in experienced hands were as accurate as the GPS technology. Both systems showed relatively small errors in true distances.

  5. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  6. Role of Distance-Based Routing in Traffic Dynamics on Mobile Networks

    NASA Astrophysics Data System (ADS)

    Yang, Han-Xin; Wang, Wen-Xu

    2013-06-01

    Despite of intensive investigations on transportation dynamics taking place on complex networks with fixed structures, a deep understanding of networks consisting of mobile nodes is challenging yet, especially the lacking of insight into the effects of routing strategies on transmission efficiency. We introduce a distance-based routing strategy for networks of mobile agents toward enhancing the network throughput and the transmission efficiency. We study the transportation capacity and delivering time of data packets associated with mobility and communication ability. Interestingly, we find that the transportation capacity is optimized at moderate moving speed, which is quite different from random routing strategy. In addition, both continuous and discontinuous transitions from free flow to congestions are observed. Degree distributions are explored in order to explain the enhancement of network throughput and other observations. Our work is valuable toward understanding complex transportation dynamics and designing effective routing protocols.

  7. Maximal coherence and the resource theory of purity

    NASA Astrophysics Data System (ADS)

    Streltsov, Alexander; Kampermann, Hermann; Wölk, Sabine; Gessner, Manuel; Bruß, Dagmar

    2018-05-01

    The resource theory of quantum coherence studies the off-diagonal elements of a density matrix in a distinguished basis, whereas the resource theory of purity studies all deviations from the maximally mixed state. We establish a direct connection between the two resource theories, by identifying purity as the maximal coherence which is achievable by unitary operations. The states that saturate this maximum identify a universal family of maximally coherent mixed states. These states are optimal resources under maximally incoherent operations, and thus independent of the way coherence is quantified. For all distance-based coherence quantifiers the maximal coherence can be evaluated exactly, and is shown to coincide with the corresponding distance-based purity quantifier. We further show that purity bounds the maximal amount of entanglement and discord that can be generated by unitary operations, thus demonstrating that purity is the most elementary resource for quantum information processing.

  8. Biologically tunable reactivity of energetic nanomaterials using protein cages.

    PubMed

    Slocik, Joseph M; Crouse, Christopher A; Spowart, Jonathan E; Naik, Rajesh R

    2013-06-12

    The performance of aluminum nanomaterial based energetic formulations is dependent on the mass transport, diffusion distance, and stability of reactive components. Here we use a biologically inspired approach to direct the assembly of oxidizer loaded protein cages onto the surface of aluminum nanoparticles to improve reaction kinetics by reducing the diffusion distance between the reactants. Ferritin protein cages were loaded with ammonium perchlorate (AP) or iron oxide and assembled with nAl to create an oxidation-reduction based energetic reaction and the first demonstration of a nanoscale biobased thermite material. Both materials showed enhanced exothermic behavior in comparison to nanothermite mixtures of bulk free AP or synthesized iron oxide nanopowders prepared without the use of ferritin. In addition, by utilizing a layer-by-layer (LbL) process to build multiple layers of protein cages containing iron oxide and iron oxide/AP on nAl, stoichiometric conditions and energetic performance can be optimized.

  9. Validity of the Catapult ClearSky T6 Local Positioning System for Team Sports Specific Drills, in Indoor Conditions

    PubMed Central

    Luteberget, Live S.; Spencer, Matt; Gilgien, Matthias

    2018-01-01

    Aim: The aim of the present study was to determine the validity of position, distance traveled and instantaneous speed of team sport players as measured by a commercially available local positioning system (LPS) during indoor use. In addition, the study investigated how the placement of the field of play relative to the anchor nodes and walls of the building affected the validity of the system. Method: The LPS (Catapult ClearSky T6, Catapult Sports, Australia) and the reference system [Qualisys Oqus, Qualisys AB, Sweden, (infra-red camera system)] were installed around the field of play to capture the athletes' motion. Athletes completed five tasks, all designed to imitate team-sports movements. The same protocol was completed in two sessions, one with an assumed optimal geometrical setup of the LPS (optimal condition), and once with a sub-optimal geometrical setup of the LPS (sub-optimal condition). Raw two-dimensional position data were extracted from both the LPS and the reference system for accuracy assessment. Position, distance and speed were compared. Results: The mean difference between the LPS and reference system for all position estimations was 0.21 ± 0.13 m (n = 30,166) in the optimal setup, and 1.79 ± 7.61 m (n = 22,799) in the sub-optimal setup. The average difference in distance was below 2% for all tasks in the optimal condition, while it was below 30% in the sub-optimal condition. Instantaneous speed showed the largest differences between the LPS and reference system of all variables, both in the optimal (≥35%) and sub-optimal condition (≥74%). The differences between the LPS and reference system in instantaneous speed were speed dependent, showing increased differences with increasing speed. Discussion: Measures of position, distance, and average speed from the LPS show low errors, and can be used confidently in time-motion analyses for indoor team sports. The calculation of instantaneous speed from LPS raw data is not valid. To enhance instantaneous speed calculation the application of appropriate filtering techniques to enhance the validity of such data should be investigated. For all measures, the placement of anchor nodes and the field of play relative to the walls of the building influence LPS output to a large degree. PMID:29670530

  10. Beam steering performance of compressed Luneburg lens based on transformation optics

    NASA Astrophysics Data System (ADS)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  11. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like support vector machine (supervised) and transductive support vector machine (semi-supervised), are invaluable tools for the applications of content-based image retrieval, pose estimation, and optical character recognition. However, these methods only can handle the images represented by single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. It is inappropriate for the traditionally concatenating schema to link features of different views into a long vector. The reason is each view has its specific statistical property and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with the existing strategies, our approach adopts the high-order distance obtained from the hypergraph to replace pairwise distance in estimating the probability matrix of data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternative optimization is designed to solve the objective functions of HD-MSL and obtain different views on coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification.

  12. Prediction of acoustic feature parameters using myoelectric signals.

    PubMed

    Lee, Ki-Seung

    2010-07-01

    It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.

  13. Optimizing the way kinematical feed chains with great distance between slides are chosen for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Lucian, P.; Gheorghe, S.

    2017-08-01

    This paper presents a new method, based on FRISCO formula, for optimizing the choice of the best control system for kinematical feed chains with great distance between slides used in computer numerical controlled machine tools. Such machines are usually, but not limited to, used for machining large and complex parts (mostly in the aviation industry) or complex casting molds. For such machine tools the kinematic feed chains are arranged in a dual-parallel drive structure that allows the mobile element to be moved by the two kinematical branches and their related control systems. Such an arrangement allows for high speed and high rigidity (a critical requirement for precision machining) during the machining process. A significant issue for such an arrangement it’s the ability of the two parallel control systems to follow the same trajectory accurately in order to address this issue it is necessary to achieve synchronous motion control for the two kinematical branches ensuring that the correct perpendicular position it’s kept by the mobile element during its motion on the two slides.

  14. Electrocoagulation treatment of raw landfill leachate using iron-based electrodes: Effects of process parameters and optimization.

    PubMed

    Huda, N; Raman, A A A; Bello, M M; Ramesh, S

    2017-12-15

    The main problem of landfill leachate is its diverse composition comprising many persistent organic pollutants which must be removed before being discharge into the environment. This study investigated the treatment of raw landfill leachate using electrocoagulation process. An electrocoagulation system was designed with iron as both the anode and cathode. The effects of inter-electrode distance, initial pH and electrolyte concentration on colour and COD removals were investigated. All these factors were found to have significant effects on the colour removal. On the other hand, electrolyte concentration was the most significant parameter affecting the COD removal. Numerical optimization was also conducted to obtain the optimum process performance. Under optimum conditions (initial pH: 7.73, inter-electrode distance: 1.16 cm, and electrolyte concentration (NaCl): 2.00 g/L), the process could remove up to 82.7% colour and 45.1% COD. The process can be applied as a pre-treatment for raw leachates before applying other appropriate treatment technologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.

  16. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    PubMed

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Breeding success of a marine central place forager in the context of climate change: A modeling approach.

    PubMed

    Massardier-Galatà, Lauriane; Morinay, Jennifer; Bailleul, Frédéric; Wajnberg, Eric; Guinet, Christophe; Coquillard, Patrick

    2017-01-01

    In response to climate warming, a southward shift in productive frontal systems serving as the main foraging sites for many top predator species is likely to occur in Subantarctic areas. Central place foragers, such as seabirds and pinnipeds, are thus likely to cope with an increase in the distance between foraging locations and their land-based breeding colonies. Understanding how central place foragers should modify their foraging behavior in response to changes in prey accessibility appears crucial. A spatially explicit individual-based simulation model (Marine Central Place Forager Simulator (MarCPFS)), including bio-energetic components, was built to evaluate effects of possible changes in prey resources accessibility on individual performances and breeding success. The study was calibrated on a particular example: the Antarctic fur seal (Arctocephalus gazella), which alternates between oceanic areas in which females feed and the land-based colony in which they suckle their young over a 120 days rearing period. Our model shows the importance of the distance covered to feed and prey aggregation which appeared to be key factors to which animals are highly sensitive. Memorization and learning abilities also appear to be essential breeding success traits. Females were found to be most successful for intermediate levels of prey aggregation and short distance to the resource, resulting in optimal female body length. Increased distance to resources due to climate warming should hinder pups' growth and survival while female body length should increase.

  18. Breeding success of a marine central place forager in the context of climate change: A modeling approach

    PubMed Central

    Massardier-Galatà, Lauriane; Morinay, Jennifer; Bailleul, Frédéric; Wajnberg, Eric; Guinet, Christophe; Coquillard, Patrick

    2017-01-01

    In response to climate warming, a southward shift in productive frontal systems serving as the main foraging sites for many top predator species is likely to occur in Subantarctic areas. Central place foragers, such as seabirds and pinnipeds, are thus likely to cope with an increase in the distance between foraging locations and their land-based breeding colonies. Understanding how central place foragers should modify their foraging behavior in response to changes in prey accessibility appears crucial. A spatially explicit individual-based simulation model (Marine Central Place Forager Simulator (MarCPFS)), including bio-energetic components, was built to evaluate effects of possible changes in prey resources accessibility on individual performances and breeding success. The study was calibrated on a particular example: the Antarctic fur seal (Arctocephalus gazella), which alternates between oceanic areas in which females feed and the land-based colony in which they suckle their young over a 120 days rearing period. Our model shows the importance of the distance covered to feed and prey aggregation which appeared to be key factors to which animals are highly sensitive. Memorization and learning abilities also appear to be essential breeding success traits. Females were found to be most successful for intermediate levels of prey aggregation and short distance to the resource, resulting in optimal female body length. Increased distance to resources due to climate warming should hinder pups’ growth and survival while female body length should increase. PMID:28355282

  19. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into, single objective, surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centersmore » from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce time required to find a good near optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.« less

  20. Monotonicity of fitness landscapes and mutation rate control.

    PubMed

    Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G

    2016-12-01

    A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on Ronald Fisher's work, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and find that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.

  1. Mapping the optimal forest road network based on the multicriteria evaluation technique: the case study of Mediterranean Island of Thassos in Greece.

    PubMed

    Tampekis, Stergios; Sakellariou, Stavros; Samara, Fani; Sfougaris, Athanassios; Jaeger, Dirk; Christopoulou, Olga

    2015-11-01

    The sustainable management of forest resources can only be achieved through a well-organized road network designed with the optimal spatial planning and the minimum environmental impacts. This paper describes the spatial layout mapping for the optimal forest road network and the environmental impacts evaluation that are caused to the natural environment based on the multicriteria evaluation (MCE) technique at the Mediterranean island of Thassos in Greece. Data analysis and its presentation are achieved through a spatial decision support system using the MCE method with the contribution of geographic information systems (GIS). With the use of the MCE technique, we evaluated the human impact intensity to the forest ecosystem as well as the ecosystem's absorption from the impacts that are caused from the forest roads' construction. For the human impact intensity evaluation, the criteria that were used are as follows: the forest's protection percentage, the forest road density, the applied skidding means (with either the use of tractors or the cable logging systems in timber skidding), the timber skidding direction, the visitors' number and truck load, the distance between forest roads and streams, the distance between forest roads and the forest boundaries, and the probability that the forest roads are located on sights with unstable soils. In addition, for the ecosystem's absorption evaluation, we used forestry, topographical, and social criteria. The recommended MCE technique which is described in this study provides a powerful, useful, and easy-to-use implement in order to combine the sustainable utilization of natural resources and the environmental protection in Mediterranean ecosystems.

  2. Target-triggered signal turn-on detection of prostate specific antigen based on metal-enhanced fluorescence of Ag@SiO2@SiO2-RuBpy composite nanoparticles

    NASA Astrophysics Data System (ADS)

    Deng, Yun-Liang; Xu, Dang-Dang; Pang, Dai-Wen; Tang, Hong-Wu

    2017-02-01

    A three-layer core-shell nanostructure consisting of a silver core, a silica spacer, and a fluorescent dye RuBpy-doped outer silica layer was fabricated, and the optimal metal-enhanced fluorescence (MEF) distance was explored through adjusting the thickness of the silica spacer. The results show that the optimal distance is ˜10.4 nm with the maximum fluorescence enhancement factor 2.12. Then a new target-triggered MEF ‘turn-on’ strategy based on the optimized composite nanoparticles was successfully constructed for quantitative detection of prostate specific antigen (PSA), by using RuBpy as the energy donor and BHQ-2 as the acceptor. The hybridization of the complementary DNA of PSA-aptamer immobilized on the surface of the MEF nanoparticles with PSA-aptamer modified with BHQ-2, brought BHQ-2 in close proximity to RuBpy-doped silica shell and resulted in the decrease of fluorescence. In the presence of target PSA molecules, the BHQ-PSA aptamer is dissociated from the surface of the nanoparticles with the fluorescence switched on. Therefore, the assay of PSA was achieved by measuring the varying fluorescence intensity. The results show that PSA can be detected in the range of 1-100 ng ml-1 with a detection limit of 0.20 ng ml-1 (6.1 pM), which is 6.7-fold increase of that using hollow RuBpy-doped silica nanoparticles. Moreover, satisfactory results were obtained when PSA was detected in 1% serum.

  3. Clinical Outcomes of an Optimized Prolate Ablation Procedure for Correcting Residual Refractive Errors Following Laser Surgery.

    PubMed

    Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im

    2017-02-01

    The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.

  4. CPV cells cooling system based on submerged jet impingement: CFD modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Montorfano, Davide; Gaetano, Antonio; Barbato, Maurizio C.; Ambrosetti, Gianluca; Pedretti, Andrea

    2014-09-01

    Concentrating photovoltaic (CPV) cells offer higher efficiencies with regard to the PV ones and allow to strongly reduce the overall solar cell area. However, to operate correctly and exploit their advantages, their temperature has to be kept low and as uniform as possible and the cooling circuit pressure drops need to be limited. In this work an impingement water jet cooling system specifically designed for an industrial HCPV receiver is studied. Through the literature and by means of accurate computational fluid dynamics (CFD) simulations, the nozzle to plate distance, the number of jets and the nozzle pitch, i.e. the distance between adjacent jets, were optimized. Afterwards, extensive experimental tests were performed to validate pressure drops and cooling power simulation results.

  5. An Ultrasonic Multiple-Access Ranging Core Based on Frequency Shift Keying Towards Indoor Localization

    PubMed Central

    Segers, Laurent; Van Bavegem, David; De Winne, Sam; Braeken, An; Touhafi, Abdellah; Steenhaut, Kris

    2015-01-01

    This paper describes a new approach and implementation methodology for indoor ranging based on the time difference of arrival using code division multiple access with ultrasound signals. A novel implementation based on a field programmable gate array using finite impulse response filters and an optimized correlation demodulator implementation for ultrasound orthogonal signals is developed. Orthogonal codes are modulated onto ultrasound signals using frequency shift keying with carrier frequencies of 24.5 kHz and 26 kHz. This implementation enhances the possibilities for real-time, embedded and low-power tracking of several simultaneous transmitters. Due to the high degree of parallelism offered by field programmable gate arrays, up to four transmitters can be tracked simultaneously. The implementation requires at most 30% of the available logic gates of a Spartan-6 XC6SLX45 device and is evaluated on accuracy and precision through several ranging topologies. In the first topology, the distance between one transmitter and one receiver is evaluated. Afterwards, ranging analyses are applied between two simultaneous transmitters and one receiver. Ultimately, the position of the receiver against four transmitters using trilateration is also demonstrated. Results show enhanced distance measurements with distances ranging from a few centimeters up to 17 m, while keeping a centimeter-level accuracy. PMID:26263986

  6. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data in two groups, then these set of components need not have the most discriminatory power. We measured the distance between two such populations using Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method, which we call the discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the one-leave-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA, than with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.

  7. On the security of compressed encryption with partial unitary sensing matrices embedding a secret keystream

    NASA Astrophysics Data System (ADS)

    Yu, Nam Yul

    2017-12-01

    The principle of compressed sensing (CS) can be applied in a cryptosystem by providing the notion of security. In this paper, we study the computational security of a CS-based cryptosystem that encrypts a plaintext with a partial unitary sensing matrix embedding a secret keystream. The keystream is obtained by a keystream generator of stream ciphers, where the initial seed becomes the secret key of the CS-based cryptosystem. For security analysis, the total variation distance, bounded by the relative entropy and the Hellinger distance, is examined as a security measure for the indistinguishability. By developing upper bounds on the distance measures, we show that the CS-based cryptosystem can be computationally secure in terms of the indistinguishability, as long as the keystream length for each encryption is sufficiently large with low compression and sparsity ratios. In addition, we consider a potential chosen plaintext attack (CPA) from an adversary, which attempts to recover the key of the CS-based cryptosystem. Associated with the key recovery attack, we show that the computational security of our CS-based cryptosystem is brought by the mathematical intractability of a constrained integer least-squares (ILS) problem. For a sub-optimal, but feasible key recovery attack, we consider a successive approximate maximum-likelihood detection (SAMD) and investigate the performance by developing an upper bound on the success probability. Through theoretical and numerical analyses, we demonstrate that our CS-based cryptosystem can be secure against the key recovery attack through the SAMD.

  8. Improved discrete swarm intelligence algorithms for endmember extraction from hyperspectral remote sensing images

    NASA Astrophysics Data System (ADS)

    Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing

    2016-10-01

    Endmember extraction is a key step in hyperspectral unmixing. A new endmember extraction framework is proposed for hyperspectral endmember extraction. The proposed approach is based on the swarm intelligence (SI) algorithm, where discretization is used to solve the SI algorithm because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the endmember numbers which is generally limited in real scenarios, while traditional SI algorithms likely produce superabundant spectral signatures, which generally belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.

  9. On the efficiency of treating singularities in triatomic variational vibrational computations. The vibrational states of H(+)3 up to dissociation.

    PubMed

    Szidarovszky, Tamás; Császár, Attila G; Czakó, Gábor

    2010-08-01

    Several techniques of varying efficiency are investigated, which treat all singularities present in the triatomic vibrational kinetic energy operator given in orthogonal internal coordinates of the two distances-one angle type. The strategies are based on the use of a direct-product basis built from one-dimensional discrete variable representation (DVR) bases corresponding to the two distances and orthogonal Legendre polynomials, or the corresponding Legendre-DVR basis, corresponding to the angle. The use of Legendre functions ensures the efficient treatment of the angular singularity. Matrix elements of the singular radial operators are calculated employing DVRs using the quadrature approximation as well as special DVRs satisfying the boundary conditions and thus allowing for the use of exact DVR expressions. Potential optimized (PO) radial DVRs, based on one-dimensional Hamiltonians with potentials obtained by fixing or relaxing the two non-active coordinates, are also studied. The numerical calculations employed Hermite-DVR, spherical-oscillator-DVR, and Bessel-DVR bases as the primitive radial functions. A new analytical formula is given for the determination of the matrix elements of the singular radial operator using the Bessel-DVR basis. The usually claimed failure of the quadrature approximation in certain singular integrals is revisited in one and three dimensions. It is shown that as long as no potential optimization is carried out the quadrature approximation works almost as well as the exact DVR expressions. If wave functions with finite amplitude at the boundary are to be computed, the basis sets need to meet the required boundary conditions. The present numerical results also confirm that PO-DVRs should be constructed employing relaxed potentials and PO-DVRs can be useful for optimizing quadrature points for calculations applying large coordinate intervals and describing large-amplitude motions. The utility and efficiency of the different algorithms is demonstrated by the computation of converged near-dissociation vibrational energy levels for the H molecular ion.

  10. Utilizing patient geographic information system data to plan telemedicine service locations.

    PubMed

    Soares, Neelkamal; Dewalle, Joseph; Marsh, Ben

    2017-09-01

    To understand potential utilization of clinical services at a rural integrated health care system by generating optimal groups of telemedicine locations from electronic health record (EHR) data using geographic information systems (GISs). This retrospective study extracted nonidentifiable grouped data of patients over a 2-year period from the EHR, including geomasked locations. Spatially optimal groupings were created using available telemedicine sites by calculating patients' average travel distance (ATD) to the closest clinic site. A total of 4027 visits by 2049 unique patients were analyzed. The best travel distances for site groupings of 3, 4, 5, or 6 site locations were ranked based on increasing ATD. Each one-site increase in the number of available telemedicine sites decreased minimum ATD by about 8%. For a given group size, the best groupings were very similar in minimum travel distance. There were significant differences in predicted patient load imbalance between otherwise similar groupings. A majority of the best site groupings used the same small number of sites, and urban sites were heavily used. With EHR geospatial data at an individual patient level, we can model potential telemedicine sites for specialty access in a rural geographic area. Relatively few sites could serve most of the population. Direct access to patient GIS data from an EHR provides direct knowledge of the client base compared to methods that allocate aggregated data. Geospatial data and methods can assist health care location planning, generating data about load, load balance, and spatial accessibility. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  11. Optimization Parameters of Air-conditioning and Heat Insulation Systems of a Pressurized Cabins of Long-distance Airplanes

    NASA Astrophysics Data System (ADS)

    Gusev, Sergey A.; Nikolaev, Vladimir N.

    2018-01-01

    The method for determination of an aircraft compartment thermal condition, based on a mathematical model of a compartment thermal condition was developed. Development of solution techniques for solving heat exchange direct and inverse problems and for determining confidence intervals of parametric identification estimations was carried out. The required performance of air-conditioning, ventilation systems and heat insulation depth of crew and passenger cabins were received.

  12. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    Abstract Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA only selects minority isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using the nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics including the coverage of two sets, the convergence metric, and the spacing, show that the unique selection method is effective, and NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study on NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well along the number of objectives.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into, single objective, surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centersmore » from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce time required to find a good near optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.« less

  14. Molecule signatures in photoluminescence spectra of transition metal dichalcogenides

    NASA Astrophysics Data System (ADS)

    Feierabend, Maja; Berghäuser, Gunnar; Selig, Malte; Brem, Samuel; Shegai, Timur; Eigler, Siegfried; Malic, Ermin

    2018-01-01

    Monolayer transition metal dichalcogenides (TMDs) show an optimal surface-to-volume ratio and are thus promising candidates for novel molecule sensor devices. It was recently predicted that a certain class of molecules exhibiting a large dipole moment can be detected through the activation of optically inaccessible (dark) excitonic states in absorption spectra of tungsten-based TMDs. In this paper, we investigate the molecule signatures in photoluminescence spectra in dependence of a number of different experimentally accessible quantities, such as excitation density, temperature, as well as molecular characteristics including the dipole moment and its orientation, molecule-TMD distance, molecular coverage, and distribution. We show that under certain optimal conditions even room-temperature detection of molecules can be achieved.

  15. An Extremely Low Mid-infrared Extinction Law toward the Galactic Center and 4% Distance Precision to 55 Classical Cepheids

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodian; Wang, Shu; Deng, Licai; de Grijs, Richard

    2018-06-01

    Distances and extinction values are usually degenerate. To refine the distance to the general Galactic Center region, a carefully determined extinction law (taking into account the prevailing systematic errors) is urgently needed. We collected data for 55 classical Cepheids projected toward the Galactic Center region to derive the near- to mid-infrared extinction law using three different approaches. The relative extinction values obtained are {A}J/{A}{K{{s}}}=3.005,{A}H/{A}{K{{s}}}=1.717, {A}[3.6]/{A}{K{{s}}}=0.478,{A}[4.5]/{A}{K{{s}}}=0.341, {A}[5.8]/{A}{K{{s}}}=0.234,{A}[8.0]/{A}{K{{s}}} =0.321,{A}W1/{A}{K{{s}}}=0.506, and {A}W2/{A}{K{{s}}}=0.340. We also calculated the corresponding systematic errors. Compared with previous work, we report an extremely low and steep mid-infrared extinction law. Using a seven-passband “optimal distance” method, we improve the mean distance precision to our sample of 55 Cepheids to 4%. Based on four confirmed Galactic Center Cepheids, a solar Galactocentric distance of R 0 = 8.10 ± 0.19 ± 0.22 kpc is determined, featuring an uncertainty that is close to the limiting distance accuracy (2.8%) for Galactic Center Cepheids.

  16. A novel growth mode of Physarum polycephalum during starvation

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Oettmeier, Christina; Döbereiner, Hans-Günther

    2018-06-01

    Organisms are constantly looking to forage and respond to various environmental queues to maximize their chance of survival. This is reflected in the unicellular organism Physarum polycephalum, which is known to grow as an optimized network. Here, we describe a new growth pattern of Physarum mesoplasmodium, where sheet-like motile bodies termed ‘satellites’ are formed. This non-network pattern formation is induced only when nutrients are scarce, suggesting that it is a type of emergency response. Our goal is to construct a model to describe the behaviour of satellites based on negative chemotaxis. We conjecture a diffusion-based model which implements detection of a signal molecule above a threshold concentration. Then we calculate how far the satellites must travel until the concentration signal falls below the threshold. These calculated distances are in good agreement with the distances where satellites stop. Based on the Akaike weight analysis, our threshold model is at least 2.3 times more likely to be the better model than the others we have considered. Based on the model, we estimate the diffusion coefficient of this molecule, which corresponds to typical signalling molecules.

  17. Optimal Path Planning Program for Autonomous Speed Sprayer in Orchard Using Order-Picking Algorithm

    NASA Astrophysics Data System (ADS)

    Park, T. S.; Park, S. J.; Hwang, K. Y.; Cho, S. I.

    This study was conducted to develop a software program which computes optimal path for autonomous navigation in orchard, especially for speed sprayer. Possibilities of autonomous navigation in orchard were shown by other researches which have minimized distance error between planned path and performed path. But, research of planning an optimal path for speed sprayer in orchard is hardly founded. In this study, a digital map and a database for orchard which contains GPS coordinate information (coordinates of trees and boundary of orchard) and entity information (heights and widths of trees, radius of main stem of trees, disease of trees) was designed. An orderpicking algorithm which has been used for management of warehouse was used to calculate optimum path based on the digital map. Database for digital map was created by using Microsoft Access and graphic interface for database was made by using Microsoft Visual C++ 6.0. It was possible to search and display information about boundary of an orchard, locations of trees, daily plan for scattering chemicals and plan optimal path on different orchard based on digital map, on each circumstance (starting speed sprayer in different location, scattering chemicals for only selected trees).

  18. On combining multi-normalization and ancillary measures for the optimal score level fusion of fingerprint and voice biometrics

    NASA Astrophysics Data System (ADS)

    Mohammed Anzar, Sharafudeen Thaha; Sathidevi, Puthumangalathu Savithri

    2014-12-01

    In this paper, we have considered the utility of multi-normalization and ancillary measures, for the optimal score level fusion of fingerprint and voice biometrics. An efficient matching score preprocessing technique based on multi-normalization is employed for improving the performance of the multimodal system, under various noise conditions. Ancillary measures derived from the feature space and the score space are used in addition to the matching score vectors, for weighing the modalities, based on their relative degradation. Reliability (dispersion) and the separability (inter-/intra-class distance and d-prime statistics) measures under various noise conditions are estimated from the individual modalities, during the training/validation stage. The `best integration weights' are then computed by algebraically combining these measures using the weighted sum rule. The computed integration weights are then optimized against the recognition accuracy using techniques such as grid search, genetic algorithm and particle swarm optimization. The experimental results show that, the proposed biometric solution leads to considerable improvement in the recognition performance even under low signal-to-noise ratio (SNR) conditions and reduces the false acceptance rate (FAR) and false rejection rate (FRR), making the system useful for security as well as forensic applications.

  19. An optimal beam alignment method for large-scale distributed space surveillance radar system

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is a very important ground-based equipment to maintain a complete catalogue for Low Earth Orbit (LEO) space debris. However, due to the thousands of kilometers distance between each sites of the distributed radar system, how to optimally implement the Transmitting/Receiving (T/R) beams alignment in a great space using the narrow beam, which proposed a special and considerable technical challenge in the space surveillance area. According to the common coordinate transformation model and the radar beam space model, we presented a two dimensional projection algorithm for T/R beam using the direction angles, which could visually describe and assess the beam alignment performance. Subsequently, the optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and also the beam alignment parameters are precisely solved. At last, we conducted the optimal beam alignment experiments base on the site parameters of Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly stimulate the construction for the LEO space debris surveillance equipment.

  20. GeneOnEarth: fitting genetic PC plots on the globe.

    PubMed

    Torres-Sánchez, Sergio; Medina-Medina, Nuria; Gignoux, Chris; Abad-Grau, María M; González-Burchard, Esteban

    2013-01-01

    Principal component (PC) plots have become widely used to summarize genetic variation of individuals in a sample. The similarity between genetic distance in PC plots and geographical distance has shown to be quite impressive. However, in most situations, individual ancestral origins are not precisely known or they are heterogeneously distributed; hence, they are hardly linked to a geographical area. We have developed GeneOnEarth, a user-friendly web-based tool to help geneticists to understand whether a linear isolation-by-distance model may apply to a genetic data set; thus, genetic distances among a set of individuals resemble geographical distances among their origins. Its main goal is to allow users to first apply a by-view Procrustes method to visually learn whether this model holds. To do that, the user can choose the exact geographical area from an on line 2D or 3D world map by using, respectively, Google Maps or Google Earth, and rotate, flip, and resize the images. GeneOnEarth can also compute the optimal rotation angle using Procrustes analysis and assess statistical evidence of similarity when a different rotation angle has been chosen by the user. An online version of GeneOnEarth is available for testing and using purposes at http://bios.ugr.es/GeneOnEarth.

  1. The role of stenosis ratio as a predictor of surgical satisfaction in patients with lumbar spinal canal stenosis: a receiver-operator characteristic (ROC) curve analysis.

    PubMed

    Mohammadi, Hassanreza R; Azimi, Parisa; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad

    2016-09-01

    The aim of this study was to elucidate independent factors that predict surgical satisfaction in lumbar spinal canal stenosis (LSCS) patients. Patients who underwent surgery were grouped based on the age, gender, duration of symptoms, walking distance, Neurogenic Claudication Outcome Score (NCOS) and the stenosis ratio (SR) described by Lurencin. We recorded on 2-year patient satisfaction using standardized measure. The optimal cut-off points in SR, NCOS and walking distance for predicting surgical satisfaction were estimated from sensitivity and specificity calculations and receiver operator characteristic (ROC) curves. One hundred fifty consecutive patients (51 male, 99 female, mean age 62.4±10.9 years) were followed up for 34±13 months (range 24-49). One, two, three and four level stenosis was observed in 10.7%, 39.3%, 36.0 % and 14.0% of patients, respectively. Post-surgical satisfaction was 78.5% at the 2 years follow up. In ROC curve analysis, the asymptotic significance is less than 0.05 in SR and the optimal cut-off value of SR to predict worsening surgical satisfaction was measured as more than 0.52, with 85.4% sensitivity and 77.4% specificity (AUC 0.798, 95% CI 0.73-0.90; P<0.01). The present study suggests that the SR, with a cut-off set a 0.52 cross-sectional area, may be superior to walking distance and NCOS in patients with degenerative lumbar stenosis considered for surgical treatment. Using a ROC curve analysis, a radiological feature, the SR, demonstrated superiority in predicting patient satisfaction, compared to functional and clinical characteristics such as walking distance and NCOS.

  2. Performance analysis of jump-gliding locomotion for miniature robotics.

    PubMed

    Vidyasagar, A; Zufferey, Jean-Christohphe; Floreano, Dario; Kovač, M

    2015-03-26

    Recent work suggests that jumping locomotion in combination with a gliding phase can be used as an effective mobility principle in robotics. Compared to pure jumping without a gliding phase, the potential benefits of hybrid jump-gliding locomotion includes the ability to extend the distance travelled and reduce the potentially damaging impact forces upon landing. This publication evaluates the performance of jump-gliding locomotion and provides models for the analysis of the relevant dynamics of flight. It also defines a jump-gliding envelope that encompasses the range that can be achieved with jump-gliding robots and that can be used to evaluate the performance and improvement potential of jump-gliding robots. We present first a planar dynamic model and then a simplified closed form model, which allow for quantification of the distance travelled and the impact energy on landing. In order to validate the prediction of these models, we validate the model with experiments using a novel jump-gliding robot, named the 'EPFL jump-glider'. It has a mass of 16.5 g and is able to perform jumps from elevated positions, perform steered gliding flight, land safely and traverse on the ground by repetitive jumping. The experiments indicate that the developed jump-gliding model fits very well with the measured flight data using the EPFL jump-glider, confirming the benefits of jump-gliding locomotion to mobile robotics. The jump-glide envelope considerations indicate that the EPFL jump-glider, when traversing from a 2 m height, reaches 74.3% of optimal jump-gliding distance compared to pure jumping without a gliding phase which only reaches 33.4% of the optimal jump-gliding distance. Methods of further improving flight performance based on the models and inspiration from biological systems are presented providing mechanical design pathways to future jump-gliding robot designs.

  3. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics, however it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animals' state of consciousness.

  4. Uncertainty-based simulation-optimization using Gaussian process emulation: Application to coastal groundwater management

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ketabchi, Hamed

    2017-12-01

    Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.

  5. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters have been estimated using particle swarm optimization (PSO), enhancing the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts, which include extraction and fine matching of interest points in the images, establishment of cost function, based on Kruppa equations and optimization of PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method of maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the issue of optimization pushed to a local optimum, LiDAR data was used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.

  6. Effects of external loads on balance control during upright stance: experimental results and model-based predictions.

    PubMed

    Qu, Xingda; Nussbaum, Maury A

    2009-01-01

    The purpose of this study was to identify the effects of external loads on balance control during upright stance, and to examine the ability of a new balance control model to predict these effects. External loads were applied to 12 young, healthy participants, and effects on balance control were characterized by center-of-pressure (COP) based measures. Several loading conditions were studied, involving combinations of load mass (10% and 20% of individual body mass) and height (at or 15% of stature above the whole-body COM). A balance control model based on an optimal control strategy was used to predict COP time series. It was assumed that a given individual would adopt the same neural optimal control mechanisms, identified in a no-load condition, under diverse external loading conditions. With the application of external loads, COP mean velocity in the anterior-posterior direction and RMS distance in the medial-lateral direction increased 8.1% and 10.4%, respectively. Predicted COP mean velocity and RMS distance in the anterior-posterior direction also increased with external loading, by 11.1% and 2.9%, respectively. Both experimental COP data and model-based predictions provided the same general conclusion, that application of larger external loads and loads more superior to the whole body center of mass lead to less effective postural control and perhaps a greater risk of loss of balance or falls. Thus, it can be concluded that the assumption about consistency in control mechanisms was partially supported, and it is the mechanical changes induced by external loads that primarily affect balance control.

  7. Lexical evolution rates derived from automated stability measures

    NASA Astrophysics Data System (ADS)

    Petroni, Filippo; Serva, Maurizio

    2010-03-01

    Phylogenetic trees can be reconstructed from the matrix which contains the distances between all pairs of languages in a family. Recently, we proposed a new method which uses normalized Levenshtein distances among words with the same meaning and averages over all the items of a given list. Decisions about the number of items in the input lists for language comparison have been debated since the beginning of glottochronology. The point is that words associated with some of the meanings have a rapid lexical evolution. Therefore, a large vocabulary comparison is only apparently more accurate than a smaller one, since many of the words do not carry any useful information. In principle, one should find the optimal length of the input lists, studying the stability of the different items. In this paper we tackle the problem with an automated methodology based only on our normalized Levenshtein distance. With this approach, the program of an automated reconstruction of language relationships is completed.

  8. Optimal signal constellation design for ultra-high-speed optical transport in the presence of nonlinear phase noise.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2014-12-29

    In this paper, we first describe an optimal signal constellation design algorithm suitable for the coherent optical channels dominated by the linear phase noise. Then, we modify this algorithm to be suitable for the nonlinear phase noise dominated channels. In optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidian distance. Further, an LDPC coded modulation scheme is proposed to be used in combination with signal constellations obtained by proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, outperform corresponding QAM constellations significantly in terms of transmission distance and have better nonlinearity tolerance.

  9. Estimating the brain pathological age of Alzheimer’s disease patients from MR image data based on the separability distance criterion

    NASA Astrophysics Data System (ADS)

    Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping

    2016-10-01

    Traditional age estimation methods are based on the same idea that uses the real age as the training label. However, these methods ignore that there is a deviation between the real age and the brain age due to accelerated brain aging. This paper considers this deviation and searches for it by maximizing the separability distance value rather than by minimizing the difference between the estimated brain age and the real age. Firstly, set the search range of the deviation as the deviation candidates according to prior knowledge. Secondly, use the support vector regression (SVR) as the age estimation model to minimize the difference between the estimated age and the real age plus deviation rather than the real age itself. Thirdly, design the fitness function based on the separability distance criterion. Fourthly, conduct age estimation on the validation dataset using the trained age estimation model, put the estimated age into the fitness function, and obtain the fitness value of the deviation candidate. Fifthly, repeat the iteration until all the deviation candidates are involved and get the optimal deviation with maximum fitness values. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that the separability was apparently improved. For normal control-Alzheimer’s disease (NC-AD), normal control-mild cognition impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age could be not only more helpful to the classification of AD but also more precisely reflect accelerated brain aging. In conclusion, this paper offers a new method for brain age estimation that can distinguish different states of AD and can better reflect the extent of accelerated aging.

  10. Analyzing the relationship between sequence divergence and nodal support using Bayesian phylogenetic analyses.

    PubMed

    Makowsky, Robert; Cox, Christian L; Roelke, Corey; Chippindale, Paul T

    2010-11-01

    Determining the appropriate gene for phylogeny reconstruction can be a difficult process. Rapidly evolving genes tend to resolve recent relationships, but suffer from alignment issues and increased homoplasy among distantly related species. Conversely, slowly evolving genes generally perform best for deeper relationships, but lack sufficient variation to resolve recent relationships. We determine the relationship between sequence divergence and Bayesian phylogenetic reconstruction ability using both natural and simulated datasets. The natural data are based on 28 well-supported relationships within the subphylum Vertebrata. Sequences of 12 genes were acquired and Bayesian analyses were used to determine phylogenetic support for correct relationships. Simulated datasets were designed to determine whether an optimal range of sequence divergence exists across extreme phylogenetic conditions. Across all genes we found that an optimal range of divergence for resolving the correct relationships does exist, although this level of divergence expectedly depends on the distance metric. Simulated datasets show that an optimal range of sequence divergence exists across diverse topologies and models of evolution. We determine that a simple to measure property of genetic sequences (genetic distance) is related to phylogenic reconstruction ability in Bayesian analyses. This information should be useful for selecting the most informative gene to resolve any relationships, especially those that are difficult to resolve, as well as minimizing both cost and confounding information during project design. Copyright © 2010. Published by Elsevier Inc.

  11. A Long-Distance RF-Powered Sensor Node with Adaptive Power Management for IoT Applications.

    PubMed

    Pizzotti, Matteo; Perilli, Luca; Del Prete, Massimo; Fabbri, Davide; Canegallo, Roberto; Dini, Michele; Masotti, Diego; Costanzo, Alessandra; Franchi Scarselli, Eleonora; Romani, Aldo

    2017-07-28

    We present a self-sustained battery-less multi-sensor platform with RF harvesting capability down to -17 dBm and implementing a standard DASH7 wireless communication interface. The node operates at distances up to 17 m from a 2 W UHF carrier. RF power transfer allows operation when common energy scavenging sources (e.g., sun, heat, etc.) are not available, while the DASH7 communication protocol makes it fully compatible with a standard IoT infrastructure. An optimized energy-harvesting module has been designed, including a rectifying antenna (rectenna) and an integrated nano-power DC/DC converter performing maximum-power-point-tracking (MPPT). A nonlinear/electromagnetic co-design procedure is adopted to design the rectenna, which is optimized to operate at ultra-low power levels. An ultra-low power microcontroller controls on-board sensors and wireless protocol, to adapt the power consumption to the available detected power by changing wake-up policies. As a result, adaptive behavior can be observed in the designed platform, to the extent that the transmission data rate is dynamically determined by RF power. Among the novel features of the system, we highlight the use of nano-power energy harvesting, the implementation of specific hardware/software wake-up policies, optimized algorithms for best sampling rate implementation, and adaptive behavior by the node based on the power received.

  12. A Long-Distance RF-Powered Sensor Node with Adaptive Power Management for IoT Applications

    PubMed Central

    del Prete, Massimo; Fabbri, Davide; Canegallo, Roberto; Dini, Michele; Costanzo, Alessandra

    2017-01-01

    We present a self-sustained battery-less multi-sensor platform with RF harvesting capability down to −17 dBm and implementing a standard DASH7 wireless communication interface. The node operates at distances up to 17 m from a 2 W UHF carrier. RF power transfer allows operation when common energy scavenging sources (e.g., sun, heat, etc.) are not available, while the DASH7 communication protocol makes it fully compatible with a standard IoT infrastructure. An optimized energy-harvesting module has been designed, including a rectifying antenna (rectenna) and an integrated nano-power DC/DC converter performing maximum-power-point-tracking (MPPT). A nonlinear/electromagnetic co-design procedure is adopted to design the rectenna, which is optimized to operate at ultra-low power levels. An ultra-low power microcontroller controls on-board sensors and wireless protocol, to adapt the power consumption to the available detected power by changing wake-up policies. As a result, adaptive behavior can be observed in the designed platform, to the extent that the transmission data rate is dynamically determined by RF power. Among the novel features of the system, we highlight the use of nano-power energy harvesting, the implementation of specific hardware/software wake-up policies, optimized algorithms for best sampling rate implementation, and adaptive behavior by the node based on the power received. PMID:28788084

  13. Optimization of pencil beam f-theta lens for high-accuracy metrology

    NASA Astrophysics Data System (ADS)

    Peng, Chuanqian; He, Yumei; Wang, Jie

    2018-01-01

    Pencil beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of the deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures of the f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam within a working distance range for ultra-high-accuracy metrology. If an f-theta system is not well-designed, aberrations of the f-theta system will introduce many systematic errors into the measurement. A least-squares' fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.

  14. Modeling and simulation for the field emission of carbon nanotubes array

    NASA Astrophysics Data System (ADS)

    Wang, X. Q.; Wang, M.; Ge, H. L.; Chen, Q.; Xu, Y. B.

    2005-12-01

    To optimize the field emission of the infinite carbon nanotubes (CNTs) array on a planar cathode surface, the numerical simulation for the behavior of field emission with finite difference method was proposed. By solving the Laplace equation with computer, the influence of the intertube distance, the anode-cathode distance and the opened/capped CNT on the field emission of CNTs array were taken into account, and the results could accord well with the experiments. The simulated results proved that the field enhancement factor of individual CNT is largest, but the emission current density is little. Due to the enhanced screening of the electric field, the enhancement factor of CNTs array decreases with decreasing the intertube distance. From the simulation the field emission can be optimized when the intertube distance is close to the tube height. The anode-cathode distance hardly influences the field enhancement factor of CNTs array, but can low the threshold voltage by decreasing the anode-cathode distance. Finally, the distribution of potential of the capped CNTs array and the opened CNTs array was simulated, which the results showed that the distribution of potential can be influenced to some extent by the anode-cathode distance, especially at the apex of the capped CNTs array and the brim of the opened CNTs array. The opened CNTs array has larger field enhancement factor and can emit more current than the capped one.

  15. Structural Statics Analysis and Optimization Design of Regulating Device for Air Conveyer Outlet in Coal Mine

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoyan; Li, Ying; Zhang, Yongqiang

    2018-06-01

    In view of the enlargement of fully mechanized face excavation and long distance driving, gas emission and dust production increase greatly. However, the current ventilation device direction angle, caliber and front-back distance cannot change dynamically at any time, resulting in the serious accumulation in the dead zone. In this paper, a new device were proposed that can solve above problems. Finite element ANSYS software were used to simulate and optimize the structural safety of the control device' key components. The optimization results showed that the equivalent stress decreases by 49%; after the optimization of deformation and mass are 0.829mm and 0.548kg, which were 21% and 10% lower than before.The quality, safety, reliability and cost of the control device reach the expected standards perfectly, which can meet the requirements of safe ventilation and down-dusting of fully mechanized face.

  16. Optimizing Environmental Monitoring Networks with Direction-Dependent Distance Thresholds.

    ERIC Educational Resources Information Center

    Hudak, Paul F.

    1993-01-01

    In the direction-dependent approach to location modeling developed herein, the distance within which a point of demand can find service from a facility depends on direction of measurement. The utility of the approach is illustrated through an application to groundwater remediation. (Author/MDH)

  17. New spatial clustering-based models for optimal urban facility location considering geographical obstacles

    NASA Astrophysics Data System (ADS)

    Javadi, Maryam; Shahrabi, Jamal

    2014-03-01

    The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or governments to best locate its resources and facilities and efficiently manage resources to ensure that all demand points are covered and all the needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models were presented for location of urban facilities while considering geographical obstacles at the same time. In these models, three new distance functions were proposed. The first function was based on the analysis of shortest path in linear network, which was called SPD function. The other two functions, namely PD and P2D, were based on the algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the visual basic programming language. These models were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distance from demand points to their corresponding facilities. Because of the distance between the demand points and facilities becoming more realistic in the proposed functions, results indicated desired quality of the proposed models in terms of quality of allocating points to centers and logistic cost. Obtained results show promising improvements of the allocation, the logistics costs and the response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of the facility location and the demand allocation. It is noted that the P2D-based model showed better execution time than the SPD-based model. Considering logistic costs, facility location and response time, the P2D-based model was appropriate choice for urban facility location problem considering the geographical obstacles.

  18. SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy

    PubMed Central

    Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui

    2014-01-01

    Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063

  19. Automatic lung nodule matching for the follow-up in temporal chest CT scans

    NASA Astrophysics Data System (ADS)

    Hong, Helen; Lee, Jeongjin; Shin, Yeong Gil

    2006-03-01

    We propose a fast and robust registration method for matching lung nodules of temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from chest CT scans by the automatic segmentation method. Second, the gross translational mismatch is corrected by the optimal cube registration. This initial registration does not require extracting any anatomical landmarks. Third, initial alignment is step by step refined by the iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by the narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established by the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment of twenty patients are reported on a per-center-of mass point basis using the average Euclidean distance (AED) error between corresponding nodules of initial and follow-up scans. The average AED error of twenty patients is significantly reduced to 4.7mm from 30.0mm by our registration. Experimental results show that our registration method aligns the lung nodules much faster than the conventional ones using a distance measure. Accurate and fast result of our method would be more useful for the radiologist's evaluation of pulmonary nodules on chest CT scans.

  20. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    PubMed

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.

  1. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  2. A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun

    2017-10-01

    This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As the requirements of high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee an accurate control of spray parameters defined by users (e.g., scanning speed, spray distance, scanning step, etc.) to achieve coating thickness homogeneity but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by performing a different connection method. Additionally, this paper presents a research approach for introducing the heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating), which helps form the uniform temperature field by optimizing the time sequence of path curves. The influence of two different robot trajectories on the process of heat transfer is estimated by coupled FEM models which demonstrates the effectiveness of the presented optimization approach.

  3. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approach the maximal information for discrimination. The analysis shows that as low as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient in labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances, generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency, is lower than 1 in 10 thousand. This strategy of sample discrimination is proved robust in large sample size and different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and interested genes. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.

  4. Automated position control of a surface array relative to a liquid microjunction surface sampler

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James

    2007-11-13

    A system and method utilizes an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables a hands-free formation of the liquid microjunction used to sample solution composition from the surface and for re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.

  5. Heterogeneous Multi-Metric Learning for Multi-Sensor Fusion

    DTIC Science & Technology

    2011-07-01

    distance”. One of the most widely used methods is the k-nearest neighbor ( KNN ) method [4], which labels an input data sample to be the class with majority...despite of its simplicity, it can be an effective candidate and can be easily extended to handle multiple sensors. Distance based method such as KNN relies...Neighbor (LMNN) method [21] which will be briefly reviewed in the sequel. LMNN method tries to learn an optimal metric specifically for KNN classifier. The

  6. Simulation-Based Joint Estimation of Body Deformation and Elasticity Parameters for Medical Image Analysis

    PubMed Central

    Foskey, Mark; Niethammer, Marc; Krajcevski, Pavel; Lin, Ming C.

    2014-01-01

    Estimation of tissue stiffness is an important means of noninvasive cancer detection. Existing elasticity reconstruction methods usually depend on a dense displacement field (inferred from ultrasound or MR images) and known external forces. Many imaging modalities, however, cannot provide details within an organ and therefore cannot provide such a displacement field. Furthermore, force exertion and measurement can be difficult for some internal organs, making boundary forces another missing parameter. We propose a general method for estimating elasticity and boundary forces automatically using an iterative optimization framework, given the desired (target) output surface. During the optimization, the input model is deformed by the simulator, and an objective function based on the distance between the deformed surface and the target surface is minimized numerically. The optimization framework does not depend on a particular simulation method and is therefore suitable for different physical models. We show a positive correlation between clinical prostate cancer stage (a clinical measure of severity) and the recovered elasticity of the organ. Since the surface correspondence is established, our method also provides a non-rigid image registration, where the quality of the deformation fields is guaranteed, as they are computed using a physics-based simulation. PMID:22893381

  7. Dynamic Obstacle Avoidance for Unmanned Underwater Vehicles Based on an Improved Velocity Obstacle Method

    PubMed Central

    Zhang, Wei; Wei, Shilin; Teng, Yanbin; Zhang, Jianku; Wang, Xiufang; Yan, Zheping

    2017-01-01

    In view of a dynamic obstacle environment with motion uncertainty, we present a dynamic collision avoidance method based on the collision risk assessment and improved velocity obstacle method. First, through the fusion optimization of forward-looking sonar data, the redundancy of the data is reduced and the position, size and velocity information of the obstacles are obtained, which can provide an accurate decision-making basis for next-step collision avoidance. Second, according to minimum meeting time and the minimum distance between the obstacle and unmanned underwater vehicle (UUV), this paper establishes the collision risk assessment model, and screens key obstacles to avoid collision. Finally, the optimization objective function is established based on the improved velocity obstacle method, and a UUV motion characteristic is used to calculate the reachable velocity sets. The optimal collision speed of UUV is searched in velocity space. The corresponding heading and speed commands are calculated, and outputted to the motion control module. The above is the complete dynamic obstacle avoidance process. The simulation results show that the proposed method can obtain a better collision avoidance effect in the dynamic environment, and has good adaptability to the unknown dynamic environment. PMID:29186878

  8. Neural decoding with kernel-based metric learning.

    PubMed

    Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C

    2014-06-01

    In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus-exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.

  9. Salient object detection based on discriminative boundary and multiple cues integration

    NASA Astrophysics Data System (ADS)

    Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei

    2016-01-01

    In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed through discriminating each boundary via Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through weight-adjustable optimization framework. Then to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues-uniqueness, distribution, and coherence from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most of the methods based on boundary, and the integrated result outperforms all state-of-the-art methods.

  10. Distance measures and optimization spaces in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.

    2015-01-01

    Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.

  11. Utilization of a modified special-cubic design and an electronic tongue for bitterness masking formulation optimization.

    PubMed

    Li, Lianli; Naini, Venkatesh; Ahmed, Salah U

    2007-10-01

    A unique modification of simplex design was applied to an electronic tongue (E-Tongue) analysis in bitterness masking formulation optimization. Three formulation variables were evaluated in the simplex design, i.e. concentrations of two taste masking polymers, Amberlite and Carbopol, and pH of the granulating fluid. Response of the design was a bitterness distance measured using an E-Tongue by applying a principle component analysis, which represents taste masking efficiency of the formulation. The smaller the distance, the better the bitterness masking effect. Contour plots and polynomial equations of the bitterness distance response were generated as a function of formulation composition and pH. It was found that interactions between polymer and pH reduced the bitterness of the formulation, attributed to pH-dependent ionization and complexation properties of the ionic polymers, thus keeping the drug out of solution and unavailable to bitterness perception. At pH 4.9 and an Amberlite/Carbopol ratio of 1.4:1 (w/w), the optimal taste masking formulation was achieved and in agreement with human gustatory sensation study results. Therefore, adopting a modified simplex experimental design on response measured using an E-Tongue provided an efficient approach to taste masking formulation optimization using ionic binding polymers. (c) 2007 Wiley-Liss, Inc.

  12. Spectral imaging using consumer-level devices and kernel-based regression.

    PubMed

    Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko

    2016-06-01

    Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using Nikon D80 RGB camera and Dell Vostro 2520 laptop screen as a light source. Estimations were performed without spectral characteristics of the devices and by emphasizing simplicity for training sets and estimation model optimization. Spectral and color error images are shown for the estimations using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and link function for bounded estimation were evaluated. Results suggest modest requirements for a training set and show that all estimation models have markedly improved accuracy with respect to the DE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) from a weak training set (Digital ColorChecker SG) case when small representative training data were used in the estimation.

  13. A Figure-of-Merit for Designing High-Performance Inductive Power Transmission Links

    PubMed Central

    Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    Power transfer efficiency (PTE) and power delivered to the load (PDL) are two key inductive link design parameters that relate to the power source and driver specs, power loss, transmission range, robustness against misalignment, variations in loading, and interference with other devices. Designers need to strike a delicate balance between these two because designing the link to achieve high PTE will degrade the PDL and vice versa. We are proposing a new figure-of-merit (FoM), which can help designers to find out whether a two-, three-, or four-coil link is appropriate for their particular application and guide them through an iterative design procedure to reach optimal coil geometries based on how they weigh the PTE versus PDL for that application. Three design examples at three different power levels have been presented based on the proposed FoM for implantable microelectronic devices, handheld mobile devices, and electric vehicles. The new FoM suggests that the two-coil links are suitable when the coils are strongly coupled, and a large PDL is needed. Three-coil links are the best when the coils are loosely coupled, the coupling distance varies considerably, and large PDL is necessary. Finally, four-coil links are optimal when the PTE is paramount, the coils are loosely coupled, and their relative distance and alignment are stable. Measurement results support the accuracy of the theoretical design procedure and conclusions. PMID:25382898

  14. A Figure-of-Merit for Designing High-Performance Inductive Power Transmission Links.

    PubMed

    Kiani, Mehdi; Ghovanloo, Maysam

    2012-11-16

    Power transfer efficiency (PTE) and power delivered to the load (PDL) are two key inductive link design parameters that relate to the power source and driver specs, power loss, transmission range, robustness against misalignment, variations in loading, and interference with other devices. Designers need to strike a delicate balance between these two because designing the link to achieve high PTE will degrade the PDL and vice versa. We are proposing a new figure-of-merit (FoM), which can help designers to find out whether a two-, three-, or four-coil link is appropriate for their particular application and guide them through an iterative design procedure to reach optimal coil geometries based on how they weigh the PTE versus PDL for that application. Three design examples at three different power levels have been presented based on the proposed FoM for implantable microelectronic devices, handheld mobile devices, and electric vehicles. The new FoM suggests that the two-coil links are suitable when the coils are strongly coupled, and a large PDL is needed. Three-coil links are the best when the coils are loosely coupled, the coupling distance varies considerably, and large PDL is necessary. Finally, four-coil links are optimal when the PTE is paramount, the coils are loosely coupled, and their relative distance and alignment are stable. Measurement results support the accuracy of the theoretical design procedure and conclusions.

  15. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    South Sea Islands of China are far away from the mainland, the reefs takes more than 95% of south sea, and most reefs scatter over interested dispute sensitive area. Thus, the methods of obtaining the reefs bathymetry accurately are urgent to be developed. Common used method, including sonar, airborne laser and remote sensing estimation, are limited by the long distance, large area and sensitive location. Remote sensing data provides an effective way for bathymetry estimation without touching over large area, by the relationship between spectrum information and bathymetry. Aimed at the water quality of the south sea of China, our paper develops a bathymetry estimation method without measured water depth. Firstly the semi-analytical optimization model of the theoretical interpretation models has been studied based on the genetic algorithm to optimize the model. Meanwhile, OpenMP parallel computing algorithm has been introduced to greatly increase the speed of the semi-analytical optimization model. One island of south sea in China is selected as our study area, the measured water depth are used to evaluate the accuracy of bathymetry estimation from Worldview-2 multispectral images. The results show that: the semi-analytical optimization model based on genetic algorithm has good results in our study area;the accuracy of estimated bathymetry in the 0-20 meters shallow water area is accepted.Semi-analytical optimization model based on genetic algorithm solves the problem of the bathymetry estimation without water depth measurement. Generally, our paper provides a new bathymetry estimation method for the sensitive reefs far away from mainland.

  16. Spectral anomaly methods for aerial detection using KUT nuisance rejection

    NASA Astrophysics Data System (ADS)

    Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.

    2015-06-01

    This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical due to higher speeds, larger field of view, and geographically induced background changes. As well, the possible large altitude or stand-off distance variations cause significant steps in background count rate as well as spectral changes due to increased gamma-ray scatter with detection at higher altitudes. The work here details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems; a NaI(Tl)-detector-based system and a CsI detector array. The optimization here details the adaptation of the spectral windows for a particular set of target sources to aerial detection and the tailoring for the specific detectors. As well, the methodology and results for background rejection methods optimized for the aerial gamma-ray detection using Potassium, Uranium and Thorium (KUT) nuisance rejection are shown. Results indicate that use of a realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection due to altitude changes and geographically induced steps such as at land-water interfaces.

  17. Antitumor activity of 3,4-ethylenedioxythiophene derivatives and quantitative structure-activity relationship analysis

    NASA Astrophysics Data System (ADS)

    Jukić, Marijana; Rastija, Vesna; Opačak-Bernardi, Teuta; Stolić, Ivana; Krstulović, Luka; Bajić, Miroslav; Glavaš-Obrovac, Ljubica

    2017-04-01

    The aim of this study was to evaluate nine newly synthesized amidine derivatives of 3,4- ethylenedioxythiophene (3,4-EDOT) for their cytotoxic activity against a panel of human cancer cell lines and to perform a quantitative structure-activity relationship (QSAR) analysis for the antitumor activity of a total of 27 3,4-ethylenedioxythiophene derivatives. Induction of apoptosis was investigated on the selected compounds, along with delivery options for the optimization of activity. The best obtained QSAR models include the following group of descriptors: BCUT, WHIM, 2D autocorrelations, 3D-MoRSE, GETAWAY descriptors, 2D frequency fingerprint and information indices. Obtained QSAR models should be relieved in elucidation of important physicochemical and structural requirements for this biological activity. Highly potent molecules have a symmetrical arrangement of substituents along the x axis, high frequency of distance between N and O atoms at topological distance 9, as well as between C and N atoms at topological distance 10, and more C atoms located at topological distances 6 and 3. Based on the conclusion given in the QSAR analysis, a new compound with possible great activity was proposed.

  18. Use of GIS to identify optimal settings for cancer prevention and control in African American communities

    PubMed Central

    Alcaraz, Kassandra I.; Kreuter, Matthew W.; Bryan, Rebecca P.

    2009-01-01

    Objective Rarely have Geographic Information Systems (GIS) been used to inform community-based outreach and intervention planning. This study sought to identify community settings most likely to reach individuals from geographically localized areas. Method An observational study conducted in an urban city in Missouri during 2003–2007 placed computerized breast cancer education kiosks in seven types of community settings: beauty salons, churches, health fairs, neighborhood health centers, Laundromats, public libraries and social service agencies. We used GIS to measure distance between kiosk users’ (n=7,297) home ZIP codes and the location where they used the kiosk. Mean distances were compared across settings. Results Mean distance between individuals’ home ZIP codes and the location where they used the kiosk varied significantly (p<0.001) across settings. The distance was shortest among kiosk users in Laundromats (2.3 miles) and public libraries (2.8 miles) and greatest among kiosk users at health fairs (7.6 miles). Conclusion Some community settings are more likely than others to reach highly localized populations. A better understanding of how and where to reach specific populations can complement the progress already being made in identifying populations at increased disease risk. PMID:19422844

  19. Optimization of self-study room open problem based on green and low-carbon campus construction

    NASA Astrophysics Data System (ADS)

    Liu, Baoyou

    2017-04-01

    The optimization of self-study room open arrangement problem in colleges and universities is conducive to accelerate the fine management of the campus and promote green and low-carbon campus construction. Firstly, combined with the actual survey data, the self-study area and living area were divided into different blocks, and the electricity consumption in each self-study room and distance between different living and studying areas were normalized. Secondly, the minimum of total satisfaction index and the minimum of the total electricity consumption were selected as the optimization targets respectively. The mathematical models of linear programming were established and resolved by LINGO software. The results showed that the minimum of total satisfaction index was 4055.533 and the total minimum electricity consumption was 137216 W. Finally, some advice had been put forward on how to realize the high efficient administration of the study room.

  20. Optimization of output power and transmission efficiency of magnetically coupled resonance wireless power transfer system

    NASA Astrophysics Data System (ADS)

    Yan, Rongge; Guo, Xiaoting; Cao, Shaoqing; Zhang, Changgeng

    2018-05-01

    Magnetically coupled resonance (MCR) wireless power transfer (WPT) system is a promising technology in electric energy transmission. But, if its system parameters are designed unreasonably, output power and transmission efficiency will be low. Therefore, optimized parameters design of MCR WPT has important research value. In the MCR WPT system with designated coil structure, the main parameters affecting output power and transmission efficiency are the distance between the coils, the resonance frequency and the resistance of the load. Based on the established mathematical model and the differential evolution algorithm, the change of output power and transmission efficiency with parameters can be simulated. From the simulation results, it can be seen that output power and transmission efficiency of the two-coil MCR WPT system and four-coil one with designated coil structure are improved. The simulation results confirm the validity of the optimization method for MCR WPT system with designated coil structure.

  1. Development and validation of automatic tools for interactive recurrence analysis in radiation therapy: optimization of treatment algorithms for locally advanced pancreatic cancer.

    PubMed

    Kessel, Kerstin A; Habermehl, Daniel; Jäger, Andreas; Floca, Ralf O; Zhang, Lanlan; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E

    2013-06-07

    In radiation oncology recurrence analysis is an important part in the evaluation process and clinical quality assurance of treatment concepts. With the example of 9 patients with locally advanced pancreatic cancer we developed and validated interactive analysis tools to support the evaluation workflow. After an automatic registration of the radiation planning CTs with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes the DVH (dose volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence and the distance between the boost and recurrence volume. We calculated the percentage of the recurrence volume within the 80%-isodose volume and compared it to the location of the recurrence within the boost volume, boost + 1 cm, boost + 1.5 cm and boost + 2 cm volumes. Recurrence analysis of 9 patients demonstrated that all recurrences except one occurred within the defined GTV/boost volume; one recurrence developed beyond the field border/outfield. With the defined distance volumes in relation to the recurrences, we could show that 7 recurrent lesions were within the 2 cm radius of the primary tumor. Two large recurrences extended beyond the 2 cm, however, this might be due to very rapid growth and/or late detection of the tumor progression. The main goal of using automatic analysis tools is to reduce time and effort conducting clinical analyses. We showed a first approach and use of a semi-automated workflow for recurrence analysis, which will be continuously optimized. In conclusion, despite the limitations of the automatic calculations we contributed to in-house optimization of subsequent study concepts based on an improved and validated target volume definition.

  2. The requirements for low-temperature plasma ionization support miniaturization of the ion source.

    PubMed

    Kiontke, Andreas; Holzer, Frank; Belder, Detlev; Birkemeyer, Claudia

    2018-06-01

    Ambient ionization mass spectrometry (AI-MS), the ionization of samples under ambient conditions, enables fast and simple analysis of samples without or with little sample preparation. Due to their simple construction and low resource consumption, plasma-based ionization methods in particular are considered ideal for use in mobile analytical devices. However, systematic investigations that have attempted to identify the optimal configuration of a plasma source to achieve the sensitive detection of target molecules are still rare. We therefore used a low-temperature plasma ionization (LTPI) source based on dielectric barrier discharge with helium employed as the process gas to identify the factors that most strongly influence the signal intensity in the mass spectrometry of species formed by plasma ionization. In this study, we investigated several construction-related parameters of the plasma source and found that a low wall thickness of the dielectric, a small outlet spacing, and a short distance between the plasma source and the MS inlet are needed to achieve optimal signal intensity with a process-gas flow rate of as little as 10 mL/min. In conclusion, this type of ion source is especially well suited for downscaling, which is usually required in mobile devices. Our results provide valuable insights into the LTPI mechanism; they reveal the potential to further improve its implementation and standardization for mobile mass spectrometry as well as our understanding of the requirements and selectivity of this technique. Graphical abstract Optimized parameters of a dielectric barrier discharge plasma for ionization in mass spectrometry. The electrode size, shape, and arrangement, the thickness of the dielectric, and distances between the plasma source, sample, and MS inlet are marked in red. The process gas (helium) flow is shown in black.

  3. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, {rho} or {sigma}. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking {rho} for {sigma}, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein'smore » lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between {rho} and {sigma} (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.« less

  4. Toward a multi-objective decision support framework to support regulations of unconventional oil and gas development

    NASA Astrophysics Data System (ADS)

    Alongi, M.; Howard, C.; Kasprzyk, J. R.; Ryan, J. N.

    2015-12-01

    Unconventional oil and gas development (UOGD) using hydraulic fracturing and horizontal drilling has recently fostered an unprecedented acceleration in energy development. Regulations seek to protect environmental quality of areas surrounding UOGD, while maintaining economic benefits. One such regulation is a setback distance, which dictates the minimum proximity between an oil and gas well and an object such as a residential or commercial building, property line, or water source. In general, most setback regulations have been strongly politically motivated without a clear scientific basis for understanding the relationship between the setback distance and various performance outcomes. This presentation discusses a new decision support framework for setback regulations, as part of a large NSF-funded sustainability research network (SRN) on UOGD. The goal of the decision support framework is to integrate a wide array of scientific information from the SRN into a coherent framework that can help inform policy regarding UOGD. The decision support framework employs multiobjective evolutionary algorithm (MOEA) optimization coupled with simulation models of air quality and other performance-based outcomes on UOGD. The result of the MOEA optimization runs are quantitative tradeoff curves among different objectives. For example, one such curve could demonstrate air pollution concentrations versus estimates of energy development profits, for different levels of setback distance. Our results will also inform policy-relevant discussions surrounding UOGD such as comparing single- and multi-well pads, as well as regulations on the density of well development over a spatial area.

  5. Anatomic Location of Tumor Predicts the Accuracy of Motor Function Localization in Diffuse Lower-Grade Gliomas Involving the Hand Knob Area.

    PubMed

    Fang, S; Liang, J; Qian, T; Wang, Y; Liu, X; Fan, X; Li, S; Wang, Y; Jiang, T

    2017-10-01

    The accuracy of preoperative blood oxygen level-dependent fMRI remains controversial. This study assessed the association between the anatomic location of a tumor and the accuracy of fMRI-based motor function mapping in diffuse lower-grade gliomas. Thirty-five patients with lower-grade gliomas involving motor areas underwent preoperative blood oxygen level-dependent fMRI scans with grasping tasks and received intraoperative direct cortical stimulation. Patients were classified into an overlapping group and a nonoverlapping group, depending on the extent to which blood oxygen level-dependent fMRI and direct cortical stimulation results concurred. Tumor location was quantitatively measured, including the shortest distance from the tumor to the hand knob and the deviation distance of the midpoint of the hand knob in the lesion hemisphere relative to the midline compared with the normal contralateral hemisphere. A 4-mm shortest distance from the tumor to the hand knob value was identified as optimal for differentiating the overlapping and nonoverlapping group with the receiver operating characteristic curve (sensitivity, 84.6%; specificity, 77.8%). The shortest distances from the tumor to the hand knob of ≤4 mm were associated with inaccurate fMRI-based localizations of the hand motor cortex. The shortest distances from the tumor to the hand knob were larger ( P = .002), and the deviation distances for the midpoint of the hand knob in the lesion hemisphere were smaller ( P = .003) in the overlapping group than in the nonoverlapping group. This study suggests that the shortest distance from the tumor to the hand knob and the deviation distance for the midpoint of the hand knob on the lesion hemisphere are predictive of the accuracy of blood oxygen level-dependent fMRI results. Smaller shortest distances from the tumor to the hand knob and larger deviation distances for the midpoint of hand knob on the lesion hemisphere are associated with less accuracy of motor cortex localization with blood oxygen level-dependent fMRI. Preoperative fMRI data for surgical planning should be used cautiously when the shortest distance from the tumor to the hand knob is ≤4 mm, especially for lower-grade gliomas anterior to the central sulcus. © 2017 by American Journal of Neuroradiology.

  6. An upstream burst-mode equalization scheme for 40 Gb/s TWDM PON based on optimized SOA cascade

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chang, Qingjiang; Gao, Zhensen; Ye, Chenhui; Xiao, Simiao; Huang, Xiaoan; Hu, Xiaofeng; Zhang, Kaibin

    2016-02-01

    We present a novel upstream burst-mode equalization scheme based on optimized SOA cascade for 40 Gb/s TWDMPON. The power equalizer is placed at the OLT which consists of two SOAs, two circulators, an optical NOT gate, and a variable optical attenuator. The first SOA operates in the linear region which acts as a pre-amplifier to let the second SOA operate in the saturation region. The upstream burst signals are equalized through the second SOA via nonlinear amplification. From theoretical analysis, this scheme gives sufficient dynamic range suppression up to 16.7 dB without any dynamic control or signal degradation. In addition, a total power budget extension of 9.3 dB for loud packets and 26 dB for soft packets has been achieved to allow longer transmission distance and increased splitting ratio.

  7. Density-based penalty parameter optimization on C-SVM.

    PubMed

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface as the result of an error system outlier, C-SVM was implemented to decrease the influences of the system's outliers. Traditional C-SVM holds a uniform parameter C for both positive and negative instances; however, according to the different number proportions and the data distribution, positive and negative instances should be set with different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experiential results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall.

  8. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.

  9. An adjoint method for gradient-based optimization of stellarator coil shapes

    NASA Astrophysics Data System (ADS)

    Paul, E. J.; Landreman, M.; Bader, A.; Dorland, W.

    2018-07-01

    We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL (Landreman 2017 Nucl. Fusion 57 046003) approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-squared current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computation of the local sensitivity of figures of merit to normal displacements of the winding surface is presented, with potential applications for understanding engineering tolerances.

  10. Splash-cup plants accelerate raindrops to disperse seeds.

    PubMed

    Amador, Guillermo J; Yamada, Yasukuni; McCurley, Matthew; Hu, David L

    2013-02-01

    The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6-18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle.

  11. Optimal orientation in flows: providing a benchmark for animal movement strategies.

    PubMed

    McLaren, James D; Shamoun-Baranes, Judy; Dokter, Adriaan M; Klaassen, Raymond H G; Bouten, Willem

    2014-10-06

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity.

  12. Optimal orientation in flows: providing a benchmark for animal movement strategies

    PubMed Central

    McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem

    2014-01-01

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity. PMID:25056213

  13. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    PubMed Central

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin labels pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624

  14. Long working distance objective lenses for single atom trapping and imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritchard, J. D., E-mail: jonathan.pritchard@strath.ac.uk; Department of Physics, University of Strathclyde, 107 Rottenrow East, Glasgow G4 0NG; Isaacs, J. A.

    We present a pair of optimized objective lenses with long working distances of 117 mm and 65 mm, respectively, that offer diffraction limited performance for both Cs and Rb wavelengths when imaging through standard vacuum windows. The designs utilise standard catalog lens elements to provide a simple and cost-effective solution. Objective 1 provides NA = 0.175 offering 3 μm resolution whilst objective 2 is optimized for high collection efficiency with NA = 0.29 and 1.8 μm resolution. This flexible design can be further extended for use at shorter wavelengths by simply re-optimising the lens separations.

  15. Potential of Audiographic Computerized Telelearning for Distance Extension Education.

    ERIC Educational Resources Information Center

    Verma, Satish; And Others

    In the last 10 years, an approach to electronic distance education called audiographic computerized telelearning using standard telephone lines has come to the fore. Telelearning is a cost-effective system which optimizes existing computer facilities and creates a teaching-learning environment that is interactive, efficient, and adaptable to a…

  16. Biased optimal guidance for a bank-to-turn missile

    NASA Astrophysics Data System (ADS)

    Stallard, D. V.

    A practical terminal-phase guidance law for controlling the pitch acceleration and roll rate of a bank-to-turn missile with zero autopilot lags was derived and tested, so as to minimize squared miss distance without requiring overly large commands. An acceleration bias is introduced to prevent excessive roll commands due to noise. The Separation Theorem is invoked and the guidance (control) law is derived by applying optimal control theory, linearizing the nonlinear plant equation around the present missile orientation, and obtaining a closed-form solution. The optimal pitch-acceleration and roll-rate commands are respectively proportional to two components of the projected, constant-bias, miss distance, with a resemblance to earlier derivations and proportional navigation. Simulaiation results and other related work confirm the suitability of the guidance law.

  17. Optimal free descriptions of many-body theories

    NASA Astrophysics Data System (ADS)

    Turner, Christopher J.; Meichanetzidis, Konstantinos; Papić, Zlatko; Pachos, Jiannis K.

    2017-04-01

    Interacting bosons or fermions give rise to some of the most fascinating phases of matter, including high-temperature superconductivity, the fractional quantum Hall effect, quantum spin liquids and Mott insulators. Although these systems are promising for technological applications, they also present conceptual challenges, as they require approaches beyond mean-field and perturbation theory. Here we develop a general framework for identifying the free theory that is closest to a given interacting model in terms of their ground-state correlations. Moreover, we quantify the distance between them using the entanglement spectrum. When this interaction distance is small, the optimal free theory provides an effective description of the low-energy physics of the interacting model. Our construction of the optimal free model is non-perturbative in nature; thus, it offers a theoretical framework for investigating strongly correlated systems.

  18. Bioisostere Identification by Determining the Amino Acid Binding Preferences of Common Chemical Fragments.

    PubMed

    Sato, Tomohiro; Hashimoto, Noriaki; Honma, Teruki

    2017-12-26

    To assist in the structural optimization of hit/lead compounds during drug discovery, various computational approaches to identify potentially useful bioisosteric conversions have been reported. Here, the preference of chemical fragments to hydrogen bonds with specific amino acid residues was used to identify potential bioisosteric conversions. We first compiled a data set of chemical fragments frequently occurring in complex structures contained in the Protein Data Bank. We then used a computational approach to determine the amino acids to which these chemical fragments most frequently hydrogen bonded. The results of the frequency analysis were used to hierarchically cluster chemical fragments according to their amino acid preferences. The Euclid distance between amino acid preferences of chemical fragments for hydrogen bonding was then compared to MMP information in the ChEMBL database. To demonstrate the applicability of the approach for compound optimization, the similarity of amino acid preferences was used to identify known bioisosteric conversions of the epidermal growth factor receptor inhibitor gefitinib. The amino acid preference distance successfully detected bioisosteric fragments corresponding to the morpholine ring in gefitinib with a higher ROC score compared to those based on topological similarity of substituents and frequency of MMP in the ChEMBL database.

  19. Development of a Dual Plasma Desorption/Ionization System for the Noncontact and Highly Sensitive Analysis of Surface Adhesive Compounds

    PubMed Central

    Aida, Mari; Iwai, Takahiro; Okamoto, Yuki; Kohno, Satoshi; Kakegawa, Ken; Miyahara, Hidekazu; Seto, Yasuo; Okino, Akitoshi

    2017-01-01

    We developed a dual plasma desorption/ionization system using two plasmas for the semi-invasive analysis of compounds on heat-sensitive substrates such as skin. The first plasma was used for the desorption of the surface compounds, whereas the second was used for the ionization of the desorbed compounds. Using the two plasmas, each process can be optimized individually. A successful analysis of phenyl salicylate and 2-isopropylpyridine was achieved using the developed system. Furthermore, we showed that it was possible to detect the mass signals derived from a sample even at a distance 50 times greater than the distance from the position at which the samples were detached. In addition, to increase the intensity of the mass signal, 0%–0.02% (v/v) of hydrogen gas was added to the base gas generated in the ionizing plasma. We found that by optimizing the gas flow rate through the addition of a small amount of hydrogen gas, it was possible to obtain the intensity of the mass signal that was 45–824 times greater than that obtained without the addition of hydrogen gas. PMID:29234573

  20. Surface target-tracking guidance by self-organizing formation flight of fixed-wing UAV

    NASA Astrophysics Data System (ADS)

    Regina, N.; Zanzi, M.

    This paper presents a new concept of ground target surveillance based on a formation flight of two Unmanned Aerial Vehicles (UAVs) of fixed-wing type. Each UAV considered in this work has its own guidance law specifically designed for two different aims. A self organizing non-symmetric collaborative surveying scheme has been developed based on pursuers with different roles: the close-up-pursuer and the distance-pursuer. The close-up-pursuer behaves according to a guidance law which takes it to continually over-fly the target, also optimizing flight endurance. On the other hand, the distancepursuer behaves so as to circle around the target by flying at a certain distance and altitude from it; moreover, its motion ensures the maximum “ seeability” of the ground based target. In addition, the guidance law designed for the distance-pursuer also implements a collision avoidance feature in order to prevent possible risks of collision with the close-up-pursuer during the tracking maneuvers. The surveying scheme is non-symmetric in the sense that the collision avoidance feature is accomplished by a guidance law implemented only on one of the two pursuers; moreover, it is collaborative because the surveying is performed by different tasks of two UAVs and is self-organizing because, due to the collision avoidance feature, target tracking does not require pre-planned collision-risk-free trajectories but trajectories are generated in real time.

  1. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor network (WSN) into the emerging computing paradigm, e.g., cyber-physical social sensing (CPSS), has witnessed a growing interest, and WSN can serve as a social network while receiving more attention from the social computing research field. Then, the localization of sensor nodes has become an essential requirement for many applications over WSN. Meanwhile, the localization information of unknown nodes has strongly affected the performance of WSN. The received signal strength indication (RSSI) as a typical range-based algorithm for positioning sensor nodes in WSN could achieve accurate location with hardware saving, but is sensitive to environmental noises. Moreover, the original distance vector hop (DV-HOP) as an important range-free localization algorithm is simple, inexpensive and not related to the environment factors, but performs poorly when lacking anchor nodes. Motivated by these, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes to enhance the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM featured with a fast learning speed with a good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) on the basis of ELM-RCC is used to implement the optimization task for obtaining the location of unknown nodes. Since the RSSI may be influenced by the environmental noises and may bring estimation error, the RCC instead of the mean square error (MSE) estimation, which is sensitive to noises, is exploited in ELM. Hence, it may make the estimation more robust against outliers. Additionally, the least square estimation (LSE) in ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084

  2. Application of affinity propagation algorithm based on manifold distance for transformer PD pattern recognition

    NASA Astrophysics Data System (ADS)

    Wei, B. G.; Huo, K. X.; Yao, Z. F.; Lou, J.; Li, X. Y.

    2018-03-01

    It is one of the difficult problems encountered in the research of condition maintenance technology of transformers to recognize partial discharge (PD) pattern. According to the main physical characteristics of PD, three models of oil-paper insulation defects were set up in laboratory to study the PD of transformers, and phase resolved partial discharge (PRPD) was constructed. By using least square method, the grey-scale images of PRPD were constructed and features of each grey-scale image were 28 box dimensions and 28 information dimensions. Affinity propagation algorithm based on manifold distance (AP-MD) for transformers PD pattern recognition was established, and the data of box dimension and information dimension were clustered based on AP-MD. Study shows that clustering result of AP-MD is better than the results of affinity propagation (AP), k-means and fuzzy c-means algorithm (FCM). By choosing different k values of k-nearest neighbor, we find clustering accuracy of AP-MD falls when k value is larger or smaller, and the optimal k value depends on sample size.

  3. Uncertainty Evaluation and Appropriate Distribution for the RDHM in the Rockies

    NASA Astrophysics Data System (ADS)

    Kim, J.; Bastidas, L. A.; Clark, E. P.

    2010-12-01

    The problems that hydrologic models have in properly reproducing the processes involved in mountainous areas, and in particular the Rocky Mountains, are widely acknowledged. Herein, we present an application of the National Weather Service RDHM distributed model over the Durango River basin in Colorado. We focus primarily in the assessment of the model prediction uncertainty associated with the parameter estimation and the comparison of the model performance using parameters obtained with a priori estimation following the procedure of Koren et al., and those obtained via inverse modeling using a variety of Markov chain Monte Carlo based optimization algorithms. The model evaluation is based on traditional procedures as well as non-traditional ones based on the use of shape matching functions, which are more appropriate for the evaluation of distributed information (e.g. Hausdorff distance, earth movers distance). The variables used for the model performance evaluation are discharge (with internal nodes), snow cover and snow water equivalent. An attempt to establish the proper degree of distribution, for the Durango basin with the RDHM model, is also presented.

  4. Integration of geospatial multi-mode transportation Systems in Kuala Lumpur

    NASA Astrophysics Data System (ADS)

    Ismail, M. A.; Said, M. N.

    2014-06-01

    Public transportation serves people with mobility and accessibility to workplaces, health facilities, community resources, and recreational areas across the country. Development in the application of Geographical Information Systems (GIS) to transportation problems represents one of the most important areas of GIS-technology today. To show the importance of GIS network analysis, this paper highlights the determination of the optimal path between two or more destinations based on multi-mode concepts. The abstract connector is introduced in this research as an approach to integrate urban public transportation in Kuala Lumpur, Malaysia including facilities such as Light Rapid Transit (LRT), Keretapi Tanah Melayu (KTM) Komuter, Express Rail Link (ERL), KL Monorail, road driving as well as pedestrian modes into a single intelligent data model. To assist such analysis, ArcGIS's Network Analyst functions are used whereby the final output includes the total distance, total travelled time, directional maps produced to find the quickest, shortest paths, and closest facilities based on either time or distance impedance for multi-mode route analysis.

  5. Developing lignin-based bio-nanofibers by centrifugal spinning technique.

    PubMed

    Stojanovska, Elena; Kurtulus, Mustafa; Abdelgawad, Abdelrahman; Candan, Zeki; Kilic, Ali

    2018-07-01

    Lignin-based nanofibers were produced via centrifugal spinning from lignin-thermoplastic polyurethane polymer blends. The most suitable process parameters were chosen by optimization of the rotational speed, nozzle diameter and spinneret-to-collector distance using different blend ratios of the two polymers at different total polymer concentrations. The basic characteristics of polymer solutions were enlightened by their viscosity and surface tension. The morphology of the fibers produced was characterized by SEM, while their thermal properties by DSC and TG analysis. Multiply regression was used to determine the parameters that have higher impact on the fiber diameter. It was possible to obtain thermally stable lignin/polyurethane nanofibers with diameters below 500nm. From the aspect of spinnability, 1:1 lignin/TPU contents were shown to be more feasible. On the other side, the most suitable processing parameters were found to be angular velocity of 8500rpm for nozzles of 0.5mm diameter and working distance of 30cm. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Design evaluation of graphene nanoribbon nanoelectromechanical devices

    NASA Astrophysics Data System (ADS)

    Lam, Kai-Tak; Stephen Leo, Marie; Lee, Chengkuo; Liang, Gengchiau

    2011-07-01

    Computational studies on nanoelectromechanical switches based on bilayer graphene nanoribbons (BGNRs) with different designs are presented in this work. By varying the interlayer distance via electrostatic means, the conductance of the BGNR can be changed in order to achieve ON-states and OFF-states, thereby mimicking the function of a switch. Two actuator designs based on the modified capacitive parallel plate (CPP) model and the electrostatic repulsive force (ERF) model are discussed for different applications. Although the CPP design provides a simple electrostatic approach to changing the interlayer distance of the BGNR, their switching gate bias VTH strongly depends on the gate area, which poses a limitation on the size of the device. In addition, there exists a risk of device failure due to static fraction between the mobile and fixed electrodes. In contrast, the ERF design can circumvent both issues with a more complex structure. Finally, optimizations of the devices are carried out in order to provide insights into the design considerations of these nanoelectromechanical switches.

  7. [Optimization for MSW logistics of new Xicheng and new Dongcheng districts in Beijing based on the maximum capacity of transfer stations].

    PubMed

    Yuan, Jing; Li, Guo-xue; Zhang, Hong-yu; Luo, Yi-ming

    2013-09-01

    It is necessary to achieve the optimization for MSW logistics based on the new Xicheng (combining the former Xicheng and the former Xuanwu districts) and the new Dongcheng (combining the former Dongcheng and the former Chongwen districts) districts of Beijing. Based on the analysis of current MSW logistics system, transfer station's processing capacity and the terminal treatment facilities' conditions of the four former districts and other districts, a MSW logistics system was built by GIS methods considering transregional treatment. This article analyzes the MSW material balance of current and new logistics systems. Results show that the optimization scheme could reduce the MSW collection distance of the new Xicheng and the new Dongcheng by 9.3 x 10(5) km x a(-1), reduced by 10% compared with current logistics. Under the new logistics solution, considering transregional treatment, can reduce landfill treatment of untreated MSW about 28.3%. If the construction of three incineration plants finished based on the new logistics, the system's optimal ratio of incineration: biochemical treatment: landfill can reach 3.8 : 4.5 : 1.7 compared with 1 : 4.8 : 4.2, which is the ratio of current MSW logistics. The ratio of the amount of incineration: biochemical treatment: landfill approximately reach 4 : 3 : 3 which is the target for 2015. The research results are benefit in increasing MSW utilization and reduction rate of the new Dongcheng and Xicheng districts and nearby districts.

  8. Investigating energy-based pool structure selection in the structure ensemble modeling with experimental distance constraints: The example from a multidomain protein Pub1.

    PubMed

    Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan

    2018-05-01

    The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble-size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding on the ensemble results from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.

  9. Energy hyperspace for stacking interaction in AU/AU dinucleotide step: Dispersion-corrected density functional theory study.

    PubMed

    Mukherjee, Sanchita; Kailasam, Senthilkumar; Bansal, Manju; Bhattacharyya, Dhananjay

    2014-01-01

    Double helical structures of DNA and RNA are mostly determined by base pair stacking interactions, which give them the base sequence-directed features, such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fiber diffraction geometries or optimized structure using ab initio calculations lacking variation in geometry to comment on rather unusual large roll values observed in AU/AU base pair step in crystal structures of RNA double helices. We have generated stacking energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll, and slide, which were chosen via statistical analysis as maximally sequence dependent. Corresponding energy contours were constructed by several quantum chemical methods including dispersion corrections. This analysis established the most suitable methods for stacked base pair systems despite the limitation imparted by number of atom in a base pair step to employ very high level of theory. All the methods predict negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric clash based rule. Successive base pairs in RNA are always linked by sugar-phosphate backbone with C3'-endo sugars and this demands C1'-C1' distance of about 5.4 Å along the chains. Consideration of an energy penalty term for deviation of C1'-C1' distance from the mean value, to the recent DFT-D functionals, specifically ωB97X-D appears to predict reliable energy contour for AU/AU step. Such distance-based penalty improves energy contours for the other purine-pyrimidine sequences also. © 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014. Copyright © 2013 Wiley Periodicals, Inc.

  10. OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE

    NASA Technical Reports Server (NTRS)

    Lee, H.

    1994-01-01

    For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.

  11. The Flow-field From Galaxy Groups In 2MASS

    NASA Astrophysics Data System (ADS)

    Crook, Aidan; Huchra, J.; Macri, L.; Masters, K.; Jarrett, T.

    2011-01-01

    We present the first model of a flow-field in the nearby Universe (cz < 12,000 km/s) constructed from groups of galaxies identified in an all-sky flux-limited survey. The Two Micron All-Sky Redshift Survey (2MRS), upon which the model is based, represents the most complete survey of its class and, with near-IR fluxes, provides the optimal method for tracing baryonic matter in the nearby Universe. Peculiar velocities are reconstructed self-consistently with a density-field based upon groups identified in the 2MRS Ks<11.75 catalog. The model predicts infall toward Virgo, Perseus-Pisces, Hydra-Centaurus, Norma, Coma, Shapley and Hercules, and most notably predicts backside-infall into the Norma Cluster. We discuss the application of the model as a predictor of galaxy distances using only angular position and redshift measurements. By calibrating the model using measured distances to galaxies inside 3000 km/s, we show that, for a randomly-sampled 2MRS galaxy, improvement in the estimated distance over the application of Hubble's law is expected to be 30%, and considerably better in the proximity of clusters. We test the model using distance estimates from the SFI++ sample, and find evidence for improvement over the application of Hubble's law to galaxies inside 4000 km/s, although the performance varies depending on the location of the target. This work has been supported by NSF grant AST 0406906 and the Massachusetts Institute of Technology Bruno Rossi and Whiteman Fellowships.

  12. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.

  13. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving MOIPPS problem. In this paper, an optimization algorithm for solving MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, total tardiness, etc. A fixed sized external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-matric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.

  14. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  15. Multidimensional Risk Analysis: MRISK

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond; Brown, Douglas; O'Shea, Sarah Beth; Reith, William; Rabulan, Jennifer; Melrose, Graeme

    2015-01-01

    Multidimensional Risk (MRISK) calculates the combined multidimensional score using Mahalanobis distance. MRISK accounts for covariance between consequence dimensions, which de-conflicts the interdependencies of consequence dimensions, providing a clearer depiction of risks. Additionally, in the event the dimensions are not correlated, Mahalanobis distance reduces to Euclidean distance normalized by the variance and, therefore, represents the most flexible and optimal method to combine dimensions. MRISK is currently being used in NASA's Environmentally Responsible Aviation (ERA) project o assess risk and prioritize scarce resources.

  16. Simultaneous minimization of leaf travel distance and tongue-and-groove effect for segmental intensity-modulated radiation therapy.

    PubMed

    Dai, Jianrong; Que, William

    2004-12-07

    This paper introduces a method to simultaneously minimize the leaf travel distance and the tongue-and-groove effect for IMRT leaf sequences to be delivered in segmental mode. The basic idea is to add a large enough number of openings through cutting or splitting existing openings for those leaf pairs with openings fewer than the number of segments so that all leaf pairs have the same number of openings. The cutting positions are optimally determined with a simulated annealing technique called adaptive simulated annealing. The optimization goal is set to minimize the weighted summation of the leaf travel distance and tongue-and-groove effect. Its performance was evaluated with 19 beams from three clinical cases; one brain, one head-and-neck and one prostate case. The results show that it can reduce the leaf travel distance and (or) tongue-and-groove effect; the reduction of the leaf travel distance reaches its maximum of about 50% when minimized alone; the reduction of the tongue-and-groove reaches its maximum of about 70% when minimized alone. The maximum reduction in the leaf travel distance translates to a 1 to 2 min reduction in treatment delivery time per fraction, depending on leaf speed. If the method is implemented clinically, it could result in significant savings in treatment delivery time, and also result in significant reduction in the wear-and-tear of MLC mechanics.

  17. Optimization of vertical and lateral distances between target and substrate in deposition process of CuGaSe 2 thin films using one-step sputtering

    DOE PAGES

    Park, Jae -Cheol; Al-Jassim, Mowafak; Kim, Tae -Won

    2017-02-01

    Here, copper gallium selenide (CGS) thin films were fabricated using a combinatorial one-step sputtering process without an additional selenization process. The sample libraries as a function of vertical and lateral distance from the sputtering target were synthesized on a single soda-lime glass substrate at the substrate temperature of 500 °C employing a stoichiometric CGS single target. As we increased the vertical distance between the target and substrate, the CGS thin films had more stable and uniform characteristics in structural and chemical properties. Under the optimized conditions of the vertical distance (150 mm), the CGS thin films showed densely packed grainsmore » and large grain sizes up to 1 μm in scale with decreasing lateral distances. The composition ratio of Ga/[Cu+Ga] and Se/[Cu+Ga] showed 0.50 and 0.93, respectively, in nearly the same composition as the sputtering target. X-ray diffraction and Raman spectroscopy revealed that the CGS thin films had a pure chalcopyrite phase without any secondary phases such as Cu–Se or ordered vacancy compounds, respectively. In addition, we found that the optical bandgap energies of the CGS thin films are shifted from 1.650 to 1.664 eV with decreasing lateral distance, showing a near-stoichiometric region with chalcopyrite characteristics.« less

  18. Impact of imaging approach on radiation dose and associated cancer risk in children undergoing cardiac catheterization

    PubMed Central

    Einstein, Andrew J.; Januzis, Natalie; Nguyen, Giao; Li, Jennifer S.; Fleming, Gregory A.; Yoshizumi, Terry K.

    2016-01-01

    Objectives To quantify the impact of image optimization on absorbed radiation dose and associated risk in children undergoing cardiac catheterization. Background Various imaging and fluoroscopy system technical parameters including camera magnification, source-to-image distance, collimation, anti-scatter grids, beam quality, and pulse rates, all affect radiation dose but have not been well studied in younger children. Methods We used anthropomorphic phantoms (ages: newborn and 5-years-old) to measure surface radiation exposure from various imaging approaches and estimated absorbed organ doses and effective doses (ED) using Monte Carlo simulations. Models developed in the National Academies’ Biological Effects of Ionizing Radiation VII report were used to compare an imaging protocol optimized for dose reduction versus suboptimal imaging (+20cm source-to-image-distance, +1 magnification setting, no collimation) on lifetime attributable risk (LAR) of cancer. Results For the newborn and 5-year-old phantoms respectively ED changes were as follows: +157% and +232% for an increase from 6-inch to 10-inch camera magnification; +61% and +59% for a 20cm increase in source-to-image-distance; −42% and −48% with addition of 1-inch periphery collimation; −31% and −46% with removal of the anti-scatter grid. Compared to an optimized protocol, suboptimal imaging increased ED by 2.75-fold (newborn) and 4-fold (5-year-old). Estimated cancer LAR from 30-minutes of postero-anterior fluoroscopy using optimized versus sub-optimal imaging respectively was: 0.42% versus 1.23% (newborn female), 0.20% vs 0.53% (newborn male), 0.47% versus 1.70% (5-year-old female) and 0.16% vs 0.69% (5-year-old male). Conclusions Radiation-related risks to children undergoing cardiac catheterization can be substantial but are markedly reduced with an optimized imaging approach. PMID:27315598

  19. Joint learning of labels and distance metric.

    PubMed

    Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng

    2010-06-01

    Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.

  20. Optimal base station placement for wireless sensor networks with successive interference cancellation.

    PubMed

    Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun

    2015-01-14

    We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) to improve throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it enables us to identify a necessary condition for SIC on distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region of the base station into small pieces and choose a point within each piece for base station placement. The point with the largest throughput is identified as the solution. The complexity of this algorithm is polynomial. Simulation results show that this algorithm can achieve about 25% improvement compared with the case that the base station is placed at the center of the network coverage area when using SIC.

  1. Optimization of propagation-based x-ray phase-contrast tomography for breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Baran, P.; Pacile, S.; Nesterets, Y. I.; Mayo, S. C.; Dullin, C.; Dreossi, D.; Arfelli, F.; Thompson, D.; Lockie, D.; McCormack, M.; Taba, S. T.; Brun, F.; Pinamonti, M.; Nickson, C.; Hall, C.; Dimmock, M.; Zanconati, F.; Cholewa, M.; Quiney, H.; Brennan, P. C.; Tromba, G.; Gureyev, T. E.

    2017-03-01

    The aim of this study was to optimise the experimental protocol and data analysis for in-vivo breast cancer x-ray imaging. Results are presented of the experiment at the SYRMEP beamline of Elettra Synchrotron using the propagation-based phase-contrast mammographic tomography method, which incorporates not only absorption, but also x-ray phase information. In this study the images of breast tissue samples, of a size corresponding to a full human breast, with radiologically acceptable x-ray doses were obtained, and the degree of improvement of the image quality (from the diagnostic point of view) achievable using propagation-based phase-contrast image acquisition protocols with proper incorporation of x-ray phase retrieval into the reconstruction pipeline was investigated. Parameters such as the x-ray energy, sample-to-detector distance and data processing methods were tested, evaluated and optimized with respect to the estimated diagnostic value using a mastectomy sample with a malignant lesion. The results of quantitative evaluation of images were obtained by means of radiological assessment carried out by 13 experienced specialists. A comparative analysis was performed between the x-ray and the histological images of the specimen. The results of the analysis indicate that, within the investigated range of parameters, both the objective image quality characteristics and the subjective radiological scores of propagation-based phase-contrast images of breast tissues monotonically increase with the strength of phase contrast which in turn is directly proportional to the product of the radiation wavelength and the sample-to-detector distance. The outcomes of this study serve to define the practical imaging conditions and the CT reconstruction procedures appropriate for low-dose phase-contrast mammographic imaging of live patients at specially designed synchrotron beamlines.

  2. Parallel algorithms for the molecular conformation problem

    NASA Astrophysics Data System (ADS)

    Rajan, Kumar

    Given a set of objects, and some of the pairwise distances between them, the problem of identifying the positions of the objects in the Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In the case of molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds is referred to as bound-smoothing . One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality---the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers a quadruple of atoms at a time, and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which of the quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to enhance the applicability of this process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind. We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon X/PS, and apply it to real-life molecules. Our results show that with this parallel algorithm, tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs, and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a non-linear optimization problem, and a global optimization algorithm is then used to solve the problem. Here we present a parallel, deterministic algorithm for the optimization problem based on Interval Analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun Ultra-Sparc workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured, pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods can be applied.

  3. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrated a diaphragm de-coupler and the spoiler plate is designed and developed to isolate engine and the transmission from the chassis in a wide frequency range and overcome the stiffness in high frequency. A lumped parameter model of the MR engine mount in single degree of freedom system is further developed based on bond graph method to predict the performance of the MR engine mount accurately. The optimization mathematical model is established to minimize the total of force transmissibility over several frequency ranges addressed. In this mathematical model, the lumped parameters are considered as design variables. The maximum of force transmissibility and the corresponding frequency in low frequency range as well as individual lumped parameter are limited as constraints. The multiple interval sensitivity analysis method is developed to select the optimized variables and improve the efficiency of optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between the individual in Pareto set and the individual in possible set in engineering is defined and calculated. A set of real design parameters is thus obtained by the internal relationship between the optimal lumped parameters and practical design parameters for the MR engine mount. The program flowchart for the improved non-dominated sorting genetic algorithm (NSGA-II) is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total of force transmissibility over several frequency ranges addressed.

  4. Ultrasound comparison of external and internal neck anatomy with the LMA Unique.

    PubMed

    Lee, Steven M; Wojtczak, Jacek A; Cattano, Davide

    2017-12-01

    Internal neck anatomy landmarks and their relation after placement of an extraglottic airway devices have not been studied extensively by the use of ultrasound. Based on our group experience with external landmarks as well as internal landmarks evaluation with other techniques, we aimed use ultrasound to analyze the internal neck anatomy landmarks and the related changes due to the placement of the Laryngeal Mask Airway Unique. Observational pilot investigation. Non-obese adult patients with no evidence of airway anomalies, were recruited. External neck landmarks were measured based on a validated and standardized method by tape. Eight internal anatomical landmarks, reciprocal by the investigational hypothesis to the external landmarks, were also measured by ultrasound guidance. The internal landmarks were re-measured after optimal placement and inflation of the extraglottic airway devices cuff Laryngeal Mask Airway Unique. Six subjects were recruited. Ultrasound measurements of hyoid-mental distance, thyroid-cricoid distance, thyroid height, and thyroid width were found to be significantly ( p < 0.05) overestimated using a tape measure. Sagittal neck landmark distances such as thyroid height, sternal-mental distance, and thyroid-cricoid distance significantly decreased after placement of the Laryngeal Mask Airway Unique. The laryngeal mask airway Unique resulted in significant changes in internal neck anatomy. The induced changes and respective specific internal neck anatomy landmarks could help to design devices that would modify their shape accordingly to areas of greatest displacement. Also, while external neck landmark measurements overestimate their respective internal neck landmarks, as we previously reported, the concordance of each measurement and their respective conversion factor could continue to be of help in sizing extraglottic airway devices. Due to the pilot nature of the study, more investigations are warranted.

  5. An orbital angular momentum radio communication system optimized by intensity controlled masks effectively: Theoretical design and experimental verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Xinlu; Applied Optics Beijing Area Major Laboratory, Department of Physics, Beijing Normal University, Beijing 100875; Huang, Shanguo, E-mail: shghuang@bupt.edu.cn

    A system of generating and receiving orbital angular momentum (OAM) radio beams, which are collectively formed by two circular array antennas (CAAs) and effectively optimized by two intensity controlled masks, is proposed and experimentally investigated. The scheme is effective in blocking of the unwanted OAM modes and enhancing the power of received radio signals, which results in the capacity gain of system and extended transmission distance of the OAM radio beams. The operation principle of the intensity controlled masks, which can be regarded as both collimator and filter, is feasible and simple to realize. Numerical simulations of intensity and phasemore » distributions at each key cross-sectional plane of the radio beams demonstrate the collimated results. The experimental results match well with the theoretical analysis and the receive distance of the OAM radio beam at radio frequency (RF) 20 GHz is extended up to 200 times of the wavelength of the RF signals, the measured distance is 5 times of the original measured distance. The presented proof-of-concept experiment demonstrates the feasibility of the system.« less

  6. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction

    NASA Astrophysics Data System (ADS)

    Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern opt that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pair of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived opt. Informal listening tests have shown that the opt worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as ``Position-Independent (PI) stereo.''

  7. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction.

    PubMed

    Ródenas, Josep A; Aarts, Ronald M; Janssen, A J E M

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern l(opt) that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pair of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived l(opt). Informal listening tests have shown that the l(opt) worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."

  8. Feature selection gait-based gender classification under different circumstances

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes a gender classification based on human gait features and investigates the problem of two variations: clothing (wearing coats) and carrying bag condition as addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying wavelet transform. Three different sets of feature are proposed in this method. First, Spatio-temporal distance that is dealing with the distance of different parts of the human body (like feet, knees, hand, Human Height and shoulder) during one gait cycle. The second and third feature sets are constructed from approximation and non-approximation coefficient of human body respectively. To extract these two sets of feature we divided the human body into two parts, upper and lower body part, based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize their discriminating significance. Finally k-Nearest Neighbor is applied as a classification method. Experimental results demonstrate that our approach is providing more realistic scenario and relatively better performance compared with the existing approaches.

  9. Preliminary evaluation of the diffraction behind the PROBA 3/ASPIICS optimized occulter

    NASA Astrophysics Data System (ADS)

    Baccani, Cristian; Landini, Federico; Romoli, Marco; Taccola, Matteo; Schweitzer, Hagen; Fineschi, Silvano; Bemporad, Alessandro; Loreggia, Davide; Capobianco, Gerardo; Pancrazzi, Maurizio; Focardi, Mauro; Noce, Vladimiro; Thizy, Cédric; Servaye, Jean-Sébastien; Renotte, Etienne

    2016-07-01

    PROBA-3 is a technological mission of the European Space Agency (ESA), devoted to the in-orbit demon- stration of formation flying (FF) techniques and technologies. ASPIICS is an externally occulted coronagraph approved by ESA as payload in the framework of the PROBA-3 mission and is currently in its C/D phase. FF offers a solution to investigate the solar corona close the solar limb using a two-component space system: the external occulter on one spacecraft and the optical instrument on the other, separated by a large distance and kept in strict alignment. ASPIICS is characterized by an inter-satellite distance of ˜144 m and an external occulter diameter of 1.42 m. The stray light due to the diffraction by the external occulter edge is always the most critical offender to a coronagraph performance: the designer work is focused on reducing the stray light and carefully evaluating the residuals. In order to match this goal, external occulters are usually characterized by an optimized shape along the optical axis. Part of the stray light evaluation process is based on the diffraction calculation with the optimized occulter and with the whole solar disk as a source. We used the field tracing software VirtualLabTM Fusion by Wyrowski Photonics [1] to simulate the diffraction. As a first approach and in order to evaluate the software, we simulated linear occulters, through as portions of the flight occulter, in order to make a direct comparison with the Phase-A measurements [2].

  10. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffer from complex interactions and trade-off among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive the optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II could locate the Pareto frontier in non-dominated region efficiently due to the directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and single objective genetic algorithm (GA). Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II could locate the non-dominated solutions faster and provide better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) The multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, the WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.

  11. Cross-layer Energy Optimization Under Image Quality Constraints for Wireless Image Transmissions.

    PubMed

    Yang, Na; Demirkol, Ilker; Heinzelman, Wendi

    2012-01-01

    Wireless image transmission is critical in many applications, such as surveillance and environment monitoring. In order to make the best use of the limited energy of the battery-operated cameras, while satisfying the application-level image quality constraints, cross-layer design is critical. In this paper, we develop an image transmission model that allows the application layer (e.g., the user) to specify an image quality constraint, and optimizes the lower layer parameters of transmit power and packet length, to minimize the energy dissipation in image transmission over a given distance. The effectiveness of this approach is evaluated by applying the proposed energy optimization to a reference ZigBee system and a WiFi system, and also by comparing to an energy optimization study that does not consider any image quality constraint. Evaluations show that our scheme outperforms the default settings of the investigated commercial devices and saves a significant amount of energy at middle-to-large transmission distances.

  12. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963

  13. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.

  14. Background-reducing X-ray multilayer mirror

    DOEpatents

    Bloch, Jeffrey J.; Roussel-Dupre', Diane; Smith, Barham W.

    1992-01-01

    Background-reducing x-ray multilayer mirror. A multiple-layer "wavetrap" deposited over the surface of a layered, synthetic-microstructure soft x-ray mirror optimized for reflectivity at chosen wavelengths is disclosed for reducing the reflectivity of undesired, longer wavelength incident radiation incident thereon. In three separate mirror designs employing an alternating molybdenum and silicon layered, mirrored structure overlaid by two layers of a molybdenum/silicon pair anti-reflection coating, reflectivities of near normal incidence 133, 171, and 186 .ANG. wavelengths have been optimized, while that at 304 .ANG. has been minimized. The optimization process involves the choice of materials, the composition of the layer/pairs as well as the number thereof, and the distance therebetween for the mirror, and the simultaneous choice of materials, the composition of the layer/pairs, and their number and distance for the "wavetrap."

  15. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.

  16. Evaluation of Effective Factors on Travel Time in Optimization of Bus Stops Placement Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Bargegol, Iraj; Ghorbanzadeh, Mahyar; Ghasedi, Meisam; Rastbod, Mohammad

    2017-10-01

    In congested cities, locating and proper designing of bus stops according to the unequal distribution of passengers is crucial issue economically and functionally, since this subject plays an important role in the use of bus system by passengers. Location of bus stops is a complicated subject; by reducing distances between stops, walking time decreases, but the total travel time may increase. In this paper, a specified corridor in the city of Rasht in north of Iran is studied. Firstly, a new formula is presented to calculate the travel time, by which the number of stops and consequently, the travel time can be optimized. An intended corridor with specified number of stops and distances between them is addressed, the related formulas to travel time are created, and its travel time is calculated. Then the corridor is modelled using a meta-heuristic method in order that the placement and the optimal distances of bus stops for that are determined. It was found that alighting and boarding time along with bus capacity are the most effective factors affecting travel time. Consequently, it is better to have more concentration on indicated factors for improving the efficiency of bus system.

  17. Multi-modal and targeted imaging improves automated mid-brain segmentation

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.

    2017-02-01

    The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. In order to manually trace these structures, a combination of high-resolution and specialized sequences at 7T are used, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T such as F-GATIR, and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0mm was achieved using a combination of T1 and an optimized inversion recovery imaging sequences. In the internal and external globus pallidus a Dice over 0.75 and a mean surface distance less than 1.2mm was achieved using a combination of T1 and FGATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus a Dice coefficient of over 0.6 and a mean surface distance of less than 1.0mm was achieved using the optimized inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together produced significantly improved segmentation results than any individual modality (p<0.05 wilcox sign-rank test).

  18. Splash-cup plants accelerate raindrops to disperse seeds

    PubMed Central

    Amador, Guillermo J.; Yamada, Yasukuni; McCurley, Matthew; Hu, David L.

    2013-01-01

    The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6–18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle. PMID:23235266

  19. On Location Estimation Technique Based of the Time of Flight in Low-power Wireless Systems

    NASA Astrophysics Data System (ADS)

    Botta, Miroslav; Simek, Milan; Krajsa, Ondrej; Cervenka, Vladimir; Pal, Tamas

    2015-04-01

    This study deals with the distance estimation issue in low-power wireless systems being usually used for sensor networking and interconnecting the Internet of Things. There is an effort to locate or track these sensor entities for different needs the radio signal time of flight principle from the theoretical and practical side of application research is evaluated. Since these sensor devices are mainly targeted for low power consumption appliances, there is always need for optimization of any aspects needed for regular sensor operation. For the distance estimation we benefit from IEEE 802.15.4a technology, which offers the precise ranging capabilities. There is no need for additional hardware to be used for the ranging task and all fundamental measurements are acquired within the 15.4a standard compliant hardware in the real environment. The proposed work examines the problems and the solutions for implementation of distance estimation algorithms for WSN devices. The main contribution of the article is seen in this real testbed evaluation of the ranging technology.

  20. Protein-protein interaction site predictions with minimum covariance determinant and Mahalanobis distance.

    PubMed

    Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng

    2017-11-21

    Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites that limits their prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces. Such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method to refine the training data to build a predictor with better performance, utilizing its ability of removing outliers. In order to predict test data in practice, a method based on Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-validation and independent test, after the Mahalanobis distance screening, our method achieved higher performance according to Matthews correlation coefficient (MCC), although only a part of test data could be predicted. These results indicate that data refinement is an efficient approach to improve protein-protein interaction site prediction. By further optimizing our method, it is hopeful to develop predictors of better performance and wide range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance

    PubMed Central

    Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it could extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve the health level of people. Considering data scale and time shifts of time series, in this paper, we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. For recruiting Single-Pass and Online patterns, our algorithms could handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms select DTW to measure distance of pair-wise time series and encourage higher clustering accuracy because DTW could determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches could yield high quality clusters and were better than all the competitors in terms of clustering accuracy. PMID:29795600

  2. A novel heterogeneous training sample selection method on space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The performance of ground target detection about space-time adaptive processing (STAP) decreases when non-homogeneity of clutter power is caused because of training samples contaminated by target-like signals. In order to solve this problem, a novel nonhomogeneous training sample selection method based on sample similarity is proposed, which converts the training sample selection into a convex optimization problem. Firstly, the existing deficiencies on the sample selection using generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating mean-hausdorff distance so as to reject the contaminated training samples. Thirdly, cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean-hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted in order of value and the training samples which have the bigger value are selective preference to realize the reduced-dimension. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.

  3. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.

    PubMed

    Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it could extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve the health level of people. Considering data scale and time shifts of time series, in this paper, we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. For recruiting Single-Pass and Online patterns, our algorithms could handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms select DTW to measure distance of pair-wise time series and encourage higher clustering accuracy because DTW could determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches could yield high quality clusters and were better than all the competitors in terms of clustering accuracy.

  4. Determinants of seed removal distance by scatter-hoarding rodents in deciduous forests.

    PubMed

    Moore, Jeffrey E; McEuen, Amy B; Swihart, Robert K; Contreras, Thomas A; Steele, Michael A

    2007-10-01

    Scatter-hoarding rodents should space food caches to maximize cache recovery rate (to minimize loss to pilferers) relative to the energetic cost of carrying food items greater distances. Optimization models of cache spacing make two predictions. First, spacing of caches should be greater for food items with greater energy content. Second, the mean distance between caches should increase with food abundance. However, the latter prediction fails to account for the effect of food abundance on the behavior of potential pilferers or on the ability of caching individuals to acquire food by means other than recovering their own caches. When considering these factors, shorter cache distances may be predicted in conditions of higher food abundance. We predicted that seed caching distances would be greater for food items of higher energy content and during lower ambient food abundance and that the effect of seed type on cache distance variation would be lower during higher food abundance. We recorded distances moved for 8636 seeds of five seed types at 15 locations in three forested sites in Pennsylvania, USA, and 29 forest fragments in Indiana, U.S.A., across five different years. Seed production was poor in three years and high in two years. Consistent with previous studies, seeds with greater energy content were moved farther than less profitable food items. Seeds were dispersed less far in seed-rich years than in seed-poor years, contrary to predictions of conventional models. Interactions were important, with seed type effects more evident in seed-poor years. These results suggest that, when food is superabundant, optimal cache distances are more strongly determined by minimizing energy cost of caching than by minimizing pilfering rates and that cache loss rates may be more strongly density-dependent in times of low seed abundance.

  5. Wavelet optimization for content-based image retrieval in medical databases.

    PubMed

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.

  6. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model

    PubMed Central

    Rajavel, Rajkumar; Thangarathinam, Mala

    2015-01-01

    Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to the misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. For some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing different types of negotiation conflict that occurs during negotiation process as a multistage Markov decision problem. At each stage of negotiation process, the proposed decision model generates the heuristic decision based on the past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework. PMID:26543899

  7. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model.

    PubMed

    Rajavel, Rajkumar; Thangarathinam, Mala

    2015-01-01

    Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to the misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. For some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing different types of negotiation conflict that occurs during negotiation process as a multistage Markov decision problem. At each stage of negotiation process, the proposed decision model generates the heuristic decision based on the past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework.

  8. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport.

    PubMed

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L; Kikinis, Ron; Tannenbaum, Allen

    2008-09-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A . Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets.

  9. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport

    PubMed Central

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M.; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L.; Kikinis, Ron; Tannenbaum, Allen

    2017-01-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets. PMID:28626844

  10. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    PubMed Central

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters. Moreover, different transport methods have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption. This practice does not abide by energy reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method for solving the model and preparing calculation software, we can ensure that the solution process is quick and efficient. Using established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline now has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410

  11. Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds- averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.

  12. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select the process parameters that would be of maximum strength, minimum wear and friction. The friction and wear are serious problems in most of the industries which are influenced by the working set of parameters, oxidation characteristics and mechanism involved in formation of wear. The experimental input parameters such as sliding distance, applied load, and temperature are utilized in finding out the optimized solution for achieving the desired output responses such as coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of a novel method, Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) based on an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through desirability approach in RSM are compared with that of the optimized solution obtained through NSGA-II. The results conclude that proposed evolutionary technique is much effective and faster than the desirability approach.

  13. Incremental social learning in particle swarms.

    PubMed

    de Oca, Marco A Montes; Stutzle, Thomas; Van den Enden, Ken; Dorigo, Marco

    2011-04-01

    Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful to improve the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), which is a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning on PSO algorithms by running IPSO and IPSOLS on problems with different fitness distance correlations.

  14. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations.

    PubMed

    Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike

    2017-01-01

    During the recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on some particular assumptions associated with the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and the Ursem's hill-valley function in order to develop a new tool for multimodal optimization, which does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance to the center of fitter subpopulations and the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called the covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed which assumes a rough estimation of the desired/expected number of minima available. Performance sensitivity to the accuracy of this estimation is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.

  15. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present the variational method for geometric contours which helps the level set function remain close to the sign distance function, therefor it remove the need of expensive re-initialization procedure and thus, level set method is applied on magnetic resonance images (MRI) to track the irregularities in them as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and various abnormalities. It favors the patient with more speedy and decisive disease controlling with lesser side effects. The geometrical shape, the tumor's size and tissue's abnormal growth can be calculated by the segmentation of that particular image. It is still a great challenge for the researchers to tackle with an automatic segmentation in the medical imaging. Based on the texture analysis, different images are processed by optimization of level set segmentation. Traditionally, optimization was manual for every image where each parameter is selected one after another. By applying fuzzy logic, the segmentation of image is correlated based on texture features, to make it automatic and more effective. There is no initialization of parameters and it works like an intelligent system. It segments the different MRI images without tuning the level set parameters and give optimized results for all MRI's.

  16. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  17. [94 km Brillouin distributed optical fiber sensors based on ultra-long fiber ring laser pumping].

    PubMed

    Yuan, Cheng-Xu; Wang, Zi-Nan; Jia, Xin-Hong; Li, Jin; Yan, Xiao-Dong; Cui, An-Bin

    2014-05-01

    A novel optical amplification configuration based on ultra-long fiber laser with a ring cavity was proposed and applied to Brillouin optical time-domain analysis (BOTDA) sensing system, in order to extend the measurement distance significantly. The parameters used in the experiment were optimized, considering the main limitations of the setup, such as depletion, self-phase modulation (SPM) and pump-signal relative intensity noise (RIN) transfer. Through analyzing Brillouin gain spectrum, we demonstrated distributed sensing over 94 km of standard single-mode fiber with 3 meter spatial resolution and strain/temperature accuracy of 28 /1. 4 degree C.

  18. Identification of terrain cover using the optimum polarimetric classifier

    NASA Technical Reports Server (NTRS)

    Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.

  19. Modeling protein conformational changes by iterative fitting of distance constraints using reoriented normal modes.

    PubMed

    Zheng, Wenjun; Brooks, Bernard R

    2006-06-15

    Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multisteps search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints and simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reorientated to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved a near-optimal performance in almost all cases, and in many cases the final structural models lie within root mean square deviation of 1 approximately 2 angstroms from the native end state structures.

  20. Safety of LigaSure in recurrent laryngeal nerve dissection-porcine model using continuous monitoring.

    PubMed

    Dionigi, Gianlorenzo; Chiang, Feng-Yu; Kim, Hoon Yub; Randolph, Gregory W; Mangano, Alberto; Chang, Pi-Ying; Lu, I-Cheng; Lin, Yi-Chu; Chen, Hui-Chun; Wu, Che-Wei

    2017-07-01

    This study investigated recurrent laryngeal nerve (RLN) real-time electromyography (EMG) data to define optimal safety parameters of the LigaSure Small Jaw (LSJ) instrument during thyroidectomy. Prospective animal model. Dynamic EMG tracings were recorded from 32 RLNs (16 piglets) during various applications of LSJ around using continuous electrophysiologic monitoring. At varying distances from the RLN, the LSJ was activated (activation study). The LSJ was also applied to the RLN at timed intervals after activation and after a cooling maneuver through placement on the sternocleidomastoid muscle (cooling study). In the activation study, there was no adverse EMG event at 2 to 5 mm distance (16 RLNs, 96 tests). In the cooling study, there was no adverse EMG event after 2-second cooling time (16 RLNs, 96 tests) or after the LSJ cooling maneuver on the surrounding muscle before reaching the RLNs (8 RLNs, 24 tests). Based on EMG functional assessment, the safe distance for LSJ activation was 2 mm. Further LSJ-RLN contact was safe if the LSJ was cooled for more than 2 seconds or cooled by touch muscle maneuver. The LSJ should be used with these distance and time parameters in mind to avoid RLN injury. N/A. Laryngoscope, 127:1724-1729, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  1. 3D finite element modeling of epiretinal stimulation: Impact of prosthetic electrode size and distance from the retina.

    PubMed

    Sui, Xiaohong; Huang, Yu; Feng, Fuchen; Huang, Chenhui; Chan, Leanne Lai Hang; Wang, Guoxing

    2015-05-01

    A novel 3-dimensional (3D) finite element model was established to systematically investigate the impact of the diameter (Φ) of disc electrodes and the electrode-to-retina distance on the effectiveness of stimulation. The 3D finite element model was established based on a disc platinum stimulating electrode and a 6-layered retinal structure. The ground electrode was placed in the extraocular space in direct attachment with sclera and treated as a distant return electrode. An established criterion of electric-field strength of 1000 Vm-1 was adopted as the activation threshold for RGCs. The threshold current (TC) increased linearly with increasing Φ and electrode-to-retina distance and remained almost unchanged with further increases in diameter. However, the threshold charge density (TCD) increased dramatically with decreasing electrode diameter. TCD exceeded the electrode safety limit for an electrode diameter of 50 µm at an electrode-to-retina distance of 50 to 200 μm. The electric field distributions illustrated that smaller electrode diameters and shorter electrode-to-retina distances were preferred due to more localized excitation of RGC area under stimulation of different threshold currents in terms of varied electrode size and electrode-to-retina distances. Under the condition of same-amplitude current stimulation, a large electrode exhibited an improved potential spatial selectivity at large electrode-to-retina distances. Modeling results were consistent with those reported in animal electrophysiological experiments and clinical trials, validating the 3D finite element model of epiretinal stimulation. The computational model proved to be useful in optimizing the design of an epiretinal stimulating electrode for prosthesis.

  2. On Computing Breakpoint Distances for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Moret, Bernard M E

    2017-06-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of its higher level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided last year a solution for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer-linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignment, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.

  3. Movement of foraging Tundra Swans explained by spatial pattern in cryptic food densities.

    PubMed

    Klaassen, Raymond H G; Nolet, Bart A; Bankert, Daniëlle

    2006-09-01

    We tested whether Tundra Swans use information on the spatial distribution of cryptic food items (below ground Sago pondweed tubers) to shape their movement paths. In a continuous environment, swans create their own food patches by digging craters, which they exploit in several feeding bouts. Series of short (<1 m) intra-patch movements alternate with longer inter-patch movements (>1 m). Tuber biomass densities showed a positive spatial auto-correlation at a short distance (<3 m), but not at a larger distance (3-8 m). Based on the spatial pattern of the food distribution (which is assumed to be pre-harvest information for the swan) and the energy costs and benefits for different food densities at various distances, we calculated the optimal length of an inter-patch movement. A swan that moves to the patch with the highest gain rate was predicted to move to the adjacent patch (at 1 m) if the food density in the current patch had been high (>25 g/m2) and to a more distant patch (at 7-8 m) if the food density in the current patch had been low (<25 g/m2). This prediction was tested by measuring the response of swans to manipulated tuber densities. In accordance with our predictions, swans moved a long distance (>3 m) from a low-density patch and a short distance (<3 m) from a high-density patch. The quantitative agreement between prediction and observation was greater for swans feeding in pairs than for solitary swans. The result of this movement strategy is that swans visit high-density patches at a higher frequency than on offer and, consequently, achieve a 38% higher long-term gain rate. Swans also take advantage of spatial variance in food abundance by regulating the time in patches, staying longer and consuming more food from rich than from poor patches. We can conclude that the shape of the foraging path is a reflection of the spatial pattern in the distribution of tuber densities and can be understood from an optimal foraging perspective.

  4. Retention of contaminants Cd and Hg adsorbed and intercalated in aluminosilicate clays: A first principles study

    NASA Astrophysics Data System (ADS)

    Crasto de Lima, F. D.; Miwa, R. H.; Miranda, Caetano R.

    2017-11-01

    Layered clay materials have been used to incorporate transition metal (TM) contaminants. Based on first-principles calculations, we have examined the energetic stability and the electronic properties due to the incorporation of Cd and Hg in layered clay materials, kaolinite (KAO) and pyrophyllite (PYR). The TM can be (i) adsorbed on the clay surface as well as (ii) intercalated between the clay layers. For the intercalated case, the contaminant incorporation rate can be optimized by controlling the interlayer spacing of the clay, namely, pillared clays. Our total energy results reveal that the incorporation of the TMs can be maximized through a suitable tuning of vertical distance between the clay layers. Based on the calculated TM/clay binding energies and the Langmuir absorption model, we estimate the concentrations of the TMs. Further kinetic properties have been examined by calculating the activation energies, where we found energy barriers of ˜20 and ˜130 meV for adsorbed and intercalated cases, respectively. The adsorption and intercalation of ionized TM adatoms were also considered within the deprotonated KAO surface. This also leads to an optimal interlayer distance which maximizes the TM incorporation rate. By mapping the total charge transfers at the TM/clay interface, we identify a net electronic charge transfer from the TM adatoms to the topmost clay surface layer. The effect of such a charge transfer on the electronic structure of the clay (host) has been examined through a set of X-ray absorption near edge structure (XANES) simulations, characterizing the changes of the XANES spectra upon the presence of the contaminants. Finally, for the pillared clays, we quantify the Cd and Hg K-edge energy shifts of the TMs as a function of the interlayer distance between the clay layers and the Al K-edge spectra for the pristine and pillared clays.

  5. Application of Finite Element Modeling Methods in Magnetic Resonance Imaging-Based Research and Clinical Management

    NASA Astrophysics Data System (ADS)

    Fwu, Peter Tramyeon

    The medical image is very complex by its nature. Modeling built upon the medical image is challenging due to the lack of analytical solution. Finite element method (FEM) is a numerical technique which can be used to solve the partial differential equations. It utilized the transformation from a continuous domain into solvable discrete sub-domains. In three-dimensional space, FEM has the capability dealing with complicated structure and heterogeneous interior. That makes FEM an ideal tool to approach the medical-image based modeling problems. In this study, I will address the three modeling in (1) photon transport inside the human breast by implanting the radiative transfer equation to simulate the diffuse optical spectroscopy imaging (DOSI) in order to measurement the percent density (PD), which has been proven as a cancer risk factor in mammography. Our goal is to use MRI as the ground truth to optimize the DOSI scanning protocol to get a consistent measurement of PD. Our result shows DOSI measurement is position and depth dependent and proper scanning scheme and body configuration are needed; (2) heat flow in the prostate by implementing the Penne's bioheat equation to evaluate the cooling performance of regional hypothermia during the robot assisted radical prostatectomy for the individual patient in order to achieve the optimal cooling setting. Four factors are taken into account during the simulation: blood abundance, artery perfusion, cooling balloon temperature, and the anatomical distance. The result shows that blood abundance, prostate size, and anatomical distance are significant factors to the equilibrium temperature of neurovascular bundle; (3) shape analysis in hippocampus by using the radial distance mapping, and two registration methods to find the correlation between sub-regional change to the age and cognition performance, which might not reveal in the volumetric analysis. The result gives a fundamental knowledge of normal distribution in young preadolescent children who may be compared to children with, or at risk of, neurological diseases for early diagnosis.

  6. Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints

    NASA Technical Reports Server (NTRS)

    Hinckley, David; Englander, Jacob; Hitt, Darren

    2015-01-01

    The abstract is an outline of the method: single-trial evaluations; trial creation by phase-wise GA-style or DE-inspired recombination; a bin repository structure that requires an initialization period; a non-exclusionary kill distance; and a population-collapse mechanic. The main loop is: create a trial, with a probabilistic switch between GA and DE creation types; locally optimize it; submit it to the repository; and repeat (a sketch of this loop follows).
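
    A minimal sketch of such a hybrid loop is given below, assuming a real-vector encoding; all names (de_trial, ga_trial, local_opt, p_ga, and so on) are illustrative assumptions, and the bin repository, kill-distance, and population-collapse mechanics of the actual method are not reproduced.

    ```python
    # Hypothetical sketch of a hybrid GA/DE main loop with local optimization.
    import random

    def de_trial(repo, f=0.8):
        """DE-inspired recombination: base vector plus scaled difference."""
        base, a, b = random.sample(repo, 3)
        return [x + f * (y - z) for x, y, z in zip(base, a, b)]

    def ga_trial(repo):
        """GA-style recombination: uniform crossover of two parents."""
        p1, p2 = random.sample(repo, 2)
        return [x if random.random() < 0.5 else y for x, y in zip(p1, p2)]

    def hybrid_loop(objective, local_opt, init_pop, iters=1000, p_ga=0.5):
        repo = list(init_pop)          # repository, seeded during initialization
        best = min(repo, key=objective)
        for _ in range(iters):
            # Probabilistic switch between GA and DE creation types.
            trial = ga_trial(repo) if random.random() < p_ga else de_trial(repo)
            trial = local_opt(trial)   # locally optimize the trial
            repo.append(trial)         # submit to repository
            best = min(best, trial, key=objective)
        return best
    ```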

  7. Linear discriminant analysis based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu

    2013-08-01

    Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on an L2-norm distance criterion. This paper proposes a simple but effective robust LDA variant based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically proven to be feasible and robust to outliers while avoiding the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
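
    For reference, the L1 criterion replaces the squared distances of the classical Rayleigh-quotient objective with absolute deviations. A generic form is shown below; the notation (c classes, class means m_i, global mean m, n_i samples per class) is assumed rather than quoted from the paper.

    ```latex
    % Generic L1-norm LDA objective (assumed notation).
    \max_{\|w\| = 1}\;
    \frac{\sum_{i=1}^{c} n_i \,\bigl| w^{\top}(m_i - m) \bigr|}
         {\sum_{i=1}^{c} \sum_{j=1}^{n_i} \bigl| w^{\top}\bigl(x_j^{(i)} - m_i\bigr) \bigr|}
    ```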

  8. Detection and reading distances of retroreflective road signs during night driving.

    PubMed

    Dahlstedt, S; Svenson, O

    1977-03-01

    The detectability and legibility of road signs of different reflective intensities were studied in night driving conditions. The results indicated that, for optimal detectability and legibility distances, the reflective intensity of a new road sign should be in the range of 4 to 10 mcd/(lx·cm²). For signs in this range it was shown that doubling the area of a sign increased a detection distance of about 600 m by about 150-200 m. Opposing headlights on an oncoming car decreased detection distances of 500-900 m by about 100 m. Finally, it was found that standard signs, with a text 170 mm high, permitted reading from a distance of about 115 m.

  9. A systematic review of the efficacy of ergogenic aids for improving running performance.

    PubMed

    Schubert, Matthew M; Astorino, Todd A

    2013-06-01

    Running is a common form of activity worldwide, and participants range from "weekend warriors" to Olympians. Unfortunately, few studies have examined the efficacy of various ergogenic aids in runners, because the majority of the literature consists of cycling-based protocols, which do not relate to running performance. The running studies that have been conducted vary markedly in the specific distance completed, subject fitness level, and effectiveness of the ergogenic aid examined. The aim of this article was to systematically examine the literature concerning the utility of several ergogenic aids on middle-distance running (400-5,000 m) and long-distance running (10,000 m to the marathon, 42.2 km) performance. In addition, this article highlights the dearth of running-specific studies in the literature and addresses recommendations for future research to optimize running performance through nutritional intervention. Results revealed 23 studies examining effects of various ergogenic aids on running performance, with a mean Physiotherapy Evidence Database score equal to 7.85 ± 0.70. Of these studies, 71% (n = 15) demonstrated improved running performance with ergogenic aid ingestion when compared with a placebo trial. The most effective ergogenic aids for distances from 400 m to 40 km included sodium bicarbonate (4 studies; 1.5 ± 1.1% improvement), sodium citrate (6 studies; 0.3 ± 1.7% improvement), caffeine (CAFF) (7 studies; 1.1 ± 0.4% improvement), and carbohydrate (CHO) (6 studies; 4.1 ± 4.4% improvement). Therefore, runners may benefit from ingestion of sodium bicarbonate to enhance middle-distance performance and caffeine and carbohydrate to enhance performance at multiple distances.

  10. Spherical hashing: binary code embedding with hyperspheres.

    PubMed

    Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui

    2015-11-01

    Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, the spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one million to 75 million GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvement over the second-best tested method. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
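
    To make the scheme concrete, here is a minimal sketch of hypersphere-based encoding and the spherical Hamming distance (the XOR bit count normalized by the number of common "inside" bits). The pivot/radius training loop is omitted, and the function and parameter names are assumptions.

    ```python
    # Minimal sketch of spherical hashing; pivots and radii are assumed trained.
    import numpy as np

    def encode(x, pivots, radii):
        """Bit k is 1 iff point x falls inside hypersphere k."""
        return (np.linalg.norm(pivots - x, axis=1) <= radii).astype(np.uint8)

    def spherical_hamming(a, b):
        """XOR count divided by the number of common 1 bits."""
        common = np.sum(a & b)
        return np.sum(a ^ b) / common if common > 0 else np.inf
    ```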

  11. Secondary environmental impacts of remedial alternatives for sediment contaminated with hydrophobic organic contaminants.

    PubMed

    Choi, Yongju; Thompson, Jay M; Lin, Diana; Cho, Yeo-Myoung; Ismail, Niveen S; Hsieh, Ching-Hong; Luthy, Richard G

    2016-03-05

    This study evaluates secondary environmental impacts of various remedial alternatives for sediment contaminated with hydrophobic organic contaminants using life cycle assessment (LCA). Three alternatives including two conventional methods, dredge-and-fill and capping, and an innovative sediment treatment technique, in-situ activated carbon (AC) amendment, are compared for secondary environmental impacts by a case study for a site at Hunters Point Shipyard, San Francisco, CA. The LCA results show that capping generates substantially smaller impacts than dredge-and-fill and in-situ amendment using coal-based virgin AC. The secondary impacts from in-situ AC amendment can be reduced effectively by using recycled or wood-based virgin AC as production of these materials causes much smaller impacts than coal-based virgin AC. The secondary environmental impacts are highly sensitive to the dredged amount and the distance to a disposal site for dredging, the capping thickness and the distance to the cap materials for capping, and the AC dose for in-situ AC amendment. Based on the analysis, this study identifies strategies to minimize secondary impacts caused by different remediation activities: optimize the dredged amount, the capping thickness, or the AC dose by extensive site assessments, obtain source materials from local sites, and use recycled or bio-based AC. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a highly demanding new framework has been set for the environmental sciences and applied mathematics by problems that interest not only the scientific community but society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community follows two main directions to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to obtain credible local forecasts, the two data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least-squares methods based on classical Euclidean geometry. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data at specific areas and to form the differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.
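
    The central object in such approaches is the Fisher information metric on a parametric family p(x; theta), which turns the family of distributions into a Riemannian manifold. The standard statement below is generic background, not specific to this paper's wind-speed application.

    ```latex
    % Fisher information metric on a statistical manifold {p(x;\theta)}.
    g_{ij}(\theta) =
    \mathbb{E}\!\left[
      \frac{\partial \log p(x;\theta)}{\partial \theta^{i}}\,
      \frac{\partial \log p(x;\theta)}{\partial \theta^{j}}
    \right],
    \qquad
    ds^{2} = g_{ij}\, d\theta^{i}\, d\theta^{j}
    ```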

  13. Task-based design of a synthetic-collimator SPECT system used for small animal imaging.

    PubMed

    Lin, Alexander; Kupinski, Matthew A; Peterson, Todd E; Shokouhi, Sepideh; Johnson, Lindsay C

    2018-05-07

    In traditional multipinhole SPECT systems, image multiplexing - the overlapping of pinhole projection images - may occur on the detector, which can inhibit quality image reconstructions due to photon-origin uncertainty. One proposed system to mitigate the effects of multiplexing is the synthetic-collimator SPECT system. In this system, two detectors, a silicon detector and a germanium detector, are placed at different distances behind the multipinhole aperture, allowing for image detection to occur at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. The unwanted effects of multiplexing are reduced by utilizing the additional data collected from the front silicon detector. However, determining optimal system configurations for a given imaging task requires efficient parsing of the complex parameter space, to understand how pinhole spacings and the two detector distances influence system performance. In our simulation studies, we use the ensemble mean-squared error of the Wiener estimator (EMSE_W) as the figure of merit to determine optimum system parameters for the task of estimating the uptake of a 123I-labeled radiotracer in three different regions of a computer-generated mouse brain phantom. The segmented phantom map is constructed by using data from the MRM NeAt database and allows for a reduction in dimensionality of the system matrix, which improves the computational efficiency of scanning the system's parameter space. To contextualize our results, the Wiener estimator is also compared against a region-of-interest estimator using maximum-likelihood reconstructed data. Our results show that the synthetic-collimator SPECT system outperforms traditional multipinhole SPECT systems in this estimation task. We also find that image multiplexing plays an important role in the system design of the synthetic-collimator SPECT system, with optimal germanium detector distances occurring at maxima in the derivative of the percent multiplexing function. Furthermore, we report that improved task performance can be achieved by using an adaptive system design in which the germanium detector distance may vary with projection angle. Finally, in our comparative study, we find that the Wiener estimator outperforms the conventional region-of-interest estimator. Our work demonstrates how this optimization method has the potential to quickly and efficiently explore vast parameter spaces, providing insight into the behavior of competing factors, which are otherwise very difficult to calculate and study using other existing means. © 2018 American Association of Physicists in Medicine.
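
    As background on the figure of merit: for a linear imaging model g = H·theta + n with parameter covariance K_theta and noise covariance K_n, the Wiener (linear MMSE) estimator and its ensemble MSE take the standard form below. The symbols are generic assumptions, not notation taken from the paper.

    ```latex
    % Linear MMSE (Wiener) estimator and its ensemble MSE for g = H\theta + n.
    \hat{\theta} = \bar{\theta}
      + K_{\theta} H^{\top}\!\left( H K_{\theta} H^{\top} + K_{n} \right)^{-1}
        \left( g - \bar{g} \right),
    \qquad
    \mathrm{EMSE}_{W} = \operatorname{tr}\!\left[
      K_{\theta}
      - K_{\theta} H^{\top}\!\left( H K_{\theta} H^{\top} + K_{n} \right)^{-1}
        H K_{\theta}
    \right]
    ```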

  14. Pressure-Aware Control Layer Optimization for Flow-Based Microfluidic Biochips.

    PubMed

    Wang, Qin; Xu, Yue; Zuo, Shiliang; Yao, Hailong; Ho, Tsung-Yi; Li, Bing; Schlichtmann, Ulf; Cai, Yici

    2017-12-01

    Flow-based microfluidic biochips are attracting increasing attention with successful biomedical applications. One critical issue with flow-based microfluidic biochips is the large number of microvalves that require peripheral control pins. Even using the broadcasting addressing scheme, i.e., one control pin controlling multiple microvalves simultaneously, thousands of microvalves would still require hundreds of control pins, which is unrealistic. To address this critical challenge in control scalability, the control-layer multiplexer is introduced to effectively reduce the number of control pins to a logarithmic scale in the number of microvalves. There are two practical design issues with the control-layer multiplexer: (1) the reliability issue caused by frequent control-valve switching, and (2) the pressure degradation problem caused by control-valve switching without pressure refreshing from the pressure source. This paper addresses these two design issues with the proposed Hamming-distance-based switching sequence optimization method and the XOR-based pressure refreshing method. Simulation results demonstrate the effectiveness and efficiency of the proposed methods, with an average 77.2% (maximum 89.6%) improvement in total pressure refreshing cost, and an average 88.5% (maximum 90.0%) improvement in pressure deviation.
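
    The abstract does not give the paper's exact formulation; a minimal greedy sketch of Hamming-distance-based sequence ordering (reordering control patterns so that consecutive patterns differ in as few bits as possible, and hence require fewer valve switches) might look like the following, with all names assumed.

    ```python
    # Greedy nearest-neighbor ordering of control patterns by Hamming distance.
    # A hypothetical sketch; the paper's actual optimization may differ.

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def order_patterns(patterns: list[int]) -> list[int]:
        """Reorder patterns to reduce bit flips between consecutive patterns."""
        remaining = set(patterns)
        current = patterns[0]
        remaining.discard(current)
        order = [current]
        while remaining:
            nxt = min(remaining, key=lambda p: hamming(order[-1], p))
            remaining.discard(nxt)
            order.append(nxt)
        return order

    # Example: 4-valve control patterns encoded as bitmasks.
    print(order_patterns([0b1010, 0b0110, 0b1011, 0b0000]))
    ```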

  15. Optimized decoy state QKD for underwater free space communication

    NASA Astrophysics Data System (ADS)

    Lopes, Minal; Sarwade, Nisha

    Quantum cryptography (QC) is envisioned as a solution for global key distribution through fiber-optic, free-space and underwater optical communication due to its unconditional security. In view of this, this paper investigates an underwater free-space quantum key distribution (QKD) model for enhanced transmission distance, secret key rates and security. It has been reported that secure underwater free-space QKD is feasible in the clearest ocean water with sifted key rates up to 207 kbps. This paper extends that work by testing the performance of an optimized decoy-state QKD protocol with the underwater free-space communication model. The attenuation of photons, the quantum bit error rate and the sifted key generation rate of underwater quantum communication are obtained with vector radiative transfer theory and the Monte Carlo method. The simulations show that optimized decoy-state QKD evidently enhances the underwater secret key transmission distance as well as the secret key rates.
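
    For context, decoy-state analyses typically bound the secure key rate with the GLLP formula, where Q_mu and E_mu are the signal-state gain and QBER, Q_1 and e_1 the single-photon gain and error rate, f the error-correction inefficiency, q a protocol-dependent efficiency factor, and H_2 the binary entropy. This is standard notation from the decoy-state literature, not a formula quoted from this paper.

    ```latex
    % GLLP lower bound on the secure key rate for decoy-state QKD.
    R \ge q \left\{ -\, Q_{\mu}\, f(E_{\mu})\, H_{2}(E_{\mu})
      + Q_{1} \left[ 1 - H_{2}(e_{1}) \right] \right\},
    \qquad
    H_{2}(x) = -x \log_{2} x - (1 - x) \log_{2}(1 - x)
    ```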

  16. Metabolic energy demand and optimal walking speed in post-polio subjects with lower limb afflictions.

    PubMed

    Ghosh, A K; Ganguli, S; Bose, K S

    1982-12-01

    The metabolic demand, using the relationship between speed and energy cost, and the optimal speed of walking, estimated by means of speed and energy cost per unit distance travelled, were studied in 16 post-polio subjects with lower limb affliction and 20 normal subjects with sedentary habits. It was observed that the post-polio subjects consumed more energy than the normal subjects at each walking speed between 0.28 and 1.26 m/s. The optimal walking speed of the post-polio subjects was lower than that of the normal subjects and was associated with a higher energy demand per unit distance travelled. It was deduced that the post-polio subjects, not having used any assistive devices for a long time, had acquired severe degrees of disability which not only hindered their normal gait but also demanded extra energy from them.
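
    The notion of optimal walking speed used here is the speed minimizing metabolic energy per unit distance; in symbols (a generic statement, with E-dot(v) the metabolic power at speed v, not notation from the paper):

    ```latex
    % Optimal walking speed: minimize energy cost per unit distance.
    v^{*} = \arg\min_{v > 0}\; \frac{\dot{E}(v)}{v}
    ```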

  17. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields and scientific exploration, but routing difficulties are increasingly prominent due to their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It carefully designs the optimal-route calculation used by the protocol and the transmission mechanism for communication packets. In the route calculation by the cuckoo search (CS) algorithm, adding QoS constraints lets the selected route conform to specified bandwidth and delay requirements, achieving a balance among computational cost, bandwidth and delay. NS2 simulations test the protocol's performance in three scenarios and validate the feasibility and validity of CSAODV. The results show that the CSAODV routing protocol adapts better to changes in network topology than AODV: it effectively improves the packet delivery fraction, reduces the transmission delay of the network, reduces the extra load that control information places on the network, and improves routing efficiency.
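
    A minimal sketch of the cuckoo-search core (Levy flights plus abandonment of the worst nests) is shown below on a generic real-valued cost function; the QoS-constrained route encoding of CSAODV is not reproduced, and all names and parameter values are assumptions.

    ```python
    # Hypothetical sketch of cuckoo search with Levy flights.
    import math
    import random

    def levy_step(beta=1.5):
        """Mantegna's algorithm for a Levy-stable step length."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta *
                  2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = random.gauss(0, sigma)
        v = random.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    def cuckoo_search(cost, dim, n_nests=15, iters=200, pa=0.25, alpha=0.01):
        nests = [[random.uniform(-1, 1) for _ in range(dim)]
                 for _ in range(n_nests)]
        best = min(nests, key=cost)
        for _ in range(iters):
            # Generate a cuckoo by a Levy flight around the current best nest.
            i = random.randrange(n_nests)
            new = [x + alpha * levy_step() * (x - b)
                   for x, b in zip(nests[i], best)]
            j = random.randrange(n_nests)
            if cost(new) < cost(nests[j]):
                nests[j] = new
            # Abandon a fraction pa of the worst nests.
            nests.sort(key=cost)
            for k in range(int((1 - pa) * n_nests), n_nests):
                nests[k] = [random.uniform(-1, 1) for _ in range(dim)]
            best = min(nests + [best], key=cost)
        return best
    ```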

  18. Adaptive behaviors in multi-agent source localization using passive sensing.

    PubMed

    Shaukat, Mansoor; Chitre, Mandar

    2016-12-01

    In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. An agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors, where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient for localizing the source. Source localization is achieved as an emergent property through the agents' adaptive interactions with their neighbors and the environment. Since a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based adaptation and connectivity-based adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two-phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated against the agents' sensor and actuator noise, strong multi-path interference due to environment variability, initialization-distance sensitivity and loss of the source signal.

  19. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them, along with robust statistics techniques, to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion, in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps the spatial-temporal patterns of captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with the tools in state-of-the-art motion capture data processing software such as Vicon Blade.
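
    The optimization described can be summarized by a generic objective of the following shape, where y is the noisy input, x the filtered motion, B the learned filter bases with coefficients c, and rho a robust penalty; the notation is assumed, since the abstract does not give the paper's exact terms.

    ```latex
    % Generic denoising objective: robust data fidelity plus a prior keeping
    % x close to the span of the learned filter bases B.
    \min_{x,\,c}\; \rho\left( x - y \right)
      + \lambda\, \bigl\| x - B c \bigr\|^{2}
    ```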

  20. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for the optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
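
    In outline, the volume-correction problem has the following constrained-optimization shape, with x the mesh coordinates, x-tilde the source mesh, V_c the volume of cell c, and v_c the prescribed volumes; this is generic notation assumed for illustration, not quoted from the paper.

    ```latex
    % Volume correction as a constrained optimization problem (generic notation).
    \min_{x}\; \tfrac{1}{2}\, \| x - \tilde{x} \|^{2}
    \quad \text{subject to} \quad
    V_{c}(x) = v_{c} \;\; \forall c,
    \qquad
    x \in \mathcal{K} \;\; \text{(mesh validity / cell convexity)}
    ```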
