Sample records for head selection algorithm

  1. A Differential Evolution-Based Routing Algorithm for Environmental Monitoring Wireless Sensor Networks

    PubMed Central

    Li, Xiaofang; Xu, Lizhong; Wang, Huibin; Song, Jie; Yang, Simon X.

    2010-01-01

    The traditional Low Energy Adaptive Clustering Hierarchy (LEACH) routing protocol is a clustering-based protocol. The uneven selection of cluster heads causes premature death of cluster heads and premature blind nodes inside the clusters, thus reducing the overall lifetime of the network. Taking full account of information on the energy and distance distribution of neighboring nodes inside the clusters, this paper proposes a new routing algorithm based on differential evolution (DE) to improve the LEACH routing protocol. To meet the requirements of monitoring applications in outdoor environments such as meteorological, hydrological and wetland ecological environments, the proposed algorithm uses the simple and fast search features of DE to optimize the multi-objective selection of cluster heads and prevent blind nodes, for improved energy efficiency and system stability. Simulation results show that the proposed new LEACH routing algorithm has better performance, effectively extends the working lifetime of the system, and improves the quality of the wireless sensor networks. PMID:22219670
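    This record builds on LEACH's probabilistic cluster-head rotation before layering DE on top. As a point of reference, below is a minimal sketch of the standard LEACH election threshold T(n) in Python, using the usual LEACH notation (`p` is the desired cluster-head fraction, `r` the current round); it is not code from the cited paper.

    ```python
    import random

    def leach_threshold(p: float, r: int) -> float:
        """Standard LEACH election threshold T(n) = p / (1 - p * (r mod 1/p))."""
        return p / (1 - p * (r % int(1 / p)))

    def elect_cluster_heads(node_ids, eligible, p, r):
        """Each eligible node (one that has not yet served as CH in the
        current epoch) becomes a cluster head when a uniform draw falls
        below the threshold T(n)."""
        t = leach_threshold(p, r)
        return [n for n in node_ids if n in eligible and random.random() < t]
    ```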

  2. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE PAGES

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
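    This record benchmarks against k-means-based clustering schemes. A minimal sketch of that baseline is shown below, assuming scikit-learn is available: node positions are clustered and the node nearest each centroid is taken as cluster head. This is one illustrative reading of the baseline, not the authors' code.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_cluster_heads(positions, k):
        """Cluster node positions with k-means and pick, for each cluster,
        the node closest to the centroid as its cluster head."""
        positions = np.asarray(positions, dtype=float)
        km = KMeans(n_clusters=k, n_init=10).fit(positions)
        heads = [int(np.argmin(np.linalg.norm(positions - c, axis=1)))
                 for c in km.cluster_centers_]
        return heads, km.labels_
    ```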

  3. A hybrid algorithm for selecting head-related transfer function based on similarity of anthropometric structures

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang-Yang; Wang, Shu-Guang; Gao, Li-Ping

    2010-09-01

    As the basic data for virtual auditory technology, the head-related transfer function (HRTF) has many applications in the areas of room acoustic modeling, spatial hearing and multimedia. How to individualize HRTFs quickly and effectively remains an open problem. Based on the similarity and relativity of anthropometric structures, a hybrid HRTF customization algorithm, which combines principal component analysis (PCA), multiple linear regression (MLR) and database matching (DM), is presented in this paper. The HRTFs selected by both the best match and the worst match were used to produce binaurally auralized sounds, which were then used in subjective listening experiments and the results compared. For the horizontal plane, the localization results show that the selection of HRTFs can enhance localization accuracy and abate the problem of front-back confusion.
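    The database-matching (DM) step of such hybrid customization can be pictured as a nearest-neighbour search in normalized anthropometric feature space. The sketch below is an assumption-laden illustration; the feature set and z-score normalization are choices of this example, not details from the paper.

    ```python
    import numpy as np

    def select_hrtf_subject(query, database):
        """Return the index of the database subject whose z-scored
        anthropometric measurements (e.g., head width, pinna height --
        illustrative features) are closest to the query listener's."""
        db = np.asarray(database, dtype=float)
        mu, sigma = db.mean(axis=0), db.std(axis=0)
        z_db = (db - mu) / sigma
        z_q = (np.asarray(query, dtype=float) - mu) / sigma
        return int(np.argmin(np.linalg.norm(z_db - z_q, axis=1)))
    ```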

  4. Improvement of the SEP protocol based on community structure of node degree

    NASA Astrophysics Data System (ADS)

    Li, Donglin; Wei, Suyuan

    2017-05-01

    Analyzing the Stable Election Protocol (SEP) in wireless sensor networks, and aiming at SEP's problems of inhomogeneous cluster-head distribution, unreasonable cluster-head selection, and single-hop transmission, a SEP protocol based on the community structure of node degree (SEP-CSND) is proposed. In this algorithm, network nodes are deployed on a grid and connections between nodes are established by setting a communication threshold. A community structure is built from node degree, and the cluster head is then elected within this community structure. On the basis of SEP, the node's residual energy and node degree are added to the cluster-head election. Information is transmitted between network nodes in multi-hop mode. Simulation experiments show that, compared with the classical LEACH and SEP, this algorithm balances the energy consumption of the entire network and significantly prolongs the network lifetime.
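    The node degree that SEP-CSND builds its communities from can be computed directly from node positions and the communication threshold. A minimal sketch follows; the grid deployment and threshold value are inputs of this illustration, not reproduced from the paper.

    ```python
    import numpy as np

    def node_degrees(positions, comm_range):
        """Degree of each node: the number of neighbours lying within the
        communication threshold used to establish links between nodes."""
        pos = np.asarray(positions, dtype=float)
        dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        adjacency = (dist <= comm_range) & (dist > 0)
        return adjacency.sum(axis=1)
    ```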

  5. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
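    The thin-plate-spline initialization from matched landmarks can be sketched with SciPy's radial basis interpolator (assuming SciPy >= 1.7); this stands in for the paper's own fitting code.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_initialization(atlas_landmarks, image_landmarks):
        """Fit a thin-plate-spline mapping from atlas landmarks to the
        corresponding landmarks localized in the target volume; the
        returned callable warps arbitrary atlas-space points."""
        return RBFInterpolator(np.asarray(atlas_landmarks, dtype=float),
                               np.asarray(image_landmarks, dtype=float),
                               kernel='thin_plate_spline')

    # Usage sketch: warped = tps_initialization(src_pts, dst_pts)(query_pts)
    ```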

  6. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2017-02-01

    Medical image registration establishes a correspondence between images of biological structures and it is at the core of many applications. Commonly used deformable image registration methods are dependent on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is however important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and they are used to compute a smoothing thin-plate splines transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.

  7. Synchronous Firefly Algorithm for Cluster Head Selection in WSN.

    PubMed

    Baskaran, Madhusudhanan; Sadagopan, Chitra

    2015-01-01

    A Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to the sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for data aggregation and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses the data before transmitting it to the sink. However, this additional responsibility results in a higher energy drain, leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating the cluster-head role among nodes with energy above a set threshold. CH selection in WSN is NP-hard, as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, the synchronous firefly algorithm, is proposed to improve network performance. Extensive simulation shows that the proposed technique performs well compared to LEACH and energy-efficient hierarchical clustering (EEHC). Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.
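    For orientation, the canonical firefly move that such CH-selection heuristics adapt is shown below; the default parameter values are illustrative, and the paper's synchronous variant and its fitness function are not reproduced here.

    ```python
    import numpy as np

    def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
        """Move firefly i toward brighter firefly j: attractiveness decays
        with squared distance, plus a small random-walk term."""
        rng = rng or np.random.default_rng()
        r2 = float(np.sum((x_j - x_i) ** 2))
        beta = beta0 * np.exp(-gamma * r2)
        return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)
    ```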

  8. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. The paper presents an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. The facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature region. Parabola and circle deformable templates are employed to fit facial features and extract a subset of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time spent computing energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.

  9. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired on the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to the existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of standard, 364-projection FDK reconstruction quality using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient, for whom the GPBB-SFE algorithm required only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
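    The BB 2-point step size at the heart of this method is simple to state; the sketch below shows one projected BB iteration. The selective function-evaluation safeguard that gives GPBB-SFE its convergence guarantee is omitted, so this is only the unsafeguarded core, not the authors' algorithm.

    ```python
    import numpy as np

    def bb_projected_step(x_prev, g_prev, x, grad):
        """One Barzilai-Borwein gradient-projection iteration: the BB1 step
        size alpha = (s.s)/(s.y) is built from successive iterates and
        gradients, and the update is projected onto nonnegative voxel values."""
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = float(s @ s) / float(s @ y)
        return np.maximum(x - alpha * g, 0.0), g
    ```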

  10. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired on the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to the existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of standard, 364-projection FDK reconstruction quality using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient, for whom the GPBB-SFE algorithm required only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.

  11. Synchronous Firefly Algorithm for Cluster Head Selection in WSN

    PubMed Central

    Baskaran, Madhusudhanan; Sadagopan, Chitra

    2015-01-01

    A Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to the sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for data aggregation and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses the data before transmitting it to the sink. However, this additional responsibility results in a higher energy drain, leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating the cluster-head role among nodes with energy above a set threshold. CH selection in WSN is NP-hard, as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, the synchronous firefly algorithm, is proposed to improve network performance. Extensive simulation shows that the proposed technique performs well compared to LEACH and energy-efficient hierarchical clustering (EEHC). Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC. PMID:26495431

  12. Energy Aware Clustering Algorithms for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian

    2011-09-01

    The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire network is a principal design consideration. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy-efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and compare them on metrics such as clustering distribution, cluster load balancing, cluster head (CH) selection strategy, CH role rotation, node mobility, cluster overlapping, intra-cluster communications, reliability, security and location awareness.

  13. A new clustering strategy

    NASA Astrophysics Data System (ADS)

    Feng, Jian-xin; Tang, Jia-fu; Wang, Guang-xing

    2007-04-01

    On the basis of an analysis of the clustering algorithms that have been proposed for MANETs, a novel clustering strategy is proposed in this paper. With trust defined via statistical hypothesis testing in probability theory, and the cluster head selected by node trust and node mobility, this strategy realizes malicious-node detection, which is neglected by other clustering algorithms, and overcomes a deficiency of the MOBIC algorithm, which cannot compute the relative mobility metric of corresponding nodes because the receiving power of two consecutive HELLO packets cannot be measured. It is an effective solution for clustering a MANET securely.

  14. Energy Aware Cluster-Based Routing in Flying Ad-Hoc Networks.

    PubMed

    Aadil, Farhan; Raza, Ali; Khan, Muhammad Fahad; Maqsood, Muazzam; Mehmood, Irfan; Rho, Seungmin

    2018-05-03

    Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. An optimal transmission range gives a minimum packet loss ratio (PLR) and better link quality, which ultimately saves the energy consumed during communication. Second, we use a variant of the K-Means density clustering algorithm for the selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. The proposed model outperforms state-of-the-art artificial intelligence techniques such as an Ant Colony Optimization-based clustering algorithm and a Grey Wolf Optimization-based clustering algorithm. The performance of the proposed algorithm is evaluated in terms of the number of clusters, cluster building time, cluster lifetime and energy consumption.

  15. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling.

    PubMed

    Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian

    2017-10-16

    Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.

  16. TU-H-CAMPUS-JeP1-02: Fully Automatic Verification of Automatically Contoured Normal Tissues in the Head and Neck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B

    Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck, towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse’s Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method and the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2 cm. Using a logit model, the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5 cm (brain), 0.75 cm (mandible, cord), 1 cm (brainstem, cochlea), or 1.25 cm (parotid), with sensitivity and specificity greater than 0.95. If the sensitivity and specificity constraints are reduced to 0.9, the detectable shifts of the mandible and brainstem were reduced by 0.25 cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
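    The verification model described here is a two-covariate logistic regression; a minimal sketch with scikit-learn follows. The numbers are illustrative placeholders, not data from the abstract.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Covariates per contour pair: [mean distance to agreement,
    # true positive rate]; label 1 marks a deliberately shifted contour.
    X = np.array([[1.2, 0.95], [4.8, 0.60], [0.9, 0.97], [6.1, 0.45]])
    y = np.array([0, 1, 0, 1])
    model = LogisticRegression().fit(X, y)
    p_flag = model.predict_proba([[3.0, 0.75]])[0, 1]  # probability of error
    ```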

  17. Station Keeping of Small Outboard-Powered Boats

    NASA Technical Reports Server (NTRS)

    Fisher, A. D.; VanZwieten, J. H., Jr.; VanZwieten, T. S.

    2010-01-01

    Three station keeping controllers have been developed which work to minimize displacement of a small outboard-powered vessel from a desired location. Each of these three controllers has a common initial layer that uses fixed-gain feedback control to calculate the desired heading of the vessel. A second control layer uses a common fixed-gain feedback controller to calculate the net forward thrust, one of two algorithms for controlling engine angle (Fixed-Gain Proportional-integral-derivative (PID) or PID with Adaptively Augmented Gains), and one of two algorithms for differential throttle control (Fixed-Gain PID and PID with Adaptive Differential Throttle gains), which work together to eliminate heading error. The three selected controllers are evaluated using a numerical simulation of a 33-foot center console vessel with twin outboards that is subject to wave, wind, and current disturbances. Each controller is tested for its ability to maintain position in the presence of three sets of environmental disturbances. These algorithms were tested with current velocity of 1.5 m/s, significant wave height of 0.5 m, and wind speeds of 2, 5, and 10 m/s. These values were chosen to model conditions a small vessel may experience in the Gulf Stream off of Fort Lauderdale. The Fixed-gain PID controller progressively got worse as wind speeds increased, while the controllers using adaptive methodologies showed consistent performance over all weather conditions and decreased heading error by as much as 20%. Thus, enhanced robustness to environmental changes has been gained by using an adaptive algorithm.

  18. Discovering shared segments on the migration route of the bar-headed goose by time-based plane-sweeping trajectory clustering

    USGS Publications Warehouse

    Luo, Ze; Baoping, Yan; Takekawa, John Y.; Prosser, Diann J.

    2012-01-01

    We propose a new method to help ornithologists and ecologists discover shared segments on the migratory pathway of the bar-headed goose using time-based plane-sweeping trajectory clustering. We present a density-based, time-parameterized line segment clustering algorithm, which extends traditional comparable clustering algorithms in the temporal and spatial dimensions. We present a time-based plane-sweeping trajectory clustering algorithm to reveal the dynamic evolution of spatial-temporal object clusters and discover common motion patterns of bar-headed geese in the process of migration. Experiments performed on GPS-based satellite telemetry data from bar-headed geese demonstrate that our algorithms can correctly discover shared segments of the bar-headed goose migratory pathway. We also present findings on the migratory behavior of bar-headed geese determined from this new analytical approach.

  19. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius, evaluate the performance based on network coverage or the connectivity rate and do not consider the number of nodes near the sink node and the energy consumption distribution of the network topology, thereby degrading network reliability and the energy consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins the uneven clustering based on the distance on the water surface. Each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, the cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering and then adjusts the positions of in-cluster nodes. The algorithm originally considers the network reliability and energy consumption balance during node deployment and considers the coverage redundancy rate of all positions that a node may reach during the node position adjustment. Simulation results show, compared to the connected dominating set (CDS) based depth computation algorithm, that the proposed algorithm can increase the number of the nodes near the sink node and improve network reliability while guaranteeing the network connectivity rate. Moreover, it can balance energy consumption during network operation, further improve network coverage rate and reduce energy consumption. PMID:26784193

  20. Adaptive algorithm of magnetic heading detection

    NASA Astrophysics Data System (ADS)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian’s motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of a building or residential buildings is put forward to process heading information. First, a 3D indoor localization platform is set up based on MPU9250. Then, several groups of data are measured by changing the experimental environment and pedestrian’s motion pace. The raw data from the attached inertial measurement unit are calibrated and arranged into a time-stamped array and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with the existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, which can detect the heading information accurately and in real-time.
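    A common starting point for such magnetic heading estimation is the textbook tilt-compensated heading, sketched below under one common axis convention; sign conventions vary between devices, and the paper's adaptive corridor-based processing is not reproduced here.

    ```python
    import numpy as np

    def tilt_compensated_heading(acc, mag):
        """Rotate the magnetometer vector into the horizontal plane using
        roll/pitch from the accelerometer, then take the arctangent."""
        ax, ay, az = acc
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, ay * np.sin(roll) + az * np.cos(roll))
        mx, my, mz = mag
        xh = mx * np.cos(pitch) + mz * np.sin(pitch)
        yh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
              - mz * np.sin(roll) * np.cos(pitch))
        return np.degrees(np.arctan2(-yh, xh)) % 360.0
    ```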

  1. A Novel Energy-Aware Distributed Clustering Algorithm for Heterogeneous Wireless Sensor Networks in the Mobile Environment

    PubMed Central

    Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong

    2015-01-01

    In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks, and finally proposes an algorithm to address the problem of energy-efficient clustering in heterogeneous networks. The proposed algorithm implements cluster head selection according to the degree of energy attenuation during network operation and the degree of the candidate nodes’ effective coverage of the whole network, so as to obtain even energy consumption over the whole network in situations with a high degree of coverage. Simulation results show that the proposed clustering protocol has better adaptability to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime. PMID:26690440

  2. A low-cost GPS/INS integrated vehicle heading angle measurement system

    NASA Astrophysics Data System (ADS)

    Wu, Ye; Gao, Tongyue; Ding, Yi

    2018-04-01

    GPS can provide continuous heading information, but its accuracy is easily affected by velocity and by obstruction from buildings or trees. For vehicle systems, we propose a low-cost heading angle update algorithm. Based on the GPS/INS integrated navigation Kalman filter, we add the GPS heading angle to the measurement vector and establish its error model. The experimental results show that this algorithm can effectively improve the accuracy of the GPS heading angle.

  3. A nudging data assimilation algorithm for the identification of groundwater pumping

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Kendall, Donald R.; Putti, Mario; Yeh, William W.-G.

    2009-08-01

    This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
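    Newtonian nudging adds a relaxation term G(h_obs - h) at the observation wells to the flow-model tendency. A forward-Euler sketch of one step is given below; the variable names and the explicit time stepping are choices of this illustration, not the paper's implementation.

    ```python
    import numpy as np

    def nudging_step(h, h_obs, obs_idx, model_tendency, gain, dt):
        """Advance the head field one step while relaxing the heads at
        the observation wells toward their measured values."""
        dh = model_tendency(h)               # physical flow-model term f(h)
        nudge = np.zeros_like(h)
        nudge[obs_idx] = gain * (h_obs - h[obs_idx])
        return h + dt * (dh + nudge)
    ```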

  4. A nudging data assimilation algorithm for the identification of groundwater pumping

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Kendall, D. R.; Putti, M.; Yeh, W. W.

    2008-12-01

    This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study, we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.

  5. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and improve network survival time, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. On the basis of the LEACH protocol, cluster formation and cluster head selection are improved using the chicken swarm optimization algorithm, and the locations of chickens that fall into local optima are updated by Levy flight, enhancing population diversity and ensuring the global search capability of the algorithm. The new protocol makes balanced use of the network nodes, avoiding the premature death of intensively used nodes and improving the survival time of the wireless sensor network. Simulation experiments show that the protocol consumes less energy than the LEACH protocol and also outperforms a clustering routing protocol based on the particle swarm optimization algorithm.

  6. Midline Dose Verification with Diode In Vivo Dosimetry for External Photon Therapy of Head and Neck and Pelvis Cancers During Initial Large-Field Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Chuan-Jong; Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan; Yu, Pei-Chieh

    2010-01-01

    During radiotherapy treatments, quality assurance/control is essential, particularly for dose delivery to patients. This study was designed to verify midline doses with diode in vivo dosimetry. Dosimetry was studied for 6-MV bilateral fields in head and neck cancer treatments and 10-MV bilateral and anteroposterior/posteroanterior (AP/PA) fields in pelvic cancer treatments. Calibrations with corrections of diodes were performed using plastic water phantoms; 190 and 100 portals were studied for head and neck and pelvis treatments, respectively. Calculations of midline doses were made using the midline transmission, arithmetic mean, and geometric mean algorithms. These midline doses were compared with the treatment planning system target doses for lateral or AP (PA) portals and paired opposed portals. For head and neck treatments, all 3 algorithms were satisfactory, although the geometric mean algorithm was less accurate and more uncertain. For pelvis treatments, the arithmetic mean algorithm seemed unacceptable, whereas the other algorithms were satisfactory. The random error was reduced by using averaged midline doses of paired opposed portals because the asymmetric effect was averaged out. Considering the simplicity of in vivo dosimetry, the arithmetic mean and geometric mean algorithms should be adopted for head/neck and pelvis treatments, respectively.

  7. An efficient method for automatic morphological abnormality detection from human sperm images.

    PubMed

    Ghasemian, Fatemeh; Mirroshandel, Seyed Abolghasem; Monji-Azad, Sara; Azarnia, Mahnaz; Zahiri, Ziba

    2015-12-01

    Sperm morphology analysis (SMA) is an important factor in the diagnosis of human male infertility. This study presents an automatic algorithm for sperm morphology analysis (to detect malformation) using images of human sperm cells. The SMA method detects and analyzes the different parts of the human sperm. First, SMA removes image noise and greatly enhances the contrast of the image. It then recognizes the different parts of the sperm (e.g., head, tail) and analyzes the size and shape of each part. Finally, the algorithm classifies each sperm as normal or abnormal. Malformations in the head, midpiece, and tail of a sperm can be detected by the SMA method. In contrast to other similar methods, the SMA method can work with low-resolution and non-stained images. Furthermore, an image collection created for the SMA is also described in this study. This benchmark consists of 1457 sperm images from 235 patients, and is known as the human sperm morphology analysis dataset (HSMA-DS). The proposed algorithm was tested on HSMA-DS. The experimental results show the high ability of SMA to detect morphological deformities from sperm images. In this study, the SMA algorithm produced above 90% accuracy in the sperm abnormality detection task. Another advantage of the proposed method is its low computation time (less than 9 s), so the expert can quickly decide whether to choose the analyzed sperm or select another one. Automatic and fast analysis of human sperm morphology can be useful during intracytoplasmic sperm injection, helping embryologists to select the best sperm in real time. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. SU-F-T-148: Are the Approximations in Analytic Semi-Empirical Dose Calculation Algorithms for Intensity Modulated Proton Therapy for Complex Heterogeneities of Head and Neck Clinically Significant?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yepes, P; UT MD Anderson Cancer Center, Houston, TX; Titt, U

    2016-06-15

    Purpose: Evaluate the differences in dose distributions between the proton analytic semi-empirical dose calculation algorithm used in the clinic and Monte Carlo calculations for a sample of 50 head-and-neck (H&N) patients, and estimate the potential clinical significance of the differences. Methods: A cohort of 50 H&N patients, treated at the University of Texas Cancer Center with Intensity Modulated Proton Therapy (IMPT), was selected for evaluation of the clinical significance of approximations in computed dose distributions. The H&N site was selected because of the highly inhomogeneous nature of the anatomy. The Fast Dose Calculator (FDC), a fast track-repeating accelerated Monte Carlo algorithm for proton therapy, was utilized for the calculation of dose distributions delivered during treatment plans. Because of its short processing time, FDC allows for the processing of large cohorts of patients. FDC has been validated against GEANT4, a full Monte Carlo system, and against measurements in water and in inhomogeneous phantoms. A gamma-index analysis, DVHs, EUDs, and TCPs and NTCPs computed using published models were utilized to evaluate the differences between the Treatment Planning System (TPS) and FDC. Results: The Monte Carlo results systematically predict a lower dose delivered in the target. The observed differences can be as large as 8 Gy and should have a clinical impact. Gamma analysis also showed significant differences between the two approaches, especially for the target volumes. Conclusion: Monte Carlo calculation with fast algorithms is practical and should be considered for the clinic, at least as a treatment plan verification tool.

  9. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.

  10. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallawi, Abrar; Farrell, TomTom; Diamond, Kevin-Ro

    2016-08-15

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters indicating overall prostate and body shape were measured on CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm’s ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
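    The overlap metric used throughout this record is the Dice Similarity Coefficient; for reference, a minimal implementation over binary masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice Similarity Coefficient 2|A intersect B| / (|A| + |B|)
        between two binary segmentation masks."""
        a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```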

  11. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.

    PubMed

    Vimalarani, C; Subramanian, R; Sivanandam, S N

    2016-01-01

    A Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example for temperature monitoring, water level and pressure monitoring, health care, and various military applications. Sensor nodes are mostly equipped with self-supported battery power, through which they can perform adequate operations and communication among neighboring nodes. To maximize the lifetime of a wireless sensor network, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks, in which clustering and cluster head selection are done using the Particle Swarm Optimization (PSO) algorithm with respect to minimizing the power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
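    The canonical PSO update that EPSO-CEO builds on is sketched below; in a CH-selection setting each particle would encode a candidate cluster-head assignment scored by a network-energy fitness function (not shown here, and the paper's enhancements are not reproduced).

    ```python
    import numpy as np

    def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
        """One particle swarm update: inertia plus cognitive (personal
        best) and social (global best) attraction terms."""
        rng = rng or np.random.default_rng()
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        return x + v, v
    ```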

  12. Efficient evaluation of three-center Coulomb integrals

    PubMed Central

    Samu, Gyula

    2017-01-01

    In this study we pursue the most efficient paths for the evaluation of three-center electron repulsion integrals (ERIs) over solid harmonic Gaussian functions of various angular momenta. First, the adaptation of the well-established techniques developed for four-center ERIs, such as the Obara–Saika, McMurchie–Davidson, Gill–Head-Gordon–Pople, and Rys quadrature schemes, and the combinations thereof for three-center ERIs is discussed. Several algorithmic aspects, such as the order of the various operations and primitive loops as well as prescreening strategies, are analyzed. Second, the number of floating point operations (FLOPs) is estimated for the various algorithms derived, and based on these results the most promising ones are selected. We report the efficient implementation of the latter algorithms invoking automated programming techniques and also evaluate their practical performance. We conclude that the simplified Obara–Saika scheme of Ahlrichs is the most cost-effective one in the majority of cases, but the modified Gill–Head-Gordon–Pople and Rys algorithms proposed herein are preferred for particular shell triplets. Our numerical experiments also show that even though the solid harmonic transformation and the horizontal recurrence require significantly fewer FLOPs if performed at the contracted level, this approach does not improve the efficiency in practical cases. Instead, it is more advantageous to carry out these operations at the primitive level, which allows for more efficient integral prescreening and memory layout. PMID:28571354

  13. Efficient evaluation of three-center Coulomb integrals.

    PubMed

    Samu, Gyula; Kállay, Mihály

    2017-05-28

    In this study we pursue the most efficient paths for the evaluation of three-center electron repulsion integrals (ERIs) over solid harmonic Gaussian functions of various angular momenta. First, the adaptation of the well-established techniques developed for four-center ERIs, such as the Obara-Saika, McMurchie-Davidson, Gill-Head-Gordon-Pople, and Rys quadrature schemes, and the combinations thereof for three-center ERIs is discussed. Several algorithmic aspects, such as the order of the various operations and primitive loops as well as prescreening strategies, are analyzed. Second, the number of floating point operations (FLOPs) is estimated for the various algorithms derived, and based on these results the most promising ones are selected. We report the efficient implementation of the latter algorithms invoking automated programming techniques and also evaluate their practical performance. We conclude that the simplified Obara-Saika scheme of Ahlrichs is the most cost-effective one in the majority of cases, but the modified Gill-Head-Gordon-Pople and Rys algorithms proposed herein are preferred for particular shell triplets. Our numerical experiments also show that even though the solid harmonic transformation and the horizontal recurrence require significantly fewer FLOPs if performed at the contracted level, this approach does not improve the efficiency in practical cases. Instead, it is more advantageous to carry out these operations at the primitive level, which allows for more efficient integral prescreening and memory layout.

  14. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended for treatment planning in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  15. Design a software real-time operation platform for wave piercing catamarans motion control using linear quadratic regulator based genetic algorithm.

    PubMed

    Liang, Lihua; Yuan, Jia; Zhang, Songtao; Zhao, Peng

    2018-01-01

    This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two degrees of freedom (2 DoF) motion control problem in head seas for wave piercing catamarans (WPC). The proposed GA-based LQR control strategy selects optimal weighting matrices (Q and R). The seakeeping control of a WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematical constraints of the WPC, external conditions must be considered, such as sea disturbance and the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion reduction achieved for the WPC. Finally, a real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. Simulation and experimental results prove the correctness of the proposed algorithm: heave and pitch reductions are more than 18% at different high speeds and in rough sea conditions. The results also verify the feasibility of the NI CompactRIO embedded controller.
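    The inner LQR solve that the GA wraps can be sketched with SciPy's Riccati solver; the system matrices A and B and the GA loop over Q and R are assumptions of this illustration, not taken from the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Solve the continuous-time algebraic Riccati equation for the
        given weighting matrices and return the state-feedback gain K,
        with control law u = -K x."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)
    ```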

  16. Design a software real-time operation platform for wave piercing catamarans motion control using linear quadratic regulator based genetic algorithm

    PubMed Central

    Liang, Lihua; Zhang, Songtao; Zhao, Peng

    2018-01-01

    This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two degrees of freedom (2 DoF) motion control problem in head seas for wave piercing catamarans (WPC). The proposed GA-based LQR control strategy selects optimal weighting matrices (Q and R). The seakeeping control of a WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematical constraints of the WPC, external conditions must be considered, such as sea disturbance and the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion reduction achieved for the WPC. Finally, a real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. Simulation and experimental results prove the correctness of the proposed algorithm: heave and pitch reductions are more than 18% at different high speeds and in rough sea conditions. The results also verify the feasibility of the NI CompactRIO embedded controller. PMID:29709008

  17. Design of 6 MeV X-band electron linac for dual-head gantry radiotherapy system

    NASA Astrophysics Data System (ADS)

    Shin, Seung-wook; Lee, Seung-Hyun; Lee, Jong-Chul; Kim, Huisu; Ha, Donghyup; Ghergherehchi, Mitra; Chai, Jongseo; Lee, Byung-no; Chae, Moonsik

    2017-12-01

    A compact 6 MeV electron linac is being developed at Sungkyunkwan University in collaboration with the Korea Atomic Energy Research Institute (KAERI). The linac will be used as an X-ray source for a dual-head gantry radiotherapy system. X-band technology has been employed to satisfy the size requirements of the dual-head gantry radiotherapy machine. Among the several options available, we selected a pi/2-mode, standing-wave, side-coupled cavity. This choice of radiofrequency (RF) cavity design is intended to enhance the shunt impedance of each cavity in the linac. An optimum structure for the RF cavity with a high-performance design was determined by applying a genetic algorithm during the optimization procedure. This paper describes the detailed design process for a single normal RF cavity and the entire structure, including the RF power coupler and coupling cavity, as well as the beam dynamics results.

  18. Carrying Position Independent User Heading Estimation for Indoor Pedestrian Navigation with Smartphones

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Hu, Ying; Cui, Yang

    2016-01-01

    This paper proposes a novel heading estimation approach for indoor pedestrian navigation using the built-in inertial sensors on a smartphone. Unlike previous approaches that constrain the carrying position of the smartphone on the user’s body, our approach gives the user greater freedom by automatically recognizing the device carrying position and then selecting an optimal strategy for heading estimation. We first predetermine the motion state with a decision tree using an accelerometer and a barometer. Then, to enable accurate and computationally lightweight carrying position recognition, we combine a position classifier with a novel position transition detection algorithm, which may also be used to avoid confusion between position transitions and user turns during pedestrian walking. For a device placed in a trouser pocket or held in a swinging hand, heading estimation is achieved by a principal component analysis (PCA)-based approach. For a device held in the hand or against the ear during a phone call, user heading is estimated directly by adding the yaw angle of the device to the related heading offset. Experimental results show that our approach can automatically detect carrying positions with high accuracy, and outperforms previous heading estimation approaches in terms of accuracy and applicability. PMID:27187391
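    The PCA-based heading step for pocket or swinging-hand positions can be pictured as extracting the dominant axis of horizontal acceleration over a short window. A sketch follows; the window length and the 180-degree disambiguation are left out, and the details are choices of this example.

    ```python
    import numpy as np

    def pca_walking_direction(horizontal_acc):
        """First principal component of an (N, 2) window of horizontal
        acceleration, taken as the walking direction in degrees (subject
        to a 180-degree sign ambiguity resolved elsewhere)."""
        a = np.asarray(horizontal_acc, dtype=float)
        a = a - a.mean(axis=0)
        _, _, vt = np.linalg.svd(a, full_matrices=False)
        return np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))
    ```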

  19. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, J; Lin, H; Chow, J

    2015-06-15

    Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms, proposed by Galvin et al, Chen et al and Siochi et al, using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. The three MLC leaf-sequencing algorithms were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant across plans. The number of beam segments and the total number of monitor units were calculated for each algorithm. Results: We found that the algorithm of Galvin et al had the largest total number of monitor units, about 70% larger than the other two algorithms. Moreover, both the Galvin et al and Siochi et al algorithms produced relatively fewer beam segments than that of Chen et al. Although the number of beam segments and the total number of monitor units varied with the head-and-neck plan, the algorithms of Galvin et al and Siochi et al performed well with fewer beam segments, though the Galvin et al algorithm required more total monitor units than Siochi et al. Conclusion: Although the performance of a leaf-sequencing algorithm varies with the IMRT plan and its fluence maps, an evaluation is possible based on the calculated number of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient for head-and-neck IMRT. The project was sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
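
    None of the three cited leaf-sequencing algorithms is reproduced here, but a minimal sketch illustrates the kind of quantity being compared: for a unidirectional leaf sweep, the minimum number of monitor units needed to deliver one row of a discretized fluence map equals the sum of its positive intensity steps.

```python
# Sketch: lower bound on monitor units for a unidirectional leaf sweep.
import numpy as np

def sweep_monitor_units(fluence_row):
    """Minimum MU to deliver one row of a discretized fluence map with a
    single left/right leaf pair moving unidirectionally: the sum of all
    positive steps, counting the rise from zero at the field edge."""
    padded = np.concatenate(([0], fluence_row, [0]))
    steps = np.diff(padded)
    return int(steps[steps > 0].sum())

fluence = np.array([[0, 2, 5, 3, 3, 1],
                    [1, 4, 4, 2, 0, 0]])
# Leaf pairs run in parallel, so the row needing the most MU sets the total.
total_mu = max(sweep_monitor_units(row) for row in fluence)
print(total_mu)  # row 0 needs 5 (2+3), row 1 needs 4 -> 5
```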

  20. Hierarchical planning for a surface mounting machine placement.

    PubMed

    Zeng, You-jiao; Ma, Deng-ze; Jin, Ye; Yan, Jun-qi

    2004-11-01

    For a surface mounting machine (SMM) in a printed circuit board (PCB) assembly line, four problems arise: CAD data conversion, nozzle selection, feeder assignment and placement sequence determination. A hierarchical planning approach for these problems, aimed at maximizing the throughput rate of an SMM, is presented here. To minimize set-up time, a CAD data conversion system was first applied that could automatically generate the data for machine placement from CAD design data files. Then an effective nozzle selection approach was implemented to minimize the time spent changing nozzles. Next, to minimize picking time, an algorithm for feeder assignment was used to enable picking multiple components simultaneously as much as possible. Finally, to shorten pick-and-place time, a heuristic algorithm determined an optimal component placement sequence according to the decided feeder positions. Experiments were conducted on a four-head SMM, and the experimental results were used to analyse the assembly line performance.
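
    The placement-sequencing step is essentially a travelling-salesman-style problem; below is a minimal sketch using a generic nearest-neighbour heuristic (not the paper's algorithm) over hypothetical placement coordinates.

```python
# Sketch: greedy nearest-neighbour ordering of component placements.
import numpy as np

def placement_sequence(points, start=0):
    """Order placement coordinates to shorten total head travel."""
    remaining = list(range(len(points)))
    order = [remaining.pop(start)]
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.hypot(*(points[i] - last)))
        remaining.remove(nxt)
        order.append(nxt)
    return order

pts = np.array([[0, 0], [5, 1], [1, 1], [4, 4], [0, 3]], dtype=float)
print(placement_sequence(pts))  # [0, 2, 4, 3, 1] for these coordinates
```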

  1. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation.

    PubMed

    Fan, Bingfei; Li, Qingguo; Liu, Tao

    2017-12-28

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and lower in cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation of a magnetic and inertial sensor. First, we reviewed four major components for dealing with magnetic disturbance, namely decoupling attitude estimation from magnetometer readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of the existing methods for each component. Second, to understand the role of each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented: a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based on the testing results, the strengths and weaknesses of the existing sensor fusion methods could be readily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing a new one.
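
    A minimal sketch of one common disturbance-handling idea touched on above (a gated complementary update, not a specific filter from the paper): the magnetometer corrects heading only when the measured field magnitude is close to the local reference, so disturbed readings never enter the estimate.

```python
# Sketch: magnetometer gating in a complementary heading filter.
import numpy as np

REF_FIELD = 50.0   # local geomagnetic field magnitude (uT), assumed known
TOLERANCE = 5.0    # accept magnetometer only within +/- 5 uT of reference
GAIN = 0.02        # small correction gain (complementary filter)

def update_heading(heading, gyro_z, mag_xy, mag_norm, dt):
    """Integrate the z-gyro; blend in magnetic heading only when the field
    magnitude looks undisturbed. mag_xy: tilt-compensated horizontal field
    components (assumed available from the attitude estimate)."""
    heading = heading + gyro_z * dt                       # gyro propagation
    if abs(mag_norm - REF_FIELD) < TOLERANCE:             # disturbance gate
        mag_heading = np.arctan2(-mag_xy[1], mag_xy[0])   # sign convention assumed
        err = np.angle(np.exp(1j * (mag_heading - heading)))  # wrapped error
        heading += GAIN * err                             # slow correction
    return heading
```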

  2. Energy Efficient and Stable Weight Based Clustering for Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Bouk, Safdar H.; Sasase, Iwao

    Recently, several weighted clustering algorithms have been proposed; however, to the best of our knowledge, none propagates weights to other nodes without a dedicated weight message for leader election, normalizes node parameters, and considers neighboring node parameters when calculating node weights. In this paper, we propose an Energy Efficient and Stable Weight Based Clustering (EE-SWBC) algorithm that elects cluster heads without sending any additional weight message: it propagates node parameters to neighbors through the neighbor discovery message (HELLO message) and stores these parameters in a neighborhood list. Each node normalizes the parameters and efficiently calculates its own weight and the weights of neighboring nodes from that neighborhood list using the Grey Decision Method (GDM). GDM finds the ideal solution (the best node parameters in the neighborhood list) and calculates node weights by comparison with that ideal solution. The node(s) with maximum weight (parameters closest to the ideal solution) are elected as cluster heads. As a result, EE-SWBC fairly selects potential nodes whose parameters are close to the ideal solution, with little overhead. Different performance metrics of EE-SWBC and the Distributed Weighted Clustering Algorithm (DWCA) are compared through simulations. The simulation results show that, compared to DWCA, EE-SWBC maintains a smaller average number of stable clusters with minimal overhead, lower energy consumption and fewer changes in cluster structure within the network.
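
    A minimal sketch of the grey-decision weighting step as described (parameter names and the equal weighting of criteria are illustrative): each node's parameters are normalized, compared against the ideal solution formed from the best values in the neighborhood list, and averaged into a single weight.

```python
# Sketch: grey-decision weights from a neighborhood parameter table.
import numpy as np

def gdm_weights(params, benefit, rho=0.5):
    """params: (nodes, k) matrix of node parameters; benefit[k] is True if
    larger is better (e.g. energy), False if smaller is better (e.g. mobility).
    Returns one weight per node; higher = closer to the ideal solution."""
    P = np.asarray(params, dtype=float)
    lo, hi = P.min(axis=0), P.max(axis=0)
    # normalize each column to [0, 1] with 1 = best value in the neighborhood
    norm = np.where(benefit, (P - lo) / (hi - lo + 1e-12),
                             (hi - P) / (hi - lo + 1e-12))
    delta = 1.0 - norm                        # distance to the ideal (all ones)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)  # grey relational coeff.
    return coeff.mean(axis=1)                 # equal-weight average per node

# columns: residual energy (benefit), degree (benefit), mobility (cost)
table = [[0.9, 5, 0.2], [0.7, 8, 0.1], [0.5, 3, 0.6]]
w = gdm_weights(table, benefit=np.array([True, True, False]))
print(w.argmax(), w)   # the node closest to the ideal becomes cluster head
```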

  3. Vector Graph Assisted Pedestrian Dead Reckoning Using an Unconstrained Smartphone

    PubMed Central

    Qian, Jiuchao; Pei, Ling; Ma, Jiabin; Ying, Rendong; Liu, Peilin

    2015-01-01

    The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of the flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion-awareness is proposed. Given that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of the smartphone and the actual heading of the pedestrian. Moreover, a particle filter with vector-graph-assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, covering four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts. PMID:25738763

  4. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    PubMed

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role in determining the locations of multiple sperms. To improve the segmentation result, a Laplacian of Gaussian filter is applied as a pre-processing kernel before the image segmentation process that automatically segments and detects human spermatozoa. This study proposes an intersecting cortical model (ICM), derived from several visual cortex models, to segment the sperm head region. Because the proposed method suffered from parameter selection, the ICM network is optimised using particle swarm optimization, with feature mutual information introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods, achieving rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be applied in analysing sperm motility because of its robustness and capability.
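
    A minimal sketch of the optimization wrapper (generic PSO; the real fitness would be the feature mutual information of the segmentation, which is only stubbed here with a toy function):

```python
# Sketch: particle swarm optimization of two ICM-style parameters.
import numpy as np

rng = np.random.default_rng(2)

def fitness(theta):
    """Stub standing in for the feature mutual information of the
    segmentation produced with parameters theta; toy optimum at (0.7, 0.3)."""
    return -((theta[0] - 0.7) ** 2 + (theta[1] - 0.3) ** 2)

n, dim, w, c1, c2 = 20, 2, 0.72, 1.49, 1.49   # standard PSO constants
x = rng.uniform(0, 1, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()]

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)                  # keep parameters in range
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]

print(gbest)   # converges near (0.7, 0.3)
```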

  5. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved the target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.

  6. Simultaneous beam sampling and aperture shape optimization for SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved the target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.

  7. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g. reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  8. Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks

    PubMed Central

    Fu, Jun-Song; Liu, Yun

    2015-01-01

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Unlike traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where a dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds a threshold preset by the users, the cluster heads are added to a blacklist and must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps identify and remove compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
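
    A minimal sketch of the base-station check (the abstract does not specify the dissimilarity coefficient, so a mean normalized absolute difference is assumed here, and the threshold value is illustrative):

```python
# Sketch: double-cluster-head consistency check at the base station.
import numpy as np

THRESHOLD = 0.05   # user-preset dissimilarity threshold (assumed value)
blacklist = set()

def dissimilarity(fused_a, fused_b):
    """Assumed coefficient: mean normalized absolute difference between the
    two cluster heads' independent fusion results."""
    a, b = np.asarray(fused_a, float), np.asarray(fused_b, float)
    return np.mean(np.abs(a - b) / (np.abs(a) + np.abs(b) + 1e-12))

def check_cluster(head_a, head_b, fused_a, fused_b):
    d = dissimilarity(fused_a, fused_b)
    if d > THRESHOLD:                       # results disagree: distrust both heads
        blacklist.update({head_a, head_b})  # cluster must re-elect its heads
        return None                         # feedback goes to the trust system
    return (np.asarray(fused_a) + np.asarray(fused_b)) / 2

print(check_cluster("n12", "n31", [20.1, 20.3], [20.2, 20.2]))
```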

  9. Human recognition based on head-shoulder contour extraction and BP neural network

    NASA Astrophysics Data System (ADS)

    Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian

    2014-11-01

    In practical application scenarios such as video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Because the head-shoulder part of the body is less affected by movement and is seldom obscured by other objects, a head-shoulder model with stable characteristics can serve as a detection feature describing the human body in detection and recognition. To extract the head-shoulder contour accurately, this paper proposes a method for establishing the head-shoulder model that combines edge detection with the mean-shift image-clustering algorithm. First, an adaptive mixture-of-Gaussians background update is used to extract targets from the video sequence. Second, edge detection extracts the contour of moving objects, and the mean-shift algorithm clusters parts of the target's contour. Third, the head-shoulder model is established according to the width-to-height ratio of the human head-shoulder region combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour are acquired. Finally, the relationship between head-shoulder contour eigenvectors and moving objects is learned by training a back-propagation (BP) neural network classifier, enabling human detection and recognition. Experiments have shown that the proposed method, combining edge detection and the mean-shift algorithm, can extract the complete head-shoulder contour with low computational complexity and high efficiency.

  10. An open library of CT patient projection data

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Holmes, David; Fletcher, Joel; McCollough, Cynthia

    2016-03-01

    Lack of access to projection data from patient CT scans is a major limitation for development and validation of new reconstruction algorithms. To meet this critical need, we are building a library of CT patient projection data in an open and vendor-neutral format, DICOM-CT-PD, which is an extended DICOM format that contains sinogram data, acquisition geometry, patient information, and pathology identification. The library consists of scans of various types, including head scans, chest scans, abdomen scans, electrocardiogram (ECG)-gated scans, and dual-energy scans. For each scan, three types of data are provided, including DICOM-CT-PD projection data at various dose levels, reconstructed CT images, and a free-form text file. Several instructional documents are provided to help the users extract information from DICOM-CT-PD files, including a dictionary file for the DICOM-CT-PD format, a DICOM-CT-PD reader, and a user manual. Radiologist detection performance based on the reconstructed CT images is also provided. So far 328 head cases, 228 chest cases, and 228 abdomen cases have been collected for potential inclusion. The final library will include a selection of 50 head, chest, and abdomen scans each from at least two different manufacturers, and a few ECG-gated scans and dual-source, dual-energy scans. It will be freely available to academic researchers, and is expected to greatly facilitate the development and validation of CT reconstruction algorithms.

  11. An energy-efficient and secure hybrid algorithm for wireless sensor networks using a mobile data collector

    NASA Astrophysics Data System (ADS)

    Dayananda, Karanam Ravichandran; Straub, Jeremy

    2017-05-01

    This paper proposes a new hybrid algorithm for security, which incorporates both distributed and hierarchical approaches. It uses a mobile data collector (MDC) to collect information in order to save the energy of sensor nodes in a wireless sensor network (WSN), as these sensor nodes typically have limited energy. Wireless sensor networks are prone to security problems because, among other things, a rogue sensor node can eavesdrop on or alter the information being transmitted. To prevent this, this paper introduces a security algorithm for MDC-based WSNs. A key use of this algorithm is to protect the confidentiality of the information sent by the sensor nodes. The sensor nodes are deployed in a random fashion and form group structures called clusters. Each cluster has a cluster head, which collects data from the other nodes using the time-division multiple access protocol. The sensor nodes send their data to the cluster head for transmission to the base station node for further processing. The MDC acts as an intermediate node between the cluster head and base station: using its dynamic acyclic graph path, it collects the data from the cluster head and sends it to the base station. This approach is useful for applications including warfighting, intelligent buildings and medicine. To assess the proposed system, the paper presents a comparison of its performance with other approaches and algorithms that can be used for similar purposes.

  12. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    NASA Astrophysics Data System (ADS)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.

  13. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning, but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large-scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases), and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method improves both treatment plan quality and runtime compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; part of this research took place while D. O'Connor was a summer intern at RefleXion Medical.
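
    A minimal sketch of the key proximal step in such a method (group soft-thresholding, the standard proximal operator of an l2,1 group-sparsity penalty used inside FISTA-type iterations; the beam grouping below is illustrative):

```python
# Sketch: proximal operator of a group-sparsity (l2,1) penalty.
import numpy as np

def prox_group_l2(x, groups, step_lambda):
    """Shrink each beam's block of fluence variables toward zero as a group;
    blocks whose norm falls below the threshold are zeroed, i.e. the beam
    is switched off entirely."""
    out = np.zeros_like(x)
    for g in groups:                         # g = indices of one beam's beamlets
        block = x[g]
        nrm = np.linalg.norm(block)
        if nrm > step_lambda:
            out[g] = (1 - step_lambda / nrm) * block
    return out

x = np.array([0.1, 0.2, 0.05, 2.0, 1.5, 0.3])
groups = [np.arange(0, 3), np.arange(3, 6)]  # two candidate beams
print(prox_group_l2(x, groups, 0.5))         # first beam zeroed, second shrunk
```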

  14. Inertial Pocket Navigation System: Unaided 3D Positioning

    PubMed Central

    Munoz Diaz, Estefania

    2015-01-01

    Inertial navigation systems use dead-reckoning to estimate the pedestrian's position. There are two types of pedestrian dead-reckoning: the strapdown algorithm and the step-and-heading approach. Unlike the strapdown algorithm, which consists of the double integration of the three orthogonal accelerometer readings, the step-and-heading approach lacks a vertical displacement estimate. We propose the first step-and-heading approach based on unaided inertial data that solves 3D positioning. We present a step detector for steps up and down and a novel vertical displacement estimator. Our navigation system uses a sensor placed in the front pocket of the trousers, a likely location for a smartphone. The proposed algorithms are based on the opening angle of the leg, or pitch angle. We analyzed our step detector and compared it with the state of the art, as well as with our previously proposed step length estimator. Lastly, we assessed our vertical displacement estimator in a real-world scenario. We found that our algorithms outperform the step-and-heading algorithms in the literature and solve 3D positioning using unaided inertial data. Additionally, we found that with the pitch angle, five activities are distinguishable: standing, sitting, walking, walking up stairs and walking down stairs. This information complements the pedestrian location and is of interest for applications such as elderly care. PMID:25897501

  15. Spatiotemporal mapping of scalp potentials.

    PubMed

    Fender, D H; Santoro, T P

    1977-11-01

    Computerized analysis and display techniques are applied to the problem of identifying the origins of visually evoked scalp potentials (VESPs). A new stimulus for VESP work, white noise, is being incorporated into the solution of this problem. VESPs for white-noise stimulation exhibit time-domain behavior similar to the classical response to flash stimuli, but with certain significant differences. Contour mapping algorithms are used to display the time behavior of equipotential surfaces on the scalp during the VESP. The electrical and geometrical parameters of the head are modeled. Electrical fields closely matching those obtained experimentally are generated on the surface of the model head by optimally selecting the location and strength parameters of one or two dipole current sources contained within the model. Computer graphics are used to display, as a movie, the actual and model scalp potential fields and the parameters of the dipole generators within the model head during the time course of the VESP. These techniques are currently used to study retinotopic mapping, fusion, and texture perception in humans.

  16. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation

    PubMed Central

    Li, Qingguo

    2017-01-01

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and lower in cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation of a magnetic and inertial sensor. First, we reviewed four major components for dealing with magnetic disturbance, namely decoupling attitude estimation from magnetometer readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of the existing methods for each component. Second, to understand the role of each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented: a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based on the testing results, the strengths and weaknesses of the existing sensor fusion methods could be readily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing a new one. PMID:29283432

  17. Shading correction algorithm for cone-beam CT in radiotherapy: extensive clinical validation of image quality improvement

    NASA Astrophysics Data System (ADS)

    Joshi, K. D.; Marchant, T. E.; Moore, C. J.

    2017-03-01

    A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719-33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
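
    A minimal sketch of the two evaluation metrics as described, assuming the same-tissue ROIs have already been selected:

```python
# Sketch: mean pixel value and non-uniformity over tissue ROIs.
import numpy as np

def roi_metrics(image, rois):
    """rois: list of flat-index arrays, each selecting one small same-tissue
    region. Returns the mean over all selected pixels and the non-uniformity,
    defined as the median absolute deviation of the per-region means."""
    region_means = np.array([image.flat[r].mean() for r in rois])
    all_pixels = np.concatenate([image.flat[r] for r in rois])
    non_uniformity = np.median(np.abs(region_means - np.median(region_means)))
    return all_pixels.mean(), non_uniformity

# toy CBCT slice with a shading gradient, and three 4x4 fat ROIs
img = np.fromfunction(lambda i, j: -100 + 0.5 * j, (64, 64))
rois = []
for r0, c0 in ((5, 5), (30, 30), (50, 55)):
    rr, cc = np.meshgrid(np.arange(r0, r0 + 4), np.arange(c0, c0 + 4),
                         indexing="ij")
    rois.append(np.ravel_multi_index((rr.ravel(), cc.ravel()), (64, 64)))
print(roi_metrics(img, rois))
```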

  18. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g. reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion, less than 5 mm, and for the demonstration of the methodology are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization, in one of the lung cases. In the other lung case, the maximum dose was lowered from 131.3% with CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case while improving control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements, in addition to achieving the required plan robustness, in practical proton treatment planning settings.
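
    A minimal sketch of the selective objective idea (scenario doses, structures and penalty form are illustrative): target terms are computed from the worst case over uncertainty scenarios, while selected organ-at-risk terms use only the nominal dose.

```python
# Sketch: selective robust objective over dose scenarios.
import numpy as np

def selective_objective(doses, target_idx, oar_idx, d_presc, d_max):
    """doses: (scenarios, voxels) dose per uncertainty scenario; scenario 0
    is the nominal. Target underdose is penalized in the worst case; OAR
    overdose only in the nominal dose (the 'selective' part)."""
    worst_target = doses[:, target_idx].min(axis=0)          # worst case per voxel
    target_term = np.mean(np.maximum(d_presc - worst_target, 0.0) ** 2)
    nominal_oar = doses[0, oar_idx]                          # nominal scenario only
    oar_term = np.mean(np.maximum(nominal_oar - d_max, 0.0) ** 2)
    return target_term + oar_term

doses = np.array([[60.0, 61.0, 20.0],    # nominal scenario
                  [57.0, 59.0, 26.0]])   # one setup-error scenario
print(selective_objective(doses, [0, 1], [2], d_presc=60.0, d_max=25.0))  # 5.0
```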

  20. Clinical Decision Support Tools for Selecting Interventions for Patients with Disabling Musculoskeletal Disorders: A Scoping Review.

    PubMed

    Gross, Douglas P; Armijo-Olivo, Susan; Shaw, William S; Williams-Whitt, Kelly; Shaw, Nicola T; Hartvigsen, Jan; Qin, Ziling; Ha, Christine; Woodhouse, Linda J; Steenstra, Ivan A

    2016-09-01

    Purpose We aimed to identify and inventory clinical decision support (CDS) tools for helping front-line staff select interventions for patients with musculoskeletal (MSK) disorders. Methods We used Arksey and O'Malley's scoping review framework which progresses through five stages: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies for analysis; (4) charting the data; and (5) collating, summarizing and reporting results. We considered computer-based, and other available tools, such as algorithms, care pathways, rules and models. Since this research crosses multiple disciplines, we searched health care, computing science and business databases. Results Our search resulted in 4605 manuscripts. Titles and abstracts were screened for relevance. The reliability of the screening process was high with an average percentage of agreement of 92.3 %. Of the located articles, 123 were considered relevant. Within this literature, there were 43 CDS tools located. These were classified into 3 main areas: computer-based tools/questionnaires (n = 8, 19 %), treatment algorithms/models (n = 14, 33 %), and clinical prediction rules/classification systems (n = 21, 49 %). Each of these areas and the associated evidence are described. The state of evidentiary support for CDS tools is still preliminary and lacks external validation, head-to-head comparisons, or evidence of generalizability across different populations and settings. Conclusions CDS tools, especially those employing rapidly advancing computer technologies, are under development and of potential interest to health care providers, case management organizations and funders of care. Based on the results of this scoping review, we conclude that these tools, models and systems should be subjected to further validation before they can be recommended for large-scale implementation for managing patients with MSK disorders.

  1. Enhanced patient reported outcome measurement suitable for head and neck cancer follow-up clinics

    PubMed Central

    2012-01-01

    Background: The 'Worse-Stable-Better' (W-S-B) question was introduced to capture patient-perceived change in University of Washington Quality of Life (UW-QOL) domains. Methods: 202 head and neck cancer patients in remission prospectively completed the UW-QOL and Patients Concerns Inventory (PCI). For each UW-QOL domain, patients indicated whether over the last month things had worsened (W), remained stable (S) or gotten better (B). Results: The 202 patients at 448 attendances selected 1752 PCI items they wanted to discuss in consultation, and 58% (1024/1752) of these were not covered by the UW-QOL. UW-QOL algorithms highlighted another 440 significant problems that the patient did not want to discuss (i.e. the corresponding items on the PCI were not selected). After making allowance for UW-QOL algorithms to identify 'significant problems' and for PCI selection of corresponding issues for discussion, there remained clear residual and notable variation in W-S-B responses, in particular identifying patients with significant problems that were getting worse, and patients without significant problems who wanted to discuss issues that were getting worse. Changes in mean UW-QOL scores were notably lower for those getting worse on the W-S-B question, typically by 10 or more units, a magnitude that suggests clinically important changes in score. Conclusions: The W-S-B question adds little questionnaire burden and could help to better identify patients who might benefit from intervention. The results of this study suggest that the UW-QOL with the W-S-B modification should be used together with the PCI to allow optimal identification of issues for patient-clinician discussion during routine outpatient clinics. PMID:22695251

  2. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian mixture models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a nonlinear dynamic canonical correlation analysis model is used to synthesize eye gaze from head motion and speech features; and 3) non-negative linear regression is used to model voluntary eyelid motion, while a log-normal distribution is used to describe involuntary eye blinks. Several user studies were conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.

  3. Building a medical image processing algorithm verification database

    NASA Astrophysics Data System (ADS)

    Brown, C. Wayne

    2000-06-01

    The design of a database containing head computed tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans affected by equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses for each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is proof of the viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.

  4. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.
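
    A minimal sketch of the multiscale convergence logic described above (2D images assumed; the Demons update itself is stubbed out and the tolerance is illustrative):

```python
# Sketch: advance a multiscale Demons registration when the deformation
# field stops changing between iterations.
import numpy as np
from scipy.ndimage import zoom

def demons_update(fixed, moving, field):
    """Stub for one symmetric-Demons iteration over a displacement field;
    a real implementation computes forces from image gradients and smooths."""
    return field * 0.95  # placeholder that merely contracts the field

def register_multiscale(fixed_pyr, moving_pyr, tol=1e-3, max_iter=200):
    """fixed_pyr/moving_pyr: 2D image pyramids ordered coarse to fine."""
    field = np.zeros(fixed_pyr[0].shape + (2,))
    for f_img, m_img in zip(fixed_pyr, moving_pyr):
        if field.shape[:2] != f_img.shape:       # upsample field to this level
            factor = np.array(f_img.shape) / field.shape[:2]
            field = zoom(field, tuple(factor) + (1,), order=1) * factor.mean()
        for _ in range(max_iter):
            new = demons_update(f_img, m_img, field)
            change = np.mean(np.linalg.norm(new - field, axis=-1))
            field = new
            if change < tol:    # criterion: mean per-voxel field change
                break           # converged here; go to the finer level
    return field

coarse, fine = np.random.rand(32, 32), np.random.rand(64, 64)
print(register_multiscale([coarse, fine], [coarse, fine]).shape)  # (64, 64, 2)
```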

  5. Demons deformable registration for CBCT-guided procedures in the head and neck: convergence and accuracy.

    PubMed

    Nithiananthan, S; Brock, K K; Daly, M J; Chan, H; Irish, J C; Siewerdsen, J H

    2009-10-01

    The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8+/-0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6+/-1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6+/-0.9) mm compared to rigid registration TRE=(3.6+/-1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 x 1 x 2 mm3). The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.

  6. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    PubMed Central

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-01-01

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source “symmetric” Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm3). The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies. PMID:19928106

  7. An Adaptive Clustering Approach Based on Minimum Travel Route Planning for Wireless Sensor Networks with a Mobile Sink.

    PubMed

    Tang, Jiqiang; Yang, Wu; Zhu, Lingyun; Wang, Dong; Feng, Xin

    2017-04-26

    In recent years, Wireless Sensor Networks with a Mobile Sink (WSN-MS) have been an active research topic due to the widespread use of mobile devices. However, balancing data delivery latency against energy consumption remains a key issue for WSN-MS. In this paper, we study the clustering approach by jointly considering Route planning for the mobile sink and the Clustering Problem (RCP) for static sensor nodes. We solve the RCP problem using a minimum-travel-route clustering approach, which applies the minimum travel route of the mobile sink to guide the clustering process. We formulate the RCP problem as an Integer Non-Linear Programming (INLP) problem to shorten the travel route of the mobile sink under three constraints: the communication hops constraint, the travel route constraint and the loop avoidance constraint. We then propose an Imprecise Induction Algorithm (IIA) based on the property that a solution with a small hop count is more feasible than one with a large hop count. The IIA algorithm includes three processes: initializing travel route planning with a Traveling Salesman Problem (TSP) algorithm, transforming a cluster head into a cluster member, and transforming a cluster member into a cluster head. Extensive experimental results show that the IIA algorithm can automatically adjust cluster heads according to the maximum-hops parameter and plan a shorter travel route for the mobile sink. Compared with the Shortest Path Tree-based Data-Gathering Algorithm (SPT-DGA), the IIA algorithm has the characteristics of shorter route length, smaller cluster head count and faster convergence rate.

  8. Algorithm for Automatic Behavior Quantification of Laboratory Mice Using High-Frame-Rate Videos

    NASA Astrophysics Data System (ADS)

    Nie, Yuman; Takaki, Takeshi; Ishii, Idaku; Matsuda, Hiroshi

    In this paper, we propose an algorithm for automatic behavior quantification in laboratory mice to quantify several model behaviors. The algorithm can detect repetitive motions of the fore- or hind-limbs at several or dozens of hertz, which are too rapid for the naked eye, from high-frame-rate video images. Multiple repetitive motions can always be identified from periodic frame-differential image features in four segmented regions — the head, left side, right side, and tail. Even when a mouse changes its posture and orientation relative to the camera, these features can still be extracted from the shift- and orientation-invariant shape of the mouse silhouette by using the polar coordinate system and adjusting the angle coordinate according to the head and tail positions. The effectiveness of the algorithm is evaluated by analyzing long-term 240-fps videos of four laboratory mice for six typical model behaviors: moving, rearing, immobility, head grooming, left-side scratching, and right-side scratching. The time durations for the model behaviors determined by the algorithm have detection/correction ratios greater than 80% for all the model behaviors. This shows good quantification results for actual animal testing.
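
    A minimal sketch of the core detection idea, finding a repetitive limb motion as a spectral peak in one region's frame-difference signal (the region segmentation and 240-fps capture are assumed; the prominence test is illustrative):

```python
# Sketch: detect repetitive motion from a frame-difference energy signal.
import numpy as np

FPS = 240.0

def repetition_frequency(diff_energy, fmin=4.0, fmax=40.0):
    """diff_energy: per-frame sum of frame-differential pixels in one body
    region (e.g. left side). Returns the dominant repetition frequency in Hz
    if a clear spectral peak exists in [fmin, fmax], else None."""
    x = diff_energy - np.mean(diff_energy)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FPS)
    band = (freqs >= fmin) & (freqs <= fmax)
    if spec[band].max() > 5.0 * spec[band].mean():   # crude prominence test
        return freqs[band][spec[band].argmax()]
    return None

t = np.arange(480) / FPS   # two seconds of simulated 12 Hz scratching
signal = 1.0 + 0.8 * np.sin(2 * np.pi * 12.0 * t) + 0.1 * np.random.randn(480)
print(repetition_frequency(signal))   # ~12.0
```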

  9. Advanced illumination control algorithm for medical endoscopy applications

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.

    2015-05-01

CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination power. The LED illumination power has to be adjusted dynamically while navigating the endoscope across illumination conditions that change by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment under changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic surgical robots or micro-invasive surgery.
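
    A toy version of the saturation-driven adjustment described above might look as follows; all thresholds and gains here are illustrative assumptions, not AWAIBA's values, and the real control core runs in VHDL rather than Python.

        import numpy as np

        def adjust_led_power(roi, power, target_sat=0.02, sat_level=250, gain=0.5):
            """One control step: measure the fraction of near-saturated pixels in
            the ROI and nudge the normalized LED drive (0..1) toward the target
            saturation fraction (proportional correction)."""
            sat_frac = float((roi >= sat_level).mean())
            power *= 1.0 - gain * (sat_frac - target_sat)
            return min(max(power, 0.0), 1.0)

        frame_roi = np.random.randint(0, 256, (64, 64))  # stand-in for a sensor ROI
        print(adjust_led_power(frame_roi, power=0.8))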

  10. Eye center localization and gaze gesture recognition for human-computer interaction.

    PubMed

    Zhang, Wenhao; Smith, Melvyn L; Smith, Lyndon N; Farooq, Abdul

    2016-03-01

This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, following a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database. It outperforms all the other algorithms in comparison in terms of localization accuracy. Further tests on the extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has manifested outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
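
    As a rough illustration of gradient-based eye center estimation in the spirit of this record (not the paper's exact isophote/gradient method), one can score each candidate center by how well displacement directions from it agree with local gradient directions. This brute-force version is only practical on a small eye patch.

        import numpy as np

        def eye_center(gray):
            """Score each candidate c by agreement between unit displacements
            from c and unit image gradients; simplified stand-in for the
            paper's isophote/gradient estimator."""
            gy, gx = np.gradient(gray.astype(float))
            mag = np.hypot(gx, gy) + 1e-9
            gx, gy = gx / mag, gy / mag                  # unit gradients
            h, w = gray.shape
            ys, xs = np.mgrid[0:h, 0:w]
            best, best_c = -1.0, (0, 0)
            for cy in range(0, h, 2):                    # coarse grid for speed
                for cx in range(0, w, 2):
                    dx, dy = xs - cx, ys - cy
                    d = np.hypot(dx, dy) + 1e-9
                    dot = (dx / d) * gx + (dy / d) * gy
                    score = np.mean(np.maximum(dot, 0) ** 2)
                    if score > best:
                        best, best_c = score, (cx, cy)
            return best_c                                # (x, y) estimate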

  11. SU-C-BRA-04: Automated Segmentation of Head-And-Neck CT Images for Radiotherapy Treatment Planning Via Multi-Atlas Machine Learning (MAML)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm uses normalized mutual information as the similarity metric and affine registration combined with multiresolution B-spline registration, and then fuses the propagated labels using the label fusion strategy in Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel: it describes the relative position of each point with respect to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance from the sample point to the landmarks. A random forest (RF) classifier from Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms, is then adopted. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual and automated contours were calculated: the proposed MAML method improved on the multi-atlas segmentation method (MA) from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm. Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides comparable results for the brainstem and improved results for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
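
    A reduced sketch of the second step (per-voxel features fed to a random forest) using scikit-learn, as the abstract describes. Only the intensity and distance-map features are included here, and the toy volumes stand in for real CT data; the paper's box, center-of-gravity and landmark features are omitted for brevity.

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from sklearn.ensemble import RandomForestClassifier

        def voxel_features(image, atlas_mask):
            """Per-voxel features: intensity (I) and distance to the propagated
            atlas label (D, zero inside the ROI)."""
            dist = distance_transform_edt(atlas_mask == 0)
            return np.stack([image.ravel(), dist.ravel()], axis=1)

        img = np.random.rand(32, 32, 32)                 # toy stand-in for a CT volume
        mask = np.zeros(img.shape, dtype=int)
        mask[12:20, 12:20, 12:20] = 1                    # toy atlas label
        X, y = voxel_features(img, mask), mask.ravel()
        clf = RandomForestClassifier(n_estimators=50).fit(X, y)
        pred = clf.predict(X).reshape(img.shape)         # predicted label map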

  12. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc

Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) algorithm compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. The Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC were seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. The reduction in manual interaction time was on average 61% and 93% when automatic segmentations did and did not, respectively, require manual editing. Conclusions: The STEPS algorithm showed better performance than the STAPLE algorithm in segmenting OARs for radiotherapy of the head and neck. It can automatically produce clinically acceptable segmentation of OARs, with results as relevant as manual contouring for the brainstem, spinal canal, the parotids (left/right), and optic chiasm. A substantial reduction in manual labor was achieved when using STEPS even when manual editing was necessary.
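
    The Dice similarity coefficient used throughout this record has a one-line definition; a minimal implementation:

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary masks:
            2*|A intersect B| / (|A| + |B|)."""
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            inter = np.logical_and(a, b).sum()
            denom = a.sum() + b.sum()
            return 2.0 * inter / denom if denom else 1.0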

  14. A Novel Wireless Power Transfer-Based Weighed Clustering Cooperative Spectrum Sensing Method for Cognitive Sensor Networks.

    PubMed

    Liu, Xin

    2015-10-30

In a cognitive sensor network (CSN), the wastage of sensing time and energy is a challenge to cooperative spectrum sensing when the number of cooperative cognitive nodes (CNs) becomes very large. In this paper, a novel wireless power transfer (WPT)-based weighed clustering cooperative spectrum sensing model is proposed, which divides all the CNs into several clusters, selects the most favorable CNs as the cluster heads, and allows the common CNs to transfer the received radio frequency (RF) energy of the primary node (PN) to the cluster heads, in order to supply the electrical energy needed for sensing and cooperation. A joint resource optimization is formulated to maximize the spectrum access probability of the CSN by jointly allocating the sensing time and the clustering number. According to the resource optimization results, a clustering algorithm is proposed. The simulation results show that, compared to the traditional model, the cluster heads of the proposed model can achieve more transmission power, and that there exist an optimal sensing time and clustering number that maximize the spectrum access probability.

  15. An Adaptive Clustering Approach Based on Minimum Travel Route Planning for Wireless Sensor Networks with a Mobile Sink

    PubMed Central

    Tang, Jiqiang; Yang, Wu; Zhu, Lingyun; Wang, Dong; Feng, Xin

    2017-01-01

    In recent years, Wireless Sensor Networks with a Mobile Sink (WSN-MS) have been an active research topic due to the widespread use of mobile devices. However, how to get the balance between data delivery latency and energy consumption becomes a key issue of WSN-MS. In this paper, we study the clustering approach by jointly considering the Route planning for mobile sink and Clustering Problem (RCP) for static sensor nodes. We solve the RCP problem by using the minimum travel route clustering approach, which applies the minimum travel route of the mobile sink to guide the clustering process. We formulate the RCP problem as an Integer Non-Linear Programming (INLP) problem to shorten the travel route of the mobile sink under three constraints: the communication hops constraint, the travel route constraint and the loop avoidance constraint. We then propose an Imprecise Induction Algorithm (IIA) based on the property that the solution with a small hop count is more feasible than that with a large hop count. The IIA algorithm includes three processes: initializing travel route planning with a Traveling Salesman Problem (TSP) algorithm, transforming the cluster head to a cluster member and transforming the cluster member to a cluster head. Extensive experimental results show that the IIA algorithm could automatically adjust cluster heads according to the maximum hops parameter and plan a shorter travel route for the mobile sink. Compared with the Shortest Path Tree-based Data-Gathering Algorithm (SPT-DGA), the IIA algorithm has the characteristics of shorter route length, smaller cluster head count and faster convergence rate. PMID:28445434

  16. Portable Wideband Microwave Imaging System for Intracranial Hemorrhage Detection Using Improved Back-projection Algorithm with Model of Effective Head Permittivity

    PubMed Central

    Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.

    2016-01-01

Intracranial hemorrhage is a medical emergency that requires rapid detection and treatment to keep any brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage that causes a strong microwave scattering. The system uses a compact sensing antenna, which has an ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data is processed to create a clear image of the brain using an improved back-projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests are conducted on healthy subjects with different head sizes. The reconstructed images are statistically analyzed, and the absence of false-positive results indicates the efficacy of the proposed system in future preclinical trials. PMID:26842761
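
    A simplified confocal delay-and-sum back-projection with a single effective permittivity, as a rough stand-in for the improved algorithm described above; the paper derives a more elaborate permittivity model, and eps_eff here is an illustrative value.

        import numpy as np

        C0 = 3e8  # free-space speed of light, m/s

        def backproject(signals, t, antennas, grid_xy, eps_eff=40.0):
            """Confocal delay-and-sum: for each pixel, sum each antenna's echo
            at the two-way delay implied by an effective head permittivity.
            signals: list of 1D echo traces sampled at times t;
            antennas: list of (2,) antenna positions; grid_xy: (N, 2) pixels."""
            v = C0 / np.sqrt(eps_eff)                     # in-head propagation speed
            image = np.zeros(len(grid_xy))
            for sig, ant in zip(signals, antennas):
                d = np.linalg.norm(grid_xy - np.asarray(ant), axis=1)
                tau = 2.0 * d / v                         # round-trip delay
                image += np.interp(tau, t, sig, left=0.0, right=0.0)
            return image ** 2                             # intensity map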

  17. Portable Wideband Microwave Imaging System for Intracranial Hemorrhage Detection Using Improved Back-projection Algorithm with Model of Effective Head Permittivity

    NASA Astrophysics Data System (ADS)

    Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.

    2016-02-01

Intracranial hemorrhage is a medical emergency that requires rapid detection and treatment to keep any brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage that causes a strong microwave scattering. The system uses a compact sensing antenna, which has an ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data is processed to create a clear image of the brain using an improved back-projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests are conducted on healthy subjects with different head sizes. The reconstructed images are statistically analyzed, and the absence of false-positive results indicates the efficacy of the proposed system in future preclinical trials.

  18. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

Advances in display technologies, both for head-mounted devices and television panels, demand resolution increases beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the stage of conversion from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  19. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

Localization of active neural source (ANS) from measurements on head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements through formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide "smooth" reconstructed MFD with the decrease in unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it was demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.

  20. A deformable head and neck phantom with in-vivo dosimetry for adaptive radiotherapy quality assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graves, Yan Jiang; Smith, Arthur-Allen; Mcilvena, David

Purpose: Patients’ interfractional anatomic changes can compromise the initial treatment plan quality. To overcome this issue, adaptive radiotherapy (ART) has been introduced. Deformable image registration (DIR) is an important tool for ART, and several deformable phantoms have been built to evaluate the algorithms’ accuracy. However, there is a lack of deformable phantoms that can also provide dosimetric information to verify the accuracy of the whole ART process. The goal of this work is to design and construct a deformable head and neck (HN) ART quality assurance (QA) phantom with in vivo dosimetry. Methods: An axial slice of a HN patient is taken as a model for the phantom construction. Six anatomic materials are considered, with HU numbers similar to a real patient. A filled balloon inside the phantom tissue is inserted to simulate a tumor; deflation of the balloon simulates tumor shrinkage. Nonradiopaque surface markers, which do not influence DIR algorithms, provide the deformation ground truth. Fixed and movable holders are built into the phantom to hold a diode for dosimetric measurements. Results: The measured deformations at the surface marker positions can be compared with deformations calculated by a DIR algorithm to evaluate its accuracy. In this study, the authors selected a Demons algorithm as a DIR algorithm example for demonstration purposes. The average error magnitude is 2.1 mm. The point dose measurements from the in vivo diode dosimeters show good agreement with the calculated doses from the treatment planning system, with a maximum difference of 3.1% of the prescription dose, when the treatment plans are delivered to the phantom with the original or deformed geometry. Conclusions: In this study, the authors have presented the functionality of this deformable HN phantom for testing the accuracy of DIR algorithms and verifying ART dosimetric accuracy. The authors’ experiments demonstrate the feasibility of this phantom serving as an end-to-end ART QA phantom.

  1. Comparison and combination of several MeSH indexing approaches

    PubMed Central

    Yepes, Antonio Jose Jimeno; Mork, James G.; Demner-Fushman, Dina; Aronson, Alan R.

    2013-01-01

    MeSH indexing of MEDLINE is becoming a more difficult task for the group of highly qualified indexing staff at the US National Library of Medicine, due to the large yearly growth of MEDLINE and the increasing size of MeSH. Since 2002, this task has been assisted by the Medical Text Indexer or MTI program. We extend previous machine learning analysis by adding a more diverse set of MeSH headings targeting examples where MTI has been shown to perform poorly. Machine learning algorithms exceed MTI’s performance on MeSH headings that are used very frequently and headings for which the indexing frequency is very low. We find that when we combine the MTI suggestions and the prediction of the learning algorithms, the performance improves compared to any single method for most of the evaluated MeSH headings. PMID:24551371

  2. Comparison and combination of several MeSH indexing approaches.

    PubMed

    Yepes, Antonio Jose Jimeno; Mork, James G; Demner-Fushman, Dina; Aronson, Alan R

    2013-01-01

    MeSH indexing of MEDLINE is becoming a more difficult task for the group of highly qualified indexing staff at the US National Library of Medicine, due to the large yearly growth of MEDLINE and the increasing size of MeSH. Since 2002, this task has been assisted by the Medical Text Indexer or MTI program. We extend previous machine learning analysis by adding a more diverse set of MeSH headings targeting examples where MTI has been shown to perform poorly. Machine learning algorithms exceed MTI's performance on MeSH headings that are used very frequently and headings for which the indexing frequency is very low. We find that when we combine the MTI suggestions and the prediction of the learning algorithms, the performance improves compared to any single method for most of the evaluated MeSH headings.

  3. Making the most of injury surveillance data: Using narrative text to identify exposure information in case-control studies

    PubMed Central

    Graves, Janessa M.; Whitehill, Jennifer M.; Hagel, Brent E.; Rivara, Frederick P.

    2015-01-01

Introduction: Free-text fields in injury surveillance databases can provide detailed information beyond routinely coded data. Additional data, such as exposures and covariates, can be identified from narrative text and used to conduct case-control studies. Methods: To illustrate this, we developed a text-search algorithm to identify helmet status (worn, not worn, use unknown) in the U.S. National Electronic Injury Surveillance System (NEISS) narratives for bicycling and other sports injuries from 2005 to 2011. We calculated adjusted odds ratios (ORs) for head injury associated with helmet use, with non-head injuries representing controls. For bicycling, we validated ORs against published estimates. ORs were also calculated for other sports, and we examined factors associated with helmet reporting. Results: Of 105,614 bicycling injury narratives reviewed, 14.1% contained sufficient helmet information for use in the case-control study. The adjusted ORs for head injuries associated with helmet-wearing were smaller than, but directionally consistent with, previously published estimates (e.g., the 1999 Cochrane Review). ORs illustrated a protective effect of helmets (ORs less than 1) for other sports as well. Conclusions: This exploratory analysis illustrates the potential utility of relatively simple text-search algorithms to identify additional variables in surveillance data. Limitations of this study include possible selection bias and the inability to identify individuals with multiple injuries. A similar approach can be applied to study other injuries, conditions, risks, or protective factors, and may serve as an efficient method to extend the utility of injury surveillance data for epidemiological research. PMID:25498331
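
    A toy version of such a text-search classifier, together with the 2x2 odds-ratio computation. The regular expressions here are illustrative assumptions and much simpler than the study's validated keyword list.

        import re

        NO_HELMET = re.compile(r"\b(no|not wearing|without|w/o)\s+(a\s+)?helmet", re.I)
        HELMET = re.compile(r"\bhelmet(ed)?\b", re.I)

        def helmet_status(narrative):
            """Classify a narrative as 'worn', 'not worn', or 'unknown'
            (negation patterns must be checked first)."""
            if NO_HELMET.search(narrative):
                return "not worn"
            if HELMET.search(narrative):
                return "worn"
            return "unknown"

        def odds_ratio(a, b, c, d):
            """OR for head injury given helmet use:
            a, b = head / non-head injured, helmet worn;
            c, d = head / non-head injured, helmet not worn."""
            return (a * d) / (b * c)

        print(helmet_status("PT FELL OFF BIKE, NO HELMET"))  # not worn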

  4. Modeling of light distribution in the brain for topographical imaging

    NASA Astrophysics Data System (ADS)

    Okada, Eiji; Hayashi, Toshiyuki; Kawaguchi, Hiroshi

    2004-07-01

    Multi-channel optical imaging system can obtain a topographical distribution of the activated region in the brain cortex by a simple mapping algorithm. Near-infrared light is strongly scattered in the head and the volume of tissue that contributes to the change in the optical signal detected with source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. We report theoretical investigations on the spatial resolution of the topographic imaging of the brain activity. The head model for the theoretical study consists of five layers that imitate the scalp, skull, subarachnoid space, gray matter and white matter. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The source-detector pairs are one dimensionally arranged on the surface of the model and the distance between the adjoining source-detector pairs are varied from 4 mm to 32 mm. The change in detected intensity caused by the absorption change is obtained by Monte Carlo simulation. The position of absorption change is reconstructed by the conventional mapping algorithm and the reconstruction algorithm using the spatial sensitivity profiles. We discuss the effective interval between the source-detector pairs and the choice of reconstruction algorithms to improve the topographic images of brain activity.

  5. Quality assessment of MEG-to-MRI coregistrations

    NASA Astrophysics Data System (ADS)

    Sonntag, Hermann; Haueisen, Jens; Maess, Burkhard

    2018-04-01

For high precision in source reconstruction of magnetoencephalography (MEG) or electroencephalography data, high accuracy of the coregistration of sources and sensors is mandatory. Usually, the source space is derived from magnetic resonance imaging (MRI). In most cases, however, no quality assessment is reported for sensor-to-MRI coregistrations; if any, typically root mean squares (RMS) of point residuals are provided. It has been shown, however, that RMS of residuals do not correlate with coregistration errors. We suggest using the target registration error (TRE) as the criterion for the quality of sensor-to-MRI coregistrations. TRE measures the effect of uncertainty in coregistrations at all points of interest. In total, 5544 data sets with sensor-to-head and 128 head-to-MRI coregistrations, from a single MEG laboratory, were analyzed. An adaptive Metropolis algorithm was used to estimate the optimal coregistration and to sample the coregistration parameters (rotation and translation). We found an average TRE between 1.3 and 2.3 mm at the head surface. Further, we observed a mean absolute difference in coregistration parameters between the Metropolis and iterative closest point algorithms of (1.9 ± 15)° and (1.1 ± 9) mm. A paired-sample t-test indicated a significant improvement in goal function minimization by using the Metropolis algorithm. The sampled parameters allowed computation of the TRE on the entire grid of the MRI volume. Hence, we recommend the Metropolis algorithm for head-to-MRI coregistrations.

  6. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    PubMed

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa ) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients were developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is called to create approximate spatial zonations of T, Sske , and Sskv . UCODE-2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.

  7. Automated recognition of rear seat occupants' head position using Kinect™ 3D point cloud.

    PubMed

    Loeb, Helen; Kim, Jinyong; Arbogast, Kristy; Kuo, Jonny; Koppel, Sjaan; Cross, Suzanne; Charlton, Judith

    2017-12-01

Child occupant safety in motor-vehicle crashes is evaluated using Anthropomorphic Test Devices (ATD) seated in optimal positions. However, child occupants often assume suboptimal positions during real-world driving trips. Head impact to the seat back has been identified as one important injury causation scenario for seat-belt-restrained, head-injured children (Bohman et al., 2011). There is therefore a need to understand the interaction of children with the Child Restraint System (CRS) to optimize protection. Naturalistic driving studies (NDS) will improve understanding of out-of-position (OOP) trends. To quantify OOP positions, an NDS was conducted in which families used a study vehicle for two weeks during their everyday driving trips. The positions of rear-seated child occupants, representing 22 families, were evaluated. The study vehicle, instrumented with data acquisition systems including Microsoft Kinect™ V1, recorded rear seat occupants in 1120 driving trips. Three novel analytical methods were used to analyze the data. To assess skeletal tracking accuracy, analysts recorded occurrences where Kinect™ exhibited invalid head recognition among a randomly-selected subset (81 trips). Errors included incorrect target detection (e.g., vehicle headrest) or environmental interference (e.g., sunlight). When head data was present, Kinect™ was correct 41% of the time; two other algorithms, filtering for extreme motion and background subtraction/head-based depth detection, are described in this paper and preliminary results are presented. Accuracy estimates were not possible because of the experimental nature of these methods and the difficulty of establishing a ground truth for this large database. This NDS tested methods to quantify the frequency and magnitude of head positions for rear-seated child occupants utilizing Kinect™ motion-tracking. The results informed recent ATD sled tests that replicated observed positions (most common and most extreme) and assessed the validity of child occupant protection for these typical CRS uses. Optimal protection in vehicles requires an understanding of how child occupants use the rear seat space. This study explored the feasibility of using Kinect™ to log positions of rear-seated child occupants. Initial analysis used the Kinect™ system's skeleton recognition and two novel analytical algorithms to log head location. This research will lead to further analysis leveraging Kinect™ raw data and other NDS data to quantify the frequency/magnitude of OOP situations, ATD sled tests that replicate observed positions, and advances in the design and testing of child occupant protection technology. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  8. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results will demonstrate the convergence of the perturbed alternating optimization algorithm, as well as, the effectiveness of the proposed algorithm to minimize the network power consumption for multicast Cloud-RAN.

  9. Estimation of flow properties using surface deformation and head data: A trajectory-based approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, D.W.

    2004-07-12

A trajectory-based algorithm provides an efficient and robust means to infer flow properties from surface deformation and head data. The algorithm is based upon the concept of an "arrival time" of a drawdown front, which is defined as the time corresponding to the maximum slope of the drawdown curve. The technique involves three steps: the inference of head changes as a function of position and time, the use of the estimated head changes to define arrival times, and the inversion of the arrival times for flow properties. Trajectories, computed from the output of a numerical simulator, are used to relate the drawdown arrival times to flow properties. The inversion algorithm is iterative, requiring one reservoir simulation for each iteration. The method is applied to data from a set of 14 tiltmeters, located at the Raymond Quarry field site in California. Using the technique, I am able to image a high-conductivity channel which extends to the south of the pumping well. The presence of this permeable pathway is supported by an analysis of earlier cross-well transient pressure test data.
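
    The arrival-time definition above (the time of maximum slope of the drawdown curve) reduces to a few lines:

        import numpy as np

        def arrival_time(t, drawdown):
            """Arrival time of the drawdown front: the time at which the slope
            of the drawdown curve is steepest (maximum |ds/dt|)."""
            slope = np.gradient(drawdown, t)
            return t[np.argmax(np.abs(slope))]

        t = np.linspace(0.0, 10.0, 200)
        s = 1.0 / (1.0 + np.exp(-(t - 4.0)))   # synthetic drawdown curve
        print(arrival_time(t, s))              # ~4.0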

  10. Coherent Waves in Seismic Researches

    NASA Astrophysics Data System (ADS)

    Emanov, A.; Seleznev, V. S.

    2013-05-01

The development of digital algorithms for processing seismic wave fields, with the purpose of picking useful events to study the environment and other objects, is the basis for establishing new seismic techniques. The present paper builds on a fundamental property of seismic wave fields: coherence. The authors extend the notion of coherence types in observed wave fields and devise a technique for selecting coherent components from an observed wave field. Time coherence and space coherence are widely known; here the concept of "parameter coherence" is added. The parameter with respect to which a wave field is coherent can be of many kinds, because the wave field is a multivariate process described by a set of parameters. Coherence primarily means the independence of a linear connection in the wave field from a parameter. In seismic wave fields recorded in confined spaces, in building blocks, and in stratified media, time-coherent standing waves are formed. In prospecting seismology, with observation systems using multiple overlapping, head waves are coherent along the parallel correlation course or, in other words, along one measurement on the generalized plane of the observation system. For detailed prospecting seismology with such observation systems, algorithms based on this coherence property have been developed that convert seismic records into head-wave time sections containing neither reflected nor other types of waves. Conversion into a time section can be executed on any specified observation base. Energy stacking of head waves relative to noise, based on the multiplicity of the observation system, is realized within the area of head-wave recording. Conversion on a base below the area of wave tracking is performed with a loss of signal-to-noise ratio relative to the maximum of this ratio afforded by the observation system. The construction of head-wave time sections and dynamic plots as a basis of automatic processing has been developed, similar to the CDP procedure in the reflected-wave method. Using the developed algorithms for converting head waves into time sections, studies of refracting boundaries in Siberia have been carried out. Beyond refraction surveys, applying the head-wave conversion to seismograms from the reflected-wave method provides information about refracting horizons in the upper part of the section in addition to the reflecting-horizon data. The recovery of the coherent components of a wave field underpins engineering seismology at the required level of accuracy and detail. In seismic microzoning, the resonance frequencies of the upper part of the section are determined on the basis of this method, and maps of oscillation amplification and result accuracy are constructed for each frequency. The same method makes it possible to study standing-wave fields in buildings and constructions with high accuracy and detail, enabling diagnostics of their physical state from the set of natural frequencies and modes of self-oscillation examined in high detail. The standing-wave method also permits estimating the seismic stability of a structure at a new level of accuracy.

  11. Kinematic Model-Based Pedestrian Dead Reckoning for Heading Correction and Lower Body Motion Tracking.

    PubMed

    Lee, Min Su; Ju, Hojin; Song, Jin Woo; Park, Chan Gook

    2015-11-06

In this paper, we present a method for finding the enhanced heading and position of pedestrians by fusing Zero velocity UPdaTe (ZUPT)-based pedestrian dead reckoning (PDR) with the kinematic constraints of the lower human body. ZUPT is a well-known algorithm for PDR and provides a sufficiently accurate position solution over short periods, but it cannot guarantee a stable and reliable heading because it suffers from magnetic disturbance in determining heading angles, which degrades the overall position accuracy as time passes. The basic idea of the proposed algorithm is to integrate the left and right foot positions obtained by ZUPTs with the heading and position information from an IMU mounted on the waist. To integrate this information, a kinematic model of the lower human body, calculated by using orientation sensors mounted on both thighs and calves, is adopted. We note that the positions of the left and right feet cannot be far apart because of the kinematic constraints of the body, so the kinematic model generates new measurements for the waist position. The Extended Kalman Filter (EKF) on the waist data, which estimates and corrects error states, uses these measurements together with magnetic heading measurements, enhancing the heading accuracy. The updated position information is fed into the foot-mounted sensors, and re-update processes are performed to correct the position error of each foot. The proposed update/re-update technique consequently ensures improved observability of error states and position accuracy. Moreover, the proposed method provides all the information about the lower human body, so that it can be applied more effectively to motion tracking. The effectiveness of the proposed algorithm is verified via experimental results, which show that a 1.25% Return Position Error (RPE) with respect to walking distance is achieved.
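
    ZUPT rests on detecting stance (zero-velocity) intervals from the foot-mounted IMU. A common threshold detector is sketched below as an assumption; the record does not give the authors' detector, and the thresholds are illustrative.

        import numpy as np

        G = 9.81  # gravity, m/s^2

        def zero_velocity(acc, gyro, acc_tol=0.4, gyro_tol=0.5):
            """Flag stance samples for ZUPT: specific force close to gravity
            and low angular rate. acc, gyro: (N, 3) arrays in m/s^2 and rad/s."""
            acc_ok = np.abs(np.linalg.norm(acc, axis=1) - G) < acc_tol
            gyro_ok = np.linalg.norm(gyro, axis=1) < gyro_tol
            return acc_ok & gyro_ok   # True where the foot is assumed stationary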

  12. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539

  13. A priori mesh grading for the numerical calculation of the head-related transfer functions

    PubMed Central

    Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr

    2017-01-01

    Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM as a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs in the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects including meshes of three human listeners and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A-priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception based predictions of sound-localization performance, and computational costs. PMID:28239186

  14. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Pi, Y; Chen, Z

    2016-06-15

Purpose: To evaluate the differences in ROI contours and accumulated dose obtained with different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During treatment, patients were rescanned every week, with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were re-designed with a consistent dose prescription on the rescanned CT and executed for one week on a Siemens CT-on-rails accelerator. In the end, six weekly CT scans (CT1 to CT6) with six weekly treatment plans were available for each patient. The primary CT1 was set as the reference CT for DIR with the remaining five weekly CTs, using the ANACONDA and MORFEUS algorithms separately in RayStation, with the external skin ROI set as the controlling ROI in both. All calculated weekly doses were deformed and accumulated on the reference CT1 according to the deformation vector fields (DVFs) generated by the two DIR algorithms, yielding both ANACONDA-based and MORFEUS-based accumulated total doses on CT1 for each patient. At the same time, the ROIs on CT1 were mapped to generate corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms, and DICE coefficients between the DIR-deformed ROIs and the radiation-oncologist-delineated ROIs on CT6 were calculated. Results: For the DIR-accumulated dose, PTV D95 and left-eyeball Dmax showed significant differences of 67.13 cGy and 109.29 cGy, respectively. For the DIR-mapped ROIs, the PTV, spinal cord and left optic nerve showed DICE differences of −0.025, −0.127 and −0.124. Conclusion: Even two excellent DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more treatment planning systems integrate DIR modules, there is an urgent need to recognize the potential risk of using DIR clinically.

  15. The ANACONDA algorithm for deformable image registration in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weistrand, Ola; Svensson, Stina, E-mail: stina.svensson@raysearchlabs.com

    2015-01-15

Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: The ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. an image similarity term; 2. a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape-based regularization term, which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term, added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, the Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration, although for the head and neck case the sample set is too small to show statistical significance. Conclusions: ANACONDA performs well in comparison with other algorithms. By including CT/CBCT data in the validation, the various aspects of the algorithm, such as its ability to handle different modalities, large deformations, and air pockets, are shown.

  16. A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis

    NASA Astrophysics Data System (ADS)

    Aiello, Martina; Gianinetto, Marco

    2017-10-01

Marine routes represent a huge portion of commercial and human trade; therefore surveillance, security and environmental protection themes are gaining increasing importance. Able to overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted a renewed interest in the continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions, with complementarity in terms of geographic coverage and geometric detail. The method adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. A spatial analysis then retrieves the vessels' position, length and heading parameters, and a speed range is associated. Optimization of the image processing chain is performed by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurement when AIS data are not available. The estimation of length shows R2=0.85 and the estimation of heading R2=0.92, computed as the average of R2 values obtained for both optical and radar images.

  17. Enabling Disabled Persons to Gain Access to Digital Media

    NASA Technical Reports Server (NTRS)

    Beach, Glenn; OGrady, Ryan

    2011-01-01

A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.

  18. Improvements in pencil beam scanning proton therapy dose calculation accuracy in brain tumor cases with a commercial Monte Carlo algorithm.

    PubMed

    Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco

    2018-05-04

Validation of a commercial Monte Carlo (MC) algorithm (RayStation ver6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements, a 2D ion chamber array detector (MatriXX PT) was placed underneath the following targets: 1) an anthropomorphic head phantom (with two different thicknesses) and 2) a biological sample (i.e., half a lamb's head). In addition, we compared the MC dose engine with the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e., requiring a range shifter), different air gaps, and different gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan the PB and MC dose calculations were compared to measurements using a gamma analysis metric (3%, 3 mm). Results: For the head phantom the gamma passing rate (GPR) was always >96% and on average >99% for the MC algorithm. The PB algorithm had a GPR ≤90% for all delivery configurations with a single slab (apart from a 95% GPR at gantry 0° and small air gap), and with two slabs of the head phantom the GPR was >95% only for small air gaps, for all three simulated beam gantry angles (0°, 45° and 70°). Overall the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organs at risk (up to 30%). We found similar results (slightly worse for the PB algorithm) for the two targets in the lamb's head, where only two beam gantry angles were simulated. Conclusions: Our results suggest that in PBS proton therapy the range shifter (RS) needs to be used with extreme caution when planning treatments with an analytical algorithm, owing to potentially large discrepancies between the planned dose and the dose delivered to the patient, also in the case of brain tumours where this issue could be underestimated. Our results also suggest that an MC evaluation of the dose should be performed every time the RS is used and, especially, when it is used with large air gaps and beam directions tangential to the patient surface. © 2018 Institute of Physics and Engineering in Medicine.
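
    A simplified global 2D gamma pass-rate computation in the spirit of the (3%, 3 mm) metric used above: a brute-force search over a distance-to-agreement window. This is a sketch, not a clinical tool; local-dose normalization, interpolation and edge handling (np.roll wraps around, which is ignored here for brevity) are all simplified.

        import numpy as np

        def gamma_pass_rate(ref, meas, dx=1.0, dd=0.03, dta=3.0):
            """Global gamma pass rate for two 2D dose grids (pixel pitch dx in
            mm, dose criterion dd as a fraction of max, DTA criterion in mm)."""
            r = int(np.ceil(dta / dx))
            norm = dd * ref.max()
            gam = np.full(ref.shape, np.inf)
            for oy in range(-r, r + 1):
                for ox in range(-r, r + 1):
                    dist2 = (oy * dx) ** 2 + (ox * dx) ** 2
                    if dist2 > dta ** 2:
                        continue
                    shifted = np.roll(np.roll(meas, oy, axis=0), ox, axis=1)
                    g = (shifted - ref) ** 2 / norm ** 2 + dist2 / dta ** 2
                    gam = np.minimum(gam, g)
            return (np.sqrt(gam) <= 1.0).mean()   # fraction of passing pixels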

  19. Demonstration of Land and Hold Short Technology at the Dallas-Fort Worth International Airport

    NASA Technical Reports Server (NTRS)

    Hyer, Paul V.; Jones, Denise R. (Technical Monitor)

    2002-01-01

    A guidance system for assisting in Land and Hold Short operations was developed and then tested at the Dallas-Fort Worth International Airport. This system displays deceleration advisory information on a head-up display (HUD) in front of the airline pilot during landing. The display includes runway edges, a trend vector, deceleration advisory, locations of the hold line and of the selected exit, and alphanumeric information about the progress of the aircraft. Deceleration guidance is provided to the hold short line or to a pilot selected exit prior to this line. Logic is provided to switch the display automatically to the next available exit. The report includes descriptions of the algorithms utilized in the displays, and a report on the techniques of HUD alignment, and results.

  20. Representing pump-capacity relations in groundwater simulation models

    USGS Publications Warehouse

    Konikow, Leonard F.

    2010-01-01

    The yield (or discharge) of constant-speed pumps varies with the total dynamic head (or lift) against which the pump is discharging. The variation in yield over the operating range of the pump may be substantial. In groundwater simulations that are used for management evaluations or other purposes, where predictive accuracy depends on the reliability of future discharge estimates, model reliability may be enhanced by including the effects of head-capacity (or pump-capacity) relations on the discharge from the well. A relatively simple algorithm has been incorporated into the widely used MODFLOW groundwater flow model that allows a model user to specify head-capacity curves. The algorithm causes the model to automatically adjust the pumping rate each time step to account for the effect of drawdown in the cell and changing lift, and will shut the pump off if lift exceeds a critical value. The algorithm is available as part of a new multinode well package (MNW2) for MODFLOW.
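
    The head-capacity adjustment reduces to table interpolation with a critical-lift cutoff; a sketch with a hypothetical curve (not MNW2's actual tables):

        import numpy as np

        LIFT_M = np.array([10.0, 20.0, 30.0, 40.0, 50.0])        # total dynamic head (m)
        Q_M3D = np.array([2000.0, 1800.0, 1400.0, 800.0, 0.0])   # discharge (m^3/d)
        LIFT_CRITICAL = 50.0

        def pump_discharge(water_level_m, outlet_elev_m):
            """Discharge for the current time step; the well shuts off when
            the lift exceeds the critical value."""
            lift = outlet_elev_m - water_level_m
            if lift >= LIFT_CRITICAL:
                return 0.0
            return float(np.interp(lift, LIFT_M, Q_M3D))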

  2. Theoretical and experimental study on near infrared time-resolved optical diffuse tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Huijuan; Gao, Feng; Tanikawa, Yukari; Yamada, Yukio

    2006-08-01

    Parts of the work of our group over the past five years on near infrared time-resolved (TR) optical tomography are summarized in this paper. The image reconstruction algorithm is based on a Newton-Raphson scheme with a datatype R generated from the modified Generalized Pulse Spectrum Technique. Firstly, the algorithm is evaluated with simulated data from a 2-D model and the datatype R is compared with other commonly used datatypes. In the second part of the paper, in vitro and in vivo NIR DOT imaging of a chicken leg and a human forearm, respectively, are presented to evaluate both the image reconstruction algorithm and the TR measurement system. The third part of this paper concerns the differential pathlength factor (DPF) of the human head when monitoring brain activity with NIRS and applying the modified Lambert-Beer law. Benefiting from the TR system, the measured DPF maps of three important areas of the human head are presented in this paper.
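
    The modified Lambert-Beer step is a one-liner once the DPF is known; a sketch with illustrative values:

        def concentration_change(delta_od, eps_per_mM_cm, separation_cm, dpf):
            """Modified Lambert-Beer law: delta_OD = eps * delta_c * d * DPF,
            solved for the chromophore concentration change delta_c."""
            return delta_od / (eps_per_mM_cm * separation_cm * dpf)

        # e.g. delta_OD = 0.012, eps = 1.05 /(mM*cm), 3 cm source-detector
        # separation, DPF = 6.0 -> ~6.3e-4 mM (0.63 uM)
        print(concentration_change(0.012, 1.05, 3.0, 6.0))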

  3. Pulsation Detection from Noisy Ultrasound-Echo Moving Images of Newborn Baby Head Using Fourier Transform

    NASA Astrophysics Data System (ADS)

    Yamada, Masayoshi; Fukuzawa, Masayuki; Kitsunezuka, Yoshiki; Kishida, Jun; Nakamori, Nobuyuki; Kanamori, Hitoshi; Sakurai, Takashi; Kodama, Souichi

    1995-05-01

    In order to detect pulsation from a series of noisy ultrasound-echo moving images of a newborn baby's head for pediatric diagnosis, a digital image processing system capable of recording at the video rate and processing the recorded series of images was constructed. The time-sequence variations of each pixel value in a series of moving images were analyzed, and an algorithm based on the Fourier transform was developed for pulsation detection, exploiting the fact that the pulsation associated with blood flow varies periodically with the heartbeat. Pulsation detection for pediatric diagnosis was successfully performed on a series of noisy ultrasound-echo moving images of a newborn baby's head using the image processing system and the pulsation detection algorithm developed here.
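
    The per-pixel Fourier idea can be sketched as a band-power map at cardiac frequencies (frame layout, frame rate, and band edges are assumptions):

        import numpy as np

        def pulsation_map(frames, fps, f_lo=1.5, f_hi=3.5):
            """Fraction of each pixel's temporal power in a heart-rate band
            (newborn heart rates are roughly 100-200 bpm, i.e. ~1.7-3.3 Hz).
            frames: (T, H, W) array of pixel values."""
            frames = np.asarray(frames, dtype=float)
            spec = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0)) ** 2
            freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
            band = (freqs >= f_lo) & (freqs <= f_hi)
            return spec[band].sum(axis=0) / (spec[1:].sum(axis=0) + 1e-12)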

  4. Performance Analysis of Cluster Formation in Wireless Sensor Networks.

    PubMed

    Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo

    2017-12-13

    Clustered-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked. Namely, the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a clustered-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes, specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.
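
    A compact sketch of the k-medoids flavor of cluster-head selection, which (unlike k-means centroids) guarantees that every head is an actual sensor node; the seeding, distance choice, and iteration cap are our own:

        import numpy as np

        def kmedoids_cluster_heads(xy, k, iters=50, seed=0):
            """xy: (N, 2) node positions; returns head indices and labels."""
            rng = np.random.default_rng(seed)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            medoids = rng.choice(len(xy), size=k, replace=False)
            for _ in range(iters):
                labels = np.argmin(d[:, medoids], axis=1)
                new = medoids.copy()
                for c in range(k):
                    members = np.where(labels == c)[0]
                    # new medoid: member minimizing summed distance to its cluster
                    new[c] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
                if np.array_equal(new, medoids):
                    break
                medoids = new
            return medoids, labels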

  5. Performance Analysis of Cluster Formation in Wireless Sensor Networks

    PubMed Central

    Montiel, Edgar Romo; Rivero-Angeles, Mario E.; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo

    2017-01-01

    Clustered-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked. Namely, the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a clustered-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes, specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes. PMID:29236065

  6. Automated Critical Test Findings Identification and Online Notification System Using Artificial Intelligence in Imaging.

    PubMed

    Prevedello, Luciano M; Erdal, Barbaros S; Ryu, John L; Little, Kevin J; Demirer, Mutlu; Qian, Songyue; White, Richard D

    2017-12-01

    Purpose To evaluate the performance of an artificial intelligence (AI) tool using a deep learning algorithm for detecting hemorrhage, mass effect, or hydrocephalus (HMH) at non-contrast material-enhanced head computed tomographic (CT) examinations and to determine algorithm performance for detection of suspected acute infarct (SAI). Materials and Methods This HIPAA-compliant retrospective study was completed after institutional review board approval. A training and validation dataset of noncontrast-enhanced head CT examinations that comprised 100 examinations of HMH, 22 of SAI, and 124 of noncritical findings was obtained resulting in 2583 representative images. Examinations were processed by using a convolutional neural network (deep learning) using two different window and level configurations (brain window and stroke window). AI algorithm performance was tested on a separate dataset containing 50 examinations with HMH findings, 15 with SAI findings, and 35 with noncritical findings. Results Final algorithm performance for HMH showed 90% (45 of 50) sensitivity (95% confidence interval [CI]: 78%, 97%) and 85% (68 of 80) specificity (95% CI: 76%, 92%), with area under the receiver operating characteristic curve (AUC) of 0.91 with the brain window. For SAI, the best performance was achieved with the stroke window showing 62% (13 of 21) sensitivity (95% CI: 38%, 82%) and 96% (27 of 28) specificity (95% CI: 82%, 100%), with AUC of 0.81. Conclusion AI using deep learning demonstrates promise for detecting critical findings at noncontrast-enhanced head CT. A dedicated algorithm was required to detect SAI. Detection of SAI showed lower sensitivity in comparison to detection of HMH, but showed reasonable performance. Findings support further investigation of the algorithm in a controlled and prospective clinical setting to determine whether it can independently screen noncontrast-enhanced head CT examinations and notify the interpreting radiologist of critical findings. © RSNA, 2017 Online supplemental material is available for this article.
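
    The reported intervals can be reproduced to within a percent or two with a Wilson score interval; a sketch (the paper does not state which interval method it used):

        from math import sqrt

        def wilson_ci(successes, n, z=1.96):
            p = successes / n
            denom = 1 + z ** 2 / n
            center = (p + z ** 2 / (2 * n)) / denom
            half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
            return center - half, center + half

        print(wilson_ci(45, 50))  # HMH sensitivity -> roughly (0.79, 0.96)
        print(wilson_ci(68, 80))  # HMH specificity -> roughly (0.76, 0.91)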

  7. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    PubMed

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

    Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.
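
    A sketch of the per-lenslet quality metric and thresholding that yields a pupil mask (the metric choice and grid handling are illustrative, not the paper's exact method):

        import numpy as np

        def active_lenslets(sh_image, grid=(20, 20), rel_thresh=0.25):
            """Score each lenslet cell of a Shack-Hartmann image and keep
            cells whose spot quality clears a threshold; only those regions
            would then feed the AO control loop."""
            h, w = sh_image.shape
            gy, gx = grid
            cells = sh_image[: h - h % gy, : w - w % gx].reshape(gy, h // gy, gx, w // gx)
            # simple quality metric: spot peak minus cell median intensity
            metric = cells.max(axis=(1, 3)) - np.median(cells, axis=(1, 3))
            return metric > rel_thresh * metric.max()   # (gy, gx) pupil mask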

  8. Model of head-neck joint fast movements in the frontal plane.

    PubMed

    Pedrocchi, A; Ferrigno, G

    2004-06-01

    The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, the physiological limit of the range of motion, the gravitational field, and muscular torques due to voluntary activation as well as to the stretch reflex depending on fusal afferences. Model parameters are partly derived from the literature, when possible, whereas the undetermined block parameters are identified by optimising the model output to fit real kinematic data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematic data on head movements into 'neural equivalent data', especially for assessing head control disease and properly planning the rehabilitation process. In addition, the use of genetic algorithms seems well suited to the parameter identification problem, allowing for a very simple experimental set-up and granting model robustness.
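
    A minimal sketch of the mechanical core such a model integrates (single-axis lateral inclination with damping, stiffness, gravity, and a voluntary torque; all parameter values are illustrative, and a GA would tune them against the motion-capture angles):

        import numpy as np
        from scipy.integrate import solve_ivp

        I, B, K = 0.024, 0.12, 1.8   # inertia (kg m^2), damping, stiffness
        M, G, L = 4.5, 9.81, 0.03    # head mass (kg), gravity, CoM offset (m)

        def head_ode(t, y, muscle_torque):
            theta, omega = y                       # inclination (rad) and rate
            grav = M * G * L * np.sin(theta)       # gravitational torque
            domega = (muscle_torque(t) - B * omega - K * theta + grav) / I
            return [omega, domega]

        torque = lambda t: 1.5 if t < 0.15 else 0.0   # brief voluntary burst
        sol = solve_ivp(head_ode, (0.0, 1.0), [0.0, 0.0], args=(torque,), max_step=0.005)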

  9. Integrated beam orientation and scanning-spot optimization in intensity-modulated proton therapy for brain and unilateral head and neck tumors.

    PubMed

    Gu, Wenbo; O'Connor, Daniel; Nguyen, Dan; Yu, Victoria Y; Ruan, Dan; Dong, Lei; Sheng, Ke

    2018-04-01

    Intensity-Modulated Proton Therapy (IMPT) is the state-of-the-art method of delivering proton radiotherapy. Previous research has mainly focused on optimization of scanning spots with manually selected beam angles. Due to the computational complexity, the potential benefit of simultaneously optimizing beam orientations and spot pattern could not be realized. In this study, we developed a novel integrated beam orientation optimization (BOO) and scanning-spot optimization algorithm for intensity-modulated proton therapy (IMPT). A brain chordoma and three unilateral head-and-neck patients with a maximal target size of 112.49 cm³ were included in this study. A total of 1162 noncoplanar candidate beams evenly distributed across 4π steradians were included in the optimization. For each candidate beam, the pencil-beam doses of all scanning spots covering the PTV and a margin were calculated. The beam angle selection and spot intensity optimization problem was formulated to include three terms: a dose fidelity term to penalize the deviation of PTV and OAR doses from the ideal dose distribution; an L1-norm sparsity term to reduce the number of active spots and improve delivery efficiency; and a group sparsity term to control the number of active beams between 2 and 4. For the group sparsity term, the convex L2,1-norm and the nonconvex L2,1/2-norm were tested. For the dose fidelity term, both a quadratic function and a linearized equivalent uniform dose (LEUD) cost function were implemented. The optimization problem was solved using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The IMPT BOO method was tested on three head-and-neck patients and one skull base chordoma patient. The results were compared with IMPT plans created using column generation selected beams or manually selected beams. The L2,1-norm plan selected spatially aggregated beams, indicating potential degeneracy using this norm. The L2,1/2-norm was able to select spatially separated beams and achieve smaller deviation from the ideal dose. In the L2,1/2-norm plans, the [mean dose, maximum dose] of OAR were reduced by an average of [2.38%, 4.24%] and [2.32%, 3.76%] of the prescription dose for the quadratic and LEUD cost functions, respectively, compared with the IMPT plan using manual beam selection while maintaining the same PTV coverage. The L2,1/2 group sparsity plans were dosimetrically superior to the column generation plans as well. Besides beam orientation selection, spot sparsification was observed. Generally, with the quadratic cost function, 30%~60% of the spots in the selected beams remained active. With the LEUD cost function, the percentage of active spots was in the range of 35%~85%. The BOO-IMPT run time was approximately 20 min. This work shows the first IMPT approach integrating noncoplanar BOO and scanning-spot optimization in a single mathematical framework. This method is computationally efficient, dosimetrically superior and produces delivery-friendly IMPT plans. © 2018 American Association of Physicists in Medicine.
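
    For flavor, a minimal FISTA sketch for the convex variant of the formulation (quadratic dose fidelity plus an L2,1 group-sparsity term, one group per candidate beam); the paper's nonconvex L2,1/2 norm, LEUD option, and spot nonnegativity are omitted:

        import numpy as np

        def prox_group_l21(x, groups, tau):
            """Block soft-threshold, the prox of tau * sum_g ||x_g||_2;
            groups driven to zero correspond to deactivated beams."""
            out = x.copy()
            for g in groups:
                nrm = np.linalg.norm(x[g])
                out[g] = 0.0 if nrm <= tau else (1 - tau / nrm) * x[g]
            return out

        def fista_group(A, b, groups, lam, iters=300):
            """Minimize ||Ax - b||^2 + lam * sum_g ||x_g||_2 (A: spot-to-dose)."""
            L = 2 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(iters):
                grad = 2 * A.T @ (A @ z - b)
                x_new = prox_group_l21(z - grad / L, groups, lam / L)
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                z = x_new + (t - 1) / t_new * (x_new - x)
                x, t = x_new, t_new
            return x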

  10. Clinical Practice Update: Pediculosis Capitis.

    PubMed

    Bohl, Brittany; Evetts, Jessica; McClain, Kymberli; Rosenauer, Amanda; Stellitano, Emily

    2015-01-01

    A review of the current evidence on primary treatment modalities for head lice demonstrates increasing resistance to current regimens. New and alternative therapies are now available. A treatment algorithm was created to address the safety and efficacy of treatments, as well as to guide clinicians in navigating the regimens. Through an online journal search, 59 articles were selected for the review. Literature searches were performed through PubMed, Medline, Ebsco Host, and CINAHL, with key search words of "Pediculosis capitis" and "head lice" in the title, abstract, and index. Meta-analyses and controlled clinical trials were given greater weight if they had a large sample size, were statistically significant, and showed no evidence of bias. When resistant infestations are well-documented in a locality, changes to the treatment regimen are indicated, and alternative treatments should be considered. Recent studies and U.S. Food and Drug Administration (FDA) approvals have changed the available treatment options for Pediculosis capitis, including benzyl alcohol, topical ivermectin, spinosad, and the LouseBuster. Further, environmental management and prevention measures should be taken to avoid reinfestation and to prevent the spread of head lice. Continued study is recommended to establish the long-term safety of new and alternative agents.

  11. Advancing RF pulse design using an open-competition format: Report from the 2015 ISMRM challenge.

    PubMed

    Grissom, William A; Setsompop, Kawin; Hurley, Samuel A; Tsao, Jeffrey; Velikina, Julia V; Samsonov, Alexey A

    2017-10-01

    To advance the best solutions to two important RF pulse design problems with an open head-to-head competition. Two sub-challenges were formulated in which contestants competed to design the shortest simultaneous multislice (SMS) refocusing pulses and slice-selective parallel transmission (pTx) excitation pulses, subject to realistic hardware and safety constraints. Short refocusing pulses are needed for spin echo SMS imaging at high multiband factors, and short slice-selective pTx pulses are needed for multislice imaging in ultra-high field MRI. Each sub-challenge comprised two phases, in which the first phase posed problems with a low barrier of entry, and the second phase encouraged solutions that performed well in general. The Challenge ran from October 2015 to May 2016. The pTx Challenge winners developed a spokes pulse design method that combined variable-rate selective excitation with an efficient method to enforce SAR constraints, which achieved 10.6 times shorter pulse durations than conventional approaches. The SMS Challenge winners developed a time-optimal control multiband pulse design algorithm that achieved 5.1 times shorter pulse durations than conventional approaches. The Challenge led to rapid step improvements in solutions to significant problems in RF excitation for SMS imaging and ultra-high field MRI. Magn Reson Med 78:1352-1361, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. Algorithm for fuel conservative horizontal capture trajectories

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Erzberger, H.

    1981-01-01

    A real-time algorithm for computing constant-altitude fuel-conservative approach trajectories for aircraft is described. The characteristics of the computed trajectory were chosen to approximate the extremal trajectories obtained from the optimal control solution to the problem, and showed a fuel difference of only 0.5 to 2 percent between the real-time algorithm and the extremals, in favor of the extremals. The trajectories may start at any initial position, heading, and speed and end at any other final position, heading, and speed. They consist of straight lines and a series of circular arcs of varying radius to approximate constant bank-angle decelerating turns. Throttle control is maximum thrust, nominal thrust, or zero thrust. Bank-angle control is either zero or approximately 30 deg.
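
    The circular-arc building blocks follow from the coordinated-turn relation R = V^2 / (g tan(bank)); a sketch:

        import math

        def turn_radius_m(tas_mps, bank_deg):
            return tas_mps ** 2 / (9.81 * math.tan(math.radians(bank_deg)))

        def arc_plus_line_m(tas_mps, bank_deg, heading_change_deg, line_m):
            """Length of one constant-bank arc followed by a straight segment."""
            arc = turn_radius_m(tas_mps, bank_deg) * math.radians(abs(heading_change_deg))
            return arc + line_m

        # 120 m/s at 30 deg bank -> ~2.5 km radius; a 90 deg turn adds ~4 km of arc
        print(turn_radius_m(120.0, 30.0), arc_plus_line_m(120.0, 30.0, 90.0, 5000.0))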

  13. Dense-HOG-based drift-reduced 3D face tracking for infant pain monitoring

    NASA Astrophysics Data System (ADS)

    Saeijs, Ronald W. J. J.; Tjon A Ten, Walther E.; de With, Peter H. N.

    2017-03-01

    This paper presents a new algorithm for 3D face tracking intended for clinical infant pain monitoring. The algorithm uses a cylinder head model and 3D head pose recovery by alignment of dynamically extracted templates based on dense-HOG features. The algorithm includes extensions for drift reduction, using re-registration in combination with multi-pose state estimation by means of a square-root unscented Kalman filter. The paper reports experimental results on videos of moving infants in hospital who are relaxed or in pain. Results show good tracking behavior for poses up to 50 degrees from upright-frontal. In terms of eye location error relative to inter-ocular distance, the mean tracking error is below 9%.

  14. Evaluation of an Automated Swallow-Detection Algorithm Using Visual Biofeedback in Healthy Adults and Head and Neck Cancer Survivors.

    PubMed

    Constantinescu, Gabriela; Kuffel, Kristina; Aalto, Daniel; Hodgetts, William; Rieger, Jana

    2018-06-01

    Mobile health (mHealth) technologies may offer an opportunity to address longstanding clinical challenges, such as access and adherence to swallowing therapy. Mobili-T® is an mHealth device that uses surface electromyography (sEMG) to provide biofeedback on submental muscle activity during exercise. An automated swallow-detection algorithm was developed for Mobili-T®. This study evaluated the performance of the swallow-detection algorithm. Ten healthy participants and 10 head and neck cancer (HNC) patients were fitted with the device. Signals were acquired during regular, effortful, and Mendelsohn maneuver saliva swallows, as well as lip presses, tongue, and head movements. Signals of interest were tagged during data acquisition and used to evaluate algorithm performance. Sensitivity and positive predictive values (PPV) were calculated for each participant. Saliva swallows were compared between HNC patients and controls on the four sEMG-based parameters used in the algorithm: duration, peak amplitude ratio, median frequency, and 15th percentile of the power spectral density. In healthy participants, sensitivity and PPV were 92.3 and 83.9%, respectively. In HNC patients, sensitivity was 92.7% and PPV was 72.2%. In saliva swallows, HNC patients had longer event durations (U = 1925.5, p < 0.001), lower median frequency (U = 2674.0, p < 0.001), and lower 15th percentile of the power spectral density [t(176.9) = 2.07, p < 0.001] than healthy participants. The automated swallow-detection algorithm performed well with healthy participants and retained a high sensitivity, but had lowered PPV with HNC patients. With respect to Mobili-T®, the algorithm will next be evaluated using the mHealth system.
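
    A sketch of the four parameter types computed for one candidate event (the sampling rate, the Welch windowing, and the peak-to-RMS reading of "peak amplitude ratio" are our assumptions, not the published definitions):

        import numpy as np
        from scipy.signal import welch

        def semg_features(segment, fs=1000.0):
            f, pxx = welch(segment, fs=fs, nperseg=min(256, len(segment)))
            cum = np.cumsum(pxx) / np.sum(pxx)
            median_freq = f[np.searchsorted(cum, 0.50)]   # 50% spectral power
            p15_freq = f[np.searchsorted(cum, 0.15)]      # 15th power percentile
            duration_s = len(segment) / fs
            peak_ratio = np.max(np.abs(segment)) / (np.sqrt(np.mean(segment ** 2)) + 1e-12)
            return duration_s, peak_ratio, median_freq, p15_freq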

  15. Autonomous Motion Planning Using a Predictive Temporal Method

    DTIC Science & Technology

    2009-01-01

    [Fragmentary DTIC excerpt; only front-matter entries and scattered text survive extraction: a list-of-figures caption, "Target and solution path heading angles for target interception test" (p. 151); a passage on representing the environment as a series of distances and angles; and a note on simplistic vector-driver algorithms that compute the angle between the current vehicle heading and the heading to the goal.]

  16. Management of High-energy Avulsive Ballistic Facial Injury: A Review of the Literature and Algorithmic Approach.

    PubMed

    Vaca, Elbert E; Bellamy, Justin L; Sinno, Sammy; Rodriguez, Eduardo D

    2018-03-01

    High-energy avulsive ballistic facial injuries pose one of the most significant reconstructive challenges. We conducted a systematic review of the literature to evaluate management trends and outcomes for the treatment of devastating ballistic facial trauma. Furthermore, we describe the senior author's early and definitive staged reconstructive approach to these challenging patients. A Medline search was conducted to include studies that described timing of treatment, interventions, complications, and/or aesthetic outcomes. Initial query revealed 41 articles, of which 17 articles met inclusion criteria. A single comparative study revealed that early versus delayed management resulted in a decreased incidence of soft-tissue contracture, required fewer total procedures, and resulted in shorter hospitalizations (level 3 evidence). Seven of the 9 studies (78%) that advocated delayed reconstruction were from the Middle East, whereas 5 of the 6 studies (83%) advocating immediate or early definitive reconstruction were from the United States. No study compared debridement timing directly in a head-to-head fashion, nor described flap selection based on defect characteristics. Existing literature suggests that early and aggressive intervention improves outcomes following avulsive ballistic injuries. Further comparative studies are needed; however, although evidence is limited, the senior author presents a 3-stage reconstructive algorithm advocating early and definitive reconstruction with aesthetic free tissue transfer in an attempt to optimize reconstructive outcomes of these complex injuries.

  17. Management of High-energy Avulsive Ballistic Facial Injury: A Review of the Literature and Algorithmic Approach

    PubMed Central

    Vaca, Elbert E.; Bellamy, Justin L.; Sinno, Sammy

    2018-01-01

    Background: High-energy avulsive ballistic facial injuries pose one of the most significant reconstructive challenges. We conducted a systematic review of the literature to evaluate management trends and outcomes for the treatment of devastating ballistic facial trauma. Furthermore, we describe the senior author’s early and definitive staged reconstructive approach to these challenging patients. Methods: A Medline search was conducted to include studies that described timing of treatment, interventions, complications, and/or aesthetic outcomes. Results: Initial query revealed 41 articles, of which 17 articles met inclusion criteria. A single comparative study revealed that early versus delayed management resulted in a decreased incidence of soft-tissue contracture, required fewer total procedures, and resulted in shorter hospitalizations (level 3 evidence). Seven of the 9 studies (78%) that advocated delayed reconstruction were from the Middle East, whereas 5 of the 6 studies (83%) advocating immediate or early definitive reconstruction were from the United States. No study compared debridement timing directly in a head-to-head fashion, nor described flap selection based on defect characteristics. Conclusions: Existing literature suggests that early and aggressive intervention improves outcomes following avulsive ballistic injuries. Further comparative studies are needed; however, although evidence is limited, the senior author presents a 3-stage reconstructive algorithm advocating early and definitive reconstruction with aesthetic free tissue transfer in an attempt to optimize reconstructive outcomes of these complex injuries. PMID:29707453

  18. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
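
    One ingredient of the rule-based partitioning, a per-slice entropy profile along the scan axis, can be sketched as follows (bin count and volume layout are assumptions):

        import numpy as np

        def slice_entropy_profile(volume, bins=64):
            """volume: (Z, H, W); returns Shannon entropy per axial slice.
            Pronounced changes in this profile, combined with bone and
            sinus profiles, can mark candidate partition locations."""
            ent = []
            for sl in volume:
                hist, _ = np.histogram(sl, bins=bins)
                p = hist[hist > 0] / hist.sum()
                ent.append(float(-(p * np.log2(p)).sum()))
            return np.asarray(ent)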

  19. SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dankwa, A; Castillo, E; Guerrero, T

    Purpose: To create and characterize a reference data set for testing image registration algorithms that transform portal images (PI) to digitally reconstructed radiographs (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four different anatomical sites (head and neck, thoracic, abdominal, and pelvis) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom MATLAB software tool developed in-house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize inter- and intra-observer variations associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance. We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: Using one-way analysis of variance (ANOVA) there were no significant variations among the readers. When Bland-Altman analysis was used, the variation among the readers was acceptable. The variation was higher in the PI compared to the DRR. Conclusion: The variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the corresponding anatomical feature on the PI. We hope this becomes more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation tells us to what degree of accuracy a manual registration can be done. Research supported by the William Beaumont Hospital Research Start-Up Fund.
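
    The B&A acceptance check reduces to a few lines; a sketch using the 3 mm limit stated above:

        import numpy as np

        def bland_altman(a, b, limit_mm=3.0):
            """Bias and 95% limits of agreement between two readers'
            landmark coordinates (mm), plus the acceptance flag."""
            d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            bias = d.mean()
            loa = 1.96 * d.std(ddof=1)
            return bias, (bias - loa, bias + loa), abs(bias) <= limit_mm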

  20. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, J; PLA 302 Hospital, Beijing; Xu, S

    2016-06-15

    Purpose: This study uses Monte Carlo to simulate the CyberKnife system, and intends to develop a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung, and 10 liver cases). A γ analysis with the combined 3 mm/3% criterion was introduced to quantitatively evaluate the accuracy differences among the three algorithms. Results: More than 90% of the global error points were less than 2% for the comparison of the PDD and OAR curves after determining the mean energy and FWHM. A relatively ideal Monte Carlo beam model was thus established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung) of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good. The passing rates (95.93±3.12% and 99.84±0.33%, respectively) of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good. However, the difference in DVHs for lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was poor. Using Monte Carlo simulation to verify the dose distribution of patient plans is feasible. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can serve as a third-party reference tool, which plays an important role in the dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.

  1. [Head and neck adaptive radiotherapy].

    PubMed

    Graff, P; Huger, S; Kirby, N; Pouliot, J

    2013-10-01

    Onboard volumetric imaging systems can provide accurate data on the patient's anatomy during a course of head and neck radiotherapy, making it possible to assess the actual delivered dose and to evaluate the dosimetric impact of complex daily positioning variations and gradual anatomic changes, such as geometric variations of tumors and normal tissues or shrinkage of external contours. Adaptive radiotherapy is defined as the correction of a patient's treatment planning to adapt for individual variations observed during treatment. Strategies are being developed to selectively identify patients that require replanning because of an intolerable dosimetric drift. Automated tools are designed to limit time consumption. Deformable image registration algorithms are the cornerstones of these strategies, but a better understanding of their limits of validity is required before adaptive radiotherapy can be safely introduced into daily practice. Moreover, the clinical benefit has yet to be rigorously demonstrated. Copyright © 2013 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  2. A virtual phantom library for the quantification of deformable image registration uncertainties in patients with cancers of the head and neck.

    PubMed

    Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M

    2013-11-01

    Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual phantoms to show how they may be used to generate estimates of DIR uncertainty. The NCC showed that the simulated phantom images had greater similarity to the actual EOT images than the images from which they were derived, supporting the clinical relevance of the synthetic deformation maps. Calculation of the Jacobian of the "ground-truth" DVFs resulted in only positive values. As an example, mean error statistics are presented for all phantoms for the brainstem, cord, mandible, left parotid, and right parotid. It is essential that DIR algorithms be evaluated using a range of possible clinical scenarios for each treatment site. This work introduces a library of virtual phantoms intended to resemble real cases for interfraction head and neck DIR that may be used to estimate and compare the uncertainty of any DIR algorithm.
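
    A common way to check a ground-truth DVF for folding, as reported above, is the voxelwise Jacobian determinant; a sketch assuming a displacement field of shape (3, Z, Y, X) in consistent units:

        import numpy as np

        def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
            """Determinant of the Jacobian of deformation = identity + DVF;
            values <= 0 indicate folding."""
            grads = [[np.gradient(dvf[c], spacing[a], axis=a) for a in range(3)]
                     for c in range(3)]
            J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)  # (..., 3, 3)
            return np.linalg.det(J + np.eye(3))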

  3. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks.

    PubMed

    Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-07-19

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it applies the maximum-minimum criterion to obtain each node's optimal route to the base station, as sketched below. Simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
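
    The maximum-minimum route criterion can be sketched in a few lines (node ids and energy bookkeeping are illustrative):

        import numpy as np

        def best_route_maximin(routes, residual_energy):
            """Among a node's feasible routes (lists of relay ids), pick the
            one whose weakest relay has the most residual energy, steering
            traffic away from nearly drained nodes."""
            bottleneck = [min(residual_energy[n] for n in r) for r in routes]
            return routes[int(np.argmax(bottleneck))]

        energy = {1: 0.9, 2: 0.2, 3: 0.7, 4: 0.8}
        print(best_route_maximin([[1, 2], [3, 4]], energy))  # -> [3, 4]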

  4. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension allows reconstruction of brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter of each subject, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-01-01

    A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has been recently developed and demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) nontruncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder were shown to result in severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel size, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.

  6. Three-dimensional digital mapping of the optic nerve head cupping in glaucoma

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Ramirez, Manuel; Morales, Jose

    1992-08-01

    Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks including feature extraction, registration by cepstral analysis, and correction for intensity variations are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching technique used to obtain the translational differences at each step relies on cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography produced by this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the criteria for the diagnosis of glaucoma.
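
    A closely related FFT-based shift estimator, phase correlation, shown here in place of the cepstral matching actually used (integer-pixel version):

        import numpy as np

        def estimate_shift(a, b):
            """Integer translation between two equally sized image patches."""
            xps = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            corr = np.fft.ifft2(xps / (np.abs(xps) + 1e-12)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > a.shape[0] // 2:
                dy -= a.shape[0]   # wrap to signed shifts
            if dx > a.shape[1] // 2:
                dx -= a.shape[1]
            return dy, dx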

  7. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After the locations and signals of the virtual sources are estimated, the spatial sound at the selected listening point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.
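
    The reconstruction step can be sketched as per-ear convolution of each separated virtual source with the head-related impulse response (HRIR) for its direction; the hrir_bank lookup and equal-length source signals are assumptions:

        import numpy as np
        from scipy.signal import fftconvolve

        def render_binaural(sources, hrir_bank):
            """sources: iterable of (signal, azimuth_deg) pairs;
            hrir_bank[azimuth_deg] -> (hrir_left, hrir_right)."""
            left = right = 0.0
            for sig, az in sources:
                h_l, h_r = hrir_bank[az]
                left = left + fftconvolve(sig, h_l)
                right = right + fftconvolve(sig, h_r)
            return np.stack([left, right])   # (2, n) binaural signal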

  8. Identification of the optic nerve head with genetic algorithms.

    PubMed

    Carmona, Enrique J; Rincón, Mariano; García-Feijoó, Julián; Martínez-de-la-Casa, José M

    2008-07-01

    This work proposes creating an automatic system to locate and segment the optic nerve head (ONH) in eye fundus photographic images using genetic algorithms. Domain knowledge is used to create a set of heuristics that guide the various steps involved in the process. Initially, using an eye fundus colour image as input, a set of hypothesis points was obtained that exhibited geometric properties and intensity levels similar to the ONH contour pixels. Next, a genetic algorithm was used to find an ellipse containing the maximum number of hypothesis points within an offset of its perimeter, subject to some constraints. The ellipse thus obtained is the approximation to the ONH. The segmentation method is tested on a sample of 110 eye fundus images, belonging to 55 patients with glaucoma (23.1%) and eye hypertension (76.9%) and randomly selected from an eye fundus image base belonging to the Ophthalmology Service at Miguel Servet Hospital, Saragossa (Spain). The results obtained are competitive with those in the literature. The method's generalization capability is reinforced when it is applied to an image base different from the one used in our study, yielding a discrepancy curve very similar to that obtained with our image base. In addition, the robustness of the proposed method can be seen in the high percentage of images with a discrepancy delta < 5 (96% and 99% in our image base and the external one, respectively). The results also confirm the hypothesis that the ONH contour can be properly approximated with a non-deformable ellipse. Another important aspect of the method is that it directly provides the parameters characterising the shape of the papilla: the lengths of its major and minor axes, its centre location, and its orientation with respect to the horizontal.
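
    A compact sketch of the GA search for an ellipse maximizing hypothesis points near its perimeter (axis-aligned encoding and simple operators; the paper's encoding also evolves orientation):

        import numpy as np

        rng = np.random.default_rng(1)

        def fitness(params, pts, tol=2.0):
            cx, cy, a, b = params
            r = np.hypot((pts[:, 0] - cx) / a, (pts[:, 1] - cy) / b)
            # |r - 1| scaled by the mean radius approximates perimeter distance
            return int(np.sum(np.abs(r - 1.0) * (a + b) / 2.0 < tol))

        def ga_ellipse(pts, pop=60, gens=100, sigma=2.0):
            lo = [pts[:, 0].min(), pts[:, 1].min(), 5.0, 5.0]
            hi = [pts[:, 0].max(), pts[:, 1].max(), 60.0, 60.0]
            P = rng.uniform(lo, hi, size=(pop, 4))
            for _ in range(gens):
                f = np.array([fitness(p, pts) for p in P], dtype=float)
                # fitness-proportional selection, one-point crossover, mutation
                sel = P[rng.choice(pop, pop, p=f / f.sum())] if f.sum() > 0 else P
                cut = rng.integers(1, 4, pop)[:, None]
                kids = np.where(np.arange(4) < cut, sel, sel[rng.permutation(pop)])
                P = kids + rng.normal(0.0, sigma, kids.shape)
                P[:, 2:] = np.maximum(P[:, 2:], 2.0)   # keep axes positive
            f = [fitness(p, pts) for p in P]
            return P[int(np.argmax(f))]   # (cx, cy, a, b)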

  9. A Simple Two Aircraft Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    1999-01-01

    Conflict detection and resolution methods are crucial for distributed air-ground traffic management in which the crew in the cockpit, dispatchers in operations control centers, and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution method.
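
    The detection step reduces to closed-form relative kinematics; a sketch in consistent units:

        import numpy as np

        def closest_point_of_approach(p1, v1, p2, v2):
            """Time-to-go to the closest point of approach and the miss
            distance, assuming both aircraft hold speed and heading."""
            dp = np.subtract(p2, p1)
            dv = np.subtract(v2, v1)
            denom = float(np.dot(dv, dv))
            t_cpa = 0.0 if denom == 0.0 else max(0.0, -float(np.dot(dp, dv)) / denom)
            miss = float(np.linalg.norm(dp + dv * t_cpa))
            return t_cpa, miss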

  10. Robust head pose estimation via supervised manifold learning.

    PubMed

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, with the assumption that, with pose the only variable, the face images should lie on a smooth, low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation, and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction as well as the graph weight by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep the data points with similar pose angles close together but the data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than the other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Improved CT Detection of Acute Herpes Simplex Virus Type 1 Encephalitis Based on a Frequency-Selective Nonlinear Blending: Comparison With MRI.

    PubMed

    Bongers, Malte Niklas; Bier, Georg; Ditt, Hendrik; Beck, Robert; Ernemann, Ulrike; Nikolaou, Konstantin; Horger, Marius

    2016-11-01

    The purpose of this study is to compare the diagnostic efficacy of a new CT postprocessing tool based on frequency-selective nonlinear blending (best-contrast CT) with that of standard linear blending of unenhanced head CT in patients with herpes simplex virus type 1 encephalitis (HSE), using FLAIR MRI sequences as the standard of reference. Fifteen consecutive patients (six women and nine men; mean [± SD] age, 60 ± 19 years) with proven HSE (positive polymerase chain reaction results from CSF analysis and the presence of neurologic deficits) were retrospectively enrolled. All patients had undergone head CT and MRI (mean time interval, 2 ± 2 days). After standardized unenhanced head CT scans were read, presets of the best-contrast algorithm were determined (center, 30 HU; delta, 5 HU; slope, 5, nondimensional), and the resulting images were analyzed. Contrast enhancement was objectively measured by ROI analysis, comparing contrast-to-noise ratios (CNRs) of unenhanced CT and best-contrast CT. FLAIR and DWI MRI sequences were analyzed, with FLAIR considered the standard of reference. For assessment of disease extent, a previously reported 50-point score (HSE score) was used. CNR values increased significantly from unenhanced head CT (5.42 ± 2.77) to best-contrast CT (9.62 ± 4.28) (p = 0.003). FLAIR sequences yielded a median HSE score of 9.0 (range, 6-17) and DWI sequences a median HSE score of 6.0 (range, 5-17). By comparison, unenhanced head CT resulted in a median HSE score of 3.5 (range, 1-6). The median best-contrast CT HSE score was 7.5 (range, 6-10). Agreement between FLAIR and unenhanced CT was 54.44%, that between DWI and best-contrast CT was 95.36%, and that between FLAIR and best-contrast CT was 85.21%. The most frequently overlooked findings were located at the level of the upper part of the mesencephalon and at the subthalamic or insular level. Frequency-selective nonlinear blending significantly increases contrast and detects brain parenchymal involvement in HSE more sensitively than unenhanced CT. The sensitivity of best-contrast CT seems to be equal to that of DWI and almost as good as that of FLAIR.
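
    A sketch under assumed semantics of the (center, delta, slope) preset: contrast is amplified around the center HU value and left untouched far from it. This is our reading for illustration, not the vendor's published transfer function:

        import numpy as np

        def nonlinear_blend(hu, center=30.0, delta=5.0, slope=5.0):
            """Amplify contrast up to 'slope'-fold near 'center', decaying
            over a width of 'delta' HU (assumed form of the blending)."""
            hu = np.asarray(hu, dtype=float)
            gain = 1.0 + (slope - 1.0) * np.exp(-0.5 * ((hu - center) / delta) ** 2)
            return center + gain * (hu - center)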

  12. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multi-modality optic nerve head image fusion approach has been successfully designed. The new approach has been applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It has achieved an excellent result, giving a visualization of fundus or oxygen saturation images with a complete angiogram overlay. During this study, two contributions have been made in terms of novelty, efficiency, and accuracy. The first contribution is an automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features, identifying an initial good guess of control points using the Adaptive Exploratory Algorithm. The second contribution is a heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and finally an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in register, the fused image allows ophthalmologists to match the same eye over time, get a sense of disease progress, and pinpoint surgical tools. The new algorithm can easily be extended to human or animal 3D eye, brain, or body image registration and fusion.

  13. Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago

    2015-11-01

    Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.

  14. Study on compensation algorithm of head skew in hard disk drives

    NASA Astrophysics Data System (ADS)

    Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan

    2011-10-01

    In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real high-capacity storage applications, the head stack may become tilted due to environmental change, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). When these errors fall below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance in severe environments is degraded. In this paper, in-the-field compensation of small DC head skew variation across the stroke is proposed, using a zone table. Test results demonstrate its effectiveness in reducing observer error and enhancing drive performance through accurate prediction of DC head skew.

  15. Penetrating Bihemispheric Traumatic Brain Injury: A Collective Review of Gunshot Wounds to the Head.

    PubMed

    Turco, Lauren; Cornell, David L; Phillips, Bradley

    2017-08-01

    Head injuries that cross midline structures of the brain are bihemispheric. Other terms have been used to describe such injuries, but bihemispheric is the most accurate and should be standard nomenclature. Bihemispheric head injuries are associated with greater mortality and morbidity than other penetrating traumatic brain injuries (TBIs). Currently, there is a tendency to manage severe gunshot wounds (GSWs) to the head nonoperatively, despite reports of improved outcome in military patients treated aggressively. Thus, controversy exists in the management of civilian TBI. PubMed was searched for query terms, and PRISMA guidelines were used. Studies were selected by relevance and inclusion of data regarding etiology, diagnosis, and management of bihemispheric TBI. Case reports, studies not in English, and records lacking information on mechanism or bihemispheric injuries were excluded. Thirteen studies were included and most contained level IV evidence. The mean mortality rate of all head GSWs was 62% in adults and 32% in children. Bihemispheric GSWs had greater mortality rates of 82% in adults and 60% in children. There was a larger proportion of self-inflicted injury in studies with greater rates of bihemispheric injuries. Bihemispheric injuries have greater mortality rates than other penetrating TBI. Violation of midline brain structures such as the diencephalon and mesencephalon, increased rate of self-inflicted wounds, and lack of a standard management algorithm may increase the lethality of these injuries. Although bihemispheric injuries historically have been considered nonsalvageable, an aggressive surgical approach has been shown to improve outcomes, particularly in the military population. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Automated analysis of individual sperm cells using stain-free interferometric phase microscopy and machine learning.

    PubMed

    Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T

    2017-09-01

    Currently, the delicate process of selecting sperm cells to be used for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on the quantitative phase maps acquired through use of interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps, taking into consideration both the cell's 3D morphology and its contents, and to extract features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF. © 2017 International Society for Advancement of Cytometry.
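
    A minimal sketch of the classification stage using scikit-learn; the synthetic features below are hypothetical stand-ins for the paper's quantitative-phase head descriptors, and the RBF kernel and feature scaling are assumptions rather than reported details.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import roc_auc_score, average_precision_score

        # X: one row of morphological head features per cell (placeholders);
        # y: 1 = good morphology, 0 = bad morphology.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1400, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1400)) > 0

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]
        print("ROC AUC:", roc_auc_score(y_te, scores))
        print("PR AUC:", average_precision_score(y_te, scores))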

  17. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
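
    The FOCUSS-style re-weighted minimum-norm core can be sketched in a few lines of NumPy (L is the lead-field matrix, b a single measurement vector). The uniform initialization, plain diagonal weighting, and ridge term are simplifying assumptions; SSLOFO additionally standardizes each estimate and shrinks the source space between passes.

        import numpy as np

        def focuss(L, b, n_iter=10, eps=1e-12):
            # Each pass solves a weighted minimum-norm problem whose weights
            # come from the previous estimate, progressively focusing energy
            # onto a sparse set of sources.
            n_sources = L.shape[1]
            s = np.ones(n_sources)            # stands in for the smooth sLORETA start
            for _ in range(n_iter):
                W = np.diag(np.abs(s) + eps)  # weights from the previous solution
                G = L @ W
                # Weighted minimum-norm solution: s = W G^T (G G^T)^(-1) b
                s = W @ G.T @ np.linalg.solve(G @ G.T + eps * np.eye(L.shape[0]), b)
            return s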

  18. Quality Assurance Assessment of Diagnostic and Radiation Therapy–Simulation CT Image Registration for Head and Neck Radiation Therapy: Anatomic Region of Interest–based Comparison of Rigid and Deformable Algorithms

    PubMed Central

    Mohamed, Abdallah S. R.; Ruangskul, Manee-Naad; Awan, Musaddiq J.; Baron, Charles A.; Kalpathy-Cramer, Jayashree; Castillo, Richard; Castillo, Edward; Guerrero, Thomas M.; Kocak-Uzel, Esengul; Yang, Jinzhong; Court, Laurence E.; Kantor, Michael E.; Gunn, G. Brandon; Colen, Rivka R.; Frank, Steven J.; Garden, Adam S.; Rosenthal, David I.

    2015-01-01

    Purpose To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy–simulation computed tomography (CT) with diagnostic CT coregistration. Materials and Methods Radiation therapy–simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (ie, Dice similarity coefficient) and surface distance metrics (ie, 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR) by using the post hoc Wilcoxon signed-rank test for stratified metric comparison. Results A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Conclusion Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR. After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation. © RSNA, 2014 Online supplemental material is available for this article. PMID:25380454
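
    For reference, the two families of comparison metrics reduce to a few lines of NumPy/SciPy; the sketch below assumes binary masks for the Dice similarity coefficient and arrays of surface-point coordinates for the 95% maximum Hausdorff distance.

        import numpy as np
        from scipy.spatial.distance import cdist

        def dice(a, b):
            # Dice similarity coefficient between two binary masks.
            a, b = a.astype(bool), b.astype(bool)
            return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff95(pts_a, pts_b):
            # 95th-percentile symmetric Hausdorff distance between two sets
            # of surface-point coordinates (n_points x n_dims arrays).
            d = cdist(pts_a, pts_b)
            return max(np.percentile(d.min(axis=1), 95),
                       np.percentile(d.min(axis=0), 95))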

  19. Multimedia systems in ultrasound image boundary detection and measurements

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Chalana, Vikram; Kim, Yongmin

    1997-05-01

    Ultrasound as a medical imaging modality offers the clinician a real-time view of the anatomy of the internal organs/tissues, their movement, and flow, noninvasively. One of the applications of ultrasound is to monitor fetal growth by measuring biparietal diameter (BPD) and head circumference (HC). We have been working on automatic detection of fetal head boundaries in ultrasound images. These detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, a SUN SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance utilizing multimedia technology. With the advent of high-performance programmable digital signal processors (DSPs), a software solution within an ultrasound machine, instead of the traditional hardwired approach or an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and the flexibility needed to support computationally intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on the PUIP lie within the interobserver variability; hence, the errors in the automated BPD/HC measurements using the algorithm are on the same order as the average interobserver differences. On the PUIP, it takes 360 ms to measure the values of BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, thus enabling 5.4 BPD/HC measurements per second. Reducing the overall execution time from 32 seconds to a fraction of a second and making this multimedia system available within an ultrasound machine will help this image processing algorithm and other compute-intensive imaging applications become a practical tool for sonographers in the future.
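
    Once the active-contour stage has fitted an ellipse to the skull, the two biometric values follow directly. The sketch below uses Ramanujan's perimeter approximation and takes the BPD as the minor axis, a simplifying assumption (clinical caliper placement conventions vary).

        import math

        def head_measurements(a, b):
            # a, b: semi-major and semi-minor axes (mm) of the fitted ellipse.
            bpd = 2 * b                                   # biparietal diameter
            h = ((a - b) / (a + b)) ** 2
            hc = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
            return bpd, hc                                # HC via Ramanujan's formula

        print(head_measurements(45.0, 38.0))              # illustrative axes in mm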

  20. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (an order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions, and integrates; (2) a routine that calculates the system's reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
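
    The recurrence-driven differentiation arithmetic is easiest to see on a toy problem. The sketch below integrates the harmonic oscillator x'' = -x (written as x' = v, v' = -x) at fixed order and step size, whereas SNAP derives recurrences for the full force model and varies both order and step.

        import numpy as np

        def taylor_step(x0, v0, h, order=20):
            # Taylor coefficients follow from the recurrences
            # x[k+1] = v[k]/(k+1) and v[k+1] = -x[k]/(k+1).
            x = np.zeros(order + 1)
            v = np.zeros(order + 1)
            x[0], v[0] = x0, v0
            for k in range(order):
                x[k + 1] = v[k] / (k + 1)
                v[k + 1] = -x[k] / (k + 1)
            powers = h ** np.arange(order + 1)
            return x @ powers, v @ powers    # evaluate both series at h

        x, v = 1.0, 0.0
        for _ in range(10):                  # ten large steps of h = 0.5
            x, v = taylor_step(x, v, 0.5)
        print(x, np.cos(5.0))                # agree to near machine precision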

  1. Node Self-Deployment Algorithm Based on Pigeon Swarm Optimization for Underwater Wireless Sensor Networks

    PubMed Central

    Yu, Shanen; Xu, Yiming; Jiang, Peng; Wu, Feng; Xu, Huan

    2017-01-01

    At present, free-to-move node self-deployment algorithms aim at event coverage and cannot improve network coverage while also accounting for network connectivity, network reliability and network deployment energy consumption. Thus, this study proposes a pigeon-based self-deployment algorithm (PSA) for underwater wireless sensor networks to overcome the limitations of these existing algorithms. In PSA, the sink node first finds its one-hop nodes and maximizes the network coverage in its one-hop region. The one-hop nodes subsequently divide the network into layers and cluster in each layer. Each cluster head node constructs a connected path to the sink node to guarantee network connectivity. Finally, the cluster head node regards the ratio of the movement distance of the node to the change in the coverage redundancy ratio as the target function and employs pigeon swarm optimization to determine the positions of the nodes. Simulation results show that PSA improves both network connectivity and network reliability, decreases network deployment energy consumption, and increases network coverage. PMID:28338615

  2. [Effect of algorithms for calibration set selection on quantitatively determining asiaticoside content in Centella total glucosides by near infrared spectroscopy].

    PubMed

    Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang

    2014-12-01

    The appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. There are several algorithms for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint x-y distance (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models for determining the asiaticoside content in Centella total glucosides were established, with 7 evaluation indexes classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with calibration sets selected by the SPXY algorithm were significantly different from those with calibration sets selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides, with no significant effect on the robustness of the models, which provides a reference for determining the appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
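
    Of the algorithms compared, Kennard-Stone is the most compact to state; a minimal sketch follows (Euclidean distances in X only; SPXY augments the same selection rule with distances in y).

        import numpy as np
        from scipy.spatial.distance import cdist

        def kennard_stone(X, n_select):
            # Start from the two most distant samples, then repeatedly add
            # the sample whose minimum distance to the selected set is largest.
            d = cdist(X, X)
            i, j = np.unravel_index(np.argmax(d), d.shape)
            selected = [int(i), int(j)]
            while len(selected) < n_select:
                remaining = [k for k in range(len(X)) if k not in selected]
                min_d = d[np.ix_(remaining, selected)].min(axis=1)
                selected.append(remaining[int(np.argmax(min_d))])
            return selected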

  3. Interactions among mechanisms of sexual selection on male body size and head shape in a sexually dimorphic fly.

    PubMed

    Bonduriansky, Russell; Rowe, Locke

    2003-09-01

    Darwin envisaged male-male and male-female interactions as mutually supporting mechanisms of sexual selection, in which the best armed males were also the most attractive to females. Although this belief continues to predominate today, it has been challenged by sexual conflict theory, which suggests that divergence in the interests of males and females may result in conflicting sexual selection. This raises the empirical question of how multiple mechanisms of sexual selection interact to shape targeted traits. We investigated sexual selection on male morphology in the sexually dimorphic fly Prochyliza xanthostoma, using indices of male performance in male-male and male-female interactions in laboratory arenas to calculate gradients of direct, linear selection on male body size and an index of head elongation. In male-male combat, the first interaction with a new opponent selected for large body size but reduced head elongation, whereas multiple interactions with the same opponent favored large body size only. In male-female interactions, females preferred males with relatively elongated heads, but male performance of the precopulatory leap favored large body size and, possibly, reduced head elongation. In addition, the amount of sperm transferred (much of which is ingested by females) was an increasing function of both body size and head elongation. Thus, whereas both male-male and male-female interactions favored large male body size, male head shape appeared to be subject to conflicting sexual selection. We argue that conflicting sexual selection may be a common result of divergence in the interests of the sexes.

  4. MeSH indexing based on automatically generated summaries.

    PubMed

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI consists of MEDLINE citations (title and abstract only). Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text as input to MTI for the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.

  5. Multi-atlas-based segmentation of the parotid glands of MR images in patients following head-and-neck cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Cheng, Guanghui; Yang, Xiaofeng; Wu, Ning; Xu, Zhijian; Zhao, Hongfu; Wang, Yuefeng; Liu, Tian

    2013-02-01

    Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume reduction of parotid glands is an important indicator for radiation damage and xerostomia. In the clinic, parotid-volume evaluation is exclusively based on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train support vector machine (SVM) to reach consensus to segment the parotid gland of the target subject. This segmentation algorithm was tested with head-and-neck MRIs of 5 patients following radiotherapy for the nasopharyngeal cancer. The average parotid-gland volume overlapped 85% between the automatic segmentations and the physicians' manual contours. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas based segmentation algorithm to segment parotid glands in head-and-neck MR images.

  6. View-Invariant Gait Recognition Through Genetic Template Segmentation

    NASA Astrophysics Data System (ADS)

    Isaac, Ebenezer R. H. P.; Elias, Susan; Rajagopalan, Srinivasan; Easwarakumar, K. S.

    2017-08-01

    The template-based model-free approach provides by far the most successful solution to the gait recognition problem in the literature. Recent work discusses how isolating the head and leg portions of the template increases the performance of a gait recognition system, making it robust against covariates such as clothing and carrying conditions. However, most methods involve a manual definition of the boundaries. The method we propose, genetic template segmentation (GTS), employs the genetic algorithm to automate the boundary selection process. This method was tested on the GEI, GEnI and AEI templates. GEI exhibits the best result when segmented with our approach. Experimental results show that our approach significantly outperforms the existing implementations of view-invariant gait recognition.

  7. Development of a Telehealth Intervention for Head and Neck Cancer Patients

    PubMed Central

    Studts, Jamie L.; Bumpous, Jeffrey M.; Gregg, Jennifer L.; Wilson, Liz; Keeney, Cynthia; Scharfenberger, Jennifer A.; Pfeifer, Mark P.

    2009-01-01

    Treatment for head and neck cancer precipitates a myriad of distressing symptoms. Patients may be isolated both physically and socially and may lack the self-efficacy to report problems and participate as partners in their care. The goal of this project was to design a telehealth intervention to address such isolation, develop patient self-efficacy, and improve symptom management during the treatment experience. Participatory action research and a review of the literature were used to develop electronically administered symptom management algorithms addressing all major symptoms experienced by patients undergoing treatment for head and neck cancers. Daily questions and related messages were then programmed into an easy-to-use telehealth messaging device, the Health Buddy®. Clinician and patient acceptance, feasibility, and technology issues were measured. Using participatory action research is an effective means for developing electronic algorithms acceptable to both clinicians and patients. The use of a simple tele-messaging device as an adjunct to symptom management is feasible, affordable, and acceptable to patients. This telehealth intervention provides support and education to patients undergoing treatment for head and neck cancers. PMID:19199847

  8. Development of a telehealth intervention for head and neck cancer patients.

    PubMed

    Head, Barbara A; Studts, Jamie L; Bumpous, Jeffrey M; Gregg, Jennifer L; Wilson, Liz; Keeney, Cynthia; Scharfenberger, Jennifer A; Pfeifer, Mark P

    2009-01-01

    Treatment for head and neck cancer precipitates a myriad of distressing symptoms. Patients may be isolated both physically and socially and may lack the self-efficacy to report problems and participate as partners in their care. The goal of this project was to design a telehealth intervention to address such isolation, develop patient self-efficacy, and improve symptom management during the treatment experience. Participatory action research and a review of the literature were used to develop electronically administered symptom management algorithms addressing all major symptoms experienced by patients undergoing treatment for head and neck cancers. Daily questions and related messages were then programmed into an easy-to-use telehealth messaging device, the Health Buddy®. Clinician and patient acceptance, feasibility, and technology issues were measured. Using participatory action research is an effective means for developing electronic algorithms acceptable to both clinicians and patients. The use of a simple tele-messaging device as an adjunct to symptom management is feasible, affordable, and acceptable to patients. This telehealth intervention provides support and education to patients undergoing treatment for head and neck cancers.

  9. The Selection and Appointment of School Heads. A Manual of Suggestions to Boards of Trustees and Candidates. Third Edition.

    ERIC Educational Resources Information Center

    Driscoll, Eileen R.

    To assist in the selection of new private school heads, this manual provides advice to both boards of trustees and candidates. Advice directed to the board covers timing of the search and selection, announcement of the current head's resignation, search committee formation, the committee chairperson, staff assistance, search budget, job…

  10. A Simple Two Aircraft Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2006-01-01

    Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operation control centers, and traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
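
    The detection step reduces to the standard constant-velocity closest-point-of-approach computation, sketched below in any consistent units (the example uses nautical miles and knots); the proportional-navigation resolution logic is not reproduced here.

        import numpy as np

        def closest_point_of_approach(p1, v1, p2, v2):
            # Predict time-to-go to the closest point of approach and the
            # minimum separation, assuming both aircraft hold speed and heading.
            dp, dv = p2 - p1, v2 - v1
            closing = np.dot(dv, dv)
            t_cpa = -np.dot(dp, dv) / closing if closing > 0 else 0.0
            t_cpa = max(t_cpa, 0.0)          # CPA in the past -> diverging now
            min_sep = np.linalg.norm(dp + dv * t_cpa)
            return t_cpa, min_sep

        # Two aircraft 10 nmi apart, closing head-on at 450 kt each:
        t, sep = closest_point_of_approach(np.array([0.0, 0.0]), np.array([450.0, 0.0]),
                                           np.array([10.0, 0.0]), np.array([-450.0, 0.0]))
        print(t * 60.0, "min to CPA; minimum separation", sep, "nmi")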

  11. Particle swarm optimization algorithm based low cost magnetometer calibration

    NASA Astrophysics Data System (ADS)

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The algorithm can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
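
    A minimal sketch of the idea, assuming a plain global-best PSO and a cost that penalizes deviation of the corrected field magnitude from its expected constant value; the swarm size, inertia, and acceleration coefficients are generic textbook choices, not the paper's tuning.

        import numpy as np

        def calibrate_pso(raw, field=1.0, n_particles=40, n_iter=200, seed=0):
            # Search for per-axis bias (3) and scale factors (3) such that
            # corrected readings (raw - bias) / scale have constant magnitude.
            rng = np.random.default_rng(seed)

            def cost(p):
                mag = np.linalg.norm((raw - p[:3]) / p[3:], axis=1)
                return np.mean((mag - field) ** 2)

            pos = rng.uniform([-1] * 3 + [0.5] * 3, [1] * 3 + [1.5] * 3,
                              (n_particles, 6))
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_c = np.array([cost(p) for p in pos])
            gbest = pbest[np.argmin(pbest_c)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, 1))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = pos + vel
                c = np.array([cost(p) for p in pos])
                better = c < pbest_c
                pbest[better], pbest_c[better] = pos[better], c[better]
                gbest = pbest[np.argmin(pbest_c)].copy()
            return gbest[:3], gbest[3:]      # estimated bias and scale factors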

  12. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks

    PubMed Central

    Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-01-01

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures. It employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain the optimal routes to the base station. Various simulation experiments show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms. PMID:28753962

  13. 45 CFR 1305.6 - Selection process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Selection process. 1305.6 Section 1305.6 Public... PROGRAM ELIGIBILITY, RECRUITMENT, SELECTION, ENROLLMENT AND ATTENDANCE IN HEAD START § 1305.6 Selection process. (a) Each Head Start program must have a formal process for establishing selection criteria and...

  14. An evaluation to design high performance pinhole array detector module for four head SPECT: a simulation study

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2014-09-01

    The purpose of this study is to derive optimized parameters for a detector module employing an off-the-shelf X-ray camera and a pinhole array collimator applicable to a range of different SPECT systems. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were performed to estimate the performance of the pinhole array collimators, which was compared to that of a low energy high resolution (LEHR) parallel-hole collimator in a four-head SPECT system. A detector module was simulated with a 48 mm by 48 mm active area and 1 mm, 1.6 mm, and 2 mm pinhole aperture sizes at 0.48 mm pitch on a tungsten plate. Perpendicular lead septa were employed to compare overlapping and non-overlapping projections against a proper acceptance angle without lead septa. A uniform cylindrical water phantom was used to evaluate the performance of the proposed four-head SPECT system with the pinhole array detector module. For each head, 100 pinhole configurations were evaluated based on sensitivity and detection efficiency for 140 keV γ-rays, and compared to the LEHR parallel-hole collimator. SPECT images were reconstructed with a filtered back projection (FBP) algorithm; neither scatter nor attenuation corrections were performed. A better reconstruction algorithm for this specific system is under development. Nevertheless, the activity distribution was well visualized using the backprojection algorithm. In this study, we performed several quantitative and comparative analyses of a pinhole array imaging system that provides high detection efficiency and better system sensitivity over a large FOV compared to the conventional four-head SPECT system. The proposed detector module is expected to provide improved performance in various SPECT imaging applications.

  15. Final findings on the development and evaluation of an en-route fuel optimal conflict resolution algorithm to support strategic decision-making.

    DOT National Transportation Integrated Search

    2012-01-01

    The novel strategic conflict-resolution algorithm for fuel minimization that is documented in this report : provides air traffic controllers and/or pilots with fuel-optimal heading, speed, and altitude : recommendations in the en route flight phase, ...

  16. Form Subdivisions: Their Identification and Use in LCSH.

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Chan, Lois Mai; Childress, Eric; Dean, Rebecca; El-Hoshy, Lynn M.; Vizine-Goetz, Diane

    2001-01-01

    Discusses form subdivisions as part of Library of Congress Subject Headings (LCSH) and the MARC format, which did not have a separate subfield code to identify form subdivisions. Describes the development of an algorithm to identify form subdivisions and reports results of an evaluation of the algorithm. (LRW)

  17. Effect of inhomogeneity in a patient's body on the accuracy of the pencil beam algorithm in comparison to Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yamashita, T.; Akagi, T.; Aso, T.; Kimura, A.; Sasaki, T.

    2012-11-01

    The pencil beam algorithm (PBA) is reasonably accurate and fast. It is, therefore, the primary method used in routine clinical treatment planning for proton radiotherapy; still, it needs to be validated for use in highly inhomogeneous regions. In our investigation of the effect of patient inhomogeneity, the PBA was compared with Monte Carlo (MC). A software framework was developed for the MC simulation of radiotherapy based on Geant4. The anatomical sites selected for the comparison were the head/neck, liver, lung and pelvis regions. The dose distributions calculated by the two methods in selected examples were compared, as well as the dose volume histograms (DVH) derived from the dose distributions. The comparison of the off-center ratio (OCR) at the iso-center showed good agreement between the PBA and MC, while discrepancies were seen around the distal fall-off regions. While MC showed a fine structure on the OCR in the distal fall-off region, the PBA showed a smoother distribution. The fine structures in the MC calculation appeared downstream of very low-density regions. Comparison of DVHs showed that most of the target volumes were similarly covered, while some OARs located around the distal region received a higher dose when calculated by MC than by the PBA.

  18. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.

  19. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints for pedestrian gait and of any other valuable heading-error reduction information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called "virtual sensors"), though considerably reducing drift in PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed by incorporating only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
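
    The gating idea behind magnetic anomaly detection can be sketched very simply: accept a magnetometer sample for the EKF heading update only when the measured field magnitude stays near the expected local Earth-field strength. The threshold and the magnitude-only test are simplifying assumptions; a fuller detector would typically also check the magnetic dip angle.

        import numpy as np

        def magnetometer_is_healthy(mag, earth_field=50.0, tol=5.0):
            # mag: 3-axis magnetometer sample in microtesla; the expected
            # local field strength is an assumed site-specific constant.
            return abs(np.linalg.norm(mag) - earth_field) < tol

        print(magnetometer_is_healthy(np.array([30.0, 20.0, 35.0])))  # ~50.2 uT -> True
        print(magnetometer_is_healthy(np.array([80.0, 10.0, 40.0])))  # 90 uT -> False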

  20. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.

    PubMed

    Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z score imaging system (eZIS) and a 3D-SSP system with respect to errors of anatomical standardization, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and 3D-SSP algorithms. While eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more of head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value, while one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When evaluating regions with decreased perfusion due to causes such as hemodynamic cerebral ischemia, 3D-SSP is preferable. In statistical image analysis, the image must always be reconfirmed after anatomical standardization.

  1. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals.

    PubMed

    Brown, Noam; Sandholm, Tuomas

    2018-01-26

    No-limit Texas hold'em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle. We present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold'em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. Our game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy. Copyright © 2018, The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  2. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS.

    PubMed

    Su, Qiang; Wang, Yina; Jiang, Xiaobing; Chen, Fuxue; Lu, Wen-Cong

    2017-01-01

    To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test and then uses CFS to select genes from those selected by the K-S test. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms on 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
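
    The first stage of the pipeline is straightforward to sketch with SciPy's two-sample K-S test; the significance level below is an assumed placeholder, and the subsequent CFS pruning of the surviving genes is not reproduced.

        import numpy as np
        from scipy.stats import ks_2samp

        def ks_filter(X, y, alpha=0.01):
            # Keep genes whose expression distributions differ significantly
            # between the two classes under a two-sample K-S test.
            keep = []
            for g in range(X.shape[1]):
                _, p = ks_2samp(X[y == 0, g], X[y == 1, g])
                if p < alpha:
                    keep.append(g)
            return keep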

  3. Use of a quality trait index to increase the reliability of phenotypic evaluations in broccoli

    USDA-ARS?s Scientific Manuscript database

    Selection of superior broccoli hybrids involves multiple considerations, including optimization of head quality traits. Quality assessment of broccoli heads is often confounded by relatively subjective human preferences for optimal appearance of heads. To assist the selection process, we assessed fi...

  4. Design and Experimental Evaluation of a Non-Invasive Microwave Head Imaging System for Intracranial Haemorrhage Detection

    PubMed Central

    Mobashsher, A. T.; Bialkowski, K. S.; Abbosh, A. M.; Crozier, S.

    2016-01-01

    An intracranial haemorrhage is a life-threatening medical emergency, yet only a fraction of the patients receive treatment in time, primarily due to the transport delay in accessing diagnostic equipment in hospitals such as Magnetic Resonance Imaging or Computed Tomography. A mono-static microwave head imaging system that can be carried in an ambulance for the detection and localization of intracranial haemorrhage is presented. The system employs a single ultra-wideband antenna as a sensing element to transmit signals at low microwave frequencies towards the head and capture backscattered signals. The compact and low-profile antenna provides stable directional radiation patterns over the operating bandwidth in both the near and far fields. Numerical analysis of the head imaging system with a realistic head model in various situations is performed to characterize the scattering mechanism of haemorrhage. A modified delay-and-summation back-projection algorithm, which includes effects of surface waves and a distance-dependent effective permittivity model, is proposed for signal and image post-processing. The efficacy of the automated head imaging system is evaluated using a 3D-printed human head phantom with frequency-dispersive dielectric properties, including emulated haemorrhages of different sizes located at different depths. Scattered signals are acquired with a compact transceiver in a mono-static circular scanning profile. The reconstructed images demonstrate that the system is capable of detecting haemorrhages as small as 1 cm3. Quantitative analyses reveal that image quality gradually degrades with increasing haemorrhage depth, owing to reduced signal penetration inside the head; rigorous statistical analysis suggests, however, that substantial improvement in image quality can be obtained by increasing the number of data samples collected around the head. The proposed head imaging prototype, along with the processing algorithm, demonstrates its feasibility for potential use in ambulances as an effective and low-cost diagnostic tool to assure timely triaging of intracranial haemorrhage patients. PMID:27073994
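
    A minimal sketch of the delay-and-summation back-projection core under strong simplifications: mono-static geometry, straight-ray propagation at a single effective speed derived from an assumed average head permittivity of about 40, and an assumed sampling rate. The paper's surface-wave and distance-dependent permittivity corrections are omitted.

        import numpy as np

        def delay_and_sum(signals, antenna_pos, grid,
                          c_eff=3e8 / np.sqrt(40.0), fs=20e9):
            # signals: one time-domain backscatter trace per scan position;
            # antenna_pos: (n_scans, 3) positions; grid: (n_voxels, 3) voxels.
            image = np.zeros(len(grid))
            for sig, ant in zip(signals, antenna_pos):
                dist = np.linalg.norm(grid - ant, axis=1)
                idx = np.round(2.0 * dist / c_eff * fs).astype(int)  # round trip
                valid = idx < sig.shape[0]
                image[valid] += sig[idx[valid]]
            return image ** 2    # energy map; peaks mark strong scatterers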

  5. Postcopulatory Sexual Selection Results in Spermatozoa with More Uniform Head and Flagellum Sizes in Rodents

    PubMed Central

    Varea-Sánchez, María; Gómez Montoto, Laura; Tourmente, Maximiliano; Roldan, Eduardo R. S.

    2014-01-01

    Interspecific comparative studies have shown that, in most taxa, postcopulatory sexual selection (PCSS) in the form of sperm competition drives the evolution of longer and faster-swimming sperm. Work on passerine birds has revealed that PCSS also reduces variation in sperm size between males at the intraspecific level. However, the influence of PCSS upon intra-male sperm size diversity is poorly understood, since the few studies carried out to date in birds have yielded contradictory results. In mammals, PCSS increases sperm size, but there is little information on the effects of this selective force on variations in sperm size and shape. Here, we test whether sperm competition is associated with a reduction in the degree of variation of sperm dimensions in rodents. We found that as sperm competition levels increase, males produce sperm that are more similar in both the size of the head and the size of the flagellum. On the other hand, whereas with increasing levels of sperm competition there is less variation in head length in relation to head width (ratio CV head length/CV head width), there is no relation between variation in head and flagellum sizes (ratio CV head length/CV flagellum length). Thus, it appears that, in addition to a selection for longer sperm, sperm competition may select for more uniform sperm heads and flagella, which together may enhance swimming velocity. Overall, sperm competition seems to drive sperm components towards an optimum design that may affect sperm performance, which, in turn, will be crucial for successful fertilization. PMID:25243923

  6. Evaluation of a simple method for the automatic assignment of MeSH descriptors to health resources in a French online catalogue.

    PubMed

    Névéol, Aurélie; Pereira, Suzanne; Kerdelhué, Gaetan; Dahamna, Badisse; Joubert, Michel; Darmoni, Stéfan J

    2007-01-01

    The growing number of resources to be indexed in the catalogue of online health resources in French (CISMeF) calls for curating strategies involving automatic indexing tools while maintaining the catalogue's high indexing quality standards. To develop a simple automatic tool that retrieves MeSH descriptors from document titles. In parallel to research on advanced indexing methods, a bag-of-words tool was developed for timely inclusion in CISMeF's maintenance system. An evaluation was carried out on a corpus of 99 documents. The indexing sets retrieved by the automatic tool were compared to manual indexing based on the title and on the full text of resources. 58% of the major main headings were retrieved by the bag-of-words algorithm, and the precision of main heading retrieval was 69%. Bag-of-words indexing has effectively been used on selected resources to be included in CISMeF since August 2006. Meanwhile, ongoing work aims at improving the current version of the tool.

  7. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining the specified variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
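
    A sketch of the cluster-head compression step, assuming the error bound is expressed as the fraction of variance that may be discarded; the spatial clustering, similarity metric, and adaptive cluster-head selection are not reproduced.

        import numpy as np

        def pca_compress(X, error_bound=0.05):
            # X: samples x sensors readings gathered at the cluster head.
            # Keep the fewest principal components whose retained variance
            # is at least 1 - error_bound.
            mu = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
            var_ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
            k = int(np.searchsorted(var_ratio, 1 - error_bound)) + 1
            scores = (X - mu) @ Vt[:k].T
            return scores, Vt[:k], mu        # transmit these to the sink

        def pca_decompress(scores, components, mu):
            return scores @ components + mu  # approximate reconstruction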

  8. Atlas ranking and selection for automatic segmentation of the esophagus from CT scans

    NASA Astrophysics Data System (ADS)

    Yang, Jinzhong; Haas, Benjamin; Fang, Raymond; Beadle, Beth M.; Garden, Adam S.; Liao, Zhongxing; Zhang, Lifei; Balter, Peter; Court, Laurence

    2017-12-01

    In radiation treatment planning, the esophagus is an important organ-at-risk that should be spared in patients with head and neck cancer or thoracic cancer who undergo intensity-modulated radiation therapy. However, automatic segmentation of the esophagus from CT scans is extremely challenging because of the structure's inconsistent intensity, low contrast against the surrounding tissues, complex and variable shape and location, and random air bubbles. The goal of this study is to develop an online atlas selection approach that chooses a subset of optimal atlases for multi-atlas segmentation to delineate the esophagus automatically. We performed atlas selection in two phases. In the first phase, we used the correlation coefficient of the image content in a cubic region between each atlas and the new image to evaluate their similarity and to rank the atlases in an atlas pool. A subset of atlases based on this ranking was selected, and deformable image registration was performed to generate deformed contours and deformed images in the new image space. In the second phase of atlas selection, we used Kullback-Leibler divergence to measure the similarity of local-intensity histograms between the new image and each of the deformed images, and the measurements were used to rank the previously selected atlases. Deformed contours were overlapped sequentially, from the most to the least similar, and the overlap ratio was examined. We further identified a subset of optimal atlases by analyzing the variation of the overlap ratio versus the number of atlases. The deformed contours from these optimal atlases were fused together using a modified simultaneous truth and performance level estimation algorithm to produce the final segmentation. The approach was validated with promising results using both internal data sets (21 head and neck cancer patients and 15 thoracic cancer patients) and external data sets (30 thoracic patients).
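
    The second-phase ranking can be sketched as below, assuming normalized local-intensity histograms; the bin count, HU range, and smoothing epsilon are placeholder choices rather than the study's settings.

        import numpy as np

        def kl_divergence(p, q, eps=1e-10):
            # Kullback-Leibler divergence between two intensity histograms;
            # smaller values mean the deformed atlas better matches the
            # new image in the region of interest.
            p = p / p.sum()
            q = q / q.sum()
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        def rank_atlases(target_img, deformed_imgs, bins=64, hu_range=(-200, 300)):
            ht, _ = np.histogram(target_img, bins=bins, range=hu_range)
            scores = []
            for img in deformed_imgs:
                h, _ = np.histogram(img, bins=bins, range=hu_range)
                scores.append(kl_divergence(ht.astype(float), h.astype(float)))
            return np.argsort(scores)        # most similar atlases first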

  9. Temporal and Spatial prediction of groundwater levels using Artificial Neural Networks, Fuzzy logic and Kriging interpolation.

    NASA Astrophysics Data System (ADS)

    Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.

    2014-05-01

    The purpose of this study is to examine the use of Artificial Neural Networks (ANN) combined with the kriging interpolation method in order to simulate the hydraulic head both spatially and temporally. Initially, ANNs are used for the temporal simulation of the hydraulic head change. The results of the most appropriate ANNs, determined through a fuzzy logic system, are used as input for the kriging algorithm, where the spatial simulation is conducted. The proposed algorithm is tested in an area located across the Isar River in Bayern, Germany, covering approximately 7800 km². The available data extend over the period from 1/11/2008 to 31/10/2012 (1460 days) and include the hydraulic head at 64 wells, temperature and rainfall at 7 weather stations and surface water elevation at 5 monitoring stations. One feedforward ANN was trained for each of the 64 wells where hydraulic head data are available, using a backpropagation algorithm. The most appropriate input parameters for each well's ANN are determined considering their proximity to the measuring station, as well as their statistical characteristics. For the rainfall, the data for two consecutive time lags from the best-correlated weather station, as well as a third and fourth input from the second-best-correlated weather station, are used. The surface water monitoring stations with the three best correlations for each well are also used in every case. Finally, the temperature from the best-correlated weather station is used. Two different architectures are considered, and the one with the best results is used henceforward. The output of the ANNs corresponds to the hydraulic head change per time step. These predictions are used in the kriging interpolation algorithm. However, not all 64 simulated values should be used. The appropriate neighborhood for each prediction point is constructed based not only on the distance between known and prediction points, but also on the training and testing error of the ANN. Therefore, the neighborhood of each prediction point is the best available. Then, the appropriate variogram is determined by fitting the experimental variogram to a theoretical variogram model. Three models are examined: the linear, the exponential and the power-law. Finally, the hydraulic head change is predicted for every grid cell and for every time step used. All the algorithms were developed in Visual Basic .NET, while the visualization of the results was performed in MATLAB using the .NET COM Interoperability. The results are evaluated using leave-one-out cross-validation and various performance indicators. The best results were achieved by ANNs with two hidden layers, consisting of 20 and 15 nodes respectively, and by using a power-law variogram with the fuzzy logic system.

  10. Learning-based scan plane identification from fetal head ultrasound images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoming; Annangi, Pavan; Gupta, Mithun; Yu, Bing; Padfield, Dirk; Banerjee, Jyotirmoy; Krishnan, Kajoli

    2012-03-01

    Acquisition of a clinically acceptable scan plane is a prerequisite for ultrasonic measurement of anatomical features from B-mode images. In obstetric ultrasound, measurement of gestational age predictors, such as biparietal diameter and head circumference, is performed at the level of the thalami and cavum septi pellucidi. In an accurate scan plane, the head can be modeled as an ellipse, the thalami look like a butterfly, the cavum appears like an empty box and the falx is a straight line along the major axis of a symmetric ellipse inclined either parallel to or at small angles to the probe surface. Arriving at the correct probe placement on the mother's belly to obtain an accurate scan plane is a task of considerable challenge, especially for a new user of ultrasound. In this work, we present a novel automated learning-based algorithm to identify an acceptable fetal head scan plane. We divide the problem into cranium detection and template matching to capture the composite "butterfly" structure present inside the head, which mimics the visual cues used by an expert. The algorithm uses state-of-the-art Active Appearance Model techniques from the image processing and computer vision literature and ties them to the presence or absence of the inclusions within the head to automatically compute a score representing the goodness of a scan plane. This automated technique can potentially be used to train and aid new users of ultrasound.

  11. SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venencia, C; Garrigo, E; Cardenas, J

    2014-06-01

    Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were done using 9 fields with Iplan v4.5 (BrainLAB) and dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40Gy in 5 fractions, every other day. Dose calculation is done by pencil beam (2mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95%=40Gy and D98%>39.2Gy; Rectum V20Gy<50%, V32Gy<20%, V36Gy<10% and V40Gy<5%; Bladder V20Gy<40% and V40Gy<10%; femoral heads V16Gy<5%; penile bulb V25Gy<3cc; urethra and overlap region between PTV and PRV Rectum Dmax<42Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were done. Results: The average differences between PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, which did not meet the D98% criterion (>39.2Gy) and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between PTV and PRV Rectum showed a dose increment in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments can affect plan normalization. The dose increment in the critical urethra region and in the PTV-PRV overlap region could have clinical consequences, which need to be studied. The use of a Monte Carlo dose calculation algorithm is limited because the inverse planning dose optimization uses only pencil beam.

  12. Self-Adaptive Correction of Heading Direction in Stair Climbing for Tracked Mobile Robots Using Visual Servoing Approach

    NASA Astrophysics Data System (ADS)

    Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu

    2017-02-01

    In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot’s wrist camera is used as the only sensor, and it is rotated to face the stairs. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve the generalization ability, a multi-scale Gabor filter is used to preprocess the input image. The final deviation result is acquired by applying the majority vote strategy to all the classifiers’ results. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while it is climbing the stairs.
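
    A hedged sketch of the two ingredients named above, multi-scale Gabor preprocessing and a majority vote over classifier outputs, follows; the kernel parameters, the -1/0/+1 vote encoding, and the classifiers themselves are illustrative, not the authors'.

    ```python
    # Multi-scale Gabor feature extraction plus a majority vote over an ensemble.
    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(freq, theta, sigma=3.0, size=15):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

    def gabor_features(img, freqs=(0.1, 0.2, 0.4), thetas=(0, np.pi / 4, np.pi / 2)):
        """Stack filter responses at several scales/orientations."""
        return np.stack([convolve(img, gabor_kernel(f, t))
                         for f in freqs for t in thetas])

    def majority_vote(classifier_outputs):
        """Outputs: -1 (deviating left), 0 (on course), +1 (deviating right)."""
        votes = np.bincount(np.asarray(classifier_outputs) + 1, minlength=3)
        return int(np.argmax(votes)) - 1

    img = np.zeros((32, 32)); img[:, 16:] = 1.0   # toy image with a vertical edge
    print(gabor_features(img).shape)              # (9, 32, 32)
    print(majority_vote([1, 1, -1, 0, 1]))        # -> 1
    ```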

  13. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven IMRT, H and N, and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases, with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reductions in the numbers of MU and segments were on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.

  14. The force control and path planning of electromagnetic induction-based massage robot.

    PubMed

    Wang, Wendong; Zhang, Lei; Li, Jinzhe; Yuan, Xiaoqing; Shi, Yikai; Jiang, Qinqin; He, Lijing

    2017-07-20

    Massage robots are considered an effective physiological treatment to relieve fatigue, improve blood circulation, relax muscle tone, etc. Simple massage equipment has quickly spread into the market due to its low cost, but it is not widely accepted due to its restricted massage functions. Complicated structure and high cost have made it difficult to develop multi-function massage equipment. This paper presents a novel massage robot which can achieve tapping, rolling, kneading and other massage operations, and proposes an improved reciprocating path planning algorithm to improve the massage effect. The number of coil turns, the coil current and the distance between the massage head and the yoke were chosen to investigate their influence on the massage force by the finite element method. The control system model of the wheeled massage robot was established, including the control subsystem of the motor, the path algorithm control subsystem, the parameter module of the massage robot and a virtual reality interface module. The improved reciprocating path planning algorithm was proposed to improve the regional coverage rate and the massage effect. The influence of the coil current, the number of coil turns and the distance between the massage head and the yoke was simulated in Maxwell. The results indicated that the coil current has a more important influence than the other two factors. The path planning simulation of the massage robot was completed in Matlab, and the results show that the improved reciprocating path planning algorithm achieved a higher coverage rate than the traditional algorithm. From the simulation results, it can be concluded that the number of coil turns and the distance between the moving iron core and the yoke can be determined prior to the coil current, and that the force can be controlled by optimizing the structural parameters of the massage head and adjusting the coil current. The proposed algorithm effectively improves the path coverage rate during massage operations, and therefore the massage effect.

  15. SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

    PubMed

    Yuan, Y; Duan, J; Popple, R; Brezovich, I

    2012-06-01

    To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by the percentage error for ion chamber dose and the γ>1 failure rate in gamma analysis (3%/3mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm at a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate was significantly reduced (to within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in the film dosimetry of 11 patients and in the ion chamber measurements of 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.

  16. Autoshaped Head Poking in the Mouse: A Quantitative Analysis of the Learning Curve

    ERIC Educational Resources Information Center

    Papachristos, Efstathios B.; Gallistel, C. R.

    2006-01-01

    In autoshaping experiments, we quantified the acquisition of anticipatory head poking in individual mice, using an algorithm that finds changes in the slope of a cumulative record. In most mice, upward changes in the amount of anticipatory poking per trial were abrupt, and tended to occur at session boundaries, suggesting that the session is as…
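
    The change-point idea described above, finding where the slope of a cumulative record changes, can be illustrated with a simple two-line fit; the error criterion and the exhaustive search below are a generic stand-in, not the authors' algorithm.

    ```python
    # A minimal slope change-point finder for a cumulative record.
    import numpy as np

    def best_change_point(cum_record):
        """Return the trial index where fitting two lines (before/after) to the
        cumulative record most reduces squared error versus a single line."""
        t = np.arange(len(cum_record), dtype=float)

        def sse(x, y):
            if len(x) < 2:
                return 0.0
            slope, intercept = np.polyfit(x, y, 1)
            return float(np.sum((y - (slope * x + intercept))**2))

        total = sse(t, cum_record)
        gains = [total - (sse(t[:k], cum_record[:k]) + sse(t[k:], cum_record[k:]))
                 for k in range(2, len(t) - 2)]
        return int(np.argmax(gains)) + 2

    # Example: flat responding that abruptly becomes steep at trial 60.
    y = np.concatenate([np.full(60, 0.1), np.full(40, 1.0)]).cumsum()
    print(best_change_point(y))  # ~60
    ```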

  17. Head Teachers and Teachers as Pioneers in Facilitating Dyslexic Children in Primary Mainstream Schools

    ERIC Educational Resources Information Center

    Jaka, Fahima Salman

    2015-01-01

    This study explores the perceptions of school heads and teachers in facilitating young dyslexic children in primary mainstream schools of Pakistan. Through purposive sampling, the researcher selected eight participants: Four primary school heads and four primary teachers from elite schools of Karachi. The research instrument selected for this…

  18. Baseline Face Detection, Head Pose Estimation, and Coarse Direction Detection for Facial Data in the SHRP2 Naturalistic Driving Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paone, Jeffrey R; Bolme, David S; Ferrell, Regina Kay

    Keeping a driver focused on the road is one of the most critical steps in ensuring the safe operation of a vehicle. The Strategic Highway Research Program 2 (SHRP2) has over 3,100 recorded videos of volunteer drivers during a period of 2 years. This extensive naturalistic driving study (NDS) contains over one million hours of video and associated data that could aid safety researchers in understanding where the driver's attention is focused. Manual analysis of this data is infeasible; therefore efforts are underway to develop automated feature extraction algorithms to process and characterize the data. The real-world nature, volume, and acquisition conditions are unmatched in the transportation community, but there are also challenges because the data has relatively low resolution, high compression rates, and differing illumination conditions. A smaller dataset, the head pose validation study, is available, which used the same recording equipment as SHRP2 but is more easily accessible with fewer privacy constraints. In this work we report initial head pose accuracy using commercial and open source face pose estimation algorithms on the head pose validation data set.

  19. HITCal: a software tool for analysis of video head impulse test responses.

    PubMed

    Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás

    2015-09-01

    The developed software (HITCal) may be a useful tool in the analysis and measurement of the saccadic video head impulse test (vHIT) responses, and with the experience obtained during its use the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. To develop a (software) method to analyze and explore the vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed; extended head impulse exploration and measurement tools were created; and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with data collected retrospectively in three reference centers. This HIT database was evaluated by humans and was also computed with HITCal. The authors successfully built HITCal and it has been released as open source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccade algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).

  20. Adaptive Control of Small Outboard-Powered Boats for Survey Applications

    NASA Technical Reports Server (NTRS)

    VanZwieten, T.S.; VanZwieten, J.H.; Fisher, A.D.

    2009-01-01

    Four autopilot controllers have been developed in this work that can both hold a desired heading and follow a straight line. These PID, adaptive PID, neuro-adaptive, and adaptive augmenting control algorithms have all been implemented into a numerical simulation of a 33-foot center console vessel with wind, wave, and current disturbances acting in the perpendicular (across-track) direction of the boat's desired trajectory. Each controller is tested for its ability to follow a desired heading in the presence of these disturbances and then to follow a straight line at two different throttle settings under the same disturbances. The controllers were tuned for an input thrust of 2000 N, and all four showed good performance, with none significantly outperforming the others when holding a constant heading and following a straight line at this engine thrust. Each controller was then tested at a reduced engine thrust of 1200 N per engine, where each of the three adaptive controllers reduced heading error and across-track error by approximately 50% after a 300 second tuning period when compared to the fixed-gain PID, showing that significant robustness to changes in throttle setting was gained by using an adaptive algorithm.
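
    For reference, a minimal fixed-gain PID heading controller of the kind used here as the baseline; the gains, time step, and angle-wrapping convention are illustrative assumptions.

    ```python
    # Minimal fixed-gain PID heading hold (illustrative gains).
    import numpy as np

    def wrap(angle):
        """Wrap an angle error to [-pi, pi) so 359° -> 1° is a small correction."""
        return (angle + np.pi) % (2 * np.pi) - np.pi

    class HeadingPID:
        def __init__(self, kp=2.0, ki=0.1, kd=1.0, dt=0.1):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, desired, measured):
            err = wrap(desired - measured)
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    pid = HeadingPID()
    u = pid.step(np.deg2rad(90), np.deg2rad(80))
    print(round(u, 3))   # positive command turns the boat toward the desired heading
    ```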

  1. The dosimetric effects of tissue heterogeneities in intensity-modulated radiation therapy (IMRT) of the head and neck

    NASA Astrophysics Data System (ADS)

    Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.

    2006-03-01

    The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.

  2. Laser scanning measurements on trees for logging harvesting operations.

    PubMed

    Zheng, Yili; Liu, Jinhao; Wang, Dian; Yang, Ruixi

    2012-01-01

    Logging harvesters represent a set of high-performance modern forestry machinery, which can finish a series of continuous operations such as felling, delimbing, peeling, bucking and so forth with human intervention. It was found by experiment that during the alignment of the harvesting head to capture the trunk, the operator needs a lot of observation, judgment and repeated operations, which leads to time and fuel losses. In order to improve the operation efficiency and reduce the operating costs, point clouds of standing trees are collected with a low-cost 2D laser scanner. A cluster extracting algorithm and a filtering algorithm are used to classify each trunk from the point cloud. On the assumption that every cross section of the target trunk is approximately a standard circle, and combining the information of an Attitude and Heading Reference System, the radii and center locations of the trunks in the scanning range are calculated by the Fletcher-Reeves conjugate gradient algorithm. The method is validated through experiments in an aspen forest, and the optimized calculation time is compared with the previous work of other researchers. Moreover, the implementation of the calculation results for automatic trunk capture by the harvesting head during logging operations is discussed in particular.
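
    The circle-fitting step can be sketched as a least-squares problem over center and radius; scipy's nonlinear conjugate gradient stands in for the Fletcher-Reeves variant the paper uses, and the synthetic arc below is an assumption.

    ```python
    # Geometric circle fit to a trunk cross-section by minimizing radial residuals.
    import numpy as np
    from scipy.optimize import minimize

    def fit_circle(points):
        """points: (N, 2) laser returns from one trunk cluster."""
        def cost(p):
            cx, cy, r = p
            d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
            return np.sum((d - r)**2)

        cx0, cy0 = points.mean(axis=0)
        r0 = np.hypot(points[:, 0] - cx0, points[:, 1] - cy0).mean()
        res = minimize(cost, x0=[cx0, cy0, r0], method='CG')  # nonlinear CG
        return res.x  # (center_x, center_y, radius)

    # Noisy arc, as a scanner would see one side of a trunk.
    theta = np.linspace(-0.6 * np.pi, 0.6 * np.pi, 80)
    pts = np.c_[3 + 0.25 * np.cos(theta), 5 + 0.25 * np.sin(theta)]
    pts += np.random.default_rng(1).normal(scale=0.005, size=pts.shape)
    print(fit_circle(pts))  # ~ [3, 5, 0.25]
    ```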

  3. Rough sets and Laplacian score based cost-sensitive feature selection

    PubMed Central

    Yu, Shenglong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884
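
    For the Laplacian score term, a compact numpy version of the standard computation (He et al.) follows; the dense RBF graph construction and bandwidth are assumptions, and the rough-set importance and cost terms of the paper's full method are not included.

    ```python
    # Laplacian score: lower = the feature better preserves local structure.
    import numpy as np

    def laplacian_score(X, k_sigma=1.0):
        """X: (n_samples, n_features). Returns one score per feature."""
        n = X.shape[0]
        # RBF similarity graph over all sample pairs (dense, for clarity).
        sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
        S = np.exp(-sq / k_sigma)
        D = np.diag(S.sum(axis=1))
        L = D - S                                    # graph Laplacian
        ones = np.ones(n)
        scores = []
        for j in range(X.shape[1]):
            f = X[:, j]
            # Remove the component along the all-ones vector (degree-weighted mean).
            f_t = f - (f @ D @ ones) / (ones @ D @ ones) * ones
            scores.append((f_t @ L @ f_t) / (f_t @ D @ f_t))
        return np.array(scores)

    X = np.random.default_rng(3).normal(size=(40, 4))
    X[:, 0] = np.repeat([0.0, 1.0], 20) + 0.05 * X[:, 0]  # feature 0 tracks structure
    print(laplacian_score(X).round(3))  # feature 0 typically scores lowest here
    ```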

  4. Rough sets and Laplacian score based cost-sensitive feature selection.

    PubMed

    Yu, Shenglong; Zhao, Hong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects a predetermined number of "good" features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.

  5. [Head and neck paragangliomas: An interdisciplinary challenge].

    PubMed

    Künzel, J; Bahr, K; Hainz, M; Rossmann, H; Matthias, C

    2015-12-01

    Current treatment strategies for head and neck paragangliomas are moving away from radical resection and toward surgical tumor reduction, in order to preserve function and reduce morbidity. Radiotherapy modalities are alternative primary treatment options. A PubMed search of the relevant literature on genetics and treatment of head and neck paragangliomas was conducted. The rapid progress made in genetic research was mainly triggered by two factors: firstly, the establishment of central registries for paraganglioma patients and secondly, the availability of next-generation sequencing methods. Exome sequencing and a gene-panel sequencing approach have already been successfully applied to paraganglioma syndromes. The latter method in particular is rapid and cost-effective, and may soon replace complex genotyping algorithms. The literature provides good evidence that diversified modern treatment options are available to realize individual treatment concepts for almost all paraganglioma manifestations. Generally, small and symptomatic tumors should be completely resected, particularly in younger patients. Considering the patient's age, symptoms, morbidity risk, and comorbidities, larger tumors should be surgically treated in a function-preserving manner. In these cases, primary radiotherapy is an equivalent alternative option. A "wait and scan" strategy is possible in selected cases. The potential morbidity of surgical treatment must be weighed against the expectable quality of life. Comprehensive consultation with the patient about possible treatment modalities is mandatory. Treatment decision making should involve a multidisciplinary team of experts.

  6. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner’s unique capabilities in IGRT protocols.
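
    The three-step pipeline named above (projection weighting, ramp filtering, weighted backprojection) can be illustrated for the simpler 2D parallel-beam case; the offset-FOV weighting that is the paper's actual contribution is not reproduced, and the detector and grid sizes below are assumptions (the detector is taken to have as many bins as the grid has columns).

    ```python
    # Textbook 2D filtered-backprojection skeleton: ramp filter + backprojection.
    import numpy as np

    def ramp_filter(sinogram):
        """Apply the ramp filter row-wise in the Fourier domain."""
        n = sinogram.shape[1]
        freqs = np.fft.fftfreq(n)
        return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))

    def backproject(filtered, angles, size):
        """Backproject filtered projections onto a size x size grid."""
        recon = np.zeros((size, size))
        xs = np.arange(size) - size / 2
        X, Y = np.meshgrid(xs, xs)
        for row, a in zip(filtered, angles):
            t = X * np.cos(a) + Y * np.sin(a) + size / 2
            recon += np.interp(t.ravel(), np.arange(size), row).reshape(size, size)
        return recon * np.pi / len(angles)

    # Tiny demo: a centered point-like "sinogram" reconstructs to a central peak.
    size, angles = 64, np.linspace(0, np.pi, 90, endpoint=False)
    sino = np.zeros((len(angles), size)); sino[:, size // 2] = 1.0
    img = backproject(ramp_filter(sino), angles, size)
    print(np.unravel_index(img.argmax(), img.shape))  # peak near the image centre
    ```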

  7. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled with the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.

  8. A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels

    PubMed Central

    Xia, Xu; Chen, Zhigang; Liu, Hui; Wang, Huihui; Zeng, Feng

    2016-01-01

    Traditional underground coalmine monitoring systems are mainly based on the use of wired transmission. However, when cables are damaged during an accident, it is difficult to obtain relevant data on environmental parameters and the emergency situation underground. To address this problem, the use of wireless sensor networks (WSNs) has been proposed. However, the shape of coalmine tunnels is not conducive to the deployment of WSNs as they are long and narrow. Therefore, issues with the network arise, such as extremely large energy consumption, very weak connectivity, long time delays, and a short lifetime. To solve these problems, in this study, a new routing protocol algorithm for multisink WSNs based on transmission power control is proposed. First, a transmission power control algorithm is used to negotiate the optimal communication radius and transmission power of each sink. Second, the non-uniform clustering idea is adopted to optimize the cluster head selection. Simulation results are subsequently compared to the Centroid of the Nodes in a Partition (CNP) strategy and show that the new algorithm delivers a good performance: power efficiency is increased by approximately 70%, connectivity is increased by approximately 15%, the cluster interference is diminished by approximately 50%, the network lifetime is increased by approximately 6%, and the delay is reduced with an increase in the number of sinks. PMID:27916917
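
    As a toy illustration of energy-aware cluster-head election in the spirit of the non-uniform clustering step above, the sketch below scores nodes by residual energy and distance to the nearest sink; the weights, head fraction, and scoring rule are invented for illustration and are not the paper's.

    ```python
    # Illustrative cluster-head election favoring high residual energy and
    # proximity to a sink; parameters are assumptions, not the paper's.
    import numpy as np

    def elect_heads(energy, dist_to_sink, fraction=0.1, w_energy=0.7):
        """energy, dist_to_sink: per-node arrays. Returns indices of elected heads."""
        e = energy / energy.max()
        d = 1.0 - dist_to_sink / dist_to_sink.max()   # closer to sink = higher score
        score = w_energy * e + (1 - w_energy) * d
        n_heads = max(1, int(fraction * len(energy)))
        return np.argsort(score)[::-1][:n_heads]

    rng = np.random.default_rng(2)
    heads = elect_heads(rng.uniform(0.2, 1.0, 100), rng.uniform(5, 120, 100))
    print(sorted(heads))
    ```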

  9. Active Structural Acoustic Control of Interior Noise on a Raytheon 1900D

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran; Sullivan, Brenda; Cline, John

    2000-01-01

    An active structural acoustic control system has been demonstrated on a Raytheon Aircraft Company 1900D turboprop airliner. Both single frequency and multi-frequency control of the blade passage frequency and its harmonics were accomplished. The control algorithm was a variant of the popular filtered-x LMS implemented in the principal component domain. The control system consisted of 21 inertial actuators and 32 microphones. The actuators were mounted to the aircraft's ring frames. The microphones were distributed uniformly throughout the interior at head height, both seated and standing. Actuator locations were selected using a combinatorial search optimization algorithm. The control system achieved a 14 dB noise reduction of the blade passage frequency during single frequency tests. Multi-frequency control of the 1st, 2nd and 3rd harmonics resulted in 10.2 dB, 3.3 dB and 1.6 dB noise reductions respectively. These results fall short of the predictions which were produced by the optimization algorithm (13.5 dB, 8.6 dB and 6.3 dB). The optimization was based on actuator transfer functions taken on the ground, and it is postulated that cabin pressurization at flight altitude was a factor in this discrepancy.
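
    The core filtered-x LMS update can be sketched for a single channel; the multichannel, principal-component-domain form used in the flight tests is not reproduced, and all signals and the path model below are assumptions for illustration.

    ```python
    # Single-channel filtered-x LMS: the weight update uses the reference
    # filtered through the secondary-path model (taken as exact here).
    import numpy as np

    def fxlms(x, d, s_path, n_taps=32, mu=5e-3):
        """x: reference; d: primary noise at the error mic; s_path: secondary path."""
        w = np.zeros(n_taps)                      # adaptive control filter
        x_hist = np.zeros(n_taps)
        fx_hist = np.zeros(n_taps)
        y_hist = np.zeros(len(s_path))
        fx = np.convolve(x, s_path)[:len(x)]      # reference filtered by path model
        e_log = np.zeros(len(x))
        for n in range(len(x)):
            x_hist = np.roll(x_hist, 1); x_hist[0] = x[n]
            fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx[n]
            y = w @ x_hist                        # anti-noise before secondary path
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e_log[n] = d[n] - s_path @ y_hist     # residual at the error microphone
            w += mu * e_log[n] * fx_hist          # filtered-x LMS weight update
        return w, e_log

    t = np.arange(4000) / 1000.0
    x = np.sin(2 * np.pi * 90 * t)                # blade-passage-like tone
    d = np.sin(2 * np.pi * 90 * t + 0.7)          # correlated disturbance at the mic
    w, e = fxlms(x, d, np.array([0.0, 0.8, 0.3]))
    print(abs(e[:200]).mean(), abs(e[-200:]).mean())  # error shrinks as w adapts
    ```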

  10. A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels.

    PubMed

    Xia, Xu; Chen, Zhigang; Liu, Hui; Wang, Huihui; Zeng, Feng

    2016-11-30

    Traditional underground coalmine monitoring systems are mainly based on the use of wired transmission. However, when cables are damaged during an accident, it is difficult to obtain relevant data on environmental parameters and the emergency situation underground. To address this problem, the use of wireless sensor networks (WSNs) has been proposed. However, the shape of coalmine tunnels is not conducive to the deployment of WSNs as they are long and narrow. Therefore, issues with the network arise, such as extremely large energy consumption, very weak connectivity, long time delays, and a short lifetime. To solve these problems, in this study, a new routing protocol algorithm for multisink WSNs based on transmission power control is proposed. First, a transmission power control algorithm is used to negotiate the optimal communication radius and transmission power of each sink. Second, the non-uniform clustering idea is adopted to optimize the cluster head selection. Simulation results are subsequently compared to the Centroid of the Nodes in a Partition (CNP) strategy and show that the new algorithm delivers a good performance: power efficiency is increased by approximately 70%, connectivity is increased by approximately 15%, the cluster interference is diminished by approximately 50%, the network lifetime is increased by approximately 6%, and the delay is reduced with an increase in the number of sinks.

  11. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field

    PubMed Central

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors, especially as heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This urges the use of other motion constraints for pedestrian gait and any other valuable heading-error-reduction information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called “virtual sensor”), though considerably reducing drift in PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth’s magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed by incorporating only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms. PMID:27618056
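
    A common magnetic-anomaly gate in the spirit of the MAD step accepts a magnetometer update only when the measured field resembles the undisturbed Earth field; the magnitude and inclination thresholds below are illustrative, not the paper's.

    ```python
    # Magnetometer health gate: pass a heading update to the EKF only if the
    # field magnitude and inclination match the assumed local Earth field.
    import numpy as np

    EARTH_FIELD_UT = 50.0      # nominal local field magnitude (uT), assumed
    MAG_TOL_UT = 3.0           # magnitude gate
    DIP_TOL_DEG = 5.0          # inclination gate
    NOMINAL_DIP_DEG = 60.0     # assumed local inclination

    def magnetometer_is_healthy(mag_xyz, grav_xyz):
        """mag_xyz: body-frame field (uT); grav_xyz: accelerometer gravity estimate."""
        norm_ok = abs(np.linalg.norm(mag_xyz) - EARTH_FIELD_UT) < MAG_TOL_UT
        # Inclination = angle between the field vector and the horizontal plane.
        g = grav_xyz / np.linalg.norm(grav_xyz)
        dip = np.degrees(np.arcsin(
            np.clip(np.dot(mag_xyz, g) / np.linalg.norm(mag_xyz), -1, 1)))
        dip_ok = abs(abs(dip) - NOMINAL_DIP_DEG) < DIP_TOL_DEG
        return bool(norm_ok and dip_ok)   # only then feed heading into the EKF

    print(magnetometer_is_healthy(np.array([20.0, 10.0, 44.0]),
                                  np.array([0.0, 0.0, 9.81])))  # True
    ```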

  12. Fraction-variant beam orientation optimization for non-coplanar IMRT

    NASA Astrophysics Data System (ADS)

    O'Connor, Daniel; Yu, Victoria; Nguyen, Dan; Ruan, Dan; Sheng, Ke

    2018-02-01

    Conventional beam orientation optimization (BOO) algorithms for IMRT assume that the same set of beam angles is used for all treatment fractions. In this paper we present a BOO formulation based on group sparsity that simultaneously optimizes non-coplanar beam angles for all fractions, yielding a fraction-variant (FV) treatment plan. Beam angles are selected by solving a multi-fraction fluence map optimization problem involving 500-700 candidate beams per fraction, with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using the fast iterative shrinkage-thresholding algorithm. Our FV BOO algorithm is used to create five-fraction treatment plans for digital phantom, prostate, and lung cases as well as a 30-fraction plan for a head and neck case. A homogeneous PTV dose coverage is maintained in all fractions. The treatment plans are compared with fraction-invariant plans that use a fixed set of beam angles for all fractions. The FV plans reduced OAR mean dose and D2 values on average by 3.3% and 3.8% of the prescription dose, respectively. Notably, mean OAR dose was reduced by 14.3% of prescription dose (rectum), 11.6% (penile bulb), 10.7% (seminal vesicle), 5.5% (right femur), 3.5% (bladder), 4.0% (normal left lung), 15.5% (cochleas), and 5.2% (chiasm). D2 was reduced by 14.9% of prescription dose (right femur), 8.2% (penile bulb), 12.7% (proximal bronchus), 4.1% (normal left lung), 15.2% (cochleas), 10.1% (orbits), 9.1% (chiasm), 8.7% (brainstem), and 7.1% (parotids). Meanwhile, PTV homogeneity defined as D95/D5 improved from .92 to .95 (digital phantom), from .95 to .98 (prostate case), and from .94 to .97 (lung case), and remained constant for the head and neck case. Moreover, the FV plans are dosimetrically similar to conventional plans that use twice as many beams per fraction. Thus, FV BOO offers the potential to reduce delivery time for non-coplanar IMRT.
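
    The group-sparsity ingredient can be sketched as block soft-thresholding inside a FISTA loop: whole candidate-beam fluence blocks are driven to zero, and the surviving groups are the selected beams. The objective, group weights, and the non-negativity handling below are schematic assumptions rather than the paper's full formulation.

    ```python
    # Group-sparse fluence optimization sketch: FISTA with block soft-thresholding.
    import numpy as np

    def prox_group_l2(x, groups, step_times_lambda):
        """Block soft-thresholding: shrink each beam's fluence block by its norm."""
        out = x.copy()
        for g in groups:                  # g = index array of one beam's beamlets
            norm = np.linalg.norm(x[g])
            scale = max(0.0, 1.0 - step_times_lambda / norm) if norm > 0 else 0.0
            out[g] = scale * x[g]
        return out

    def fista(A, d, groups, lam=1.0, iters=300):
        """Least-squares dose fidelity ||Ax - d||^2 plus group sparsity over beams."""
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        L = np.linalg.norm(A, 2)**2       # Lipschitz constant of the gradient
        for _ in range(iters):
            grad = A.T @ (A @ z - d)
            x_new = prox_group_l2(z - grad / L, groups, lam / L)
            x_new = np.maximum(x_new, 0)  # fluence must be non-negative (schematic)
            t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
            z = x_new + (t - 1) / t_new * (x_new - x)
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(4)
    A = rng.normal(size=(60, 40))
    x_true = np.zeros(40); x_true[:8] = 1.0          # only the first "beam" is needed
    d = A @ x_true
    groups = [np.arange(i, i + 8) for i in range(0, 40, 8)]
    x = fista(A, d, groups, lam=40.0)
    print([int(np.linalg.norm(x[g]) > 1e-6) for g in groups])  # e.g. [1, 0, 0, 0, 0]
    ```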

  13. A biomimetic, energy-harvesting, obstacle-avoiding, path-planning algorithm for UAVs

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Snorri

    This dissertation presents two new approaches to energy harvesting for Unmanned Aerial Vehicles (UAV). One method is based on the Potential Flow Method (PFM); the other method seeds a wind-field map based on updraft peak analysis and then applies a variant of the Bellman-Ford algorithm to find the minimum-cost path. Both methods are enhanced by taking into account the performance characteristics of the aircraft using advanced performance theory. The combined approach yields five possible trajectories from which the one with the minimum energy cost is selected. The dissertation concludes by using the developed theory and modeling tools to simulate the flight paths of two small Unmanned Aerial Vehicles (sUAV) in the 500 kg and 250 kg class. The results show that, in mountainous regions, substantial energy can be recovered, depending on topography and wind characteristics. For the examples presented, as much as 50% of the energy was recovered for a complex, multi-heading, multi-altitude, 170 km mission in an average wind speed of 9 m/s. The algorithms constitute a Generic Intelligent Control Algorithm (GICA) for autonomous unmanned aerial vehicles that enables an extraction of atmospheric energy while completing a mission trajectory. At the same time, the algorithm automatically adjusts the flight path in order to avoid obstacles, in a fashion not unlike what one would expect from living organisms, such as birds and insects. This multi-disciplinary approach renders the algorithm biomimetic, i.e. it constitutes a synthetic system that “mimics the formation and function of biological mechanisms and processes.”

  14. A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks

    NASA Astrophysics Data System (ADS)

    Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon

    In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro and WLAN networks are the candidate networks for this selection algorithm. The algorithm is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class and power consumption of each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared to the handover delay, this particular network is removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm consists of AHP (Analytic Hierarchy Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. We also conduct simulations using the OPNET simulation tool; the results show that the proposed algorithm provides a longer lifetime in the hybrid wireless network environment.
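
    The GRA stage can be illustrated with a small grey-relational-grade computation; the attribute values, weights, and benefit/cost designations below are made up, and the AHP weighting and lifetime-prediction stages are omitted.

    ```python
    # Grey Relational Analysis sketch: normalize attributes, then rank candidate
    # networks by their grey relational grade against the ideal series.
    import numpy as np

    def grey_relational_grade(M, weights, benefit_mask, rho=0.5):
        """M: (networks x attributes); benefit_mask: True where larger is better."""
        N = M.astype(float).copy()
        for j in range(M.shape[1]):           # normalize each attribute to [0, 1]
            col = M[:, j]
            span = (col.max() - col.min()) or 1.0
            N[:, j] = (col - col.min()) / span if benefit_mask[j] \
                else (col.max() - col) / span
        delta = np.abs(1.0 - N)               # distance to the ideal (all-ones) series
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return coeff @ weights

    # Columns: QoS score, cost, expected lifetime (h); rows: CDMA, WiBro, WLAN.
    M = np.array([[0.6, 0.9, 5.0], [0.8, 0.6, 3.5], [0.9, 0.3, 2.0]])
    grade = grey_relational_grade(M, weights=np.array([0.3, 0.3, 0.4]),
                                  benefit_mask=[True, False, True])
    print(grade.argmax())  # index of the selected network
    ```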

  15. Using the time shift in single pushbroom datatakes to detect ships and their heading

    NASA Astrophysics Data System (ADS)

    Willburger, Katharina A. M.; Schwenk, Kurt

    2017-10-01

    The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishery, ocean dumping and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely common, there is only little literature discussing algorithms based on imagery of optical camera systems. A ship detection algorithm for optical pushbroom data has been developed. It takes advantage of the special detector assembly of most of those scanners, which allows, apart from the detection of a ship, the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. The algorithm consists mainly of three steps: the creation of a land-water mask, the object extraction and the deeper examination of each single object. The latter step is built up from several spectral and geometric filters, making heavy use of the inter-channel displacement typical for pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their directions of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook on future improvements.

  16. Simulated annealing two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Velis, Danilo R.; Ulrych, Tadeusz J.

    We present a new method for solving the two-point seismic ray tracing problem based on Fermat's principle. The algorithm overcomes some well-known difficulties that arise in standard ray shooting and bending methods. Problems related to (1) the selection of new take-off angles and (2) local minima in multipathing cases are overcome by using an efficient simulated annealing (SA) algorithm. At each iteration, the ray is propagated from the source by solving a standard initial value problem. The last portion of the raypath is then forced to pass through the receiver. Using SA, the total traveltime is then globally minimized by obtaining the initial conditions that produce the absolute minimum path. The procedure is suitable for tracing rays through 2D complex structures, although it can be extended to deal with 3D velocity media. Not only direct waves, but also reflected and head waves can be incorporated in the scheme. One important advantage is its simplicity, inasmuch as any available or user-preferred initial value solver can be used. A number of clarifying examples of multipathing in 2D media are examined.
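
    A minimal simulated-annealing loop over the take-off angle conveys the idea; the stand-in traveltime function and all schedule parameters are assumptions, and a real implementation would propagate the ray through the velocity model and add the closing segment to the receiver.

    ```python
    # Simulated annealing over a take-off angle with a toy traveltime function.
    import math, random

    def anneal_takeoff(traveltime, theta0=0.0, t_start=1.0, t_end=1e-4,
                       cooling=0.995, step=0.2, seed=7):
        rng = random.Random(seed)
        theta, cost = theta0, traveltime(theta0)
        best_theta, best_cost = theta, cost
        T = t_start
        while T > t_end:
            cand = theta + rng.uniform(-step, step) * T / t_start
            c = traveltime(cand)
            # Accept better moves always; worse moves with Boltzmann probability,
            # which lets the search escape local minima caused by multipathing.
            if c < cost or rng.random() < math.exp(-(c - cost) / T):
                theta, cost = cand, c
                if c < best_cost:
                    best_theta, best_cost = cand, c
            T *= cooling
        return best_theta, best_cost

    # Toy multimodal traveltime with a global minimum near theta = 0.6 rad.
    tt = lambda th: 2 + 0.3 * math.sin(5 * th) + (th - 0.6)**2
    print(anneal_takeoff(tt))
    ```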

  17. 48 CFR 636.602-1 - Selection criteria.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Selection criteria. 636.602-1 Section 636.602-1 Federal Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES... Selection criteria. (b) The head of the contracting activity is the agency head's designee for the purpose...

  18. The fetal head evaluation during labor in the occiput posterior position: the ESA (evaluation by simulation algorithm) approach.

    PubMed

    Malvasi, Antonio; Bochicchio, Mario; Vaira, Lucia; Longo, Antonella; Pacella, Elena; Tinelli, Andrea

    2014-07-01

    The determination of fetal head position can be useful in labor to predict the success of labor management, especially in the case of malpositions. Malpositions are abnormal positions of the vertex of the fetal head and account for a large part of the indications for cesarean section for dystocic labor. The occiput posterior position occurs in 15-25% of patients before labor at term; however, most occiput posterior presentations rotate during labor, so that the incidence of occiput posterior at vaginal birth is approximately 5-7%. Persistence of the occiput posterior position is associated with a higher rate of interventions and with maternal and neonatal complications, and knowledge of the exact position of the fetal head is of paramount importance prior to any operative vaginal delivery, both for the safe positioning of the instrument that may be used (i.e. forceps versus vacuum) and for its successful outcome. Ultrasound (US)-diagnosed occiput posterior position during labor can predict occiput posterior position at birth. Given this evidence, the time required for fetal head descent and the position in the birth canal have an impact on the diagnosis of labor progression or arrested labor. To try to reduce these pitfalls, the authors developed a new algorithm, applied to intrapartum US and based on suitable US pictures, that sets out, in detail, the quantitative evaluation, in degrees, of the occiput posterior position of the fetal head in the pelvis and the birth canal, respectively, in the first and second stages of labor. The authors tested this computer system in a set of patients in labor.

  19. Bounded Kalman filter method for motion-robust, non-contact heart rate estimation

    PubMed Central

    Prakash, Sakthi Kumar Arul; Tucker, Conrad S.

    2018-01-01

    The authors of this work present a real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote Photo Plethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion, or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing i) a blur identification and denoising algorithm for each frame and ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm in non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state of the art rPPG methods in heart rate detection, as revealed by the benchmarked results. PMID:29552419
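
    A bounded constant-velocity Kalman filter of the general kind the abstract names can be sketched for one tracked coordinate; the innovation bound, noise levels, and frame rate below are illustrative assumptions.

    ```python
    # Constant-velocity Kalman filter with a bounded innovation: a blurred or
    # false detection cannot yank the tracked feature arbitrarily far.
    import numpy as np

    class BoundedKalman1D:
        def __init__(self, dt=1/30, q=1.0, r=4.0, max_innovation=15.0):
            self.F = np.array([[1, dt], [0, 1]])          # position / velocity
            self.H = np.array([[1.0, 0.0]])
            self.Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
            self.R = np.array([[r]])
            self.x = np.zeros(2)
            self.P = np.eye(2) * 10.0
            self.bound = max_innovation

        def step(self, z):
            # Predict.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # Bound the innovation before the update.
            y = float(np.clip(z - self.H @ self.x, -self.bound, self.bound))
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T / S
            self.x = self.x + (K * y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return self.x[0]

    kf = BoundedKalman1D()
    print(round(kf.step(100.0), 2))  # first update only moves by the bounded amount
    ```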

  20. 14 CFR Appendix F to Part 135 - Airplane Flight Recorder Specification

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... Heading (Primary flight crew reference): range 0−360° and discrete “true” or “mag”; accuracy ±2°; sampling interval 1 per second; resolution 0.5°. When true or magnetic heading can be selected as the primary heading reference, a discrete indicating selection must be... synchronization reference: On-Off (Discrete); accuracy none; sampling interval 1 per second. Preferably each crew member, but one discrete acceptable for all...

  1. Two-Year versus One-Year Head Start Program Impact: Addressing Selection Bias by Comparing Regression Modeling with Propensity Score Analysis

    ERIC Educational Resources Information Center

    Leow, Christine; Wen, Xiaoli; Korfmacher, Jon

    2015-01-01

    This article compares regression modeling and propensity score analysis as different types of statistical techniques used in addressing selection bias when estimating the impact of two-year versus one-year Head Start on children's school readiness. The analyses were based on the national Head Start secondary dataset. After controlling for…

  2. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    PubMed

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6-44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8-45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2-38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency in target definition for radiotherapy.
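
    A bare-bones EM fit of a multichannel Gaussian mixture (imaging channels stacked as features) illustrates the statistical core; the Markov-random-field spatial prior and the binary tumor mask of the authors' method are omitted, and the initialization is an assumption.

    ```python
    # EM for a multichannel Gaussian mixture over voxels (no MRF prior).
    import numpy as np

    def log_gauss(X, m, C):
        d = X.shape[1]
        L = np.linalg.cholesky(C)
        sol = np.linalg.solve(L, (X - m).T)
        return -0.5 * (d * np.log(2 * np.pi) + 2 * np.log(np.diag(L)).sum()
                       + (sol**2).sum(axis=0))

    def gmm_em(X, k=2, iters=50, seed=0):
        """X: (n_voxels, n_channels). Returns responsibilities (n_voxels, k)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        mu = X[rng.choice(n, k, replace=False)]
        cov = np.stack([np.cov(X.T) + np.eye(d) * 1e-6] * k)
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: posterior probability of each class per voxel.
            log_r = np.stack([np.log(pi[j]) + log_gauss(X, mu[j], cov[j])
                              for j in range(k)], axis=1)
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r); r /= r.sum(axis=1, keepdims=True)
            # M-step: update weights, means, covariances.
            nk = r.sum(axis=0)
            pi = nk / n
            mu = (r.T @ X) / nk[:, None]
            for j in range(k):
                Xc = X - mu[j]
                cov[j] = (r[:, j, None] * Xc).T @ Xc / nk[j] + np.eye(d) * 1e-6
        return r

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (300, 3)), rng.normal(4, 1, (200, 3))])
    r = gmm_em(X, k=2)
    print(r.shape, (r.argmax(axis=1)[:300] == r.argmax(axis=1)[0]).mean())
    ```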

  3. Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo

    2015-11-01

    Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images with good quality using sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving the image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions where the resolution needs to be preserved and where streak or noise needs to be suppressed. We then alternately updated each part of the image by solving two sub-minimization problems iteratively, where one minimization was focused on preserving the edge information of the first part while the other concentrated on the removal of noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising the image resolution. For both phantom and the patient cases, p-MGIR is able to achieve a clinically-reasonable image with 60 projections. Therefore, a clinically-viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than the quality obtained using other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can potentially be used as a CBCT reconstruction algorithm for low-dose scan requests.

  4. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading. PMID:23802936

  5. Distributed Multihop Clustering Approach for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Israr, Nauman; Awan, Irfan

    Prolonging the lifetime of Wireless Sensor Networks (WSNs) has been the focus of much current research. One issue that needs to be addressed along with prolonging network lifetime is ensuring uniform energy consumption across the network, especially in the case of random network deployment. Cluster-based routing algorithms are believed to be the best choice for WSNs because they work on the principle of divide and conquer and also improve network lifetime considerably compared to flat routing schemes. In this paper we propose a new routing strategy based on two-layer clustering which exploits the redundancy property of the network in order to minimise duplicate data transmission, and which makes both intercluster and intracluster communication multihop. The proposed algorithm makes use of the nodes in a network whose area coverage is already covered by neighbouring nodes. These nodes are marked as temporary cluster heads, which are later used randomly for multihop intercluster communication. Performance studies indicate that the proposed algorithm effectively solves the problem of load balancing across the network and is more energy efficient compared to the enhanced version of the widely used LEACH algorithm.

  6. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caillet, V; Colvill, E; Royal North Shore Hospital, St Leonards, Sydney

    2016-06-15

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf fitting approach respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high and low modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Trilogy Varian linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant differences between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison between the two leaf fitting algorithms demonstrated no significant differences in tracking errors, either in a clinically realistic environment or in silico. The similarities between the two independent algorithms give confidence in the use of either algorithm for clinical implementation.
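
    The KS comparison of cumulative tracking-error areas reported above is easy to reproduce with SciPy. In the sketch below the samples are synthetic placeholders (means borrowed from the reported 66.64/65.65 cm2 figures), not study data.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        direct_opt = rng.normal(66.64, 5.0, size=64)   # placeholder error areas (cm^2)
        piecewise = rng.normal(65.65, 5.0, size=64)

        d_stat, p_value = ks_2samp(direct_opt, piecewise)
        # A small D statistic indicates the two error distributions are nearly identical.
        print(f"KS D = {d_stat:.3f}, p = {p_value:.3f}")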

  7. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms on the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied the effect of the algorithm on noise reduction and image registration accuracy. To study the effect of noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To study the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum improvement in PSNR of 10 dB with respect to the noisy image was calculated. The improvements in MI and PCC were 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing the noise from low-dose CBCT images and improving the similarity metrics in terms of MI and PCC.
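
    For reference, the three metrics used above (PSNR, mutual information, Pearson correlation) each take only a few lines of NumPy. This is a generic sketch of the standard definitions, assuming the two images are aligned arrays of equal shape; it is not the authors' code.

        import numpy as np

        def psnr(ref, img):
            """Peak signal-to-noise ratio in dB, using the reference peak value."""
            mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
            return 10.0 * np.log10(float(ref.max()) ** 2 / mse)

        def mutual_information(a, b, bins=64):
            """MI estimated from the joint intensity histogram of two aligned images."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

        def pearson(a, b):
            """Pearson correlation coefficient of the flattened intensities."""
            return np.corrcoef(a.ravel(), b.ravel())[0, 1]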

  8. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    PubMed

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as part of the effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real time in the real environment, for a robotic head. We formulate gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to the power set of the criteria to reflect user-defined preferences. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with those of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.

  9. Motion compensation for fully 4D PET reconstruction using PET superset data

    NASA Astrophysics Data System (ADS)

    Verhaeghe, J.; Gravel, P.; Mio, R.; Fukasawa, R.; Rosa-Neto, P.; Soucy, J.-P.; Thompson, C. J.; Reader, A. J.

    2010-07-01

    Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for 18F-FDG obtained from Patlak analysis.

  10. Motion compensation for fully 4D PET reconstruction using PET superset data.

    PubMed

    Verhaeghe, J; Gravel, P; Mio, R; Fukasawa, R; Rosa-Neto, P; Soucy, J-P; Thompson, C J; Reader, A J

    2010-07-21

    Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for (18)F-FDG obtained from Patlak analysis.

  11. Head-to-head comparison of adaptive statistical and model-based iterative reconstruction algorithms for submillisievert coronary CT angiography.

    PubMed

    Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R

    2018-02-01

    Iterative reconstruction (IR) algorithms allow for a significant reduction in radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate signal-to-noise ratio (SNR). In a subgroup of 36 patients, diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%, P < 0.001), MBIR provided the largest noise reduction (-79% compared with FBP), outperforming ASiR (-59% compared with ASiR 100%; P < 0.001). Increased noise and lower SNR with ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects diagnostic accuracy of submillisievert CCTA for CAD detection.

  12. Cost Metric Algorithms for Internetwork Applications

    DTIC Science & Technology

    1989-04-01

    [Only report documentation page remnants are recoverable here: Naval Ocean Systems Center, technical report NOSC TR 1284; released under authority of M. B. Vineberg, Head, System Design and Battle Force and Theater Architecture Branch; approved for public release, distribution unlimited.]

  13. A New Automated Way to Measure Polyethylene Wear in THA Using a High Resolution CT Scanner: Method and Analysis

    PubMed Central

    Maguire Jr., Gerald Q.; Noz, Marilyn E.; Olivecrona, Henrik; Zeleznik, Michael P.

    2014-01-01

    Because the first total hip arthroplasty (THA) operation is the most advantageous, timely replacement of only the liner is socially and economically important, particularly as the utilization of THA increases: younger and more active patients are receiving implants, and they are living longer. Automatic algorithms were developed to infer liner wear by estimating the separation between the acetabular cup and femoral component head given a computed tomography (CT) volume. Two series of CT volumes of a hip phantom were acquired with the femoral component head placed at 14 different positions relative to the acetabular cup. The mean and standard deviation (SD) of the diameter of the acetabular cup and femoral component head, in addition to the range of error in the expected wear values and the repeatability of all the measurements, were calculated. The algorithms yielded a mean (±SD) diameter of 54.21 (±0.011) mm for the acetabular cup and 22.09 (±0.02) mm for the femoral component head. The wear error was ±0.1 mm and the repeatability was 0.077 mm. This approach is clinically applicable as it utilizes readily available computed tomography imaging systems and requires only five minutes of human interaction. PMID:24587727
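
    The geometric core implied above (estimating the cup and head centres, whose separation gives the wear vector) is commonly solved with an algebraic least-squares sphere fit. The sketch below shows that standard building block on synthetic points; it is not the paper's algorithm, and the 22.09 mm head diameter is borrowed only to generate test data.

        import numpy as np

        def fit_sphere(points):
            """Algebraic least-squares sphere fit to an (N, 3) array of surface points."""
            A = np.hstack([2.0 * points, np.ones((len(points), 1))])
            f = (points ** 2).sum(axis=1)               # |p|^2 = 2 c.p + (r^2 - |c|^2)
            sol, *_ = np.linalg.lstsq(A, f, rcond=None)
            center, k = sol[:3], sol[3]
            return center, np.sqrt(k + center @ center)

        # Synthetic check: points on a 22.09 mm diameter "head" offset from the origin.
        rng = np.random.default_rng(1)
        u = rng.normal(size=(500, 3))
        pts = 11.045 * u / np.linalg.norm(u, axis=1, keepdims=True) + np.array([1.0, 2.0, 3.0])
        center, radius = fit_sphere(pts)  # the head-cup centre offset would proxy wear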

  14. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    PubMed

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While many reconstruction algorithms are available for brain EIT, few studies have compared their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms on their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least-squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape errors with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, which account for the conductivity differences between brain tissues and the inhomogeneous conductivity distribution of the skull. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: SB-IBCD was the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.

  15. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, given the variety of available features, it is still a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method was presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent proteins, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The classes predicted by these algorithms, together with the true class of each protein, were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. From this list, algorithms were added one by one to build candidate ensemble prediction models, and the ensemble model with the best performance was selected as the optimal one. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both the feature selection and the algorithm selection procedures are genuinely helpful for building an ensemble prediction model that yields better performance.
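
    The mRMR step used above to rank features can be sketched compactly. The following greedy ranking is a generic illustration, assuming features have been discretized into integer bins; it is not the authors' exact pipeline, which additionally feeds the ranked list into per-set algorithm evaluation.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def mrmr_rank(X_disc, y, n_select):
            """Greedy mRMR ranking. X_disc: (n_samples, n_features) integer-binned."""
            n_feat = X_disc.shape[1]
            relevance = np.array([mutual_info_score(y, X_disc[:, j]) for j in range(n_feat)])
            first = int(np.argmax(relevance))
            selected, remaining = [first], set(range(n_feat)) - {first}
            while remaining and len(selected) < n_select:
                def score(j):  # relevance minus mean redundancy with chosen features
                    red = np.mean([mutual_info_score(X_disc[:, j], X_disc[:, s])
                                   for s in selected])
                    return relevance[j] - red
                best = max(remaining, key=score)
                selected.append(best)
                remaining.remove(best)
            return selected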

  16. Poster — Thur Eve — 59: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallawi, A; Farrell, T; Diamond, K

    2014-08-15

    Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. Five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSC obtained with the proposed selection method was slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and femoral diameter for the femoral heads provides reasonable segmentation accuracy.
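
    Both ingredients of the proposed strategy (the Dice overlap metric, and picking the atlases whose measurement ratio is nearest to one) are simple to state in NumPy. The sketch below is a generic illustration, not the study's code.

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def select_atlases(target_len, atlas_lens, k=5):
            """Pick the k atlases whose measurement ratio is nearest to one."""
            ratios = target_len / np.asarray(atlas_lens, dtype=float)
            order = np.argsort(np.abs(ratios - 1.0))
            return order[:k]   # indices of the k best-matching atlas subjects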

  17. Kinetic constrained optimization of the golf swing hub path.

    PubMed

    Nesbit, Steven M; McGinnis, Ryan S

    2014-12-01

    This study details an optimization of the golf swing, where the hand path and club angular trajectories are manipulated. The optimization goal was to maximize club head velocity at impact within the interaction kinetic limitations (force, torque, work, and power) of the golfer as determined through the analysis of a typical swing using a two-dimensional dynamic model. The study was applied to four subjects with diverse swing capabilities and styles. It was determined that it is possible for all subjects to increase their club head velocity at impact within their respective kinetic limitations through combined modifications to their respective hand path and club angular trajectories. The manner of the modifications, the degree of velocity improvement, the amount of kinetic reduction, and the associated kinetic limitation quantities were subject dependent. By artificially minimizing selected kinetic inputs within the optimization algorithm, it was possible to identify swing trajectory characteristics that indicated relative kinetic weaknesses of a subject. Practical implications are offered based upon the findings of the study. Key points: The hand path trajectory is an important characteristic of the golf swing and greatly affects club head velocity and golfer/club energy transfer. It is possible to increase the energy transfer from the golfer to the club by modifying the hand path and swing trajectories without increasing the kinetic output demands on the golfer. It is possible to identify relative kinetic output strengths and weaknesses of a golfer through assessment of the hand path and swing trajectories. Increasing any one of the kinetic outputs of the golfer can potentially increase the club head velocity at impact. The hand path trajectory has important influences over the club swing trajectory.

  18. Kinetic Constrained Optimization of the Golf Swing Hub Path

    PubMed Central

    Nesbit, Steven M.; McGinnis, Ryan S.

    2014-01-01

    This study details an optimization of the golf swing, where the hand path and club angular trajectories are manipulated. The optimization goal was to maximize club head velocity at impact within the interaction kinetic limitations (force, torque, work, and power) of the golfer as determined through the analysis of a typical swing using a two-dimensional dynamic model. The study was applied to four subjects with diverse swing capabilities and styles. It was determined that it is possible for all subjects to increase their club head velocity at impact within their respective kinetic limitations through combined modifications to their respective hand path and club angular trajectories. The manner of the modifications, the degree of velocity improvement, the amount of kinetic reduction, and the associated kinetic limitation quantities were subject dependent. By artificially minimizing selected kinetic inputs within the optimization algorithm, it was possible to identify swing trajectory characteristics that indicated relative kinetic weaknesses of a subject. Practical implications are offered based upon the findings of the study. Key points: The hand path trajectory is an important characteristic of the golf swing and greatly affects club head velocity and golfer/club energy transfer. It is possible to increase the energy transfer from the golfer to the club by modifying the hand path and swing trajectories without increasing the kinetic output demands on the golfer. It is possible to identify relative kinetic output strengths and weaknesses of a golfer through assessment of the hand path and swing trajectories. Increasing any one of the kinetic outputs of the golfer can potentially increase the club head velocity at impact. The hand path trajectory has important influences over the club swing trajectory. PMID:25435779

  19. McTwo: a two-step feature selection algorithm based on maximal information coefficient.

    PubMed

    Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng

    2016-03-23

    High-throughput bio-OMIC technologies are producing high-dimensional data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increasing time required to find the globally optimal solution, all existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, the Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features that are associated with phenotypes and independent of each other, while achieving high classification performance with the nearest neighbor algorithm. Based on a comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes according to the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. McTwo may therefore represent a complementary feature selection algorithm for high-dimensional biomedical datasets.

  20. Development of a microcomputer-based magnetic heading sensor

    NASA Technical Reports Server (NTRS)

    Garner, H. D.

    1987-01-01

    This paper explores the development of a flux-gate magnetic heading reference using a single-chip microcomputer to process heading information and to present it to the pilot in appropriate form. This instrument is intended to replace the conventional combination of mechanical compass and directional gyroscope currently in use in general aviation aircraft, at appreciable savings in cost and reduction in maintenance. Design of the sensing element, the signal processing electronics, and the computer algorithms which calculate the magnetic heading of the aircraft from the magnetometer data have been integrated in such a way as to minimize hardware requirements and simplify calibration procedures. Damping and deviation errors are avoided by the inherent design of the device, and a technique for compensating for northerly-turning-error is described.
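
    The heading calculation at the core of such an instrument reduces to an arctangent of the two horizontal flux-gate components plus calibration offsets. A minimal sketch follows; the sign convention and the hard-iron offsets are assumptions that depend on sensor mounting, and the tilt and northerly-turning-error compensation discussed in the paper is omitted.

        import math

        def magnetic_heading(bx, by, offset_x=0.0, offset_y=0.0):
            """Heading in degrees (0 = magnetic north) from level flux-gate outputs.
            offset_x / offset_y are illustrative hard-iron calibration constants."""
            heading = math.degrees(math.atan2(-(by - offset_y), bx - offset_x))
            return heading % 360.0

        print(magnetic_heading(0.2, -0.2))  # e.g. 45 degrees for this axis convention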

  1. Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei

    2016-01-29

    In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice for the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) in dynamic clustering for data exchange is proposed, and the selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during network activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified ECC encryption with Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. Analysis of data storage security, data transfer security and the dynamic key management mechanism shows that the proposed scheme improves the resilience of the network's key management system while satisfying high connectivity and storage efficiency.
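
    The ECDH session-key step between cluster head and mobile sink can be illustrated with the widely used Python `cryptography` package. This is a plain ECDH plus HKDF sketch, not the paper's modified ECC variant; the curve choice and the HKDF info string are assumptions.

        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        ch_private = ec.generate_private_key(ec.SECP256R1())   # cluster head key pair
        ms_private = ec.generate_private_key(ec.SECP256R1())   # mobile sink key pair

        # Each side combines its private key with the peer's public key.
        shared_ch = ch_private.exchange(ec.ECDH(), ms_private.public_key())
        shared_ms = ms_private.exchange(ec.ECDH(), ch_private.public_key())
        assert shared_ch == shared_ms   # both derive the same shared secret

        # Derive a fixed-length session key from the raw shared secret.
        session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                           info=b"ch-ms session").derive(shared_ch)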

  2. Feature Selection Method Based on Neighborhood Relationships: Applications in EEG Signal Identification and Chinese Character Recognition

    PubMed Central

    Zhao, Yu-Xiang; Chou, Chien-Hsing

    2016-01-01

    In this study, a new feature selection algorithm, the neighborhood-relationship feature selection (NRFS) algorithm, is proposed for identifying rat electroencephalogram signals and recognizing Chinese characters. In these two applications, dependent relationships exist among the feature vectors and their neighboring feature vectors. The proposed NRFS algorithm was therefore designed to exploit this property. Under the NRFS algorithm, unselected feature vectors have a high priority of being added into the feature subset if their neighboring feature vectors have been selected. In addition, selected feature vectors have a high priority of being eliminated if their neighboring feature vectors are not selected. In the experiments conducted in this study, the NRFS algorithm was compared with two other feature selection algorithms. The experimental results indicated that the NRFS algorithm can extract the crucial frequency bands for identifying rat vigilance states and the crucial character regions for recognizing Chinese characters. PMID:27314346

  3. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method in the proximal region is proposed, which is based on a low-cost 3D sound localization algorithm with the use of head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded system implementation of the proposed method is also described, showing that the method improves sound effects in the proximal region with only a 5.1% increase in memory capacity and an 8.3% increase in computational cost.

  4. Optimizing Energy Consumption in Vehicular Sensor Networks by Clustering Using Fuzzy C-Means and Fuzzy Subtractive Algorithms

    NASA Astrophysics Data System (ADS)

    Ebrahimi, A.; Pahlavani, P.; Masoumi, Z.

    2017-09-01

    Traffic monitoring and management in urban intelligent transportation systems (ITS) can be carried out based on vehicular sensor networks. In a vehicular sensor network, vehicles equipped with sensors such as GPS can act as mobile sensors for sensing urban traffic and sending reports to a traffic monitoring center (TMC) for traffic estimation. Energy consumption by the sensor nodes is a main problem in wireless sensor networks (WSNs); moreover, it is the most important consideration in designing these networks. Clustering the sensor nodes is considered an effective solution to reduce the energy consumption of WSNs. Each cluster has a Cluster Head (CH) and a number of nodes located within its supervision area. The cluster heads are responsible for gathering and aggregating the information of their clusters and transmitting it to the data collection center. Hence, the use of clustering decreases the volume of transmitted information and, consequently, reduces the energy consumption of the network. In this paper, Fuzzy C-Means (FCM) and Fuzzy Subtractive algorithms are employed to cluster sensors, and their effect on the energy consumption of the sensors is investigated. The FCM and Fuzzy Subtractive algorithms reduced the energy consumption of the vehicle sensors by up to 90.68% and 92.18%, respectively; the Fuzzy Subtractive algorithm thus performs about 1.5 percentage points better.
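
    The FCM update equations behind the first of the two schemes fit in a dozen lines of NumPy. The sketch below is a generic FCM implementation, not the paper's code; taking the vehicle nearest each converged centre as the cluster head is one plausible, assumed convention.

        import numpy as np

        def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
            """Plain fuzzy C-means: returns cluster centres and memberships U."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)                   # rows sum to one
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted means
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = d ** (-2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)               # membership update
            return centers, U

        # Cluster 2-D vehicle positions; the node nearest each centre could act as CH.
        positions = np.random.default_rng(1).random((200, 2))
        centers, U = fuzzy_c_means(positions, c=5)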

  5. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm, with the goal of integrating the advantages of both. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. To test the accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This shows that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  6. Selection of head and whisker coordination strategies during goal-oriented active touch.

    PubMed

    Schroeder, Joseph B; Ritt, Jason T

    2016-04-01

    In the rodent whisker system, a key model for neural processing and behavioral choices during active sensing, whisker motion is increasingly recognized as only part of a broader motor repertoire employed by rodents during active touch. In particular, recent studies suggest whisker and head motions are tightly coordinated. However, conditions governing the selection and temporal organization of such coordinated sensing strategies remain poorly understood. We videographically reconstructed head and whisker motions of freely moving mice searching for a randomly located rewarded aperture, focusing on trials in which animals appeared to rapidly "correct" their trajectory under tactile guidance. Mice orienting after unilateral contact repositioned their whiskers similarly to previously reported head-turning asymmetry. However, whisker repositioning preceded head turn onsets and was not bilaterally symmetric. Moreover, mice selectively employed a strategy we term contact maintenance, with whisking modulated to counteract head motion and facilitate repeated contacts on subsequent whisks. Significantly, contact maintenance was not observed following initial contact with an aperture boundary, when the mouse needed to make a large corrective head motion to the front of the aperture, but only following contact by the same whisker field with the opposite aperture boundary, when the mouse needed to precisely align its head with the reward spout. Together these results suggest that mice can select from a diverse range of sensing strategies incorporating both knowledge of the task and whisk-by-whisk sensory information and, moreover, suggest the existence of high level control (not solely reflexive) of sensing motions coordinated between multiple body parts.

  7. Selection of head and whisker coordination strategies during goal-oriented active touch

    PubMed Central

    2016-01-01

    In the rodent whisker system, a key model for neural processing and behavioral choices during active sensing, whisker motion is increasingly recognized as only part of a broader motor repertoire employed by rodents during active touch. In particular, recent studies suggest whisker and head motions are tightly coordinated. However, conditions governing the selection and temporal organization of such coordinated sensing strategies remain poorly understood. We videographically reconstructed head and whisker motions of freely moving mice searching for a randomly located rewarded aperture, focusing on trials in which animals appeared to rapidly “correct” their trajectory under tactile guidance. Mice orienting after unilateral contact repositioned their whiskers similarly to previously reported head-turning asymmetry. However, whisker repositioning preceded head turn onsets and was not bilaterally symmetric. Moreover, mice selectively employed a strategy we term contact maintenance, with whisking modulated to counteract head motion and facilitate repeated contacts on subsequent whisks. Significantly, contact maintenance was not observed following initial contact with an aperture boundary, when the mouse needed to make a large corrective head motion to the front of the aperture, but only following contact by the same whisker field with the opposite aperture boundary, when the mouse needed to precisely align its head with the reward spout. Together these results suggest that mice can select from a diverse range of sensing strategies incorporating both knowledge of the task and whisk-by-whisk sensory information and, moreover, suggest the existence of high level control (not solely reflexive) of sensing motions coordinated between multiple body parts. PMID:26792880

  8. SU-E-J-224: Multimodality Segmentation of Head and Neck Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristophanous, M; Yang, J; Beadle, B

    2014-06-01

    Purpose: Develop an algorithm that is able to automatically segment tumor volume in Head and Neck cancer by integrating information from CT, PET and MR imaging simultaneously. Methods: Twenty-three patients recruited under an adaptive radiotherapy protocol had MR, CT and PET/CT scans within 2 months prior to the start of radiotherapy. The patients had unresectable disease and were treated either with chemoradiotherapy or radiation therapy alone. Using the Velocity software, the PET/CT and MR (T1 weighted+contrast) scans were registered to the planning CT using deformable and rigid registration, respectively. The PET and MR images were then resampled according to the registration to match the planning CT. The resampled images, together with the planning CT, were fed into a multi-channel segmentation algorithm, which is based on Gaussian mixture models and solved with the expectation-maximization algorithm and Markov random fields. A rectangular region of interest (ROI) was manually placed to identify the tumor area and facilitate the segmentation process. The auto-segmented tumor contours were compared with the gross tumor volume (GTV) manually defined by the physician. The volume difference and Dice similarity coefficient (DSC) between the manual and auto-segmented GTV contours were calculated as the quantitative evaluation metrics. Results: The multimodality segmentation algorithm was applied to all 23 patients. The volumes of the auto-segmented GTV ranged from 18.4cc to 32.8cc. The average (range) volume difference between the manual and auto-segmented GTV was −42% (−32.8%–63.8%). The average DSC value was 0.62, ranging from 0.39 to 0.78. Conclusion: An algorithm for the automated definition of tumor volume using multiple imaging modalities simultaneously was successfully developed and implemented for Head and Neck cancer. This development, along with more accurate registration algorithms, can aid physicians in the efforts to interpret the multitude of imaging information available in radiotherapy today. This project was supported by a grant from Varian Medical Systems.
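
    The core of the multi-channel model above (a Gaussian mixture over stacked per-voxel CT/PET/MR intensities, fitted with EM) can be sketched with scikit-learn. This is a simplified illustration: the Markov random field spatial term described in the abstract is omitted, and the ROI arrays are assumed to be registered and equally shaped.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def segment_roi(ct, pet, mr, n_classes=2):
            """ct, pet, mr: equally shaped ROI arrays from the registered volumes.
            Returns a per-voxel class label map (e.g. tumor vs. background)."""
            feats = np.stack([ct.ravel(), pet.ravel(), mr.ravel()], axis=1)
            gmm = GaussianMixture(n_components=n_classes,
                                  covariance_type="full").fit(feats)  # EM fitting
            return gmm.predict(feats).reshape(ct.shape)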

  9. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information on uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
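
    The ALD-based sparsification mentioned above admits a compact statement: a new sample is added to the kernel dictionary only if its feature-space projection residual exceeds a threshold. A generic NumPy sketch follows; the threshold nu and the RBF kernel width are illustrative assumptions.

        import numpy as np

        def ald_test(dictionary, x, kernel, nu=1e-3):
            """Approximate linear dependency check for kernel sparsification.
            Returns True if x is sufficiently novel and should be added."""
            if not dictionary:
                return True
            K = np.array([[kernel(a, b) for b in dictionary] for a in dictionary])
            k = np.array([kernel(a, x) for a in dictionary])
            coeffs = np.linalg.solve(K + 1e-9 * np.eye(len(K)), k)
            delta = kernel(x, x) - k @ coeffs   # residual of projecting phi(x)
            return delta > nu

        rbf = lambda a, b: float(np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2)))
        dictionary = []
        for x in ([0.0, 0.0], [0.0, 0.1], [2.0, 2.0]):
            if ald_test(dictionary, x, rbf):
                dictionary.append(x)   # keeps only samples novel in feature space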

  10. Photon migration through fetal head in utero using continuous wave, near infrared spectroscopy: development and evaluation of experimental and numerical models

    NASA Astrophysics Data System (ADS)

    Vishnoi, Gargi; Hielscher, Andreas H.; Ramanujam, Nirmala; Chance, Britton

    2000-04-01

    In this work, experimental tissue phantoms and numerical models were developed to estimate photon migration through the fetal head in utero. The tissue phantoms incorporate a fetal head within an amniotic fluid sac surrounded by a maternal tissue layer. A continuous wave, dual-wavelength (λ = 760 and 850 nm) spectrometer was employed to make near-infrared measurements on the tissue phantoms for various source-detector separations, fetal-head positions, and fetal-head optical properties. In addition, numerical simulations of photon propagation were performed with finite-difference algorithms that provide solutions to the equation of radiative transfer as well as the diffusion equation. The simulations were compared with measurements on tissue phantoms to determine the best numerical model to describe photon migration through the fetal head in utero. Evaluation of the results indicates that tissue phantoms in which the contact between the fetal head and the uterine wall is uniform best simulate the fetal head in utero for near-term pregnancies. Furthermore, we found that maximum sensitivity to the head can be achieved if the source of the probe is positioned directly above the fetal head. By optimizing the source-detector separation, the signal originating from photons that have traveled through the fetal head can be drastically increased.

  11. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false alarm rate. An Adaboost algorithm with a hierarchical structure is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Back Propagation network optimized by the Cultural Algorithm and the Artificial Fish Swarm Algorithm is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model achieves strong intrusion detection performance. PMID:26447696

  12. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false alarm rate. An Adaboost algorithm with a hierarchical structure is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Back Propagation network optimized by the Cultural Algorithm and the Artificial Fish Swarm Algorithm is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model achieves strong intrusion detection performance.

  13. The improvement and simulation for LEACH clustering routing protocol

    NASA Astrophysics Data System (ADS)

    Ji, Ai-guo; Zhao, Jun-xiang

    2017-01-01

    An energy-balanced unequal multi-hop clustering routing protocol, LEACH-EUMC, is proposed in this paper. Candidate cluster head nodes are elected first; they then compete to become formal cluster head nodes through added energy and distance factors, and finally the data are transferred to the sink through multi-hop routing. Simulation results show that the improved algorithm outperforms LEACH in network lifetime, energy consumption and the amount of data transmitted.
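
    For reference, the classic LEACH election threshold that such improvements build on is T(n) = p / (1 - p (r mod 1/p)), applied to nodes that have not served as cluster head in the current epoch. The sketch below multiplies it by a residual-energy factor in the spirit of the abstract; the paper's exact energy and distance weighting is not given there, so that factor is an assumption.

        import random

        def leach_threshold(p, round_no):
            """Classic LEACH threshold T(n); p is the desired CH fraction.
            Applies only to nodes not yet elected CH in the current epoch."""
            return p / (1.0 - p * (round_no % int(round(1.0 / p))))

        def elect_cluster_head(p, round_no, residual_energy, initial_energy):
            # Residual-energy weighting is an illustrative assumption, not the
            # paper's exact factor; distance weighting is omitted here.
            threshold = leach_threshold(p, round_no) * (residual_energy / initial_energy)
            return random.random() < threshold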

  14. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve on the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, a few input vectors that carry sufficient information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm attains smaller steady-state estimation errors compared with existing algorithms.
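
    For context, the affine projection update that the selected input vectors feed into has a standard closed form: w is moved along X^T (X X^T + eps*I)^(-1) e. The NumPy sketch below shows that generic update; the paper's grouping and selection logic, which decides which rows enter X, is not reproduced.

        import numpy as np

        def apa_update(w, X, d, mu=0.5, eps=1e-6):
            """One affine projection update.
            X: (p, n) matrix whose rows are the p selected input vectors,
            d: (p,) desired outputs, w: (n,) filter coefficients."""
            e = d - X @ w                                   # a priori errors
            w = w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(len(X)), e)
            return w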

  15. SU-E-J-119: Head-And-Neck Digital Phantoms for Geometric and Dosimetric Uncertainty Evaluation of CT-CBCT Deformable Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Z; Koyfman, S; Xia, P

    2015-06-15

    Purpose: To evaluate geometric and dosimetric uncertainties of CT-CBCT deformable image registration (DIR) algorithms using digital phantoms generated from real patients. Methods: We selected ten H&N cancer patients with adaptive IMRT. For each patient, a planning CT (CT1), a replanning CT (CT2), and a pretreatment CBCT (CBCT1) were used as the basis for digital phantom creation. Manually adjusted meshes were created for selected ROIs (e.g. PTVs, brainstem, spinal cord, mandible, and parotids) on CT1 and CT2. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was applied to CBCT1 to create a simulated mid-treatment CBCT (CBCT2). The CT-CBCT digital phantom consisted of CT1 and CBCT2, which were linked by the reference DVF. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten digital phantoms. The images, ROIs, and volumetric doses were mapped from CT1 to CBCT2 using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.83 to 0.94 for Demons, from 0.82 to 0.95 for B-Spline, and from 0.67 to 0.89 for intensity-based DIR. The average Hausdorff distances for selected ROIs were from 2.4 to 6.2 mm for Demons, from 1.8 to 5.9 mm for B-Spline, and from 2.8 to 11.2 mm for intensity-based DIR. The average absolute dose errors for selected ROIs were from 0.7 to 2.1 Gy for Demons, from 0.7 to 2.9 Gy for B-Spline, and from 1.3 to 4.5 Gy for intensity-based DIR. Conclusion: Using clinically realistic CT-CBCT digital phantoms, Demons and B-Spline were shown to have similar geometric and dosimetric uncertainties while intensity-based DIR had the worst uncertainties. CT-CBCT DIR has the potential to provide accurate CBCT-based dose verification for H&N adaptive radiotherapy. Z Shen: None; K Bzdusek: an employee of Philips Healthcare; S Koyfman: None; P Xia: received research grants from Philips Healthcare and Siemens Healthcare.

  16. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    NASA Astrophysics Data System (ADS)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features comprised in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection needs to be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection on microarray datasets. This study reveals that these algorithms yield interesting results with limited resources, thereby saving computational expense for the downstream machine learning algorithms.

  17. SU-E-T-764: Track Repeating Algorithm for Proton Therapy Applied to Intensity Modulated Proton Therapy for Head-And-Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yepes, P; Mirkovic, D; Mohan, R

    Purpose: To determine the suitability of fast Monte Carlo techniques for dose calculation in particle therapy based on a track-repeating algorithm for Intensity Modulated Proton Therapy (IMPT). The application of this technique will make possible detailed retrospective studies of large cohorts of patients, which may lead to a better determination of Relative Biological Effects from the analysis of patient data. Methods: A cohort of six head-and-neck patients treated at the University of Texas MD Anderson Cancer Center with IMPT was utilized. The dose distributions were calculated with the standard Treatment Plan System (TPS), MCNPX, GEANT4 and FDC, a fast track-repeating algorithm for proton therapy, for both the verification and the patient plans. FDC is based on a GEANT4 database of trajectories of protons in water. The obtained dose distributions were compared to each other utilizing the γ-index criteria for 3mm-3% and 2mm-2%, the maximum spatial and dose differences. The γ-index was calculated for voxels with a dose of at least 10% of the maximum delivered dose. Dose Volume Histograms were also calculated for the various dose distributions. Results: Good agreement between GEANT4 and FDC is found, with less than 1% of the voxels having a γ-index larger than 1 for 2mm-2%. The agreement between MCNPX and FDC is within the requirements of clinical standards, even though it is slightly worse than the comparison with GEANT4. The comparison with TPS yielded larger differences, which is to be expected because pencil beam algorithms do not always perform well in highly inhomogeneous areas like head-and-neck. Conclusion: The good agreement between a track-repeating algorithm and a full Monte Carlo for a large cohort of patients and a challenging site like head-and-neck opens the path to systematic and detailed studies of large cohorts, which may yield a better understanding of biological effects.
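
    The γ-index evaluation used above combines a distance-to-agreement criterion (e.g. 3 mm) with a dose-difference criterion (e.g. 3%). A brute-force one-dimensional sketch of the standard definition follows; it is an illustration, not the study's code.

        import numpy as np

        def gamma_index_1d(dose_ref, dose_eval, spacing, dta=3.0, dd=0.03):
            """Brute-force 1D gamma (e.g. 3 mm / 3%); doses on a common grid.
            spacing: grid spacing in mm; dd: dose criterion as a fraction."""
            n = len(dose_ref)
            pos = np.arange(n) * spacing
            d_norm = dd * dose_ref.max()        # global dose normalization
            gamma = np.empty(n)
            for i in range(n):
                dist2 = ((pos - pos[i]) / dta) ** 2
                dose2 = ((dose_eval - dose_ref[i]) / d_norm) ** 2
                gamma[i] = np.sqrt(np.min(dist2 + dose2))
            return gamma                        # a point passes where gamma <= 1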

  18. Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.

    PubMed

    Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin

    2018-03-01

    The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single-orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best-scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
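
    Two of the challenge metrics, RMSE and SSIM, are directly available from NumPy and scikit-image. A small sketch with synthetic stand-in maps follows; HFEN, which additionally requires a Laplacian-of-Gaussian filter, is omitted, and the percent-RMSE convention is an assumption.

        import numpy as np
        from skimage.metrics import structural_similarity

        def rmse_percent(ref, est):
            """RMSE expressed as a percentage of the reference norm."""
            return 100.0 * np.linalg.norm(est - ref) / np.linalg.norm(ref)

        rng = np.random.default_rng(0)
        chi_ref = rng.normal(size=(64, 64))              # stand-in reference map
        chi_est = chi_ref + 0.1 * rng.normal(size=(64, 64))  # stand-in reconstruction

        print(rmse_percent(chi_ref, chi_est))
        print(structural_similarity(chi_ref, chi_est,
                                    data_range=chi_ref.max() - chi_ref.min()))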

  19. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR

    PubMed Central

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO and ACO. In this work, two novel hybrid meta-heuristic algorithms, Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), based on genetic algorithms and learning automata, are proposed for QSAR feature selection. SGALA exploits the advantages of the genetic algorithm and learning automata sequentially, while MGALA exploits them simultaneously. We applied the proposed algorithms to select the minimum possible number of features from three different datasets and observed that MGALA and SGALA had the best outcome, individually and on average, compared to other feature selection algorithms. Through comparison of the proposed algorithms, we found that the rate of convergence to the optimal result of MGALA and SGALA was better than that of the GA, ACO, PSO and LA algorithms. Finally, the feature subsets selected by the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were used as input to an LS-SVR model; the results showed that the LS-SVR model had more predictive ability with the input from SGALA and MGALA than with the input from all the other algorithms. The results thus corroborate that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to that of all the other algorithms considered. PMID:28979308
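
    The paper's SGALA/MGALA hybrids are not reproduced here, but the GA half of the idea, evolving binary feature masks scored by a wrapped regressor, can be sketched compactly. The SVR below stands in for the paper's LS-SVR; population sizes and rates are illustrative.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      def fitness(mask, X, y):
          if not mask.any():
              return -np.inf
          # wrapper fitness: cross-validated R^2 on the selected feature subset
          return cross_val_score(SVR(), X[:, mask], y, cv=3).mean()

      def ga_select(X, y, pop=20, gens=30, p_mut=0.05, seed=0):
          rng = np.random.default_rng(seed)
          n = X.shape[1]
          masks = rng.random((pop, n)) < 0.5           # random initial population
          for _ in range(gens):
              scores = np.array([fitness(m, X, y) for m in masks])
              parents = masks[np.argsort(scores)[::-1][: pop // 2]]  # truncation
              cuts = rng.integers(1, n, size=pop // 2)
              kids = np.array([np.concatenate((parents[i][:c], parents[-i - 1][c:]))
                               for i, c in enumerate(cuts)])  # one-point crossover
              kids ^= rng.random(kids.shape) < p_mut          # bit-flip mutation
              masks = np.vstack((parents, kids))
          scores = np.array([fitness(m, X, y) for m in masks])
          return masks[np.argmax(scores)]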

  1. A new head phantom with realistic shape and spatially varying skull resistivity distribution.

    PubMed

    Li, Jian-Bo; Tang, Chi; Dai, Meng; Liu, Geng; Shi, Xue-Tao; Yang, Bin; Xu, Can-Hua; Fu, Feng; You, Fu-Sheng; Tang, Meng-Xing; Dong, Xiu-Zhen

    2014-02-01

    Brain electrical impedance tomography (EIT) is an emerging method for monitoring brain injuries. To effectively evaluate brain EIT systems and reconstruction algorithms, we have developed a novel head phantom that features realistic anatomy and spatially varying skull resistivity. The head phantom was created with three layers, representing scalp, skull, and brain tissues. The fabrication process entailed 3-D printing of the anatomical geometry for mold creation followed by casting to ensure high geometrical precision and accuracy of the resistivity distribution. We evaluated the accuracy and stability of the phantom. Results showed that the head phantom achieved high geometric accuracy, accurate skull resistivity values, and good stability over time and in the frequency domain. Experimental impedance reconstructions performed using the head phantom and computer simulations were found to be consistent for the same perturbation object. In conclusion, this new phantom could provide a more accurate test platform for brain EIT research.

  2. Human leader and robot follower team: correcting leader's position from follower's heading

    NASA Astrophysics Data System (ADS)

    Borenstein, Johann; Thomas, David; Sights, Brandon; Ojeda, Lauro; Bankole, Peter; Fellars, Donald

    2010-04-01

    In multi-agent scenarios, there can be a disparity in the quality of position estimation amongst the various agents. Here, we consider the case of two agents - a leader and a follower - following the same path, in which the follower has a significantly better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule" following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of heading errors caused by slow-changing errors (e.g., those caused by drift in gyros) of the leader's navigation system and corrects those errors.
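
    The FDHC details are not given in the abstract, but its core premise, that the leader's heading error is slow-changing and can be estimated against the follower's better heading over the shared path, can be sketched as a low-pass-filtered heading difference. The filter constant and the assumption of matched path points are ours, not the paper's.

      import numpy as np

      def wrap(a):
          # wrap an angle to (-pi, pi]
          return (a + np.pi) % (2.0 * np.pi) - np.pi

      def fdhc_sketch(leader_heading, follower_heading, alpha=0.02):
          # leader_heading: leader's drifting gyro heading at matched path points (rad)
          # follower_heading: follower-derived heading for the same points (rad)
          drift = 0.0
          corrected = np.empty_like(leader_heading)
          for k in range(len(leader_heading)):
              err = wrap(leader_heading[k] - follower_heading[k])
              drift = (1.0 - alpha) * drift + alpha * err   # slow-drift estimate
              corrected[k] = wrap(leader_heading[k] - drift)
          return corrected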

  3. Attitude Heading Reference System Using MEMS Inertial Sensors with Dual-Axis Rotation

    PubMed Central

    Kang, Li; Ye, Lingyun; Song, Kaichen; Zhou, Yang

    2014-01-01

    This paper proposes a low-cost, small-size attitude and heading reference system (AHRS) based on MEMS inertial sensors. A dual-axis rotation structure with a proper rotary scheme designed according to stated principles is applied in the system to compensate for the attitude and heading drift caused by the large gyroscope biases. An optimization algorithm is applied to compensate for the installation angle error between the body frame and the rotation table's frame. Simulations and experiments were carried out to evaluate the performance of the AHRS. The results show that proper rotation can significantly reduce the attitude and heading drifts. Moreover, the new AHRS is not affected by magnetic interference. With rotation, the attitude and heading merely oscillate within a bounded range. The attitude error is about 3° and the heading error is less than 3°, which is at least 5 times better than the non-rotation condition. PMID:25268911

  4. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    PubMed Central

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest possible accuracy, which depends heavily on the location of the external markers; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum location of external markers, in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using the four-dimensional extended cardiac-torso (XCAT) anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) prediction model was considered as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motion was predicted by ANFIS using the external motion data of the markers in each segment separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately: each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results showed that all proposed feature selection algorithms have the same performance accuracy for lung tumors, but for liver tumors a correlation-based feature selection algorithm, in combination with a genetic search algorithm, yielded the best performance accuracy for selecting optimum markers. PACS numbers: 87.55.km, 87.56.Fc PMID:26894358
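
    A minimal wrapper of the kind the study describes, scoring each thorax segment by the cross-validated RMSE of a motion predictor and keeping the best segment, might look as follows; the small neural regressor stands in for ANFIS, which is not available in standard Python libraries.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPRegressor

      def segment_rmse(X_markers, y_tumor, seed=0):
          # cross-validated RMSE of predicting tumor motion from marker motion
          model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                               random_state=seed)
          mse = -cross_val_score(model, X_markers, y_tumor, cv=3,
                                 scoring='neg_mean_squared_error').mean()
          return np.sqrt(mse)

      # keep the segment whose markers give the lowest prediction error:
      # best = min(segments, key=lambda s: segment_rmse(markers[s], tumor_motion))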

  6. Effects of Device on Video Head Impulse Test (vHIT) Gain.

    PubMed

    Janky, Kristen L; Patterson, Jessie N; Shepard, Neil T; Thomas, Megan L A; Honaker, Julie A

    2017-10-01

    Numerous video head impulse test (vHIT) devices are available commercially; however, gain is not calculated uniformly. An evaluation of these devices/algorithms in healthy controls and patients with vestibular loss is necessary for comparing and synthesizing work that utilizes different devices and gain calculations. Using three commercially available vHIT devices/algorithms, the purpose of the present study was to compare: (1) horizontal canal vHIT gain among devices/algorithms in normal control subjects; (2) the effects of age on vHIT gain for each device/algorithm in normal control subjects; and (3) the clinical performance of horizontal canal vHIT gain between devices/algorithms for differentiating normal versus abnormal vestibular function. Prospective. Sixty-one normal control adult subjects (range 20-78) and eleven adults with unilateral or bilateral vestibular loss (range 32-79). vHIT was administered using three different devices/algorithms, randomized in order, for each subject on the same day: (1) Impulse (Otometrics, Schaumburg, IL; monocular eye recording, right eye only; using area-under-the-curve gain), (2) EyeSeeCam (Interacoustics, Denmark; monocular eye recording, left eye only; using instantaneous gain), and (3) VisualEyes (MicroMedical, Chatham, IL; binocular eye recording; using position gain). There was a significant mean difference in vHIT gain among devices/algorithms for both the normal control and vestibular loss groups. vHIT gain was significantly larger in the ipsilateral direction of the eye used to measure gain; however, in spite of the significant mean differences in vHIT gain among devices/algorithms and the significant directional bias, classification of "normal" versus "abnormal" gain was consistent across all compared devices/algorithms, with the exception of instantaneous gain at 40 msec. There was no effect of age on vHIT gain up to 78 years, regardless of the device/algorithm. These findings support that vHIT gain differs significantly between devices/algorithms, suggesting that care should be taken when making direct comparisons of absolute gain values between devices/algorithms. American Academy of Audiology
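
    The gain definitions named above differ mainly in what is averaged. A sketch of two of them, assuming desaccaded eye- and head-velocity traces over a single impulse on a common time base (the 40 ms latency mirrors the instantaneous gain mentioned in the abstract):

      import numpy as np

      def auc_gain(eye_vel, head_vel, t):
          # area-under-the-curve gain: ratio of areas under the velocity traces
          return np.trapz(np.abs(eye_vel), t) / np.trapz(np.abs(head_vel), t)

      def instantaneous_gain(eye_vel, head_vel, t, latency_ms=40.0):
          # velocity ratio at a fixed latency after impulse onset
          i = np.argmin(np.abs(t - latency_ms / 1000.0))
          return eye_vel[i] / head_vel[i]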

  7. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, X; Zhang, Z; Xie, Y

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan, stationary beam blocker we proposed previously is promising due to its simplicity and practicability. Although demonstrated effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips in our previous design cause primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of the final image is quantified using the number of primary-data-loss voxels, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using the Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region, and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT-guided radiation therapy. This work is supported by grants from the Guangdong Innovative Research Team Program of China (Grant No. 2011S013), National 863 Programs of China (Grant Nos. 2012AA02A604 and 2015AA043203), and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  8. SU-E-T-219: Comprehensive Validation of the Electron Monte Carlo Dose Calculation Algorithm in RayStation Treatment Planning System for An Elekta Linear Accelerator with AgilityTM Treatment Head

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yi; Park, Yang-Kyun; Doppke, Karen P.

    2015-06-15

    Purpose: This study evaluated the performance of the electron Monte Carlo dose calculation algorithm in RayStation v4.0 for an Elekta machine with Agility™ treatment head. Methods: The machine has five electron energies (6–18 MeV) and five applicators (6×6 to 25×25 cm²). The dose (cGy/MU at d_max), depth dose and profiles were measured in water using an electron diode at 100 cm SSD for nine square fields ≥2×2 cm² and four complex fields at normal incidence, and a 14×14 cm² field at 15° and 30° incidence. The dose was also measured for three square fields ≥4×4 cm² at 98, 105 and 110 cm SSD. Using selected energies, EBT3 radiochromic film was used for dose measurements in slab-shaped inhomogeneous phantoms and a breast phantom with surface curvature. The measured and calculated doses were analyzed using a gamma criterion of 3%/3 mm. Results: The calculated and measured doses differed by <3% for 116 of the 120 points, and by <5% for the 4×4 cm² field at 110 cm SSD at 9–18 MeV. The gamma analysis comparing the 105 pairs of in-water isodoses had passing rates >98.1%. The planar doses measured from films placed at 0.5 cm below a lung/tissue layer (12 MeV) and 1.0 cm below a bone/air layer (15 MeV) showed excellent agreement with calculations, with gamma passing rates of 99.9% and 98.5%, respectively. At the breast-tissue interface, the gamma passing rate was >98.8% at 12–18 MeV. The film results directly validated the accuracy of MU calculation and spatial dose distribution in the presence of tissue inhomogeneity and surface curvature, situations challenging for simpler pencil-beam algorithms. Conclusion: The electron Monte Carlo algorithm in RayStation v4.0 is fully validated for clinical use on the Elekta Agility™ machine. The comprehensive validation included small fields, complex fields, oblique beams, extended distance, tissue inhomogeneity and surface curvature.

  9. Beam’s-eye-view dosimetrics (BEVD) guided rotational station parameter optimized radiation therapy (SPORT) planning based on reweighted total-variation minimization

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Xing, Lei

    2015-03-01

    Conventional VMAT optimizes aperture shapes and weights at uniformly sampled stations, a station being a generalization of the concept of a control point. Recently, rotational station parameter optimized radiation therapy (SPORT) has been proposed to improve plan quality by inserting beams in the regions that demand additional intensity modulation, thus yielding non-uniform beam sampling. This work presents a new rotational SPORT planning strategy based on reweighted total-variation (TV) minimization, using beam's-eye-view dosimetrics (BEVD) guided beam selection. The convex-programming-based reweighted TV minimization ensures a simplified fluence map, which facilitates single-aperture selection at each station for single-arc delivery. For rotational arc treatment planning and non-uniform beam angle settings, the mathematical model is modified by an additional penalty term describing fluence-map similarity and by determination of appropriate angular weighting factors. The proposed algorithm with the additional penalty term is capable of achieving plans more efficient and deliverable than the conventional VMAT and SPORT planning schemes, reducing the dose delivery time by about 5 to 10 s in three clinical cases (one prostate and two head-and-neck (HN) cases with a single and multiple targets). The BEVD-guided beam selection provides an effective and easily computed methodology to select angles for denser, non-uniform angular sampling in SPORT planning. Our BEVD-guided SPORT treatment schemes improve the dose sparing of the femoral heads in the prostate case and of the brainstem, parotid glands and oral cavity in the two HN cases, where the mean dose reduction for those organs ranges from 0.5 to 2.5 Gy. They also increase the conformation number, which assesses dose conformity to the target, from 0.84, 0.75 and 0.74 to 0.86, 0.79 and 0.80 in the prostate and two HN cases, while preserving the delivery efficiency relative to conventional single-arc VMAT plans.
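
    The reweighting step that drives the fluence map toward a piecewise-constant (single-aperture-friendly) form can be sketched as follows: between outer iterations, TV weights are set inversely to the local gradient magnitude, so flat regions are penalized strongly while existing edges are kept. The inner solver is left abstract; epsilon and the loop structure are assumptions.

      import numpy as np

      def tv_weights(x, eps=1e-3):
          # reweighting: w = 1 / (|grad x| + eps), computed on a 2D fluence map
          gx = np.diff(x, axis=0, append=x[-1:, :])
          gy = np.diff(x, axis=1, append=x[:, -1:])
          return 1.0 / (np.sqrt(gx**2 + gy**2) + eps)

      # outer loop sketch (solve_weighted_tv is a hypothetical inner solver):
      # w = np.ones_like(x0)
      # for _ in range(n_outer):
      #     x = solve_weighted_tv(A, d, w)
      #     w = tv_weights(x)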

  10. Predicting Vasovagal Syncope from Heart Rate and Blood Pressure: A Prospective Study in 140 Subjects.

    PubMed

    Virag, Nathalie; Erickson, Mark; Taraborrelli, Patricia; Vetter, Rolf; Lim, Phang Boon; Sutton, Richard

    2018-04-28

    We developed a vasovagal syncope (VVS) prediction algorithm for use during head-up tilt with simultaneous analysis of heart rate (HR) and systolic blood pressure (SBP). We previously tested this algorithm retrospectively in 1155 subjects, showing sensitivity 95%, specificity 93% and a median prediction time of 59 s. This study was prospective, single-center, on 140 subjects, to evaluate the VVS prediction algorithm and assess whether the retrospective results were reproduced and clinically relevant. The primary endpoint was VVS prediction with sensitivity and specificity >80%. In subjects referred for 60° head-up tilt (Italian protocol), non-invasive HR and SBP were supplied to the VVS prediction algorithm: simultaneous analysis of RR intervals, SBP trends and their variability, represented by low-frequency power, generated a cumulative risk which was compared with a predetermined VVS risk threshold. When the cumulative risk exceeded the threshold, an alert was generated. Prediction time was the duration between the first alert and syncope. Of the 140 subjects enrolled, data were usable for 134. Of 83 tilt-positive subjects (61.9%), 81 VVS events were correctly predicted, and of 51 tilt-negative subjects (38.1%), 45 were correctly identified as negative by the algorithm. The resulting algorithm performance was sensitivity 97.6% and specificity 88.2%, meeting the primary endpoint. The mean VVS prediction time was 2 min 26 s ± 3 min 16 s, with median 1 min 25 s. Using only HR and HR variability (without SBP), the mean prediction time was reduced to 1 min 34 s ± 1 min 45 s, with median 1 min 13 s. The VVS prediction algorithm is a clinically relevant tool and could offer applications including providing a patient alarm, shortening tilt-test time, or triggering pacing intervention in implantable devices. Copyright © 2018. Published by Elsevier Inc.

  11. Development of head injury assessment reference values based on NASA injury modeling.

    PubMed

    Somers, Jeffrey T; Granderson, Bradley; Melvin, John W; Tabiei, Ala; Lawrence, Charles; Feiveson, Alan; Gernhardt, Michael; Ploutz-Snyder, Robert; Patalak, John

    2011-11-01

    NASA is developing a new crewed vehicle and desires a lower risk of injury compared to automotive or commercial aviation. Through an agreement with the National Association of Stock Car Auto Racing, Inc. (NASCAR®), an analysis of NASCAR impacts was performed to develop new injury assessment reference values (IARV) that may be more relevant to NASA's context of vehicle landing operations. Head IARVs associated with race car impacts were investigated by analyzing all NASCAR recorded impact data for the 2002-2008 race seasons. From the 4015 impact files, 274 impacts were selected for numerical simulation using a custom NASCAR restraint system and Hybrid III 50th percentile male Finite Element Model (FEM) in LS-DYNA. Head injury occurred in 27 of the 274 selected impacts, and all of the head injuries were mild concussions with or without brief loss of consciousness. The 247 noninjury impacts selected were representative of the range of crash dynamics present in the total set of impacts. The probability of head injury was estimated for each metric using an ordered probit regression analysis. Four metrics had good correlation with the head injury data: head resultant acceleration, head change in velocity, HIC 15, and HIC 36. For a 5% risk of AIS≥1/AIS≥2 head injuries, the following IARVs were found: 121.3/133.2 G (head resultant acceleration), 20.3/22.0 m/s (head change in velocity), 1,156/1,347 (HIC 15), and 1,152/1,342 (HIC 36) respectively. Based on the results of this study, further analysis of additional datasets is recommended before applying these results to future NASA vehicles.
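
    HIC 15 and HIC 36 follow a standard definition: the maximum, over time windows up to 15 or 36 ms, of the window duration times the 2.5th power of the average acceleration within it. A direct sketch, assuming a uniformly sampled resultant acceleration trace in g:

      import numpy as np

      def hic(accel_g, t, window_s=0.015):
          # cumulative integral of a(t) for fast window averages
          dt = t[1] - t[0]
          ca = np.concatenate(([0.0], np.cumsum(accel_g) * dt))
          n, w_max = len(t), int(round(window_s / dt))
          best = 0.0
          for i in range(n):
              for j in range(i + 1, min(i + w_max, n) + 1):
                  dur = (j - i) * dt
                  avg = (ca[j] - ca[i]) / dur        # mean acceleration over window
                  best = max(best, avg**2.5 * dur)
          return best

      # hic15 = hic(a_res, t, 0.015); hic36 = hic(a_res, t, 0.036)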

  12. Intraspecific divergence in sperm morphology of the green sea urchin, Strongylocentrotus droebachiensis: implications for selection in broadcast spawners

    PubMed Central

    2008-01-01

    Background Sperm morphology can be highly variable among species, but less is known about patterns of population differentiation within species. Most studies of sperm morphometric variation are done in species with internal fertilization, where sexual selection can be mediated by complex mating behavior and the environment of the female reproductive tract. Far less is known about patterns of sperm evolution in broadcast spawners, where reproductive dynamics are largely carried out at the gametic level. We investigated variation in sperm morphology of a broadcast spawner, the green sea urchin (Strongylocentrotus droebachiensis), within and among spawnings of an individual, among individuals within a population, and among populations. We also examined population-level variation between two reproductive seasons for one population. We then compared among-population quantitative genetic divergence (QST) for sperm characters to divergence at neutral microsatellite markers (FST). Results All sperm traits except total length showed strong patterns of high diversity among populations, as did overall sperm morphology quantified using multivariate analysis. We also found significant differences in almost all traits among individuals in all populations. Head length, axoneme length, and total length had high within-male repeatability across multiple spawnings. Only sperm head width had significant within-population variation across two reproductive seasons. We found signatures of directional selection on head length and head width, with strong selection possibly acting on head length between the Pacific and West Atlantic populations. We also discuss the strengths and limitations of the QST-FST comparison. Conclusion Sperm morphology in S. droebachiensis is highly variable, both among populations and among individuals within populations, and has low variation within an individual across multiple spawnings. Selective pressures acting among populations may differ from those acting within, with directional selection implicated in driving divergence among populations and balancing selection as a possible mechanism for producing variability among males. Sexual selection in broadcast spawners may be mediated by different processes from those acting on internal fertilizers. Selective divergence in sperm head length among populations is associated with ecological differences among populations that may play a large role in mediating sexual selection in this broadcast spawner. PMID:18851755

  13. Realization of a CORDIC-Based Plug-In Accelerometer Module for PSG System in Head Position Monitoring for OSAS Patients

    PubMed Central

    Chou, Wen-Cheng; Shiao, Tsu-Hui; Shiao, Guang-Ming; Luo, Chin-Shan

    2017-01-01

    Overnight polysomnography (PSG) is currently the standard diagnostic procedure for obstructive sleep apnea (OSA). It is known that monitoring of head position in sleep is crucial not only for diagnosis (positional sleep apnea) but also for the management of OSA (positional therapy). However, no sensor systems are available clinically to hook up with PSG for accurate head position monitoring. In this paper, an accelerometer-based sensing system for accurate head position monitoring is developed and realized. The core CORDIC (COordinate Rotation DIgital Computer)-based tilt-sensing algorithm is realized in the system to quickly and accurately convert accelerometer raw data into the desired head-position tilting angles. The system can hook up with PSG devices so that head position information is integrated with other PSG-monitored signals during diagnosis. It has been applied in an IRB-approved test in Taipei Veterans General Hospital, which proved that it can meet the medical needs of accurate head position monitoring for PSG diagnosis. PMID:29065608
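
    The CORDIC principle is easy to show in a few lines: in vectoring mode the algorithm rotates the measured vector onto the x-axis with shift-and-add micro-rotations, accumulating the arctangent; a fixed-point hardware version needs no multiplication or division. The floating-point sketch and the tilt conventions below are illustrative.

      import math

      ATAN = [math.atan(2.0**-i) for i in range(16)]   # arctan(2^-i) lookup table

      def cordic_atan2(y, x, iters=16):
          angle = 0.0
          if x < 0:                                  # pre-rotate into right half-plane
              x, y, angle = -x, -y, math.copysign(math.pi, y)
          for i in range(iters):
              if y > 0:                              # drive y toward zero
                  x, y, angle = x + y * 2.0**-i, y - x * 2.0**-i, angle + ATAN[i]
              else:
                  x, y, angle = x - y * 2.0**-i, y + x * 2.0**-i, angle - ATAN[i]
          return angle

      def tilt_angles(ax, ay, az):
          # pitch/roll from the gravity vector measured by the accelerometer
          pitch = cordic_atan2(ax, math.hypot(ay, az))
          roll = cordic_atan2(ay, az)
          return pitch, roll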

  14. Localized overlap algorithm for unexpanded dispersion energies

    NASA Astrophysics Data System (ADS)

    Rob, Fazle; Misquitta, Alston J.; Podeszwa, Rafał; Szalewicz, Krzysztof

    2014-03-01

    A first-principles-based, linearly scaling algorithm has been developed for calculations of dispersion energies from frequency-dependent density susceptibility (FDDS) functions, accounting for charge-overlap effects. The transition densities in FDDSs are fitted by a set of auxiliary atom-centered functions. The terms in the dispersion energy expression involving products of such functions are computed using either the unexpanded (exact) formula or inexpensive asymptotic expansions, depending on the location of these functions relative to the dimer configuration. This approach leads to significant savings of computational resources. In particular, for a dimer consisting of two elongated monomers with 81 atoms each in a head-to-head configuration, the most favorable case for our algorithm, a 43-fold speedup has been achieved, while the approximate dispersion energy differs by less than 1% from that computed using the standard unexpanded approach. In contrast, the dispersion energy computed from the distributed asymptotic expansion differs by dozens of percent in the van der Waals minimum region. A further increase of the size of each monomer would incur only small additional costs, since all the additional terms would be computed from the asymptotic expansion.

  15. A Survey of U.S. Navy Medical Communications and Evacuations at Sea

    DTIC Science & Technology

    1984-07-05

    specialized sector of the health care system. The majority of these medical departments are headed by an independent duty corpsman who, unlike many... the U.S. Navy has focused increasing attention on the development and implementation of clinical algorithms and telemedicine systems to enhance... a computer-assisted clinical algorithm system for use aboard submarines. Although initial work focused upon acute abdominal pain, future...

  16. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jinzhong; Aristophanous, Michalis, E-mail: MAristophanous@mdanderson.org; Beadle, Beth M.

    2015-09-15

    Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate the target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, the PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6–44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8–45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2–38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55–0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. Conclusions: The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency of target definition for radiotherapy.
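
    Without the Markov-random-field spatial prior, the multichannel Gaussian mixture step reduces to clustering co-registered voxel intensities, which can be sketched directly; array names and the two-class setting are assumptions.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def multichannel_gmm_segment(ct, pet, mr, mask, n_classes=2, seed=0):
          # stack the three co-registered channels for every voxel inside the mask
          feats = np.stack([ct[mask], pet[mask], mr[mask]], axis=1)
          gmm = GaussianMixture(n_components=n_classes, covariance_type='full',
                                random_state=seed).fit(feats)
          labels = np.zeros(ct.shape, dtype=int)
          labels[mask] = gmm.predict(feats) + 1      # 0 marks voxels outside the mask
          return labels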

  17. Comparison of Automated Atlas-Based Segmentation Software for Postoperative Prostate Cancer Radiotherapy

    PubMed Central

    Delpon, Grégory; Escande, Alexandre; Ruef, Timothée; Darréon, Julien; Fontaine, Jimmy; Noblet, Caroline; Supiot, Stéphane; Lacornerie, Thomas; Pasquier, David

    2016-01-01

    Automated atlas-based segmentation (ABS) algorithms have the potential to reduce the variability in volume delineation. Several vendors offer software packages that are mainly used for cranial, head and neck, and prostate cases. The present study compares the contours produced by a radiation oncologist with the contours computed by different automated ABS algorithms for prostate bed cases, including femoral heads, bladder, and rectum. Contour agreement was evaluated by different metrics such as volume ratio, Dice coefficient, and Hausdorff distance. Results depended on the volume of interest and showed some discrepancies between the different software packages. Automatic contours could be a good starting point for the delineation of organs, since efficient editing tools are provided by several vendors. ABS should become an important aid for organ-at-risk delineation in the next few years. PMID:27536556
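
    The three comparison metrics are standard and compact to compute from binary masks; the sketch below measures the Hausdorff distance over all mask voxels (a surface extraction would be the usual refinement), with voxel spacing supplied in mm.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          # Dice coefficient between two boolean masks
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def volume_ratio(a, b):
          return a.sum() / b.sum()

      def hausdorff_mm(a, b, spacing=(1.0, 1.0, 1.0)):
          pa = np.argwhere(a) * np.asarray(spacing)
          pb = np.argwhere(b) * np.asarray(spacing)
          # symmetric Hausdorff distance: max of the two directed distances
          return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])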

  18. Deep 3D convolution neural network for CT brain hemorrhage classification

    NASA Astrophysics Data System (ADS)

    Jnawali, Kamal; Arbabshirani, Mohammad R.; Rao, Navalgund; Patel, Alpen A.

    2018-02-01

    Intracranial hemorrhage is a critical condition with a high mortality rate that is typically diagnosed from head computed tomography (CT) images. Deep learning algorithms, in particular convolutional neural networks (CNNs), are becoming the methodology of choice in medical image analysis for a variety of applications such as computer-aided diagnosis and segmentation. In this study, we propose a fully automated deep learning framework which learns to detect brain hemorrhage from cross-sectional CT images. The dataset for this work consists of 40,367 3D head CT studies (over 1.5 million 2D images) acquired retrospectively over a decade from multiple radiology facilities at Geisinger Health System. The proposed algorithm first extracts features using a 3D CNN and then detects brain hemorrhage using a logistic function as the last layer of the network. Finally, we created an ensemble of three different 3D CNN architectures to improve the classification accuracy. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve of the ensemble of three architectures was 0.87. These results are very promising considering that the head CT studies were not controlled for slice thickness, scanner type, study protocol or any other settings. Moreover, the proposed algorithm reliably detected various types of hemorrhage within the skull. This work is one of the first applications of a 3D CNN trained on a large dataset of cross-sectional medical images for detection of a critical radiological condition.

  19. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding ICP, three different versions were compared, namely the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP; the 2-DoF 3D ICP was found to provide the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane point cloud. The three methods were compared using three test datasets which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.
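
    The volume-minimization idea of the third method can be sketched as a grid search: for a candidate heading and speed, each scan is shifted back along the assumed straight-line motion, and the correct parameters yield the most compact merged body. The straight-line motion model and the grid search are simplifying assumptions.

      import numpy as np
      from scipy.spatial import ConvexHull

      def reconstruct(scans, times, heading, speed):
          # undo the assumed motion for each timestamped scan, then merge
          d = np.array([np.cos(heading), np.sin(heading), 0.0])
          return np.vstack([pts - speed * t * d for pts, t in zip(scans, times)])

      def estimate_heading(scans, times, headings, speeds):
          best = (np.inf, None, None)
          for h in headings:
              for v in speeds:
                  vol = ConvexHull(reconstruct(scans, times, h, v)).volume
                  if vol < best[0]:                  # smallest hull = best fit
                      best = (vol, h, v)
          return best[1], best[2]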

  20. FTUC: A Flooding Tree Uneven Clustering Protocol for a Wireless Sensor Network.

    PubMed

    He, Wei; Pillement, Sebastien; Xu, Du

    2017-11-23

    Clustering is an efficient approach in a wireless sensor network (WSN) to reduce the energy consumption of nodes and to extend the lifetime of the network. Unfortunately, this approach requires that all cluster heads (CHs) transmit their data to the base station (BS), which gives rise to the long-distance communication problem, and in multi-hop routing the CHs near the BS have to forward data from other nodes, which leads those CHs to die prematurely, creating the hot-zones problem. Unequal clustering has been proposed to solve these problems. Most current algorithms elect CHs by considering only their competition radius, leading to unevenly distributed cluster heads. Furthermore, global distance values are needed when calculating the competition radius, which is a tedious task in large networks. To address these problems, we propose a flooding tree uneven clustering protocol (FTUC) suited for large networks. Based on the construction of a tree-type sub-network to calculate the minimum and maximum distance values of the network, we then apply unequal clustering theory. We also introduce referenced position circles to elect cluster heads evenly, so cluster heads are elected depending on a node's residual energy and its distance to a referenced circle. FTUC builds the best inter-cluster communication route by evaluating a cluster head cost function to find the best next hop to the BS. The simulation results show that the FTUC algorithm decreases the energy consumption of the nodes and balances the global energy consumption effectively, thus extending the lifetime of the network.
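
    The abstract's two key ingredients, an unequal competition radius and a next-hop cost function, can be sketched as below. The functional forms and weights are illustrative stand-ins, not the paper's exact formulas.

      def competition_radius(d_bs, d_min, d_max, r_max, alpha=0.5):
          # unequal clustering: CHs closer to the BS get smaller radii, hence
          # smaller clusters and spare energy for relaying traffic
          return r_max * (1.0 - alpha * (d_max - d_bs) / (d_max - d_min))

      def next_hop_cost(e_residual, d_to_ch, d_ch_to_bs, w=(0.5, 0.25, 0.25)):
          # prefer relays with high residual energy and short hop distances
          return w[0] / e_residual + w[1] * d_to_ch**2 + w[2] * d_ch_to_bs**2

      # each CH forwards to the neighbouring CH minimizing the cost:
      # nxt = min(neighbours, key=lambda c: next_hop_cost(c.energy,
      #           dist(me, c), dist(c, base_station)))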

  1. Sexual dimorphism of head morphology in three-spined stickleback Gasterosteus aculeatus.

    PubMed

    Aguirre, W E; Akinpelu, O

    2010-09-01

    This study examined sexual dimorphism of head morphology in the ecologically diverse three-spined stickleback Gasterosteus aculeatus. Male G. aculeatus had longer heads than female G. aculeatus in all 10 anadromous, stream and lake populations examined, and head length growth rates were significantly higher in males in half of the populations sampled, indicating that differences in head size increased with body size in many populations. Despite consistently larger heads in males, there was significant variation in size-adjusted head length among populations, suggesting that the relationship between head length and body length was flexible. Inter-population differences in head length were correlated between sexes, thus population-level factors influenced head length in both sexes despite the sexual dimorphism present. Head shape variation between lake and anadromous populations was greater than that between sexes. The common divergence in head shape between sexes across populations was about twice as important as the sexual dimorphism unique to each population. Finally, much of the sexual dimorphism in head length was due to divergence in the anterior region of the head, where the primary trophic structures were found. It is unclear whether the sexual dimorphism was due to natural selection for niche divergence between sexes or sexual selection. This study improves knowledge of the magnitude, growth rate divergence, inter-population variation and location of sexual dimorphism in G. aculeatus head morphology. © 2010 The Authors. Journal compilation © 2010 The Fisheries Society of the British Isles.

  2. A method to incorporate leakage and head scatter corrections into a tomotherapy inverse treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Holmes, Timothy W.

    2001-01-01

    A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared-error objective. The method was implemented in the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can produce 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
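
    The backbone of the method, steepest descent on a least-squared-error objective with a per-iteration correction of the incident intensity, can be outlined in a few lines (in Python rather than the paper's MATLAB). The leakage/head-scatter model itself is left as a hypothetical placeholder.

      import numpy as np

      def optimize_intensity(D, d_presc, n_iter=200, step=1e-3):
          # D: dose-deposition matrix (voxels x beamlets); d_presc: prescribed dose
          w = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = 2.0 * D.T @ (D @ w - d_presc)    # gradient of ||D w - d||^2
              w = np.maximum(w - step * grad, 0.0)    # keep fluence non-negative
              # the paper's 'concurrent' leaf sequencing would happen here, and a
              # modelled leakage/head-scatter fluence would be subtracted from w:
              # w = np.maximum(w - leakage_headscatter(w), 0.0)  # hypothetical model
          return w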

  3. Digibaro pressure instrument onboard the Phoenix Lander

    NASA Astrophysics Data System (ADS)

    Harri, A.-M.; Polkko, J.; Kahanpää, H. H.; Schmidt, W.; Genzer, M. M.; Haukka, H.; Savijarvi, H.; Kauhanen, J.

    2009-04-01

    The Phoenix Lander landed successfully in the Martian northern polar region. The mission is part of the National Aeronautics and Space Administration's (NASA's) Scout program. Pressure observations onboard the Phoenix Lander were performed by an FMI (Finnish Meteorological Institute) instrument, based on a silicon diaphragm sensor head manufactured by Vaisala Inc. combined with MDA data processing electronics. The pressure instrument performed successfully throughout the Phoenix mission. The pressure instrument had three pressure sensor heads. One of these was the primary sensor head and the other two were used for monitoring the condition of the primary sensor head during the mission. During the mission the primary sensor was read with a sampling interval of 2 s and the other two were read less frequently as a check of instrument health. The pressure sensor system had a real-time data-processing and calibration algorithm that allowed the removal of temperature-dependent calibration effects. In the same manner as the temperature sensor, a total of 256 data records (8.53 min) were buffered, and they could either be stored at full resolution or processed to provide mean, standard deviation, maximum and minimum values for storage on the Phoenix Lander's Meteorological (MET) unit. The time constant was approximately 3 s due to locational constraints and dust filtering requirements. Using algorithms compensating for the time-constant effect, the temporal resolution was good enough to detect pressure drops associated with the passage of nearby dust devils.
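
    A first-order sensor with time constant tau obeys tau * dp_meas/dt + p_meas = p_true, so the lag can be inverted by adding the scaled derivative of the measured signal. A minimal sketch (in practice the derivative amplifies noise, so some smoothing would precede it):

      import numpy as np

      def compensate_time_constant(p_meas, t, tau=3.0):
          # invert first-order sensor lag: p_true ~ p_meas + tau * dp/dt
          return p_meas + tau * np.gradient(p_meas, t)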

  4. Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET.

    PubMed

    Noonan, P J; Howard, J; Hallett, W A; Gunn, R N

    2015-11-21

    Medical imaging systems such as those used in positron emission tomography (PET) are capable of spatial resolutions that enable the imaging of small, functionally important brain structures. However, the quality of data from PET brain studies is often limited by subject motion during acquisition. This is particularly challenging for patients with neurological disorders or with dynamic research studies that can last 90 min or more. Restraining head movement during the scan does not eliminate motion entirely and can be unpleasant for the subject. Head motion can be detected and measured using a variety of techniques that either use the PET data itself or an external tracking system. Advances in computer vision arising from the video gaming industry could offer significant benefits when re-purposed for medical applications. A method for measuring rigid body type head motion using the Microsoft Kinect v2 is described with results presenting  ⩽0.5 mm spatial accuracy. Motion data is measured in real-time at 30 Hz using the KinectFusion algorithm. Non-rigid motion is detected using the residual alignment energy data of the KinectFusion algorithm allowing for unreliable motion to be discarded. Motion data is aligned to PET listmode data using injected pulse sequences into the PET/CT gantry allowing for correction of rigid body motion. Pilot data from a clinical dynamic PET/CT examination is shown.

  5. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases at constant or shrinking form factors, precise positioning of HDD heads becomes a more relevant issue for ensuring that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using the VCM to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD sliders, by finding the optimal placement of base plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and avoidance of resonance phenomena. These requirements are achieved by applying formulations that maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of the approach.
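
    The "rational approximation of material properties" mentioned above can be illustrated with a RAMP-style interpolation of stiffness between void and solid; the constants are illustrative.

      def ramp_modulus(rho, e0=1.0, e_min=1e-9, q=8.0):
          # rho in [0, 1]: 0 = 'void', 1 = 'filled'; q penalizes intermediate
          # densities so optimized designs tend toward crisp 0/1 layouts
          return e_min + (e0 - e_min) * rho / (1.0 + q * (1.0 - rho))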

  7. Folic acid-decorated polyamidoamine dendrimer mediates selective uptake and high expression of genes in head and neck cancer cells.

    PubMed

    Xu, Leyuan; Kittrell, Shannon; Yeudall, W Andrew; Yang, Hu

    2016-11-01

    Folic acid (FA)-decorated polyamidoamine dendrimer G4 (G4-FA) was synthesized and studied for targeted delivery of genes to head and neck cancer cells expressing high levels of folate receptors (FRs). Cellular uptake, targeting specificity, cytocompatibility and transfection efficiency were evaluated. G4-FA competes with free FA for the same binding site. G4-FA facilitates the cellular uptake of DNA plasmids in a FR-dependent manner and selectively delivers plasmids to FR-high cells, leading to enhanced gene expression. G4-FA is a suitable vector to deliver genes selectively to head and neck cancer cells. The fundamental understandings of G4-FA as a vector and its encouraging transfection results for head and neck cancer cells provided support for its further testing in vivo.

  8. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency; it requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining, and feature selection is considered a vital step for increasing classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The bat algorithm was modified using simple random sampling to select random instances from the dataset, with ranking against the global best features to recognize the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used to estimate the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE). Creative Commons Attribution License
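
    Two of the less common metrics listed above are available directly in scikit-learn; a minimal evaluation of a feature subset with an RF classifier might look as follows (the split and hyperparameters are illustrative):

      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import cohen_kappa_score, matthews_corrcoef
      from sklearn.model_selection import train_test_split

      def evaluate_subset(X_selected, y, seed=0):
          X_tr, X_te, y_tr, y_te = train_test_split(
              X_selected, y, test_size=0.3, random_state=seed, stratify=y)
          rf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
          pred = rf.predict(X_te)
          return {'kappa': cohen_kappa_score(y_te, pred),
                  'mcc': matthews_corrcoef(y_te, pred)}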

  9. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper proposes a hybrid of the ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for selecting relevant features from customer review datasets. Information gain (IG), the genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process statistically proved that the ACO-KNN algorithm improved significantly over the baseline algorithms. In addition, the experimental results proved that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a quality, optimal feature subset that represents the actual data in customer review datasets.

  10. An Algorithmic Approach to the Management of Infantile Digital Fibromatosis: Review of Literature and a Case Report.

    PubMed

    Eypper, Elizabeth H; Lee, Johnson C; Tarasen, Ashley J; Weinberg, Maxene H; Adetayo, Oluwaseun A

    2018-01-01

    Objective: Infantile digital fibromatosis is a rare benign childhood tumor, infrequently cited in the literature. Hallmarks include nodular growths exclusive to the fingers and toes and the presence of eosinophilic cytoplasmic inclusions on histology. This article aims to illustrate the diagnosis of infantile digital fibromatosis and possible treatment options. Methods: A computerized English-literature search was performed in the PubMed/MEDLINE database using the MeSH headings "infantile," "juvenile," "digital," and "fibromatosis." Twenty electronic publications were selected, and their clinical and histological data were recorded and used to compile a treatment algorithm. Results: A 9-month-old male child was referred for a persistent, symptomatic nodule on the third left toe. A direct excision with Brunner-type incisions was performed under general anesthesia. The procedure was successful without complications. The patient has no recurrence at 2 years postsurgery and continues to be followed. Histological examination revealed a proliferation of bland, uniformly plump spindle cells with elongated nuclei and small central nucleoli, without paranuclear inclusions, consistent with fibromatosis. Conclusions: Asymptomatic nodules should be observed for spontaneous regression or treated with nonsurgical techniques such as chemotherapeutic or steroid injection. Surgical removal should be reserved for cases with structural or functional compromise.

  11. Optimal marker placement in hadrontherapy: intelligent optimization strategies with augmented Lagrangian pattern search.

    PubMed

    Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido

    2015-02-01

    In high-precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on the placement of external fiducial points. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take prior knowledge, such as spatial constraints, into account during the optimization process. The SAPS algorithm was tested on data from head and neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking applied to preliminary patient set-up correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison was performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied to patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process. Further work will focus on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that the FA-MLR run on a GPU can be five times faster than its sequential implementation. PMID:25493625

  13. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems.

    PubMed

    de Paula, Lauro C M; Soares, Anderson S; de Lima, Telma W; Delbem, Alexandre C B; Coelho, Clarimar J; Filho, Arlindo R G

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that the FA-MLR run on a GPU can be five times faster than its sequential implementation.

  14. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored in the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse, since materialized views are stored for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper we therefore develop algorithms that select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. A genetic algorithm is applied to the materialized view selection problem; however, as the genetic process evolves, legal solutions become harder and harder to produce, so many solutions are eliminated and the time needed to produce solutions lengthens. Therefore, an improved algorithm is presented, which combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were conducted to test the effectiveness and efficiency of the algorithms. The experiments show that the given methods provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
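
    A minimal simulated-annealing sketch of the query-cost view-selection problem described above: choose a subset of views that minimizes maintenance cost while keeping total query response time under a budget. The cost numbers are synthetic and the hybrid genetic step of the paper is omitted.

```python
# Simulated annealing over binary view-selection vectors with a
# penalty for violating the query-response-time constraint.
import math
import random

random.seed(0)
n_views = 12
maint = [random.uniform(1, 10) for _ in range(n_views)]    # maintenance cost
saving = [random.uniform(5, 30) for _ in range(n_views)]   # query-time saving
base_query_time, budget = 200.0, 120.0

def query_time(sel):
    return base_query_time - sum(s for s, b in zip(saving, sel) if b)

def cost(sel):
    c = sum(m for m, b in zip(maint, sel) if b)
    # heavy penalty when the response-time constraint is violated
    return c + 1e3 * max(0.0, query_time(sel) - budget)

state = [random.random() < 0.5 for _ in range(n_views)]
T = 10.0
while T > 1e-3:
    cand = state[:]
    i = random.randrange(n_views)
    cand[i] = not cand[i]                       # flip one view in/out
    delta = cost(cand) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / T):
        state = cand
    T *= 0.995                                  # geometric cooling

print("materialized views:", [i for i, b in enumerate(state) if b])
print(f"maintenance cost {cost(state):.1f}, query time {query_time(state):.1f}")
```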

  15. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    PubMed

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alteration detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm.
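
    The sketch below shows one plausible reading of the RMV fitness: the mean-to-variance ratio of each selected feature, averaged over the subset encoded by a particle. This is an assumption for illustration, not the paper's exact formula.

```python
# Ratio-of-Mean-to-Variance fitness for a binary particle (feature mask).
import numpy as np

def rmv_fitness(X, mask):
    """Mean-to-variance ratio averaged over the selected feature columns."""
    sub = X[:, mask.astype(bool)]
    mu = sub.mean(axis=0)
    var = sub.var(axis=0) + 1e-12       # guard against zero variance
    return float(np.mean(np.abs(mu) / var))

X = np.random.default_rng(0).normal(5.0, 2.0, size=(100, 8))
mask = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # hypothetical particle
print(f"RMV fitness of candidate particle: {rmv_fitness(X, mask):.3f}")
```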

  16. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method and a complexity-reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.
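
    A small sketch of the condition-number test that motivates the data-selective APA: accept an input data matrix for the coefficient update only when its condition number is below a threshold. The threshold value here is an assumption for illustration.

```python
# Condition-number gate for a data-selective APA update.
import numpy as np

def accept_for_update(X, kappa_max=50.0):
    """Keep the APA update only for well-conditioned input matrices."""
    return np.linalg.cond(X) < kappa_max

rng = np.random.default_rng(1)
X_good = rng.standard_normal((8, 4))                   # well conditioned
X_bad = np.outer(rng.standard_normal(8), np.ones(4))   # nearly rank-one
X_bad += 1e-3 * rng.standard_normal((8, 4))
print(accept_for_update(X_good), accept_for_update(X_bad))  # True False
```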

  17. The impact of robustness of deformable image registration on contour propagation and dose accumulation for head and neck adaptive radiotherapy.

    PubMed

    Zhang, Lian; Wang, Zhi; Shi, Chengyu; Long, Tengfei; Xu, X George

    2018-05-30

    Deformable image registration (DIR) is the key process for contour propagation and dose accumulation in adaptive radiation therapy (ART). However, ART currently suffers from a lack of understanding of the "robustness" of DIR-based contour propagation and of the subsequent dose variations caused by the algorithm itself and by the preset parameters. The purpose of this research is to evaluate the DIR-caused variations in contour propagation and dose accumulation during ART using the RayStation treatment planning system. Ten head and neck cancer patients were selected for retrospective study. Contours were drawn by a single radiation oncologist and new treatment plans were generated on the weekly CT scans for all patients. For each DIR process, four deformation vector fields (DVFs) were generated to propagate contours and accumulate weekly dose using the following algorithms: (a) ANACONDA with simple preset parameters, (b) ANACONDA with detailed preset parameters, (c) MORFEUS with simple preset parameters, and (d) MORFEUS with detailed preset parameters. The geometric evaluation considered the DICE coefficient and the Hausdorff distance. The dosimetric evaluation included D95, Dmax, Dmean, Dmin, and the Homogeneity Index. For the geometric evaluation, the DICE coefficient variations of the GTV were found to be 0.78 ± 0.11, 0.96 ± 0.02, 0.64 ± 0.15, and 0.91 ± 0.03 for simple ANACONDA, detailed ANACONDA, simple MORFEUS, and detailed MORFEUS, respectively. For the dosimetric evaluation, the corresponding Homogeneity Index variations were found to be 0.137 ± 0.115, 0.006 ± 0.032, 0.197 ± 0.096, and 0.006 ± 0.033, respectively. Consistent geometric and dosimetric variations were also observed for both large and small organs. Overall, the results demonstrate that contour propagation and dose accumulation in clinical ART are influenced by the DIR algorithm, and to a greater extent by the preset parameters. A quality assurance procedure should be established for the proper use of a commercial DIR for adaptive radiation therapy. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
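
    For reference, the DICE coefficient used in the geometric evaluation above can be computed for two binary masks in a few lines (the Hausdorff distance is omitted for brevity):

```python
# DICE coefficient between a reference and a propagated binary contour mask.
import numpy as np

def dice(a, b):
    """2|A∩B| / (|A| + |B|) for boolean arrays of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gtv_ref = np.zeros((64, 64), bool)
gtv_ref[20:40, 20:40] = True
gtv_prop = np.zeros((64, 64), bool)
gtv_prop[22:42, 21:41] = True          # slightly shifted propagated contour
print(f"DICE = {dice(gtv_ref, gtv_prop):.3f}")
```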

  18. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective updating in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.

  19. Enriching 3D optical surface scans with prior knowledge: tissue thickness computation by exploiting local neighborhoods.

    PubMed

    Wissel, Tobias; Stüber, Patrick; Wagner, Benjamin; Bruder, Ralf; Schweikard, Achim; Ernst, Floris

    2016-04-01

    Patient immobilization and X-ray-based imaging provide neither a convenient nor a very accurate way to ensure low repositioning errors or to compensate for motion in cranial radiotherapy. We therefore propose an optical tracking device that exploits subcutaneous structures as landmarks in addition to mere spatial registration. To develop such head tracking algorithms, precise and robust computation of these structures is necessary. Here, we show that the tissue thickness can be predicted with high accuracy and, moreover, exploit local neighborhood information within the laser spot grid on the forehead to further increase this estimation accuracy. We use statistical learning with Support Vector Regression and Gaussian Processes to learn a relationship between optical backscatter features and an MR tissue thickness ground truth. We compare different kernel functions for the data of five different subjects. The incident angle of the laser on the forehead as well as local neighborhoods are incorporated into the feature space. The latter represent the backscatter features from four neighboring laser spots. We confirm that the incident angle has a positive effect on the estimation error of the tissue thickness. The root-mean-square error falls even below 0.15 mm when adding the complete neighborhood information. This prior knowledge also leads to a smoothing effect on the reconstructed skin patch. Learning between different head poses yields similar results. The partial overlap of the point clouds makes the trade-off between novel information and increased feature space dimension obvious and hence makes feature selection by, e.g., sequential forward selection necessary.
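
    A hedged sketch of the statistical-learning step: regressing tissue thickness on optical backscatter features with a Gaussian Process. The features and ground-truth values below are synthetic stand-ins for the paper's optical and MR data.

```python
# Gaussian Process regression from backscatter-style features to thickness.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# columns: features of a spot plus its 4 neighbors, plus incident angle
X = rng.normal(size=(200, 6))
thickness = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 5] + 0.05 * rng.normal(size=200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:150], thickness[:150])
pred = gp.predict(X[150:])
rmse = np.sqrt(np.mean((pred - thickness[150:]) ** 2))
print(f"held-out RMSE: {rmse:.3f} mm")
```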

  20. NCCN Guidelines Insights: Head and Neck Cancers, Version 1.2018.

    PubMed

    Colevas, A Dimitrios; Yom, Sue S; Pfister, David G; Spencer, Sharon; Adelstein, David; Adkins, Douglas; Brizel, David M; Burtness, Barbara; Busse, Paul M; Caudell, Jimmy J; Cmelak, Anthony J; Eisele, David W; Fenton, Moon; Foote, Robert L; Gilbert, Jill; Gillison, Maura L; Haddad, Robert I; Hicks, Wesley L; Hitchcock, Ying J; Jimeno, Antonio; Leizman, Debra; Maghami, Ellie; Mell, Loren K; Mittal, Bharat B; Pinto, Harlan A; Ridge, John A; Rocco, James; Rodriguez, Cristina P; Shah, Jatin P; Weber, Randal S; Witek, Matthew; Worden, Frank; Zhen, Weining; Burns, Jennifer L; Darlow, Susan D

    2018-05-01

    The NCCN Guidelines for Head and Neck (H&N) Cancers provide treatment recommendations for cancers of the lip, oral cavity, pharynx, larynx, ethmoid and maxillary sinuses, and salivary glands. Recommendations are also provided for occult primary of the H&N, and separate algorithms have been developed by the panel for very advanced H&N cancers. These NCCN Guidelines Insights summarize the panel's discussion and most recent recommendations regarding evaluation and treatment of nasopharyngeal carcinoma. Copyright © 2018 by the National Comprehensive Cancer Network.

  1. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then speculated based on the discussion.
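
    A minimal sketch of the core negative-selection idea: generate random detectors, censor those that match the "self" set, and flag samples matched by a surviving detector as anomalous. The real-valued representation with a radius matching rule is one of the variations discussed; all parameters here are illustrative.

```python
# Toy negative-selection detector generation and anomaly test.
import numpy as np

rng = np.random.default_rng(0)
self_set = rng.uniform(0.4, 0.6, size=(50, 2))     # samples of normal behavior
radius = 0.1

def matches(point, pts):
    """True if any row of pts lies within `radius` of `point`."""
    return (np.linalg.norm(pts - point, axis=1) < radius).any()

# Censor candidate detectors that match the self set.
candidates = rng.uniform(0, 1, size=(500, 2))
detectors = np.array([d for d in candidates if not matches(d, self_set)])

def is_anomaly(x):
    return matches(x, detectors)

print(is_anomaly(np.array([0.5, 0.5])))   # inside self region: expected False
print(is_anomaly(np.array([0.9, 0.1])))   # far from self: expected True
```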

  2. Recent Landmark Studies on Head and Neck Cancers: Evidence-Based Fundamentals of Modern Therapeutic Approaches.

    PubMed

    Aydil, Utku

    2015-03-01

    Evidence-based medicine, built on prospective studies and related algorithms, has been living its golden age in recent years. Within the last few decades, medical knowledge has been systematically produced, categorized, and disseminated in a way never seen before. One of the most important factors in realizing this situation is the expansion of communication facilities. The management of head and neck cancers has also been affected by these advances, and studies with high-level evidence have become the mainstay in determining management strategies. However, probably almost all of these studies concern non-surgical modalities, and studies with high-level evidence regarding the surgical treatment of head and neck cancers are scarce. In this paper, important studies on head and neck cancers and their results are reviewed.

  3. The applicability of a computer model for predicting head injury incurred during actual motor vehicle collisions.

    PubMed

    Moran, Stephan G; Key, Jason S; McGwin, Gerald; Keeley, Jason W; Davidson, James S; Rue, Loring W

    2004-07-01

    Head injury is a significant cause of both morbidity and mortality. Motor vehicle collisions (MVCs) are the most common source of head injury in the United States. No studies have conclusively determined the applicability of computer models for accurate prediction of head injuries sustained in actual MVCs. This study sought to determine the applicability of such models for predicting head injuries sustained by MVC occupants. The Crash Injury Research and Engineering Network (CIREN) database was queried for restrained drivers who sustained a head injury. These collisions were modeled using occupant dynamic modeling (MADYMO) software, and head injury scores were generated. The computer-generated head injury scores then were evaluated with respect to the actual head injuries sustained by the occupants to determine the applicability of MADYMO computer modeling for predicting head injury. Five occupants meeting the selection criteria for the study were selected from the CIREN database. The head injury scores generated by MADYMO were lower than expected given the actual injuries sustained. In only one case did the computer analysis predict a head injury of a severity similar to that actually sustained by the occupant. Although computer modeling accurately simulates experimental crash tests, it may not be applicable for predicting head injury in actual MVCs. Many complicating factors surrounding actual MVCs make accurate computer modeling difficult. Future modeling efforts should consider variables such as age of the occupant and should account for a wider variety of crash scenarios.

  4. Relationships Between Habitat and Snag Characteristics and the Reproductive Success of the Brown-headed Nuthatch (Sitta pusilla) in Eastern Texas

    Treesearch

    L. Lynnette Dornak; D. Brent Burt; Dean W. Coble; Richard N. Conner

    2004-01-01

    Habitat use and reproductive success of the Brown-headed Nuthatch (Sitta pusilla Latham) were studied in East Texas during the 2001-2002 breeding seasons. We compared nest cavity selection at used and randomly selected non-used areas. Height of nest trees, midstory density, and percent leaf litter were negatively correlated with nest site selection...

  5. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  6. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  7. Technical Note: A direct ray-tracing method to compute integral depth dose in pencil beam proton radiography with a multilayer ionization chamber.

    PubMed

    Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois

    2016-12-01

    To introduce a fast ray-tracing algorithm in pencil-beam proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Pencil-beam PR was obtained by delivering spots uniformly positioned in a square (45 × 45 mm² field-of-view) of 9 × 9 spots capable of crossing the phantoms (210 MeV). The exit beam was collected by a MLIC to sample the integral depth dose (IDD_MLIC). PRs of an electron-density and of a head phantom were acquired by moving the couch to obtain multiple 45 × 45 mm² frames. To map the corresponding range errors, the two-dimensional set of IDD_MLIC was compared with (i) the integral depth dose computed by the treatment planning system (TPS) by both analytic (IDD_TPS) and Monte Carlo (IDD_MC) algorithms in a volume of water simulating the MLIC at the CT, and (ii) the integral depth dose directly computed by a simple ray-tracing algorithm (IDD_direct) through the same CT data. The exact spatial position of the spot pattern was numerically adjusted by testing different in-plane positions and selecting the one that minimized the range differences between IDD_direct and IDD_MLIC. Range error mapping was feasible by both the TPS and the ray-tracing methods, but very sensitive to even small misalignments. In homogeneous regions, the range errors computed by the direct ray-tracing algorithm matched the results obtained by both the analytic and the Monte Carlo algorithms. In both phantoms, lateral heterogeneities were better modeled by the ray-tracing and the Monte Carlo algorithms than by the analytic TPS computation. Accordingly, when the pencil beam crossed lateral heterogeneities, the range errors mapped by the direct algorithm matched the Monte Carlo maps better than those obtained by the analytic algorithm. Finally, the simplicity of the ray-tracing algorithm allowed a prototype procedure for automated spatial alignment to be implemented. The ray-tracing algorithm can reliably replace the TPS method in MLIC PR for in vivo range verification, and it can be a key component in developing software tools for spatial alignment and correction of CT calibration.
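
    A hedged sketch of the kind of direct ray-tracing integral described above: stepping along a straight pencil-beam ray through a relative-stopping-power grid and accumulating water-equivalent path length. The phantom, calibration, and geometry are simplified stand-ins for the paper's IDD_direct computation.

```python
# Accumulate relative stopping power along a straight ray through a grid.
import numpy as np

rsp = np.ones((64, 64, 64))                 # water phantom (RSP = 1)
rsp[20:40, 20:40, 20:40] = 1.5              # denser bone-like insert
voxel_mm = 1.0

def wepl(entry, direction, step_mm=0.5, n_steps=200):
    """Water-equivalent path length via nearest-voxel sampling."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    total, p = 0.0, np.asarray(entry, float)
    for _ in range(n_steps):
        i, j, k = (p / voxel_mm).astype(int)
        if 0 <= i < 64 and 0 <= j < 64 and 0 <= k < 64:
            total += rsp[i, j, k] * step_mm
        p = p + step_mm * d
    return total

print(f"WEPL through insert: {wepl((30, 30, 0), (0, 0, 1)):.1f} mm")
print(f"WEPL beside insert:  {wepl((5, 5, 0), (0, 0, 1)):.1f} mm")
```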

  8. Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.

    ERIC Educational Resources Information Center

    Tarquinio, Nancy; And Others

    1990-01-01

    Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)

  9. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way to detect and track the human full body and body parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed to deal with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain a high probability of detection and a low probability of false alarm for the full body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body parts (arms, legs, torso, head, etc.). For example, we have developed a human skin-color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track human skin-color related body parts such as the face, neck, arms, and legs. Reliable body-part (e.g., head) detection allows us to continuously track an individual person even when multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body parts also allow us to extract important local constellation features of the body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose (e.g., standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance face resolution for improved human face recognition performance.
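
    A sketch of the RGB-to-YIQ conversion with a subtractive I/Q fusion for skin-color segmentation, as mentioned above; the exact fusion rule and threshold here (I minus |Q| against a fixed value) are our assumptions, not the paper's.

```python
# RGB -> YIQ conversion and a simple subtractive I/Q skin mask.
import numpy as np

def rgb_to_yiq(img):
    """img: float array in [0, 1] with shape HxWx3."""
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return img @ m.T

def skin_mask(img, thresh=0.02):
    yiq = rgb_to_yiq(img)
    fused = yiq[..., 1] - np.abs(yiq[..., 2])   # subtractive I/Q fusion
    return fused > thresh

demo = np.zeros((2, 2, 3))
demo[0, 0] = [0.8, 0.5, 0.4]   # skin-like tone  -> True
demo[1, 1] = [0.1, 0.4, 0.9]   # blue background -> False
print(skin_mask(demo))
```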

  10. Green Supercomputing at Argonne

    ScienceCinema

    Pete Beckman

    2017-12-09

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  11. Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees

    NASA Astrophysics Data System (ADS)

    Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.

    2017-05-01

    A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach detects faces beyond the frontal position through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results show a high processing rate for CEDT on standard-size images. The algorithm increases the area under the ROC curve by 13% compared to the standard Viola-Jones face detection algorithm. The final implementation of the algorithm consists of 5 different cascades for frontal and non-frontal faces. The simulation results also show that CEDT has low computational complexity in comparison with the standard Viola-Jones approach. This could prove important in the embedded-system and mobile-device industries, because it can reduce hardware costs and extend battery life.

  12. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    PubMed

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high-performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both fast detection of the disease and an expedited drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found in Phase One that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset for feature selection (29 genes selected), and in Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. It was also found in Phase Three that the combined use of PSO and SVM surpassed the other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Automated Bone Screw Tightening to Adaptive Levels of Stripping Torque.

    PubMed

    Reynolds, Karen J; Mohtar, Aaron A; Cleek, Tammy M; Ryan, Melissa K; Hearn, Trevor C

    2017-06-01

    To use relationships between tightening parameters, related to bone quality, to develop an automated system that determines and controls the level of screw tightening. An algorithm relating current at head contact (IHC) to current at construct failure (Imax) was developed. The algorithm was used to trigger cessation of screw insertion at a predefined tightening level, in real time, between head contact and maximum current. The ability of the device to stop at the predefined level was assessed. The mean (±SD) current at which screw insertion ceased was calculated to be [51.47 ± 9.75% × (Imax - IHC)] + IHC, with no premature bone failures. A smart screwdriver was developed that uses the current from the motor driving the screw to predict the current at which the screw will strip the bone threads. The device was implemented and was able to achieve motor shut-off and cease tightening at a predefined threshold, with no premature bone failures.
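
    The reported shut-off rule reduces to a one-line computation; the sketch below uses the roughly 50% tightening fraction found in the study, with hypothetical current readings.

```python
# Adaptive shut-off: stop at a preset fraction of the span between the
# current at head contact (I_HC) and the predicted stripping current (I_max).
def shutoff_current(i_hc, i_max_pred, fraction=0.5):
    """Target current = I_HC + fraction * (I_max - I_HC)."""
    return i_hc + fraction * (i_max_pred - i_hc)

# Hypothetical readings in amps: contact at 0.8 A, predicted strip at 2.0 A.
print(f"stop tightening at {shutoff_current(0.8, 2.0):.2f} A")
```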

  14. EEG and MEG source localization using recursively applied (RAP) MUSIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Leahy, R.M.

    1996-12-31

    The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization,' are easily extracted using the associated principal vectors.
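
    For illustration, the principal correlations that RAP-MUSIC uses can be computed as the cosines of the principal angles between two subspaces, via the SVD of the product of their orthonormal bases:

```python
# Principal correlations between a model subspace and a data subspace.
import numpy as np

def principal_correlations(A, B):
    """Columns of A and B span the two subspaces; returns sorted cosines."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

rng = np.random.default_rng(0)
signal = rng.standard_normal((32, 3))              # estimated signal subspace
model = np.hstack([signal[:, :1],                  # shares one direction
                   rng.standard_normal((32, 1))])
print(np.round(principal_correlations(model, signal), 3))  # ~[1.0, small]
```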

  15. SamSelect: a sample sequence selection algorithm for quorum planted motif search on large DNA datasets.

    PubMed

    Yu, Qiang; Wei, Dingbang; Huo, Hongwei

    2018-06-18

    Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms have been able to efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this information, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D', rather than take an unfeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
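
    A small sketch of the quorum test underlying qPMS: checking whether an l-mer occurs, within d mismatches, in at least q*t of the input sequences. The toy sequences are illustrative.

```python
# Quorum test for a candidate motif over a set of DNA sequences.
def within_d(kmer, seq, d):
    """True if some window of seq is within Hamming distance d of kmer."""
    l = len(kmer)
    return any(sum(a != b for a, b in zip(kmer, seq[i:i + l])) <= d
               for i in range(len(seq) - l + 1))

def satisfies_quorum(kmer, seqs, q, d):
    hits = sum(within_d(kmer, s, d) for s in seqs)
    return hits >= q * len(seqs)

seqs = ["ACGTACGTAA", "TTACGAACGT", "GGGGCCCCTT", "AACGTTACGA"]
print(satisfies_quorum("ACGT", seqs, q=0.75, d=1))   # True for this toy set
```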

  16. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cline, K; Narayanasamy, G; Obediat, M

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT-CBCT registration.

  17. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver's attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method, based on a single camera for determining the head orientation and which utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones in the windshield as calibration points, is presented in this paper. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method can achieve the same accuracy as a manual calibration method without the driver's cooperation. The mean error of the estimated eye gazes was less than 5° in day and night driving. PMID:24639620

  18. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-01

    It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT) that only requires a small field of view due to the potential reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to be more in favor of larger number of views while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  19. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy.

    PubMed

    Bian, Junguo; Sharp, Gregory C; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-07

    It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT) that only requires a small field of view due to the potential reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to be more in favor of larger number of views while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  20. Analytical network process based optimum cluster head selection in wireless sensor network.

    PubMed

    Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring and a plethora of other areas. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. With the topology constructed through this technique, we use an analytical network process (ANP) model for cluster head selection in WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSN. In addition, a sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which results in extending overall network lifetime. The analysis also shows that the ANP method provides a better understanding of the dependencies among the components involved in the CH evaluation process.
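
    A hedged sketch of multi-criteria cluster-head scoring over the five parameters named above. A full ANP derives weights from pairwise comparisons and models inter-criteria dependencies; here the weights, and the treatment of each criterion as cost or benefit, are fixed assumptions, and the score is a simple weighted sum after normalization.

```python
# Weighted multi-criteria scoring of cluster-head candidates.
import numpy as np

# rows = candidate nodes; columns = DistNode, REL, DistCent, TCH, MN
criteria = np.array([[10.0, 0.9, 4.0, 1, 0],
                     [ 6.0, 0.7, 2.0, 3, 1],
                     [ 8.0, 0.8, 6.0, 0, 0]])
benefit = np.array([False, True, False, False, True])  # assumed directions
weights = np.array([0.25, 0.35, 0.20, 0.10, 0.10])     # assumed weights

norm = criteria / (criteria.max(axis=0) + 1e-12)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]     # flip cost criteria
scores = norm @ weights
print("cluster head -> node", int(np.argmax(scores)), scores.round(3))
```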

  1. Analytical network process based optimum cluster head selection in wireless sensor network

    PubMed Central

    Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring and a plethora of other areas. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. With the topology constructed through this technique, we use an analytical network process (ANP) model for cluster head selection in WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSN. In addition, a sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which results in extending overall network lifetime. The analysis also shows that the ANP method provides a better understanding of the dependencies among the components involved in the CH evaluation process. PMID:28719616

  2. Fast object detection algorithm based on HOG and CNN

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wang, Dandan; Zhang, Yanduo

    2018-04-01

    In computer vision, object classification and object detection are widely used in many fields. Traditional object detection has two main problems: the sliding-window region selection strategy has high time complexity and produces redundant windows, and the hand-crafted features are not sufficiently robust. To solve these problems, a Region Proposal Network (RPN) is used to select candidate regions instead of the selective search algorithm. Compared with traditional algorithms and selective search, RPN has higher efficiency and accuracy. We combine HOG features and a convolutional neural network (CNN) to extract features, and use an SVM to classify. For TorontoNet, our algorithm's mAP is 1.6 percentage points higher; for OxfordNet, it is 1.3 percentage points higher.
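
    A hedged sketch of the HOG-plus-SVM classification stage mentioned above (the CNN and RPN components are omitted), using scikit-image HOG descriptors on synthetic patches; real use would classify RPN proposals instead.

```python
# HOG feature extraction feeding a linear SVM on synthetic patches.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_patch(has_object):
    img = rng.uniform(0, 0.2, size=(64, 64))
    if has_object:
        img[16:48, 24:40] += 0.8        # bright vertical bar = "object"
    return np.clip(img, 0, 1)

X = [make_patch(i % 2 == 0) for i in range(80)]
y = [i % 2 == 0 for i in range(80)]
feats = [hog(im, orientations=9, pixels_per_cell=(8, 8),
             cells_per_block=(2, 2)) for im in X]

clf = SVC(kernel="linear").fit(feats[:60], y[:60])
print("held-out accuracy:", clf.score(feats[60:], y[60:]))
```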

  3. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
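
    As a small companion to the heading estimation above, the yaw angle can be read off an orientation quaternion (w, x, y, z) such as the one a UKF would estimate; the ZYX Euler convention is assumed here.

```python
# Heading (yaw) extraction from an orientation quaternion.
import math

def heading_deg(w, x, y, z):
    """Yaw about the vertical axis, in degrees (ZYX convention)."""
    return math.degrees(math.atan2(2.0 * (w * z + x * y),
                                   1.0 - 2.0 * (y * y + z * z)))

# 90-degree rotation about z: q = (cos 45°, 0, 0, sin 45°)
s = math.sqrt(0.5)
print(f"heading: {heading_deg(s, 0.0, 0.0, s):.1f} deg")   # -> 90.0
```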

  4. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process that begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The geometric distance sorting technique largely reduces the unexpected increase in the objective function value that constraint adding inevitably causes, and it can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable convergence of the iteration. The new algorithm is tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and it is to some extent a more efficient technique for choosing constraints. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving fluence map optimization with dose-volume constraints.

  5. A systematic literature review of health state utility values in head and neck cancer.

    PubMed

    Meregaglia, Michela; Cairns, John

    2017-09-02

    Health state utility values (HSUVs) are essential parameters in model-based economic evaluations. This study systematically identifies HSUVs in head and neck cancer and provides guidance for selecting them from a growing body of health-related quality of life studies. We systematically reviewed the published literature by searching PubMed, EMBASE and The Cochrane Library using a pre-defined combination of keywords. The Tufts Cost-Effectiveness Analysis Registry and the School of Health and Related Research Health Utilities Database (ScHARRHUD) specifically containing health utilities were also queried, in addition to the Health Economics Research Centre database of mapping studies. Studies were considered for inclusion if reporting original HSUVs assessed using established techniques. The characteristics of each study including country, design, sample size, cancer subsite addressed and demographics of responders were summarized narratively using a data extraction form. Quality scoring and critical appraisal of the included studies were performed based on published recommendations. Of a total of 1048 records identified by the search, 28 studies qualified for data extraction and 346 unique HSUVs were retrieved from them. HSUVs were estimated using direct methods (e.g. standard gamble; n = 10 studies), multi-attribute utility instruments (MAUIs; n = 13) and mapping techniques (n = 3); two studies adopted both direct and indirect approaches. Within the MAUIs, the EuroQol 5-dimension questionnaire (EQ-5D) was the most frequently used (n = 11), followed by the Health Utility Index Mark 3 (HUI3; n = 2), the 15D (n = 2) and the Short Form-Six Dimension (SF-6D; n = 1). Different methods and types of responders (i.e. patients, healthy subjects, clinical experts) influenced the magnitude of HSUVs for comparable health states. Only one mapping study developed an original algorithm using head and neck cancer data. The identified studies were considered of intermediate quality. This review provides a dataset of HSUVs systematically retrieved from published studies in head and neck cancer. There is currently a lack of research for some disease phases including recurrent and metastatic cancer, and treatment-related complications. In selecting HSUVs for cost-effectiveness modeling purposes, preference should be given to EQ-5D utility values; however, mapping to EQ-5D is a potentially valuable technique that should be further developed in this cancer population.

  6. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, have also been developed using the proposed algorithm as the computation kernel.

  7. Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.

    PubMed

    Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios

    2017-03-01

    Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.

  8. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, G

    Purpose: CBCT is increasingly used for patient setup in radiotherapy. Often the manufacturer default scan modes are used for these CBCT scans on the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as the available options of the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, & Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU varied significantly between scan modes: Pelvis Obese had lower than expected HU values, while Image Gently had higher than expected HU values. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of scan mode image quality could improve setup efficiency and lead to better treatment outcomes.

  9. Genetic Particle Swarm Optimization–Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection

    PubMed Central

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to the other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, although the number of features to be selected and the size of the particle swarm do affect it. The comparison experiments also reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm. PMID:27483285
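
    As a minimal sketch of the particle-swarm half of this approach, the routine below runs plain binary PSO over feature masks; the genetic crossover/mutation operators that make it GPSO, and the RMV fitness, are omitted, and the toy fitness and all names are assumptions for illustration.

        import numpy as np

        def binary_pso(fitness, n_features, n_particles=20, n_iter=50,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal binary PSO for feature selection: each particle is a
            boolean mask over features; `fitness` maps a mask to a score to
            maximize."""
            rng = np.random.default_rng(seed)
            X = rng.random((n_particles, n_features)) < 0.5      # positions (masks)
            V = rng.normal(0.0, 1.0, (n_particles, n_features))  # velocities
            pbest = X.copy()
            pbest_val = np.array([fitness(x) for x in X])
            gbest = pbest[np.argmax(pbest_val)].copy()
            for _ in range(n_iter):
                r1 = rng.random(V.shape)
                r2 = rng.random(V.shape)
                V = w * V + c1 * r1 * (pbest * 1.0 - X) + c2 * r2 * (gbest * 1.0 - X)
                X = rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))  # sigmoid transfer
                vals = np.array([fitness(x) for x in X])
                better = vals > pbest_val
                pbest[better] = X[better]
                pbest_val[better] = vals[better]
                gbest = pbest[np.argmax(pbest_val)].copy()
            return gbest

        # Toy run: recover a hidden set of 'useful' features.
        useful = np.zeros(12, dtype=bool)
        useful[[1, 4, 7]] = True
        best = binary_pso(lambda m: -int(np.sum(m ^ useful)), 12)
        print(np.flatnonzero(best))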

  10. Online selective kernel-based temporal difference learning.

    PubMed

    Chen, Xingguo; Gao, Yang; Wang, Ruili

    2013-12-01

    In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex compared with other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive final optimum compared with the up-to-date algorithms.
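
    The selective, local-validity machinery of OSKTD is beyond a short sketch, but its two core ingredients, distance-based online sparsification and a TD(0) update on a kernel value function, can be shown compactly; all class and parameter names below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def gaussian_k(a, b, sigma=1.0):
            return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2 * sigma ** 2))

        class KernelTD:
            """TD(0) value-function approximation over a kernel dictionary grown
            by distance-based online sparsification."""
            def __init__(self, gamma=0.95, alpha=0.1, mu=0.5, sigma=1.0):
                self.dict, self.theta = [], []
                self.gamma, self.alpha, self.mu, self.sigma = gamma, alpha, mu, sigma

            def value(self, s):
                return sum(t * gaussian_k(s, d, self.sigma)
                           for t, d in zip(self.theta, self.dict))

            def sparsify(self, s):
                # Add s to the dictionary only if it is far from all stored samples.
                if all(np.linalg.norm(np.asarray(s) - np.asarray(d)) > self.mu
                       for d in self.dict):
                    self.dict.append(s)
                    self.theta.append(0.0)

            def update(self, s, r, s_next):
                self.sparsify(s)
                delta = r + self.gamma * self.value(s_next) - self.value(s)  # TD error
                for i, d in enumerate(self.dict):                            # gradient step
                    self.theta[i] += self.alpha * delta * gaussian_k(s, d, self.sigma)

        # Toy 1-D chain: states 0..10, reward 1 on reaching the right end.
        agent = KernelTD(mu=0.9)
        rng = np.random.default_rng(7)
        for _ in range(2000):
            s = int(rng.integers(0, 10))
            agent.update([float(s)], float(s + 1 == 10), [float(s + 1)])
        print(len(agent.dict), agent.value([9.0]))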

  11. Small hydropower spot prediction using SWAT and a diversion algorithm, case study: Upper Citarum Basin

    NASA Astrophysics Data System (ADS)

    Kardhana, Hadi; Arya, Doni Khaira; Hadihardaja, Iwan K.; Widyaningtyas; Riawan, Edi; Lubis, Atika

    2017-11-01

    Small-scale hydropower (SHP) has been an important source of electric energy in Indonesia, a vast country of more than 17,000 islands. Indonesia has large freshwater resources, with roughly 3 m of annual rainfall and 2 m of runoff. Much of its topography is mountainous and remote but abundant in potential energy, and millions of people lack sufficient access to electricity, some of them in remote places. SHP development has recently been encouraged to supply such areas. The development of global hydrology data provides an opportunity to predict the distribution of hydropower potential. In this paper, we demonstrate a run-of-river SHP spot prediction tool using SWAT and a river diversion algorithm. The Soil and Water Assessment Tool (SWAT), driven by a 10-year period of Climate Forecast System Reanalysis (CFSR) data, is used to predict the spatially distributed flow cumulative distribution function (CDF). A simple algorithm that maximizes the potential head at a location through a river diversion, representing the head race and penstock, is then applied. The firm flow and power of the SHP are estimated from the CDF and the algorithm. The tool was applied to the Upper Citarum River Basin, and three of the four existing hydropower locations were predicted well. The result implies that this tool can support the acceleration of SHP development in its early phases.
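
    The power estimate behind such a tool reduces to the hydropower equation P = η·ρ·g·Q·H, with the firm flow Q read off the flow CDF (flow-duration curve); a minimal sketch, with illustrative efficiency, exceedance level, and toy flow data:

        import numpy as np

        RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

        def shp_power(flows_m3s, head_m, efficiency=0.8, firm_quantile=0.9):
            """Estimate firm flow and power for a run-of-river SHP site.
            Firm flow is taken as the discharge exceeded 90% of the time
            (read off the flow CDF); P = eta * rho * g * Q * H."""
            firm_flow = np.quantile(flows_m3s, 1.0 - firm_quantile)
            power_w = efficiency * RHO * G * firm_flow * head_m
            return firm_flow, power_w

        # Toy daily-flow series (m^3/s) and a 25 m head gained by a diversion.
        flows = np.random.default_rng(1).lognormal(mean=0.5, sigma=0.6, size=3650)
        q, p = shp_power(flows, head_m=25.0)
        print(f"firm flow {q:.2f} m^3/s -> {p/1e3:.0f} kW")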

  12. Low complexity adaptive equalizers for underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Soflaei, Masoumeh; Azmi, Paeiz

    2014-08-01

    Interference due to scattering from the surface and reflection from the bottom is one of the most important problems for reliable communication in shallow-water channels. One of the best ways to address this problem is to use adaptive equalizers. Convergence rate and misadjustment error of the adaptive algorithms play important roles in equalizer performance. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean square (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow-water channel. We observe that the steady-state mean square error (MSE) of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decreases in comparison with the LMS algorithm. These algorithms also have better convergence rates than the LMS algorithm.
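
    As a baseline for the algorithms compared above, here is a minimal LMS adaptive equalizer over a toy multipath channel; the channel taps, step size, and training sequence are illustrative assumptions, and the APA/SPU/SM variants are not shown.

        import numpy as np

        def lms_equalizer(received, desired, n_taps=11, mu=0.01):
            """Classic LMS adaptive equalizer: FIR taps adapted to minimize the
            instantaneous squared error against a known training sequence."""
            w = np.zeros(n_taps)
            buf = np.zeros(n_taps)
            out = np.zeros(len(received))
            mse = np.zeros(len(received))
            for n, x in enumerate(received):
                buf = np.roll(buf, 1)
                buf[0] = x
                out[n] = w @ buf
                e = desired[n] - out[n]
                w += mu * e * buf          # LMS weight update
                mse[n] = e * e
            return w, out, mse

        # Toy shallow-water-like channel: direct path plus two delayed echoes.
        rng = np.random.default_rng(2)
        sym = rng.choice([-1.0, 1.0], size=5000)          # BPSK training symbols
        rx = sym + 0.5 * np.roll(sym, 1) - 0.3 * np.roll(sym, 2)
        rx += 0.05 * rng.normal(size=rx.shape)            # additive noise
        w, y, mse = lms_equalizer(rx, sym)
        print("steady-state MSE ~", mse[-500:].mean())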

  13. Head Circumference and Height in Autism

    PubMed Central

    Lainhart, Janet E.; Bigler, Erin D.; Bocian, Maureen; Coon, Hilary; Dinh, Elena; Dawson, Geraldine; Deutsch, Curtis K.; Dunn, Michelle; Estes, Annette; Tager-Flusberg, Helen; Folstein, Susan; Hepburn, Susan; Hyman, Susan; McMahon, William; Minshew, Nancy; Munson, Jeff; Osann, Kathy; Ozonoff, Sally; Rodier, Patricia; Rogers, Sally; Sigman, Marian; Spence, M. Anne; Stodgell, Christopher J.; Volkmar, Fred

    2016-01-01

    Data from 10 sites of the NICHD/NIDCD Collaborative Programs of Excellence in Autism were combined to study the distribution of head circumference and relationship to demographic and clinical variables. Three hundred thirty-eight probands with autism-spectrum disorder (ASD) including 208 probands with autism were studied along with 147 parents, 149 siblings, and typically developing controls. ASDs were diagnosed, and head circumference and clinical variables measured in a standardized manner across all sites. All subjects with autism met ADI-R, ADOS-G, DSM-IV, and ICD-10 criteria. The results show the distribution of standardized head circumference in autism is normal in shape, and the mean, variance, and rate of macrocephaly but not microcephaly are increased. Head circumference tends to be large relative to height in autism. No site, gender, age, SES, verbal, or non-verbal IQ effects were present in the autism sample. In addition to autism itself, standardized height and average parental head circumference were the most important factors predicting head circumference in individuals with autism. Mean standardized head circumference and rates of macrocephaly were similar in probands with autism and their parents. Increased head circumference was associated with a higher (more severe) ADI-R social algorithm score. Macrocephaly is associated with delayed onset of language. Although mean head circumference and rates of macrocephaly are increased in autism, a high degree of variability is present, underscoring the complex clinical heterogeneity of the disorder. The wide distribution of head circumference in autism has major implications for genetic, neuroimaging, and other neurobiological research. PMID:17022081

  14. Costs in Serving Handicapped Children in Head Start: An Analysis of Methods and Cost Estimates. Final Report.

    ERIC Educational Resources Information Center

    Syracuse Univ., NY. Div. of Special Education and Rehabilitation.

    An evaluation of the costs of serving handicapped children in Head Start was based on information collected in conjunction with on-site visits to regular Head Start programs, experimental programs, and specially selected model preschool programs, and from questionnaires completed by 1,353 grantees and delegate agencies of regular Head Start…

  15. Using variable importance measures to identify a small set of single nucleotide polymorphisms capable of predicting heading date in perennial ryegrass

    USDA-ARS?s Scientific Manuscript database

    Prior knowledge on heading date enables the selection of parents for synthetic cultivars that are well-matched with respect to heading date, which is necessary to ensure plants put together will successfully cross with each other. Heading date of individual plants can be determined directly, which h...

  16. Head-started Kemp’s ridley turtle (Lepidochelys kempii) nest recorded in Florida: Possible implications

    USGS Publications Warehouse

    Shaver, Donna J.; Lamont, Margaret M.; Maxwell, Sharon; Walker, Jennifer Shelby; Dillingham, Ted

    2016-01-01

    A head-started Kemp’s ridley sea turtle (Lepidochelys kempii) was documented nesting on South Walton Beach, Florida on 25 May 2015. This record supports the possibility that exposure to Florida waters after being held in captivity through 1–3 yrs of age during the head-starting process may have influenced future nest site selection of this and perhaps other Kemp’s ridley turtles. Such findings could have important ramifications for marine water experimentation and release site selection for turtles that have been reared in captivity.

  17. Guidance control of small UAV with energy and maneuverability limitations for a search and coverage mission

    NASA Astrophysics Data System (ADS)

    Gramajo, German G.

    This thesis presents an algorithm for a search and coverage mission with increased autonomy in generating an ideal trajectory while explicitly considering the available energy in the optimization. Current algorithms used to generate such trajectories depend on the operator providing a discrete set of turning-rate requirements to obtain an optimal solution; this work therefore proposes a modification so that the algorithm optimizes the trajectory over a range of turning rates instead of a discrete set. The algorithm is evaluated under variation in turn duration, entry-heading angle, and entry point. Comparative studies with an existing method indicate improved autonomy in choosing the optimization parameters while producing trajectories with better coverage area and a closer final distance to the desired terminal point.

  18. Gene selection in cancer classification using sparse logistic regression with Bayesian regularization.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2006-10-01

    Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/
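
    BLogReg's analytic integration of the regularization parameter is not reproduced here; the sketch below only shows the Laplace-prior (L1) logistic regression model that underlies SLogReg, where C plays the role of the regularization parameter that BLogReg eliminates. The data are synthetic stand-ins for a microarray problem.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        # Synthetic microarray-like problem: many features, few informative.
        X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                                   random_state=0)

        # L1 (Laplace-prior) logistic regression; C is the regularization
        # parameter that SLogReg must tune by cross-validation and that
        # BLogReg integrates out analytically.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
        selected = np.flatnonzero(clf.coef_[0])
        print(f"{len(selected)} genes selected:", selected[:10])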

  19. Vehicle Related Factors that Influence Injury Outcome in Head-On Collisions

    PubMed Central

    Blum, Jeremy J.; Scullion, Paul; Morgan, Richard M.; Digges, Kennerly; Kan, Cing-Dao; Park, Shinhee; Bae, Hanil

    2008-01-01

    This study investigated a range of vehicle-related factors that are associated with a lower risk of serious or fatal injury to a belted driver in a head-on collision, focusing on structural characteristics: quantities that describe the physical features of a passenger vehicle, e.g., stiffness or frontal geometry. The study used a data-mining approach (a classification tree algorithm) to find the most significant relationships between injury outcome and the structural variables. The algorithm was applied to 120,000 real-world head-on collisions from the National Highway Traffic Safety Administration's (NHTSA's) State Crash data files, which were linked to structural attributes derived from frontal crash tests performed as part of the USA New Car Assessment Program. As in previous literature, the analysis found that heavier vehicles were correlated with lower injury risk to their drivers. The analysis also found a new and significant correlation between a vehicle's stiffness and injury risk. When an airbag deployed, the vehicle's stiffness had the most statistically significant correlation with injury risk. These results suggest that in severe collisions, the lower intrusion into the occupant cabin associated with higher stiffness is at least as important to occupant protection as vehicle weight is for self-protection of the occupant. Consequently, the safety community might better improve self-protection by a renewed focus on increasing vehicle stiffness in order to improve crashworthiness in head-on collisions. PMID:19026230
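
    A minimal sketch of the data-mining step, fitting a classification tree to injury outcomes; the features, coefficients, and data below are entirely synthetic placeholders, not NHTSA or NCAP values.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Hypothetical structural attributes per crash (illustrative only).
        rng = np.random.default_rng(3)
        n = 2000
        mass = rng.uniform(900, 2500, n)          # vehicle mass (kg)
        stiffness = rng.uniform(500, 3000, n)     # linear stiffness (kN/m)
        airbag = rng.integers(0, 2, n)            # airbag deployed?
        # Toy injury rule: heavier, stiffer vehicles with airbags are safer.
        risk = 1 / (1 + np.exp(0.002 * mass + 0.001 * stiffness + airbag - 4.5))
        injured = rng.random(n) < risk

        X = np.column_stack([mass, stiffness, airbag])
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, injured)
        print(export_text(tree, feature_names=["mass", "stiffness", "airbag"]))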

  20. Surgical screw segmentation for mobile C-arm CT devices

    NASA Astrophysics Data System (ADS)

    Görres, Joseph; Brehler, Michael; Franke, Jochen; Wolf, Ivo; Vetter, Sven Y.; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana

    2014-03-01

    Calcaneal fractures are commonly treated by open reduction and internal fixation. An anatomical reconstruction of the involved joints is mandatory to prevent cartilage damage and premature arthritis. To avoid intraarticular screw placement, the use of mobile C-arm CT devices is required. However, analyzing the screw placement in detail requires time-consuming human-computer interaction to navigate through the 3D images and view a single screw in detail. The established interaction procedures of repeatedly positioning and rotating sectional planes are inconvenient and impede the intraoperative assessment of screw positioning. To simplify the interaction with 3D images, we propose an automatic screw segmentation that allows for an immediate selection of the relevant sectional planes. Our algorithm consists of three major steps. First, cylindrical characteristics are determined from local gradient structures with the help of RANSAC. Second, a DBSCAN clustering algorithm is applied to group similar cylinder characteristics; each detected cluster represents a screw. Third, the determined location of each screw is refined by a cylinder-to-image registration. Our evaluation with 309 screws in 50 images shows robust and precise results. The algorithm detected 98% (303) of the screws correctly; thirteen clusters led to falsely identified screws. The mean distance error was 0.8 +/- 0.8 mm for the screw tip and 1.2 +/- 1.0 mm for the screw head, and the mean orientation error was 1.4 +/- 1.2 degrees.
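
    A sketch of the second step only: grouping RANSAC cylinder candidates with DBSCAN so that each cluster yields one screw hypothesis. Candidates are reduced here to 3-D positions, all coordinates are made up, and the RANSAC detection and cylinder-to-image registration steps are omitted.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Hypothetical RANSAC output: one 3D point per cylinder candidate;
        # candidates from the same screw form a dense cluster.
        rng = np.random.default_rng(4)
        screws = np.array([[10.0, 20.0, 5.0], [40.0, 22.0, 8.0], [25.0, 50.0, 12.0]])
        candidates = np.vstack([s + rng.normal(0, 0.5, (30, 3)) for s in screws])
        candidates = np.vstack([candidates, rng.uniform(0, 60, (10, 3))])  # outliers

        labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(candidates)
        for k in sorted(set(labels) - {-1}):
            center = candidates[labels == k].mean(axis=0)  # initial screw location
            print(f"screw {k}: center ~ {np.round(center, 1)}")  # refined later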

  1. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
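
    The pencil-beam LET-kernel machinery is specific to FoCa, but the final combination step is the standard dose-averaged LET formula LET_d = Σᵢ dᵢLᵢ / Σᵢ dᵢ per voxel; a minimal sketch with toy beamlets:

        import numpy as np

        def dose_averaged_let(dose_per_beamlet, let_per_beamlet):
            """Combine per-beamlet dose and LET maps into a dose-averaged LET
            map: LET_d = sum_i(d_i * L_i) / sum_i(d_i), voxel by voxel."""
            dose = np.asarray(dose_per_beamlet)   # shape (n_beamlets, *voxels)
            let = np.asarray(let_per_beamlet)
            total = dose.sum(axis=0)
            with np.errstate(invalid="ignore", divide="ignore"):
                let_d = (dose * let).sum(axis=0) / total
            return np.where(total > 0, let_d, 0.0)

        # Two toy beamlets over a 4-voxel line (values illustrative).
        d = [[1.0, 2.0, 3.0, 0.0], [0.5, 1.0, 0.0, 0.0]]
        L = [[1.2, 2.5, 6.0, 0.0], [1.5, 3.0, 0.0, 0.0]]
        print(dose_averaged_let(d, L))   # keV/um per voxel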

  2. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm(-1)) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  3. Compact cancer biomarkers discovery using a swarm intelligence feature selection algorithm.

    PubMed

    Martinez, Emmanuel; Alvarez, Mario Moises; Trevino, Victor

    2010-08-01

    Biomarker discovery is a typical application of functional genomics. Due to the large number of genes studied simultaneously in microarray data, feature selection is a key step. Swarm intelligence has emerged as a solution for the feature selection problem; however, typical swarm intelligence settings for feature selection fail to select small feature subsets. We have proposed a swarm intelligence feature selection algorithm based on the initialization and update of only a subset of particles in the swarm. In this study, we tested our algorithm on 11 microarray datasets for brain, leukemia, lung, prostate, and other cancers. We show that the proposed swarm intelligence algorithm successfully increases the classification accuracy and decreases the number of selected features compared to other swarm intelligence methods.

  4. Multi-task feature selection in microarray data by binary integer programming.

    PubMed

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.

  5. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling

    PubMed Central

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on both binary and multiclass datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
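
    The ABC wrapper and SVM evaluation are omitted here; as a sketch of the filter half, the routine below performs greedy forward mRMR selection using mutual information (relevance to the label minus mean redundancy with already-selected features). The function names and toy data are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

        def mrmr(X, y, k=10):
            """Greedy mRMR: at each step add the feature maximizing
            relevance(feature, y) - mean redundancy(feature, selected)."""
            n = X.shape[1]
            relevance = mutual_info_classif(X, y, random_state=0)
            selected = [int(np.argmax(relevance))]
            while len(selected) < k:
                rest = [j for j in range(n) if j not in selected]
                scores = []
                for j in rest:
                    red = np.mean([mutual_info_regression(
                        X[:, [j]], X[:, s], random_state=0)[0] for s in selected])
                    scores.append(relevance[j] - red)
                selected.append(rest[int(np.argmax(scores))])
            return selected

        # Toy microarray-like data: 40 'genes', 5 informative.
        X, y = make_classification(n_samples=100, n_features=40, n_informative=5,
                                   random_state=0)
        print(mrmr(X, y, k=5))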

  6. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling.

    PubMed

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.

  7. Zone-Based Routing Protocol for Wireless Sensor Networks

    PubMed Central

    Venkateswarlu Kumaramangalam, Muni; Adiyapatham, Kandasamy; Kandasamy, Chandrasekaran

    2014-01-01

    Extensive research across the globe has established the importance of wireless sensor networks (WSNs) in present-day applications. In the recent past, various routing algorithms have been proposed to extend WSN lifetime. Clustering is highly successful in conserving energy resources for network activities and has become a promising field for research. However, the problem of unbalanced energy consumption is still open, because cluster head activities are tightly coupled with the role and location of a particular node in the network. Several unequal clustering algorithms have been proposed to solve this multihop hot-spot problem in wireless sensor networks; however, current unequal clustering mechanisms consider only intra- and intercluster communication cost. Proper organization of a wireless sensor network into clusters enables efficient utilization of limited resources and enhances the lifetime of deployed sensor nodes. This paper considers a novel network organization scheme, an energy-efficient edge-based network partitioning scheme, to organize sensor nodes into clusters of equal size. It also proposes a cluster-based routing algorithm, called the zone-based routing protocol (ZBRP), for extending sensor network lifetime. Experimental results show that ZBRP outperforms existing protocols in terms of network lifetime and energy conservation, with uniform energy consumption among the cluster heads. PMID:27437455
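
    A toy sketch of the general idea, not the paper's edge-based partitioning: divide a square field into equal zones and, each round, appoint the node with the highest residual energy in each zone as its cluster head. All geometry and energy values are assumptions.

        import numpy as np

        def zone_cluster_heads(positions, energy, n_zones=(3, 3), field=100.0):
            """Partition a square field into equal zones and pick the node with
            the highest residual energy in each zone as its cluster head."""
            zx, zy = n_zones
            zone = (np.minimum((positions[:, 0] / field * zx).astype(int), zx - 1) * zy
                    + np.minimum((positions[:, 1] / field * zy).astype(int), zy - 1))
            heads = {}
            for z in np.unique(zone):
                members = np.flatnonzero(zone == z)
                heads[int(z)] = int(members[np.argmax(energy[members])])
            return heads

        rng = np.random.default_rng(8)
        pos = rng.uniform(0, 100, (60, 2))   # 60 nodes in a 100 m x 100 m field
        e = rng.uniform(0.2, 1.0, 60)        # residual energy (J)
        print(zone_cluster_heads(pos, e))    # re-run each round to rotate heads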

  8. Zone-Based Routing Protocol for Wireless Sensor Networks.

    PubMed

    Venkateswarlu Kumaramangalam, Muni; Adiyapatham, Kandasamy; Kandasamy, Chandrasekaran

    2014-01-01

    Extensive research across the globe has established the importance of wireless sensor networks (WSNs) in present-day applications. In the recent past, various routing algorithms have been proposed to extend WSN lifetime. Clustering is highly successful in conserving energy resources for network activities and has become a promising field for research. However, the problem of unbalanced energy consumption is still open, because cluster head activities are tightly coupled with the role and location of a particular node in the network. Several unequal clustering algorithms have been proposed to solve this multihop hot-spot problem in wireless sensor networks; however, current unequal clustering mechanisms consider only intra- and intercluster communication cost. Proper organization of a wireless sensor network into clusters enables efficient utilization of limited resources and enhances the lifetime of deployed sensor nodes. This paper considers a novel network organization scheme, an energy-efficient edge-based network partitioning scheme, to organize sensor nodes into clusters of equal size. It also proposes a cluster-based routing algorithm, called the zone-based routing protocol (ZBRP), for extending sensor network lifetime. Experimental results show that ZBRP outperforms existing protocols in terms of network lifetime and energy conservation, with uniform energy consumption among the cluster heads.

  9. Wind noise in hearing aids: I. Effect of wide dynamic range compression and modulation-based noise reduction.

    PubMed

    Chung, King

    2012-01-01

    The objectives of this study were: (1) to examine the effect of wide dynamic range compression (WDRC) and modulation-based noise reduction (NR) algorithms on wind noise levels at the hearing aid output; and (2) to derive effective strategies for clinicians and engineers to reduce wind noise in hearing aids. Three digital hearing aids were fitted to KEMAR. The noise output was recorded at flow velocities of 0, 4.5, 9.0, and 13.5 m/s in a wind tunnel as the KEMAR head was turned from 0° to 360°. Flow noise levels were compared between the 1:1 linear and 3:1 WDRC conditions, and between NR-activated and NR-deactivated conditions when the hearing aid was programmed to the directional and omnidirectional modes. The results showed that: (1) WDRC increased low-level noise and reduced high-level noise; and (2) different noise reduction algorithms provided different amounts of wind noise reduction in different microphone modes, frequency regions, flow velocities, and head angles. Wind noise can be reduced by decreasing the gain for low-level inputs, increasing the compression ratio for high-level inputs, and activating modulation-based noise reduction algorithms.
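
    The static input/output curve of a WDRC channel makes the second finding concrete: below the compression knee the gain is linear, and above it the gain shrinks by the compression ratio, so high-level wind noise is attenuated relative to 1:1 processing (WDRC fittings also apply more low-level gain than linear fittings, which is the flip side noted in the study). The parameters below are illustrative, not the fitted values from the study.

        import numpy as np

        def wdrc_output_level(input_db, gain_db=20.0, knee_db=45.0, ratio=3.0):
            """Static WDRC input/output curve: linear (1:1) below the
            compression knee, compressed (ratio:1) above it. Levels in dB SPL."""
            input_db = np.asarray(input_db, dtype=float)
            linear = input_db + gain_db
            compressed = knee_db + gain_db + (input_db - knee_db) / ratio
            return np.where(input_db <= knee_db, linear, compressed)

        # A loud 90 dB gust gets 30 dB less gain under 3:1 compression than
        # under 1:1 processing; a quiet 40 dB input is amplified identically.
        for lin, comp in zip(wdrc_output_level([40, 90], ratio=1.0),
                             wdrc_output_level([40, 90], ratio=3.0)):
            print(f"1:1 -> {lin:.0f} dB, 3:1 -> {comp:.0f} dB")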

  10. Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan

    2015-01-01

    The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a hybrid phase unwrapping algorithm with automatic threshold selection that combines the existing QG algorithm and the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by automatically selecting a threshold, and the FF and QG unwrapping algorithms are then used to unwrap the respective sub-maps. The feasibility of the proposed method is demonstrated by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.

  11. The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Siebers, J. V.; Keall, P. J.; Mohan, R.

    The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.

  12. Evaluation of a video-based head motion tracking system for dedicated brain PET

    NASA Astrophysics Data System (ADS)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The present work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close to millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
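
    One common way to turn tracked 3-D facial points into a six-degree-of-freedom pose is the Kabsch/Procrustes least-squares fit sketched below; whether the described system uses exactly this estimator is an assumption, and the toy points are synthetic.

        import numpy as np

        def rigid_pose(ref_pts, cur_pts):
            """Kabsch/Procrustes: least-squares rotation R and translation t
            mapping reference 3D facial points onto their tracked positions."""
            ref_c, cur_c = ref_pts.mean(axis=0), cur_pts.mean(axis=0)
            H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T            # reflection-safe rotation
            t = cur_c - R @ ref_c
            return R, t

        # Toy check: recover a known 10-degree yaw plus a translation.
        rng = np.random.default_rng(5)
        ref = rng.normal(size=(8, 3))
        yaw = np.radians(10.0)
        R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                           [np.sin(yaw),  np.cos(yaw), 0],
                           [0, 0, 1]])
        cur = ref @ R_true.T + np.array([1.0, 2.0, 0.5])
        R, t = rigid_pose(ref, cur)
        print(np.degrees(np.arctan2(R[1, 0], R[0, 0])))  # ~10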

  13. GPS interferometric attitude and heading determination: Initial flight test results

    NASA Technical Reports Server (NTRS)

    Vangraas, Frank; Braasch, Michael

    1991-01-01

    Attitude and heading determination using GPS interferometry is a well-understood concept. However, efforts have been concentrated mainly on the development of robust algorithms and applications for low-dynamic, rigid platforms (e.g., shipboard). This paper presents results of what is believed by the authors to be the first real-time flight test of a GPS attitude and heading determination system. The system is installed in Ohio University's Douglas DC-3 research aircraft. Signals from four antennas are processed by an Ashtech 3DF 24-channel GPS receiver. Data from the receiver are sent to a microcomputer for storage and further computations. Attitude and heading data are sent to a second computer for display on a software-generated artificial horizon. Demonstration of this technique proves its candidacy for augmentation of aircraft state estimation for flight control and navigation as well as for numerous other applications.
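
    Once the carrier-phase processing has resolved an antenna baseline vector in local East-North-Up coordinates, heading and pitch follow from simple trigonometry; a minimal sketch with an illustrative baseline:

        import numpy as np

        def baseline_attitude(baseline_enu):
            """Heading and pitch of a GPS antenna baseline resolved in local
            East-North-Up coordinates by carrier-phase interferometry."""
            e, n, u = baseline_enu
            heading = np.degrees(np.arctan2(e, n)) % 360.0  # clockwise from north
            pitch = np.degrees(np.arctan2(u, np.hypot(e, n)))
            return heading, pitch

        # A 2 m fore-aft baseline pointing roughly north-east, slightly nose-up.
        print(baseline_attitude([1.40, 1.41, 0.05]))   # ~ (44.8 deg, 1.4 deg)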

  14. Electromagnetic absorption in the head of adults and children due to mobile phone operation close to the head.

    PubMed

    de Salles, Alvaro A; Bulla, Giovani; Rodriguez, Claudio E Fernández

    2006-01-01

    The Specific Absorption Rate (SAR) produced by mobile phones in the head of adults and children is simulated using an algorithm based on the Finite Difference Time Domain (FDTD) method. Realistic models of the child and adult head are used, with the electromagnetic parameters fitted to these models. Comparisons are also made with the SAR calculated in the child model when adult electromagnetic parameter values are used. Microstrip (patch) antennas and quarter-wavelength monopole antennas are used in the simulations, fed at frequencies of 1850 MHz and 850 MHz. The SAR results are compared with the available international recommendations. It is shown that under similar conditions, the 1g-SAR calculated for children is higher than that for adults. With the 10-year-old child model, SAR values more than 60% higher than those for adults are obtained.
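
    A full 3-D head simulation is far beyond a sketch, but the FDTD update scheme at its core can be shown in one dimension: a lossy 'tissue' slab, an 1850 MHz source, and a crude SAR proxy. All material values and the reflecting boundaries are simplifying assumptions.

        import numpy as np

        c0, dz = 3e8, 1e-3                 # speed of light, 1 mm cells
        dt = dz / (2 * c0)                 # Courant-stable time step
        n_cells, n_steps = 400, 2000
        eps0, mu0 = 8.854e-12, 4e-7 * np.pi
        eps_r = np.ones(n_cells)
        sigma = np.zeros(n_cells)
        eps_r[200:300], sigma[200:300] = 40.0, 1.0   # crude head-tissue values

        # Standard lossy-medium update coefficients for the E field.
        loss = sigma * dt / (2 * eps0 * eps_r)
        ca = (1 - loss) / (1 + loss)
        cb = (dt / (eps0 * eps_r * dz)) / (1 + loss)

        E = np.zeros(n_cells)
        H = np.zeros(n_cells - 1)
        for n in range(n_steps):
            H += dt / (mu0 * dz) * (E[1:] - E[:-1])          # update H from curl E
            E[1:-1] = ca[1:-1] * E[1:-1] + cb[1:-1] * (H[1:] - H[:-1])
            E[50] += np.sin(2 * np.pi * 1.85e9 * n * dt)     # 1850 MHz soft source
            # E[0] and E[-1] stay zero: reflecting (PEC) walls, kept minimal.

        # Instantaneous SAR proxy ~ sigma * E^2 / (2 * rho) in the tissue region.
        rho = 1000.0
        print("peak SAR proxy:", (sigma[200:300] * E[200:300] ** 2 / (2 * rho)).max())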

  15. SU-E-J-25: End-To-End (E2E) Testing On TomoHDA System Using a Real Pig Head for Intracranial Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corradini, N; Leick, M; Bonetti, M

    Purpose: To determine the MVCT imaging uncertainty on the TomoHDA system for intracranial radiosurgery treatments, and to determine the end-to-end (E2E) overall accuracy of the TomoHDA system for intracranial radiosurgery. Methods: A pig head was obtained from the butcher, cut coronally through the brain, and preserved in formaldehyde. The base of the head was fixed to a positioning plate allowing precise movement, i.e. translation and rotation, in all 6 axes. A repeatability test was performed on the pig head to determine the uncertainty in the image bone registration algorithm. Furthermore, the test studied images with MVCT slice thicknesses of 1 and 3 mm in unison with differing scan lengths. A sensitivity test was performed to determine the registration algorithm's ability to find the absolute position of known translations/rotations of the pig head. The algorithm's ability to determine absolute position was compared against that of manual operators, i.e. a radiation therapist and a radiation oncologist. Finally, E2E tests for intracranial radiosurgery were performed by measuring the delivered dose distributions within the pig head using Gafchromic films. Results: The repeatability test uncertainty was lowest for the MVCTs of 1-mm slice thickness, which measured less than 0.10 mm and 0.12 deg for all axes. For the sensitivity tests, the bone registration algorithm performed better than human eyes, and a maximum difference of 0.3 mm and 0.4 deg was observed for the axes. E2E tests measured absolute position differences of 0.03 ± 0.21 mm on the x-axis and 0.28 ± 0.18 mm on the y-axis, with maximum differences of 0.32 and 0.66 mm in x and y, respectively. The average peak dose difference between measured and calculated dose was 2.7 cGy or 0.4%. Conclusion: Our tests using a pig head phantom estimate the TomoHDA system to have submillimeter overall accuracy for intracranial radiosurgery.

  16. Space use by white-headed woodpeckers and selection for recent forest disturbances

    Treesearch

    Teresa J. Lorenz; Kerri T. Vierling; Jeffrey M. Kozma; Janet E. Millard; Martin G. Raphael

    2015-01-01

    White-headed woodpeckers (Picoides albolarvatus) are important cavity excavators that recently have become the focus of much research because of concerns over population declines. Past studies have focused on nest site selection and survival but information is needed on factors influencing their space use when away from the nest. We examined space...

  17. Skin Cancer Education Materials: Selected Annotations.

    ERIC Educational Resources Information Center

    National Cancer Inst. (NIH), Bethesda, MD.

    This annotated bibliography presents 85 entries on a variety of approaches to cancer education. The entries are grouped under three broad headings, two of which contain smaller sub-divisions. The first heading, Public Education, contains prevention and general information, and non-print materials. The second heading, Professional Education,…

  18. Proteomic changes in plasma of broiler chickens with femoral head necrosis

    USDA-ARS?s Scientific Manuscript database

    Femoral head necrosis (FHN) is a skeletal problem in broiler chickens where the proximal femoral head cartilage shows susceptibility to separation from its growth plate. The FHN selected birds showed higher bodyweights and reduced plasma cholesterol. The proteomic differences in the plasma of health...

  19. A Novel Saccadic Strategy Revealed by Suppression Head Impulse Testing of Patients with Bilateral Vestibular Loss.

    PubMed

    de Waele, Catherine; Shen, Qiwen; Magnani, Christophe; Curthoys, Ian S

    2017-01-01

    We examined the eye movement response patterns of a group of patients with bilateral vestibular loss (BVL) during suppression head impulse testing. Some showed a new saccadic strategy that may have potential for explaining how patients use saccades to recover from vestibular loss. Eight patients with severe BVL [vestibulo-ocular reflex (VOR) gains less than 0.35 and absent otolithic function] were tested. All patients were given the Dizziness Handicap Inventory and questioned about oscillopsia during abrupt head movements. Two paradigms of video head impulse testing of the horizontal VOR were used: (1) the classical head impulse paradigm [called the head impulse test (HIMPs)], fixating an earth-fixed target during the head impulse; and (2) the new complementary test paradigm, fixating a head-fixed target during the head impulse (called SHIMPs). The VOR gain of HIMPs was quantified by two algorithms. During SHIMPs testing, some BVL patients consistently generated an inappropriate covert compensatory saccade during the head impulse that required a corresponding large anti-compensatory saccade at the end of the head impulse in order to obey the instructions to maintain gaze on the head-fixed target. By contrast, other BVL patients did not generate this inappropriate covert saccade and did not exhibit a corresponding anti-compensatory saccade. The latencies of the covert saccades in SHIMPs and HIMPs were similar. The pattern of covert saccades during SHIMPs appears to be related to the reduction of oscillopsia during abrupt head movements: BVL patients who did not report oscillopsia showed this unusual saccadic pattern, whereas BVL patients who reported oscillopsia did not. This inappropriate covert SHIMPs saccade may be an objective indicator of how some patients with vestibular loss have learned to trigger covert saccades during head movements in everyday life.

  20. Head shape evolution in Tropidurinae lizards: does locomotion constrain diet?

    PubMed

    Kohlsdorf, T; Grizante, M B; Navas, C A; Herrel, A

    2008-05-01

    Different components of complex integrated systems may be specialized for different functions, and thus the selective pressures acting on the system as a whole may be conflicting and can ultimately constrain organismal performance and evolution. The vertebrate cranial system is one of the most striking examples of a complex system with several possible functions, being associated with activities as different as locomotion, prey capture, display and defensive behaviours. Therefore, selective pressures on the cranial system as a whole are possibly complex and may be conflicting. The present study focuses on the influence of potentially conflicting selective pressures (diet vs. locomotion) on the evolution of head shape in Tropidurinae lizards. For example, the expected adaptations leading to flat heads and bodies in species living on vertical structures may conflict with the need for improved bite performance associated with the inclusion of hard or tough prey into the diet, a common phenomenon in Tropidurinae lizards. Body size and six variables describing head shape were quantified in preserved specimens of 23 species, and information on diet and substrate usage was obtained from the literature. No phylogenetic signal was observed in the morphological data at any branch length tested, suggesting adaptive evolution of head shape in Tropidurinae. This pattern was confirmed by both factor analysis and independent contrast analysis, which suggested adaptive co-variation between head shape and the inclusion of hard prey into the diet. In contrast to our expectations, habitat use did not constrain or drive head shape evolution in the group.

  1. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population, and maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all traits except test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS while maximizing the relationship between the TRS and the test set, which makes it suitable as an optimization criterion for long-term selection. Our results indicate that the best selection criterion for optimizing the TRS seems to depend on the interaction of trait architecture and population structure.
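
    A minimal sketch of the stratified-sampling idea that worked best under strong population structure: cluster the genotypes on their marker profiles and draw the training set proportionally from each cluster. The clustering choice (k-means) and the toy data are assumptions, not the study's procedure.

        import numpy as np
        from sklearn.cluster import KMeans

        def stratified_trs(markers, n_train, n_strata=5, seed=0):
            """Stratified training-set sampling: cluster genotypes on marker
            profiles and draw proportionally from each cluster, so every
            subpopulation is represented in the TRS."""
            rng = np.random.default_rng(seed)
            strata = KMeans(n_clusters=n_strata, n_init=10,
                            random_state=seed).fit_predict(markers)
            chosen = []
            for k in range(n_strata):
                members = np.flatnonzero(strata == k)
                take = max(1, round(n_train * len(members) / len(markers)))
                chosen.extend(rng.choice(members, size=min(take, len(members)),
                                         replace=False))
            return np.array(chosen[:n_train])   # trim rounding overshoot

        M = np.random.default_rng(9).integers(0, 3, size=(300, 200))  # toy genotypes
        print(stratified_trs(M, n_train=60).shape)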

  2. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). The inputs of ANFIS are the Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. This selection procedure reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model with four triangular MF is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
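
    The model-ranking criterion itself is easy to make concrete: interpolate heads onto the grid and count the physically implausible cells where head exceeds soil elevation. A sketch with toy surfaces standing in for the ANFIS output and the terrain:

        import numpy as np

        def count_error_cells(head_grid, soil_grid):
            """Plausibility check used to rank candidate models: cells where
            the interpolated hydraulic head exceeds the soil elevation are
            physically implausible 'error cells' (water table above ground)."""
            return int(np.sum(head_grid > soil_grid))

        # Toy grid: a smooth water table vs. an undulating ground surface.
        x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
        soil = 100 + 10 * np.sin(3 * x) + 5 * y
        head = 95 + 8 * y                       # stand-in for an ANFIS output
        print(count_error_cells(head, soil), "error cells")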

  3. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maitree, R; Guzman, G; Chundury, A

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are used to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main goals of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures; 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm; and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical detail, the proposed approach automatically estimates the local image noise levels and detects anatomical structures, i.e. tissue boundaries. This information is used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparison, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages. Conclusion: According to both the clinical evaluation (human expert ranking) and the quantitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities compared with existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.
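
    In the spirit of the described approach (though not its actual algorithm), the sketch below adapts smoothing strength to local structure: strong Gaussian smoothing where the gradient is weak (flat tissue), little smoothing near boundaries. All parameters are illustrative.

        import numpy as np
        from scipy import ndimage

        def adaptive_denoise(img, sigma=2.0, edge_percentile=80):
            """Structure-preserving smoothing: blend a smoothed image with the
            original, weighted by how 'flat' each pixel's neighborhood is."""
            grad = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
            edges = grad / (np.percentile(grad, edge_percentile) + 1e-12)
            weight = np.clip(1.0 - edges, 0.0, 1.0)   # 1 = flat, 0 = boundary
            smoothed = ndimage.gaussian_filter(img, sigma=sigma)
            return weight * smoothed + (1.0 - weight) * img

        # Toy image: a bright disc (tissue boundary) plus noise.
        y, x = np.mgrid[:128, :128]
        img = (np.hypot(x - 64, y - 64) < 30).astype(float)
        img += np.random.default_rng(10).normal(0, 0.2, img.shape)
        print(np.std(img), np.std(adaptive_denoise(img)))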

  4. Well balancing of the SWE schemes for moving-water steady flows

    NASA Astrophysics Data System (ADS)

    Caleffi, Valerio; Valiani, Alessandro

    2017-08-01

    In this work, the exact reproduction of a moving-water steady flow via the numerical solution of the one-dimensional shallow water equations is studied. A new scheme based on a modified version of the HLLEM approximate Riemann solver (Dumbser and Balsara (2016) [18]) that exactly preserves the total head and the discharge in the simulation of smooth steady flows and that correctly dissipates mechanical energy in the presence of hydraulic jumps is presented. This model is compared with a selected set of schemes from the literature, including models that exactly preserve quiescent flows and models that exactly preserve moving-water steady flows. The comparison highlights the strengths and weaknesses of the different approaches. In particular, the results show that the increase in accuracy in the steady state reproduction is counterbalanced by a reduced robustness and numerical efficiency of the models. Some solutions to reduce these drawbacks, at the cost of increased algorithm complexity, are presented.
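
    The invariants a well-balanced scheme must preserve are easy to state in code: for a smooth moving-water steady state of the 1-D shallow water equations, the discharge q = hu and the total head E = z + h + u²/(2g) are constant along the channel. The sketch below builds the classical subcritical flow-over-a-bump reference solution and checks both invariants; the bump shape and flow values follow the usual test-case conventions, assumed here.

        import numpy as np

        G = 9.81

        def steady_state_invariants(h, u, z):
            """Max deviation of discharge and total head along the channel;
            both should be ~0 for an exact moving-water steady state."""
            q = h * u
            E = z + h + u ** 2 / (2.0 * G)
            return np.ptp(q), np.ptp(E)

        # Gaussian bump, fixed discharge q0 and total head E0 (subcritical).
        z = 0.2 * np.exp(-0.5 * ((np.linspace(0, 25, 200) - 10) / 2) ** 2)
        q0 = 4.42
        E0 = 2.0 + q0 ** 2 / (2 * G * 2.0 ** 2)   # from h = 2 m at the inlet
        h = []
        for zi in z:
            # E = z + h + q^2/(2 g h^2)  <=>  h^3 + (z - E) h^2 + q^2/(2g) = 0
            roots = np.roots([1.0, zi - E0, 0.0, q0 ** 2 / (2 * G)])
            real = roots[np.isclose(roots.imag, 0.0)].real
            h.append(real[real > 0].max())        # subcritical (deepest) root
        h = np.array(h)
        print(steady_state_invariants(h, q0 / h, z))   # ~ (0, 0)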

  5. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of a feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
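
    A minimal binary harmony search over feature masks, with a toy fitness standing in for the paper's multi-fractal-dimension criterion; the HMCR/PAR values and all names are illustrative assumptions.

        import numpy as np

        def harmony_search_fs(fitness, n_features, hm_size=20, n_iter=200,
                              hmcr=0.9, par=0.3, seed=0):
            """Minimal binary harmony search: each harmony is a feature mask.
            New harmonies copy bits from memory with prob. HMCR (flipped with
            prob. PAR as 'pitch adjustment'), else draw random bits."""
            rng = np.random.default_rng(seed)
            hm = rng.random((hm_size, n_features)) < 0.5
            scores = np.array([fitness(h) for h in hm])
            for _ in range(n_iter):
                new = np.empty(n_features, dtype=bool)
                for j in range(n_features):
                    if rng.random() < hmcr:
                        new[j] = hm[rng.integers(hm_size), j]  # memory consideration
                        if rng.random() < par:
                            new[j] = ~new[j]                   # pitch adjustment
                    else:
                        new[j] = rng.random() < 0.5            # random selection
                s = fitness(new)
                worst = np.argmin(scores)
                if s > scores[worst]:                          # replace worst harmony
                    hm[worst], scores[worst] = new, s
            return hm[np.argmax(scores)]

        # Toy run: find a hidden informative subset.
        target = np.zeros(15, dtype=bool)
        target[[2, 5, 11]] = True
        best = harmony_search_fs(lambda m: -int(np.sum(m ^ target)), 15)
        print(np.flatnonzero(best))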

  6. Bridging the gap between marker-assisted and genomic selection of heading time and plant height in hybrid wheat.

    PubMed

    Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C

    2014-06-01

    Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection.
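
    RR-BLUP has a compact closed form, sketched below with the shrinkage parameter set from an assumed heritability; the W-BLUP weighting of known functional markers is not shown, and the toy marker data are synthetic, not the wheat trial data.

        import numpy as np

        def rrblup(Z, y, h2=0.5):
            """Closed-form RR-BLUP: marker effects u solve
            (Z'Z + lambda I) u = Z'(y - mean(y)), with
            lambda = sigma_e^2 / sigma_u^2 set here from an assumed
            heritability h2 for simplicity."""
            n, m = Z.shape
            lam = m * (1.0 - h2) / h2          # crude shrinkage heuristic
            y_c = y - y.mean()
            return np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y_c)

        # Toy genomic prediction of heading time from SNPs coded -1/0/1.
        rng = np.random.default_rng(6)
        Z = rng.integers(-1, 2, size=(135, 300)).astype(float)
        true_u = rng.normal(0, 0.1, 300)
        y = Z @ true_u + rng.normal(0, 0.5, 135)
        u_hat = rrblup(Z, y)
        print("accuracy:", np.corrcoef(Z @ u_hat, Z @ true_u)[0, 1])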

  7. Bridging the gap between marker-assisted and genomic selection of heading time and plant height in hybrid wheat

    PubMed Central

    Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C

    2014-01-01

    Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection. PMID:24518889

  8. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    PubMed

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a feature subset selection problem, a type of combinatorial optimization problem. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to optimization problems in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
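    As a baseline illustration, the sketch below wraps a plain binary PSO around an SVM cross-validation fitness. The endocrine-based update and the ABC stage that define the authors' hybrid are omitted, so this shows only the kind of wrapper the paper builds on.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def pso_feature_selection(X, y, n_particles=20, n_iter=30, w=0.7,
                              c1=1.5, c2=1.5, rng=None):
        """Binary PSO with a sigmoid transfer function and SVM fitness."""
        rng = np.random.default_rng(rng)
        n = X.shape[1]
        pos = rng.random((n_particles, n)) < 0.5
        vel = rng.normal(0.0, 1.0, (n_particles, n))

        def fitness(mask):
            if not mask.any():
                return 0.0  # empty subsets score zero
            return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        g = int(np.argmax(pbest_f))
        gbest, gbest_f = pbest[g].copy(), pbest_f[g]
        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, n)), rng.random((n_particles, n))
            vel = (w * vel
                   + c1 * r1 * (pbest.astype(float) - pos.astype(float))
                   + c2 * r2 * (gbest.astype(float) - pos.astype(float)))
            pos = rng.random((n_particles, n)) < 1.0 / (1.0 + np.exp(-vel))
            f = np.array([fitness(p) for p in pos])
            improved = f > pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            if pbest_f.max() > gbest_f:
                g = int(np.argmax(pbest_f))
                gbest, gbest_f = pbest[g].copy(), pbest_f[g]
        return gbest, gbest_f
    ```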

  9. 5 CFR 410.302 - Responsibilities of the head of an agency.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... training plans, expenditures, and activities. (e) The head of the agency shall establish written procedures... REGULATIONS TRAINING Establishing and Implementing Training Programs § 410.302 Responsibilities of the head of... are necessary to ensure that the selection of employees for training is made without regard to...

  10. Photoacoustic Imaging of Animals with an Annular Transducer Array

    NASA Astrophysics Data System (ADS)

    Yang, Di-Wu; Zhou, Zhi-Bin; Zeng, Lv-Ming; Zhou, Xin; Chen, Xing-Hui

    2014-07-01

    A photoacoustic system with an annular transducer array is presented for rapid, high-resolution photoacoustic tomography of animals. An eight-channel data acquisition system is applied to capture the photoacoustic signals by using multiplexing and the total time of data acquisition and transferring is within 3 s. A limited-view filtered back projection algorithm is used to reconstruct the photoacoustic images. Experiments are performed on a mouse head and a rabbit head and clear photoacoustic images are obtained. The experimental results demonstrate that this imaging system holds the potential for imaging the human brain.

  11. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
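    To make the parameter-sensitivity point concrete, here is the TV-descent half of an ASD-POCS-style iteration (the data-fidelity projection step is omitted). The step size `alpha` and the number of descent steps are exactly the kind of parameters whose selection the paper evaluates.

    ```python
    import numpy as np

    def tv_gradient(u, eps=1e-8):
        """Gradient of the (smoothed) isotropic total variation of image u."""
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # negative divergence of the normalized gradient field
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        return -div

    def tv_descent(u, n_steps=20, alpha=0.2):
        """A few normalized steepest-descent steps on the TV term, as run
        after each data-fidelity update in ASD-POCS-style algorithms."""
        for _ in range(n_steps):
            g = tv_gradient(u)
            norm = np.linalg.norm(g)
            if norm > 0:
                u = u - alpha * g / norm
        return u
    ```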

  12. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results obtained on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, the hybrid particle swarm-random forests algorithm and the hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
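    A minimal SMOTE interpolation step is sketched below for reference; CURE-SMOTE precedes this step with CURE clustering to strip noise and outliers from the minority class, a stage not reproduced here.

    ```python
    import numpy as np

    def smote(X_min, n_new, k=5, rng=None):
        """Minimal SMOTE: interpolate between minority samples and their
        k nearest minority-class neighbours to synthesize n_new samples."""
        rng = np.random.default_rng(rng)
        d = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)  # pairwise
        np.fill_diagonal(d, np.inf)                 # exclude self-matches
        nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours
        synth = np.empty((n_new, X_min.shape[1]))
        for i in range(n_new):
            a = rng.integers(len(X_min))            # random minority sample
            b = nn[a, rng.integers(k)]              # one of its neighbours
            synth[i] = X_min[a] + rng.random() * (X_min[b] - X_min[a])
        return synth
    ```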

  13. A novel artificial immune clonal selection classification and rule mining with swarm learning model

    NASA Astrophysics Data System (ADS)

    Al-Sheshtawi, Khaled A.; Abdul-Kader, Hatem M.; Elsisi, Ashraf B.

    2013-06-01

    Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the Artificial Immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of clonal selection and the speed and self-organisation merits of particle swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employed the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm required less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem that balances predictive accuracy and comprehensibility. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy-comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed algorithm CS2 with five commonly used CSAs, namely: AIRS1, AIRS2, AIRS-Parallel, CLONALG, and CSCA using eight benchmark datasets. We also compared the classification accuracy of CS2 with five other methods, namely: Naïve Bayes, SVM, MLP, CART, and RBF. The results show that the proposed algorithm is comparable to the 10 studied algorithms. As a result, the hybridisation of CSA and PSO lets each method contribute its respective merits, compensate for the other's defects, and improve both the quality and the speed of the search.

  14. Feature selection for elderly faller classification based on wearable sensors.

    PubMed

    Howcroft, Jennifer; Kofman, Jonathan; Lemaire, Edward D

    2017-05-30

    Wearable sensors can be used to derive numerous gait pattern features for elderly fall risk and faller classification; however, an appropriate feature set is required to avoid high computational costs and the inclusion of irrelevant features. The objectives of this study were to identify and evaluate smaller feature sets for faller classification from large feature sets derived from wearable accelerometer and pressure-sensing insole gait data. A convenience sample of 100 older adults (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, left and right shanks. Feature selection was performed using correlation-based feature selection (CFS), fast correlation based filter (FCBF), and Relief-F algorithms. Faller classification was performed using multi-layer perceptron neural network, naïve Bayesian, and support vector machine classifiers, with 75:25 single stratified holdout and repeated random sampling. The best performing model was a support vector machine with 78% accuracy, 26% sensitivity, 95% specificity, 0.36 F1 score, and 0.31 MCC and one posterior pelvis accelerometer input feature (left acceleration standard deviation). The second best model achieved better sensitivity (44%) and used a support vector machine with 74% accuracy, 83% specificity, 0.44 F1 score, and 0.29 MCC. This model had ten input features: maximum, mean and standard deviation posterior acceleration; maximum, mean and standard deviation anterior acceleration; mean superior acceleration; and three impulse features. The best multi-sensor model sensitivity (56%) was achieved using posterior pelvis and both shank accelerometers and a naïve Bayesian classifier. The best single-sensor model sensitivity (41%) was achieved using the posterior pelvis accelerometer and a naïve Bayesian classifier. Feature selection provided models with smaller feature sets and improved faller classification compared to faller classification without feature selection. CFS and FCBF provided the best feature subset (one posterior pelvis accelerometer feature) for faller classification. However, better sensitivity was achieved by the second best model based on a Relief-F feature subset with three pressure-sensing insole features and seven head accelerometer features. Feature selection should be considered as an important step in faller classification using wearable sensors.
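    Of the three filters compared, Relief-style weighting is the easiest to summarize in code. The sketch below implements basic binary Relief; the Relief-F used in the study extends this to k neighbours per class and multi-class targets, but the weighting idea is the same.

    ```python
    import numpy as np

    def relief(X, y, n_samples=100, rng=None):
        """Basic Relief feature weighting for a binary classification task."""
        rng = np.random.default_rng(rng)
        X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # scale to [0, 1]
        w = np.zeros(X.shape[1])
        for _ in range(n_samples):
            i = rng.integers(len(X))
            d = np.abs(X - X[i]).sum(1)
            d[i] = np.inf                                    # exclude self
            same, diff = y == y[i], y != y[i]
            hit = np.argmin(np.where(same, d, np.inf))       # nearest same-class
            miss = np.argmin(np.where(diff, d, np.inf))      # nearest other-class
            # reward features that separate classes, penalize ones that don't
            w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
        return w / n_samples
    ```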

  15. A Comparative Study of Optimization Algorithms for Engineering Synthesis.

    DTIC Science & Technology

    1983-03-01

    The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first...

  16. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  17. Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics

    PubMed Central

    Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert

    2007-01-01

    MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. Nonuniformity is quantified and unmixed using high order neighborhood statistics of intensity cooccurrences, computed within spherical windows of limited size over the entire image. The restoration is iterative and makes use of a novel stable stopping criterion that depends on the scaled entropy of the cooccurrence statistics (the Shannon entropy normalized to the effective dynamic range of the image), which is a non-monotonic function of the iterations. The algorithm restores whole head data, is robust to intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. This algorithm significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T human head data. PMID:18193095

  18. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  19. 14 CFR Appendix E to Part 125 - Airplane Flight Recorder Specifications

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... flight crew reference) 0-360° and Discrete “true” or “mag” ±2° 1 0.5° When true or magnetic heading can be selected as the primary heading reference, a discrete indicating selection must be recorded. 5... synchronization reference On-Off (Discrete) None. 1 Preferably each crew member but one discrete acceptable for all...

  20. 14 CFR Appendix M to Part 121 - Airplane Flight Recorder Specifications

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Discrete “true” or “mag” ±2° 1 0.5° When true or magnetic heading can be selected as the primary heading reference, a discrete indicating selection must be recorded. 5. Normal acceleration (vertical) 9 −3g to +6g.... Manual Radio Transmitter Keying or CVR/DFDR synchronization reference On-Off (Discrete) None 1 Preferably...

  1. Head circumference and height in autism: a study by the Collaborative Program of Excellence in Autism.

    PubMed

    Lainhart, Janet E; Bigler, Erin D; Bocian, Maureen; Coon, Hilary; Dinh, Elena; Dawson, Geraldine; Deutsch, Curtis K; Dunn, Michelle; Estes, Annette; Tager-Flusberg, Helen; Folstein, Susan; Hepburn, Susan; Hyman, Susan; McMahon, William; Minshew, Nancy; Munson, Jeff; Osann, Kathy; Ozonoff, Sally; Rodier, Patricia; Rogers, Sally; Sigman, Marian; Spence, M Anne; Stodgell, Christopher J; Volkmar, Fred

    2006-11-01

    Data from 10 sites of the NICHD/NIDCD Collaborative Programs of Excellence in Autism were combined to study the distribution of head circumference and relationship to demographic and clinical variables. Three hundred thirty-eight probands with autism-spectrum disorder (ASD) including 208 probands with autism were studied along with 147 parents, 149 siblings, and typically developing controls. ASDs were diagnosed, and head circumference and clinical variables measured in a standardized manner across all sites. All subjects with autism met ADI-R, ADOS-G, DSM-IV, and ICD-10 criteria. The results show the distribution of standardized head circumference in autism is normal in shape, and the mean, variance, and rate of macrocephaly but not microcephaly are increased. Head circumference tends to be large relative to height in autism. No site, gender, age, SES, verbal, or non-verbal IQ effects were present in the autism sample. In addition to autism itself, standardized height and average parental head circumference were the most important factors predicting head circumference in individuals with autism. Mean standardized head circumference and rates of macrocephaly were similar in probands with autism and their parents. Increased head circumference was associated with a higher (more severe) ADI-R social algorithm score. Macrocephaly is associated with delayed onset of language. Although mean head circumference and rates of macrocephaly are increased in autism, a high degree of variability is present, underscoring the complex clinical heterogeneity of the disorder. The wide distribution of head circumference in autism has major implications for genetic, neuroimaging, and other neurobiological research.

  2. Genetic Parameters And Selection Response For Yield Traits In Bread Wheat Under Irrigated And Rainfed Environments

    NASA Astrophysics Data System (ADS)

    Khalil, Iftikhar Hussain; Hidayat-ur-Rahman; Khan, Imran

    2008-01-01

    A set of 22 F5:7 experimental wheat lines along with four check cultivars (Dera-98, Fakhr-e-Sarhad, Ghaznavi-98 and Tatara) were evaluated as independent experiments under irrigated and rainfed environments using a randomized complete block design at NWFP Agricultural University, Peshawar during 2004-05. The two environments were statistically different for days to heading and spike length only. Highly significant genetic variability existed among the wheat lines (P<0.01) in the combined analysis across environments for all traits. Genotype×environment interactions were non-significant for all traits except 1000-grain weight, indicating consistent performance of the wheat genotypes across the two environments. Wheat lines and check cultivars headed 2 to 5 days earlier under the rainfed environment than under the irrigated one. Plant height, spike length, 1000-grain weight, biological and grain yields were generally reduced under the rainfed environment. Genetic variances were of greater magnitude than environmental variances for most of the traits in both environments. The heritability estimates were of higher magnitude (0.74 to 0.96) for days to heading, plant height, spike length, biological and grain yield, and medium (0.31 to 0.51) for 1000-grain weight. Selection differentials were negative for heading (-7.3 days in irrigated vs -9.4 days in rainfed) and plant height (-9.0 cm in irrigated vs -8.7 cm in rainfed), indicating the possibility of selecting wheat genotypes with early heading and short plant stature. Positive selection differentials of 1.3 vs 1.6 cm for spike length, 3.8 vs 3.4 g for 1000-grain weight, 2488.2 vs 3139.7 kg ha⁻¹ for biological yield and 691.6 vs 565.4 kg ha⁻¹ for grain yield at 20% selection intensity were observed under irrigated and rainfed environments, respectively. Expected selection responses were 7.98 vs 8.91 days for heading, 8.20 vs 9.52 cm for plant height, 1.01 vs 1.61 cm for spike length, 2.12 vs 1.15 g for 1000-grain weight, 1655.8 vs 2317.2 kg ha⁻¹ for biological yield and 691.6 vs 565.4 kg ha⁻¹ for grain yield under the two test environments, respectively. The differential heritability and selection responses for yield and related traits suggest the simultaneous evaluation and selection of wheat lines under the two environments.

  3. Identifying relevant hyperspectral bands using Boruta: a temporal analysis of water hyacinth biocontrol

    NASA Astrophysics Data System (ADS)

    Agjee, Na'eem Hoosen; Ismail, Riyad; Mutanga, Onisimo

    2016-10-01

    Water hyacinth plants (Eichhornia crassipes) are threatening freshwater ecosystems throughout Africa. The Neochetina spp. weevils are seen as an effective solution that can combat the proliferation of the invasive alien plant. We aimed to determine if multitemporal hyperspectral data could be utilized to detect the efficacy of the biocontrol agent. The random forest (RF) algorithm was used to classify variable infestation levels for 6 weeks using: (1) all the hyperspectral bands, (2) bands selected by the recursive feature elimination (RFE) algorithm, and (3) bands selected by the Boruta algorithm. Results showed that the RF model using all the bands successfully produced low-classification errors (12.50% to 32.29%) for all 6 weeks. However, the RF model using Boruta selected bands produced lower classification errors (8.33% to 15.62%) than the RF model using all the bands or bands selected by the RFE algorithm (11.25% to 21.25%) for all 6 weeks, highlighting the utility of Boruta as an all relevant band selection algorithm. All relevant bands selected by Boruta included: 352, 754, 770, 771, 775, 781, 782, 783, 786, and 789 nm. It was concluded that RF coupled with Boruta band-selection algorithm can be utilized to undertake multitemporal monitoring of variable infestation levels on water hyacinth plants.
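    The core of Boruta's "all relevant" test is easy to sketch: every band competes with a shuffled shadow copy of itself. The snippet below shows one such iteration with a random forest; the full algorithm repeats this over many iterations and applies a binomial test to decide which features are confirmed.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def boruta_step(X, y, rng=None):
        """One Boruta iteration: a feature scores a 'hit' when its random
        forest importance exceeds the best shadow-feature importance."""
        rng = np.random.default_rng(rng)
        shadows = rng.permuted(X, axis=0)           # shuffle each column
        rf = RandomForestClassifier(n_estimators=300, random_state=0)
        rf.fit(np.hstack([X, shadows]), y)
        imp = rf.feature_importances_
        real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
        return real > shadow.max()                  # hit mask for this round
    ```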

  4. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792

  5. Developing operation algorithms for vision subsystems in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in images for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows successful detection of both dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
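    A conventional HOG-plus-linear-SVM detector of the kind the abstract describes can be sketched with scikit-image and scikit-learn. Window scanning and non-maximum suppression, needed for a complete pipeline, are left out, and the HOG parameters below are typical defaults rather than the paper's settings.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    HOG_KW = dict(orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    def train_detector(pos_patches, neg_patches):
        """Fit a linear SVM on HOG descriptors of equal-sized grayscale patches."""
        X = np.array([hog(p, **HOG_KW)
                      for p in list(pos_patches) + list(neg_patches)])
        y = np.r_[np.ones(len(pos_patches)), np.zeros(len(neg_patches))]
        return LinearSVC(C=0.01).fit(X, y)

    def classify_patch(clf, patch):
        """Score one candidate window; a positive margin means 'object'."""
        return clf.decision_function([hog(patch, **HOG_KW)])[0]
    ```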

  6. Solving TSP problem with improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying

    2018-05-01

    The TSP is a typical NP-hard problem. The optimization of the vehicle routing problem (VRP) and of city pipeline layouts can be reduced to the TSP, so efficient methods for solving the TSP are very important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. Improving the selection operator of the genetic algorithm and introducing an elite retention strategy ensure the quality of the selection operation. In the mutation operation, adaptively selecting the mutation scheme improves the quality of the search and of the variation. After the chromosomes have evolved, a one-way reverse evolution operation is added, which gives offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to search for the optimal solution.
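    The sketch below shows a GA for the TSP with two of the ingredients the abstract highlights, elite retention and a reverse (segment-inversion) operation on offspring; tournament selection and order crossover are standard stand-ins for the paper's improved operators, and the adaptive mutation-selection scheme is simplified away.

    ```python
    import numpy as np

    def tour_length(tour, dist):
        """Total length of a closed tour given a distance matrix."""
        return dist[tour, np.roll(tour, -1)].sum()

    def ga_tsp(dist, pop_size=100, n_gen=500, elite=2, rng=None):
        rng = np.random.default_rng(rng)
        n = len(dist)
        pop = [rng.permutation(n) for _ in range(pop_size)]

        def tournament(fit, k=3):
            idx = rng.integers(pop_size, size=k)
            return pop[idx[np.argmin(fit[idx])]]

        def ox(p1, p2):
            """Order crossover: copy a slice of p1, fill the rest from p2."""
            i, j = sorted(rng.integers(n, size=2))
            child = -np.ones(n, dtype=int)
            child[i:j + 1] = p1[i:j + 1]
            rest = [c for c in p2 if c not in set(child[i:j + 1])]
            child[child < 0] = rest
            return child

        for _ in range(n_gen):
            fit = np.array([tour_length(t, dist) for t in pop])
            order = np.argsort(fit)
            new_pop = [pop[k].copy() for k in order[:elite]]   # elite retention
            while len(new_pop) < pop_size:
                child = ox(tournament(fit), tournament(fit))
                i, j = sorted(rng.integers(n, size=2))
                rev = child.copy()
                rev[i:j + 1] = rev[i:j + 1][::-1]              # reverse operation
                if tour_length(rev, dist) < tour_length(child, dist):
                    child = rev                                # keep only if better
                new_pop.append(child)
            pop = new_pop
        fit = np.array([tour_length(t, dist) for t in pop])
        return pop[int(np.argmin(fit))]
    ```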

  7. A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.

    PubMed

    Baur, Brittany; Bozdag, Serdar

    2016-01-01

    DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding the mechanism of action of DNA methylation is important for downstream analysis. In the Illumina Infinium HumanMethylation 450K array, there are tens of probes associated with each gene. Given methylation intensities of all these probes, it is necessary to compute which of these probes are most representative of the gene centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene centric DNA methylation using probe level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms and ReliefF. We evaluated all methods based on the predictive power of selected probes on their mRNA expression levels and found that a K-Nearest Neighbors classification using the sequential forward selection algorithm performed better than other algorithms based on all metrics. We also observed that transcriptional activities of certain genes were more sensitive to DNA methylation changes than transcriptional activities of other genes. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes.
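    A minimal version of the wrapper the authors describe, sequential forward selection scored by a K-nearest-neighbours classifier, might look as follows; the stopping rule and classifier settings here are assumptions, not the paper's exact configuration.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def sequential_forward_selection(X, y, max_features=10, cv=5):
        """Greedily add the probe/feature that most improves CV accuracy."""
        selected, best_score = [], -np.inf
        remaining = list(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            scores = [(cross_val_score(KNeighborsClassifier(),
                                       X[:, selected + [j]], y, cv=cv).mean(), j)
                      for j in remaining]
            score, j = max(scores)
            if score <= best_score:        # stop when no candidate improves
                break
            selected.append(j)
            remaining.remove(j)
            best_score = score
        return selected, best_score
    ```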

  8. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques.

    PubMed

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

    The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in a MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.

  9. A polarized low-coherence interferometry demodulation algorithm by recovering the absolute phase of a selected monochromatic frequency.

    PubMed

    Jiang, Junfeng; Wang, Shaohua; Liu, Tiegen; Liu, Kun; Yin, Jinde; Meng, Xiange; Zhang, Yimo; Wang, Shuang; Qin, Zunqi; Wu, Fan; Li, Dingjie

    2012-07-30

    A demodulation algorithm based on absolute phase recovery of a selected monochromatic frequency is proposed for an optical fiber Fabry-Perot pressure sensing system. The algorithm uses the Fourier transform to get the relative phase, and the intercept of the unwrapped phase-frequency linear fit curve to identify its interference order, which are then used to recover the absolute phase. A simplified mathematical model of the polarized low-coherence interference fringes was established to illustrate the principle of the proposed algorithm. Phase unwrapping and the selection of the monochromatic frequency are discussed in detail. A pressure measurement experiment was carried out to verify the effectiveness of the proposed algorithm. Results showed that the demodulation precision of our algorithm reaches 0.15 kPa, a 13-fold improvement over the phase-slope-based algorithm.
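    The demodulation idea can be sketched in a few lines: take the spectral phase of the fringes, unwrap it over the usable band, fit a line against frequency, and use the intercept to identify the 2π interference order. The band-selection rule and units below are illustrative, not the paper's exact procedure.

    ```python
    import numpy as np

    def absolute_phase(fringes, f_sel, dx=1.0):
        """Recover the absolute phase at one selected frequency f_sel."""
        spec = np.fft.rfft(fringes)
        freqs = np.fft.rfftfreq(len(fringes), d=dx)
        band = np.abs(spec) > 0.1 * np.abs(spec).max()   # usable signal band
        phi = np.unwrap(np.angle(spec[band]))            # relative phase
        slope, intercept = np.polyfit(freqs[band], phi, 1)
        order = np.round(intercept / (2 * np.pi))        # interference order
        # absolute phase = fitted relative phase corrected by the 2*pi order
        return slope * f_sel + intercept - 2 * np.pi * order
    ```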

  10. The design and testing of a novel mechanomyogram-driven switch controlled by small eyebrow movements

    PubMed Central

    2010-01-01

    Background: Individuals with severe physical disabilities and minimal motor behaviour may be unable to use conventional mechanical switches for access. These persons may benefit from access technologies that harness the volitional activity of muscles. In this study, we describe the design and demonstrate the performance of a binary switch controlled by mechanomyogram (MMG) signals recorded from the frontalis muscle during eyebrow movements. Methods: Muscle contractions, detected in real-time with a continuous wavelet transform algorithm, were used to control a binary switch for computer access. The automatic selection of scale-specific thresholds reduced the effect of artefact, such as eye blinks and head movement, on the performance of the switch. Switch performance was estimated by cued response-tests performed by eleven participants (one with severe physical disabilities). Results: The average sensitivity and specificity of the switch was 99.7 ± 0.4% and 99.9 ± 0.1%, respectively. The algorithm performance was robust against typical participant movement. Conclusions: The results suggest that the frontalis muscle is a suitable site for controlling the MMG-driven switch. The high accuracies combined with the minimal requisite effort and training show that MMG is a promising binary control signal. Further investigation of the potential benefits of MMG-control for the target population is warranted. PMID:20492680

  11. The design and testing of a novel mechanomyogram-driven switch controlled by small eyebrow movements.

    PubMed

    Alves, Natasha; Chau, Tom

    2010-05-21

    Individuals with severe physical disabilities and minimal motor behaviour may be unable to use conventional mechanical switches for access. These persons may benefit from access technologies that harness the volitional activity of muscles. In this study, we describe the design and demonstrate the performance of a binary switch controlled by mechanomyogram (MMG) signals recorded from the frontalis muscle during eyebrow movements. Muscle contractions, detected in real-time with a continuous wavelet transform algorithm, were used to control a binary switch for computer access. The automatic selection of scale-specific thresholds reduced the effect of artefact, such as eye blinks and head movement, on the performance of the switch. Switch performance was estimated by cued response-tests performed by eleven participants (one with severe physical disabilities). The average sensitivity and specificity of the switch was 99.7 +/- 0.4% and 99.9 +/- 0.1%, respectively. The algorithm performance was robust against typical participant movement. The results suggest that the frontalis muscle is a suitable site for controlling the MMG-driven switch. The high accuracies combined with the minimal requisite effort and training show that MMG is a promising binary control signal. Further investigation of the potential benefits of MMG-control for the target population is warranted.
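    A toy version of the detection front end is sketched below: the MMG signal is correlated with Ricker wavelets at several scales, and a contraction is flagged only when every scale exceeds its own robust threshold, which is what suppresses artefact concentrated at a single scale. The wavelet choice and thresholding rule are assumptions, not the authors' exact design.

    ```python
    import numpy as np

    def ricker(width, n_points):
        """Ricker (Mexican hat) wavelet sampled on n_points."""
        t = np.arange(n_points) - (n_points - 1) / 2.0
        a = t / width
        return (1 - a**2) * np.exp(-0.5 * a**2)

    def detect_contractions(mmg, widths=(8, 16, 32), k=4.0):
        """Flag samples whose wavelet response clears a scale-specific
        threshold (k times a MAD noise estimate) at every scale."""
        hits = np.ones(len(mmg), dtype=bool)
        for w in widths:
            resp = np.convolve(mmg, ricker(w, 10 * w), mode="same")
            noise = 1.4826 * np.median(np.abs(resp - np.median(resp)))  # MAD
            hits &= np.abs(resp) > k * noise
        return hits
    ```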

  12. An Eccentricity Based Data Routing Protocol with Uniform Node Distribution in 3D WSN.

    PubMed

    Hosen, A S M Sanwar; Cho, Gi Hwan; Ra, In-Ho

    2017-09-16

    Due to nonuniform node distribution, the energy consumption of nodes is imbalanced in clustering-based wireless sensor networks (WSNs). The impact can be greater when nodes are deployed in a three-dimensional (3D) environment. In this regard, we propose the eccentricity based data routing (EDR) protocol in a 3D WSN with uniform node distribution. It includes network partitions called 3D subspaces/clusters with equal numbers of member nodes, an energy-efficient routing centroid (RC) node election, and a data routing algorithm. The RC node election is conducted in a quasi-static manner for a certain period, unlike the periodic cluster-head election of typical clustering-based routing. This reduces the energy consumption of nodes not only during the election phase, but also during intra-cluster communication. At the same time, the routing algorithm selects a forwarding node in such a way that it balances the energy consumption among RC nodes and reduces the number of hops towards the sink. The simulation results validate the performance supremacy of the EDR protocol compared to existing protocols in terms of various metrics, such as the steady-state period and the network lifetime in particular. Meanwhile, the results show the EDR is more robust under uniform node distribution than under nonuniform distribution.

  13. An Eccentricity Based Data Routing Protocol with Uniform Node Distribution in 3D WSN

    PubMed Central

    Hosen, A. S. M. Sanwar; Cho, Gi Hwan; Ra, In-Ho

    2017-01-01

    Due to nonuniform node distribution, the energy consumption of nodes is imbalanced in clustering-based wireless sensor networks (WSNs). The impact can be greater when nodes are deployed in a three-dimensional (3D) environment. In this regard, we propose the eccentricity based data routing (EDR) protocol in a 3D WSN with uniform node distribution. It includes network partitions called 3D subspaces/clusters with equal numbers of member nodes, an energy-efficient routing centroid (RC) node election, and a data routing algorithm. The RC node election is conducted in a quasi-static manner for a certain period, unlike the periodic cluster-head election of typical clustering-based routing. This reduces the energy consumption of nodes not only during the election phase, but also during intra-cluster communication. At the same time, the routing algorithm selects a forwarding node in such a way that it balances the energy consumption among RC nodes and reduces the number of hops towards the sink. The simulation results validate the performance supremacy of the EDR protocol compared to existing protocols in terms of various metrics, such as the steady-state period and the network lifetime in particular. Meanwhile, the results show the EDR is more robust under uniform node distribution than under nonuniform distribution. PMID:28926958
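    As an illustration of the election idea, the sketch below scores each cluster member by residual energy and closeness to the cluster centroid and elects the best-scoring node as the routing centroid; the EDR paper's exact criterion and its quasi-static re-election schedule are not reproduced.

    ```python
    import numpy as np

    def elect_rc(positions, energies, w=0.5):
        """Elect one RC node per cluster: favour nodes that are close to the
        cluster centroid and still hold high residual energy.  `positions`
        is (n, 3) for a 3D deployment; the scoring rule is illustrative."""
        centroid = positions.mean(axis=0)
        d_cent = np.linalg.norm(positions - centroid, axis=1)
        score = (w * (energies / energies.max())
                 - (1 - w) * (d_cent / d_cent.max()))
        return int(np.argmax(score))
    ```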

  14. A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System.

    PubMed

    Li, Xin; Wang, Jian; Liu, Chunyan

    2015-09-25

    This paper proposes two schemes for indoor positioning by fusing Bluetooth beacons and a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. For the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. To account for pedestrians' different walking patterns, such as walking or running, this paper makes a comparative analysis of multiple step length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, which is based on a Kalman filter with map geometry information. The corrected heading can inhibit the accumulation of positioning error and improve the positioning accuracy of PDR. Moreover, this paper has implemented two positioning approaches integrating Bluetooth and PDR. One is the PDR-based positioning method based on map matching and position correction through Bluetooth; this method requires little computation and has low maintenance costs. The other is a fusion calculation method based on the pedestrian's moving status (direct movement or making a turn) to determine adaptively the noise parameters in an Extended Kalman Filter (EKF) system. This method has worked very well in eliminating various phenomena, including the "go and back" phenomenon caused by the instability of the Bluetooth-based positioning system and the "cross-wall" phenomenon due to the accumulative errors caused by the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve a 2-meter precision.
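    The multi-threshold step detector can be illustrated as follows: a step is accepted only when a peak above one threshold is confirmed by a following valley below a second threshold, no sooner than a minimum interval after the previous step. All threshold values here are placeholders, not the paper's calibrated parameters.

    ```python
    import numpy as np

    def detect_steps(acc_mag, fs, peak_th=11.0, valley_th=8.5, min_dt=0.3):
        """Count steps in an accelerometer-magnitude signal (m/s^2) sampled
        at fs Hz using peak, valley, and timing thresholds together."""
        steps, last_t, armed = [], -np.inf, False
        for i in range(1, len(acc_mag) - 1):
            t = i / fs
            is_peak = (acc_mag[i] >= acc_mag[i - 1]
                       and acc_mag[i] > acc_mag[i + 1])
            if is_peak and acc_mag[i] > peak_th and t - last_t >= min_dt:
                armed, last_t = True, t      # candidate step, await valley
            elif armed and acc_mag[i] < valley_th:
                steps.append(last_t)         # valley confirms the step
                armed = False
        return steps
    ```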

  15. A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Li, Xin; Wang, Jian; Liu, Chunyan

    2015-01-01

    This paper proposes two schemes for indoor positioning by fusing Bluetooth beacons and a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. As to the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. According to pedestrians’ different walking patterns such as walking or running, this paper makes a comparative analysis of multiple step length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, which is based on a Kalman filter with map geometry information. The corrected heading can inhibit the positioning error accumulation and improve the positioning accuracy of PDR. Moreover, this paper has implemented two positioning approaches integrated with Bluetooth and PDR. One is the PDR-based positioning method based on map matching and position correction through Bluetooth. There will not be too much calculation work or too high maintenance costs using this method. The other method is a fusion calculation method based on the pedestrians’ moving status (direct movement or making a turn) to determine adaptively the noise parameters in an Extended Kalman Filter (EKF) system. This method has worked very well in the elimination of various phenomena, including the “go and back” phenomenon caused by the instability of the Bluetooth-based positioning system and the “cross-wall” phenomenon due to the accumulative errors caused by the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building in the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve a 2-meter precision. PMID:26404277

  16. 45 CFR 1305.4 - Age of children and family income eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Age of children and family income eligibility... FAMILIES, HEAD START PROGRAM ELIGIBILITY, RECRUITMENT, SELECTION, ENROLLMENT AND ATTENDANCE IN HEAD START § 1305.4 Age of children and family income eligibility. (a) To be eligible for Head Start services, a...

  17. Covariation between human pelvis shape, stature, and head size alleviates the obstetric dilemma

    PubMed Central

    Fischer, Barbara; Mitteroecker, Philipp

    2015-01-01

    Compared with other primates, childbirth is remarkably difficult in humans because the head of a human neonate is large relative to the birth-relevant dimensions of the maternal pelvis. It seems puzzling that females have not evolved wider pelvises despite the high maternal mortality and morbidity risk connected to childbirth. Despite this seeming lack of change in average pelvic morphology, we show that humans have evolved a complex link between pelvis shape, stature, and head circumference that was not recognized before. The identified covariance patterns contribute to ameliorate the “obstetric dilemma.” Females with a large head, who are likely to give birth to neonates with a large head, possess birth canals that are shaped to better accommodate large-headed neonates. Short females with an increased risk of cephalopelvic mismatch possess a rounder inlet, which is beneficial for obstetrics. We suggest that these covariances have evolved by the strong correlational selection resulting from childbirth. Although males are not subject to obstetric selection, they also show part of these association patterns, indicating a genetic–developmental origin of integration. PMID:25902498

  18. Covariation between human pelvis shape, stature, and head size alleviates the obstetric dilemma.

    PubMed

    Fischer, Barbara; Mitteroecker, Philipp

    2015-05-05

    Compared with other primates, childbirth is remarkably difficult in humans because the head of a human neonate is large relative to the birth-relevant dimensions of the maternal pelvis. It seems puzzling that females have not evolved wider pelvises despite the high maternal mortality and morbidity risk connected to childbirth. Despite this seeming lack of change in average pelvic morphology, we show that humans have evolved a complex link between pelvis shape, stature, and head circumference that was not recognized before. The identified covariance patterns contribute to ameliorate the "obstetric dilemma." Females with a large head, who are likely to give birth to neonates with a large head, possess birth canals that are shaped to better accommodate large-headed neonates. Short females with an increased risk of cephalopelvic mismatch possess a rounder inlet, which is beneficial for obstetrics. We suggest that these covariances have evolved by the strong correlational selection resulting from childbirth. Although males are not subject to obstetric selection, they also show part of these association patterns, indicating a genetic-developmental origin of integration.

  19. Development of Head Injury Assessment Reference Values Based on NASA Injury Modeling

    NASA Technical Reports Server (NTRS)

    Somers, Jeffrey T.; Melvin, John W.; Tabiei, Ala; Lawrence, Charles; Ploutz-Snyder, Robert; Granderson, Bradley; Feiveson, Alan; Gernhardt, Michael; Patalak, John

    2011-01-01

    NASA is developing a new capsule-based, crewed vehicle that will land in the ocean, and the space agency desires to reduce the risk of injury from impact during these landings. Because landing impact occurs for each flight and the crew might need to perform egress tasks, current injury assessment reference values (IARV) were deemed insufficient. Because NASCAR occupant restraint systems are more effective than the systems used to determine the current IARVs and are similar to NASA's proposed restraint system, an analysis of NASCAR impacts was performed to develop new IARVs that may be more relevant to NASA's context of vehicle landing operations. Head IARVs associated with race car impacts were investigated by completing a detailed analysis of all of the 2002-2008 NASCAR impact data. Specific inclusion and exclusion criteria were used to select 4071 impacts from the 4015 recorder files provided (each file could contain multiple impact events). Of the 4071 accepted impacts, 274 were selected for numerical simulation using a custom NASCAR restraint system and Humanetics Hybrid-III 50th percentile numerical dummy model in LS-DYNA. Injury had occurred in 32 of the 274 selected impacts, and 27 of those injuries involved the head. A majority of the head injuries were mild concussions with or without brief loss of consciousness. The 242 non-injury impacts were randomly selected and representative of the range of crash dynamics present in the total set of 4071 impacts. Head dynamics data (head translational acceleration, translational change in velocity, rotational acceleration, rotational velocity, HIC-15, HIC-36, and the Head 3ms clip) were filtered according to SAE J211 specifications and then transformed to a log scale. The probability of head injury was estimated using a separate logistic regression analysis for each log-transformed predictor candidate. Using the log transformation constrains the estimated probability of injury to become negligible as IARVs approach zero. For the parameters head translational acceleration, head translational velocity change, head rotational acceleration, HIC-15, and HIC-36, conservative values (in the lower 95% confidence interval) that gave rise to a 5% risk of any injury occurring were estimated as 40.0 G, 7.9 m/s, 2200 rad/s², 98.4, and 77.4 respectively. Because NASA is interested in the consequence of any particular injury on the ability of the crew to perform egress tasks, the head injuries that occurred in the NASCAR dataset were classified according to a NASA-developed scale (Classes I-III) for operationally relevant injuries, which classifies injuries on the basis of their operational significance. Additional analysis of the data was performed to determine the probability of each injury class occurring, and this was estimated using an ordered probit model. For head translational acceleration, head translational velocity change, head rotational acceleration, head rotational velocity, HIC-36, and head 3ms clip, conservative values of IARVs that produced a 5% risk of Class II injury were estimated as 50.7 G, 9.5 m/s, 2863 rad/s², 11.0 rad/s, 30.3, and 46.4 G respectively. The results indicate that head IARVs developed from the NASCAR dataset may be useful to protect crews during landing impact.
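    The logistic-regression inversion used to derive the IARVs can be sketched as below: fit injury probability against the log-transformed metric, then solve for the metric value at the requested risk. This returns the point estimate only; the paper's conservative values additionally take the lower 95% confidence bound.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iarv_at_risk(x, injured, risk=0.05):
        """Invert a logistic fit on log(x) to find the metric value whose
        predicted injury probability equals `risk`.  `injured` is 0/1."""
        lx = np.log(x).reshape(-1, 1)
        m = LogisticRegression(C=1e6).fit(lx, injured)   # near-unpenalized fit
        b0, b1 = m.intercept_[0], m.coef_[0, 0]
        logit = np.log(risk / (1 - risk))
        return np.exp((logit - b0) / b1)
    ```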

  20. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of greenhouse gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of the GHG emissions of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with those of other existing algorithms in the literature, showing that the proposed algorithm yields better results.
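    A compact sketch of the two-stage idea follows: a positive-selection pass discards candidates above a cost threshold to shrink the search space, then clonal selection clones and hypermutates the best survivors. The cloning and mutation schedules here are illustrative, not the paper's.

    ```python
    import numpy as np

    def psbcsp(cost, lo, hi, dim, pop=30, n_gen=200, threshold=None, rng=None):
        """Minimize `cost` over [lo, hi]^dim with positive selection followed
        by clonal selection."""
        rng = np.random.default_rng(rng)
        X = rng.uniform(lo, hi, (pop, dim))
        f = np.apply_along_axis(cost, 1, X)
        if threshold is not None:                 # stage 1: positive selection
            keep = f <= threshold
            if keep.any():
                X, f = X[keep], f[keep]
        for _ in range(n_gen):                    # stage 2: clonal selection
            order = np.argsort(f)
            X, f = X[order], f[order]
            clones, cf = [], []
            for rank, x in enumerate(X[:10]):     # clone the best candidates
                for _ in range(10 - rank):        # more clones for better ones
                    scale = 0.1 * (hi - lo) * (rank + 1) / 10
                    c = np.clip(x + rng.normal(0, scale, dim), lo, hi)
                    clones.append(c)
                    cf.append(cost(c))
            X = np.vstack([X, np.array(clones)])
            f = np.concatenate([f, cf])
            order = np.argsort(f)[:pop]           # survivor selection
            X, f = X[order], f[order]
        return X[0], f[0]
    ```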

  1. SU-E-J-219: A Dixon Based Pseudo-CT Generation Method for MR-Only Radiotherapy Treatment Planning of the Pelvis and Head and Neck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maspero, M.; Meijer, G.J.; Lagendijk, J.J.W.

    2015-06-15

    Purpose: To develop an image processing method for MRI-based generation of electron density maps, known as pseudo-CT (pCT), without usage of model- or atlas-based segmentation, and to evaluate the method in the pelvic and head-neck region against CT. Methods: CT and MRI scans were obtained from the pelvic region of four patients in supine position using a flat table top only for CT. Stratified CT maps were generated by classifying each voxel based on HU ranges into one of four classes: air, adipose tissue, soft tissue or bone. A hierarchical region-selective algorithm, based on automatic thresholding and clustering, was used to classify tissues from MR Dixon reconstructed fat, In-Phase (IP) and Opposed-Phase (OP) images. First, a body mask was obtained by thresholding the IP image. Subsequently, an automatic threshold on the Dixon fat image differentiated soft and adipose tissue. K-means clustering on IP and OP images resulted in a mask from which, via a connected neighborhood analysis, the user selected the components corresponding to bone structures. The pCT was estimated through assignment of bulk HU values to the tissue classes. Bone-only Digital Reconstructed Radiographs (DRR) were generated as well. The pCT images were rigidly registered to the stratified CT to allow a volumetric and voxelwise comparison. Moreover, pCTs were also calculated within the head-neck region in two volunteers using the same pipeline. Results: The volumetric comparison resulted in differences <1% for each tissue class. A voxelwise comparison showed a good classification, ranging from 64% to 98%. The primary misclassified classes were adipose/soft tissue and bone/soft tissue. As the patients were imaged on different table tops, part of the misclassification error can be explained by misregistration. Conclusion: The proposed approach does not rely on an anatomy model, providing the flexibility to successfully generate the pCT in two different body sites. This research is funded by the ZonMw IMDI Programme, project name: “RASOR sharp: MRI based radiotherapy planning using a single MRI sequence”, project number: 10-104003010.
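    The hierarchical, region-selective classification can be sketched as below with thresholding plus k-means. The body and fat thresholds, the bulk HU values, and the automatic choice of the low-intensity cluster as bone (a step done interactively in the paper) are all assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    HU = {"air": -1000, "adipose": -90, "soft": 40, "bone": 700}  # bulk values

    def pseudo_ct(ip, op, fat, body_th, fat_th):
        """Assign bulk HU per tissue class from Dixon IP/OP/fat images."""
        pct = np.full(ip.shape, HU["air"], dtype=float)
        body = ip > body_th                       # body mask from IP image
        pct[body & (fat > fat_th)] = HU["adipose"]
        pct[body & (fat <= fat_th)] = HU["soft"]
        feats = np.c_[ip[body], op[body]]         # cluster IP/OP intensities
        labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
        bone_cluster = np.argmin([feats[labels == k, 0].mean()
                                  for k in range(3)])   # low-IP cluster ~ bone
        bone = np.zeros(ip.shape, dtype=bool)
        bone[body] = labels == bone_cluster
        pct[bone] = HU["bone"]
        return pct
    ```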

  2. Autotasked Performance in the NAS Workload: A Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Carter, R. L.; Stockdale, I. E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    A statistical analysis of the workload performance of a production quality FORTRAN code for five different Cray Y-MP hardware and system software configurations is performed. The analysis was based on an experimental procedure that was designed to minimize correlations between the number of requested CPUs and the time of day the runs were initiated. Observed autotasking overheads were significantly larger for the set of jobs that requested the maximum number of CPUs. Speedups for UNICOS 6 releases show consistent wall clock speedups in the workload of around 2, which is quite good. The observed speedups were very similar for the set of jobs that requested 8 CPUs and the set that requested 4 CPUs. The original NAS algorithm for determining charges to the user discourages autotasking in the workload. A new charging algorithm to be applied to jobs run in the NQS multitasking queues also discourages NAS users from using autotasking. The new algorithm favors jobs requesting 8 CPUs over those that request less, although the jobs requesting 8 CPUs experienced significantly higher overhead and presumably degraded system throughput. A charging algorithm is presented that has the following desirable characteristics when applied to the data: higher overhead jobs requesting 8 CPUs are penalized when compared to moderate overhead jobs requesting 4 CPUs, thereby providing a charging incentive to NAS users to use autotasking in a manner that provides them with significantly improved turnaround while also maintaining system throughput.

  3. Restoration of MRI data for intensity non-uniformities using local high order intensity statistics

    PubMed Central

    Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert

    2008-01-01

    MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to non-biological intensity non-uniformities across the image. They can complicate further image analysis such as registration and tissue segmentation. Existing methods for intensity uniformity restoration have been optimized for 1.5 T, but they are less effective for 3.0 T MRI, and not at all satisfactory for higher fields. Also, many of the existing restoration algorithms require a brain template or use a prior atlas, which can restrict their practicalities. In this study an effective intensity uniformity restoration algorithm has been developed based on non-parametric statistics of high order local intensity co-occurrences. These statistics are restored with a non-stationary Wiener filter. The algorithm also assumes a smooth non-uniformity and is stable. It does not require a prior atlas and is robust to variations in anatomy. In geriatric brain imaging it is robust to variations such as enlarged ventricles and low contrast to noise ratio. The co-occurrence statistics improve robustness to whole head images with pronounced non-uniformities present in high field acquisitions. Its significantly improved performance and lower time requirements have been demonstrated by comparing it to the very commonly used N3 algorithm on BrainWeb MR simulator images as well as on real 4 T human head images. PMID:18621568

  4. A Simulation-Optimization Model for the Management of Seawater Intrusion

    NASA Astrophysics Data System (ADS)

    Stanko, Z.; Nishikawa, T.

    2012-12-01

    Seawater intrusion is a common problem in coastal aquifers where excessive groundwater pumping can lead to chloride contamination of a freshwater resource. Simulation-optimization techniques have been developed to determine optimal management strategies while mitigating seawater intrusion. The simulation models are often density-independent groundwater-flow models that may assume a sharp interface and/or use equivalent freshwater heads. The optimization methods are often linear-programming (LP) based techniques that require simplifications of the real-world system. However, seawater intrusion is a highly nonlinear, density-dependent flow and transport problem, which requires the use of nonlinear-programming (NLP) or global-optimization (GO) techniques. NLP approaches are difficult because of the need for gradient information; therefore, we have chosen a GO technique for this study. Specifically, we have coupled a multi-objective genetic algorithm (GA) with a density-dependent groundwater-flow and transport model to simulate and identify strategies that optimally manage seawater intrusion. GA is a heuristic approach, often chosen when seeking optimal solutions to highly complex and nonlinear problems where LP or NLP methods cannot be applied. The GA utilized in this study is the Epsilon-Nondominated Sorted Genetic Algorithm II (ɛ-NSGAII), which can approximate a Pareto-optimal front between competing objectives. This algorithm has several key features: real and/or binary variable capabilities; an efficient sorting scheme; preservation and diversity of good solutions; dynamic population sizing; constraint handling; parallelizable implementation; and user-controlled precision for each objective. The simulation model is SEAWAT, the USGS model that couples MODFLOW with MT3DMS for variable-density flow and transport. ɛ-NSGAII and SEAWAT were efficiently linked together through a C-Fortran interface. The simulation-optimization model was first tested by using a published density-independent flow model test case that was originally solved using a sequential LP method with the USGS's Ground-Water Management Process (GWM). For the problem formulation, the objective is to maximize net groundwater extraction, subject to head and head-gradient constraints. The decision variables are pumping rates at fixed wells and the system's state is represented with freshwater hydraulic head. The results of the proposed algorithm were similar to the published results (within 1%); discrepancies may be attributed to differences in the simulators and inherent differences between LP and GA. The GWM test case was then extended to a density-dependent flow and transport version. As formulated, the optimization problem is infeasible because of the density effects on hydraulic head. Therefore, the sum of the squared constraint violation (SSC) was used as a second objective. The result is a Pareto curve showing optimal pumping rates versus the SSC. Analysis of this curve indicates that a net-extraction rate similar to the test case can be obtained with a minor violation of the vertical head-gradient constraints. This study shows that a coupled ɛ-NSGAII/SEAWAT model can be used for the management of groundwater seawater intrusion. In the future, the proposed methodology will be applied to a real-world seawater intrusion and resource management problem for Santa Barbara, CA.

  5. Identification and Validation of Reference Genes for RT-qPCR Analysis in Non-Heading Chinese Cabbage Flowers

    PubMed Central

    Wang, Cheng; Cui, Hong-Mi; Huang, Tian-Hong; Liu, Tong-Kun; Hou, Xi-Lin; Li, Ying

    2016-01-01

    Non-heading Chinese cabbage (Brassica rapa ssp. chinensis Makino) is an important vegetable member of the Brassica rapa crops. It exhibits a typical sporophytic self-incompatibility (SI) system and is an ideal model plant to explore the mechanism of SI. Gene expression research is frequently used to unravel complex genetic mechanisms, and in such studies appropriate reference gene selection is vital. Validation of reference genes has been conducted neither in Brassica rapa flowers nor for the SI trait. In this study, 13 candidate reference genes were selected and examined systematically in 96 non-heading Chinese cabbage flower samples that represent four strategic groups in compatible and self-incompatible lines of non-heading Chinese cabbage. Two RT-qPCR analysis software packages, geNorm and NormFinder, were used to evaluate the expression stability of these genes systematically. Results revealed that best-ranked reference genes should be selected according to specific sample subsets. DNAJ, UKN1, and PP2A were identified as the most stable reference genes among all samples. Moreover, our research further revealed that the widely used reference genes, CYP and ACP, were the least suitable reference genes in most non-heading Chinese cabbage flower sample sets. To further validate the suitability of the reference genes identified in this study, the expression levels of the SRK and Exo70A1 genes, which play important roles in regulating the interaction between pollen and stigma, were studied. Our study presents the first systematic study of reference gene selection for SI research and provides guidelines to obtain more accurate RT-qPCR results in non-heading Chinese cabbage. PMID:27375663
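
    For intuition, geNorm ranks candidate references by an expression-stability measure M: the average standard deviation of pairwise log-ratios with the other candidates. The following Python sketch computes an M-like value under that standard definition; the toy expression values are invented, and the simplification (no iterative exclusion of the worst gene) is ours.

    ```python
    import numpy as np

    def genorm_stability(expr):
        """geNorm-style stability measure M per candidate reference gene.

        expr : (n_samples, n_genes) relative expression quantities.
        Lower M means more stable expression across samples.
        """
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        M = np.zeros(n_genes)
        for j in range(n_genes):
            sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                   for k in range(n_genes) if k != j]
            M[j] = np.mean(sds)
        return M

    # Hypothetical data: 4 samples x 3 candidate genes
    expr = np.array([[1.0, 2.1, 0.9],
                     [1.1, 2.0, 1.5],
                     [0.9, 1.9, 0.7],
                     [1.0, 2.2, 1.2]])
    print(genorm_stability(expr))   # the gene with the lowest M ranks best
    ```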

  6. 3D digital headform models of Australian cyclists.

    PubMed

    Ellena, Thierry; Skals, Sebastian; Subic, Aleksandar; Mustafa, Helmy; Pang, Toh Yen

    2017-03-01

    Traditional 1D anthropometric data have been the primary source of information used by ergonomists for the dimensioning of head and facial gear. Although these data are simple to use and understand, they only provide univariate measures of key dimensions. 3D anthropometric data, however, describe the complete shape characteristics of the head surface, but are complicated to interpret due to the abundance of information they contain. Consequently, current headform standards based on 1D measurements may not adequately represent the actual head shape variations of the intended user groups. The purpose of this study was to introduce a set of new digital headform models representative of the adult cyclists' community in Australia. Four models were generated based on an Australian 3D anthropometric database of head shapes and a modified hierarchical clustering algorithm. Considerable shape differences were identified between our models and the current headforms from the Australian standard. We conclude that the design of head and facial gear based on current standards might not be favorable for optimal fitting results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Head Start: Undercover Testing Finds Fraud and Abuse at Selected Head Start Centers. Testimony before the Committee on Education and Labor, House of Representatives. GAO-10-733T

    ERIC Educational Resources Information Center

    Kutz, Gregory D.

    2010-01-01

    The Head Start program, overseen by the Department of Health and Human Services and administered by the Office of Head Start, provides child development services primarily to low-income families and their children. Federal law allows up to 10 percent of enrolled families to have incomes above 130 percent of the poverty line--GAO (Government…

  8. V2.1.4 L2AS Detailed Release Description September 27, 2001

    Atmospheric Science Data Center

    2013-03-14

    ... 27, 2001 Algorithm Changes Change method of selecting radiance pixels to use in aerosol retrieval over ... het. surface retrieval algorithm over areas of 100% dark water. Modify algorithm for selecting a default aerosol model to use in ...

  9. Preliminary results of an in-beam PET prototype for proton therapy

    NASA Astrophysics Data System (ADS)

    Attanasi, F.; Belcari, N.; Camarda, M.; Cirrone, G. A. P.; Cuttone, G.; Del Guerra, A.; Di Rosa, F.; Lanconelli, N.; Rosso, V.; Russo, G.; Vecchio, S.

    2008-06-01

    Proton therapy can overcome the limitations of conventional radiotherapy due to the more selective energy deposition in depth and to the increased biological effectiveness. Verification of the delivered dose is desirable, but the complete stopping of the protons in the patient prevents the application of the electronic portal imaging methods that are used in conventional radiotherapy. During proton therapy, β+ emitters like 11C, 15O and 10C are generated in irradiated tissues by nuclear reactions. The measurement of the spatial distribution of this activity, immediately after patient irradiation, can lead to information on the effective delivered dose. First results of a feasibility study of an in-beam PET for proton therapy are reported. The prototype is based on two planar heads with an active area of about 5×5 cm2. Each head is made up of a position-sensitive photomultiplier coupled to a square matrix of LYSO scintillating crystals of the same size (2×2×18 mm3 pixel dimensions). Four signals from each head are acquired through a dedicated electronic board that performs signal amplification and digitization. A 3D reconstruction of the activity distribution is calculated using an expectation maximization algorithm. To characterize the PET prototype, the detection efficiency and the spatial resolution were measured using a point-like radioactive source. The validation of the prototype was performed using 62 MeV protons at the CATANA beam line of INFN LNS and PMMA phantoms. Using the full-energy proton beam and various range shifters, a good correlation between the position of the activity distal edge and the thickness of the beam range shifter was found along the axial direction.
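
    The expectation maximization reconstruction referred to here is, in its simplest binned form, the classic MLEM update. A minimal numpy sketch, assuming a precomputed system matrix A (not described in the abstract), is:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Maximum-likelihood EM reconstruction of an activity distribution.

        A : (n_detector_pairs, n_voxels) system matrix (probability that a
            decay in voxel j is detected in detector pair i).
        y : (n_detector_pairs,) measured coincidence counts.
        Returns the estimated activity per voxel.
        """
        x = np.ones(A.shape[1])              # uniform initial activity
        sens = A.sum(axis=0)                 # per-voxel sensitivity
        for _ in range(n_iter):
            proj = A @ x                     # forward projection
            ratio = np.where(proj > 0, y / proj, 0.0)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```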

  10. Early processing variations in selective attention to the color and direction of moving stimuli during 30 days head-down bed rest

    NASA Astrophysics Data System (ADS)

    Wang, Lin-Jie; He, Si-Yang; Niu, Dong-Bin; Guo, Jian-Ping; Xu, Yun-Long; Wang, De-Sheng; Cao, Yi; Zhao, Qi; Tan, Cheng; Li, Zhi-Li; Tang, Guo-Hua; Li, Yin-Hui; Bai, Yan-Qiang

    2013-11-01

    Dynamic variations in early selective attention to the color and direction of moving stimuli were explored during a 30-day period of head-down bed rest. Event-related potentials (ERPs) were recorded at the F5, F6, P5 and P6 scalp locations in seven male subjects who attended to pairs of bicolored light-emitting diodes that flashed sequentially to produce a perception of movement. Subjects were required to attend selectively to a critical feature of the moving target, e.g., color or direction. The tasks included: a no-response task, a color selective response task, a moving-direction selective response task, and a combined color-direction selective response task. Subjects were asked to perform these four tasks on: the 3rd day before bed rest; the 3rd, 15th and 30th days during bed rest; and the 5th day after bed rest. Subjects responded more quickly in the color task than in the moving-direction and combined color-direction tasks, and their reaction times were longer on the 15th and 30th days of bed rest after a relatively quicker response on the 3rd day. Using the event-related potential technique, we found that in the color selective response task, the mean amplitudes of P1 and N1 for target ERPs decreased on the 3rd day of bed rest and the 5th day after bed rest in comparison with pre-bed rest and the 15th and 30th days of bed rest. In the combined color-direction selective response task, the P1 latencies for target ERPs on the 3rd and 30th days of bed rest were longer than on the 15th day. As the 3rd day of bed rest fell in the acute adaptation period and the 30th day in the relative adaptation stage of head-down bed rest, the results help to clarify the effects of bed rest on different task loads and patterns of attention. It is suggested that subjects needed more time to make correct decisions in the head-down tilt bed rest state. A difficulty in the recruitment of brain resources was found in the feature selection task, but no variations were detected in the no-response and direction selective response tasks. The negative shift in the color selective response task on the 3rd day of bed rest is suggested to result from fluid redistribution, and feature selection was more affected than motion selection during head-down bed rest. The variations in cognitive processing speed observed for the combined color-direction selective response task are suggested to reflect the interaction between top-down mechanisms and hierarchical physiological characteristics during 30 days of head-down bed rest.

  11. 5 CFR 2638.202 - Responsibilities of agency head.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and shall exercise personal leadership in establishing, maintaining, and carrying out the agency's... program in a positive and effective manner. (b) Selection of a designated agency ethics official. The head...

  12. Optic disc segmentation for glaucoma screening system using fundus images.

    PubMed

    Almazroa, Ahmed; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head pathologies such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of optic nerve head abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique was applied. A further important contribution was to incorporate the variation in opinion among ophthalmologists in detecting the disc boundaries and diagnosing glaucoma. Most previous studies were trained and tested based on only one opinion, which can be assumed to be biased toward that ophthalmologist. In addition, the accuracy was calculated based on the number of images that coincided with the ophthalmologists' agreed-upon images, and not only on the overlapping images as in previous studies. The ultimate goal of this project is to develop an automated image processing system for glaucoma screening. The disc algorithm is evaluated using a new retinal fundus image dataset called RIGA (retinal images for glaucoma analysis). In the case of low-quality images, a double level set was applied, in which the first level set was considered to be a localization for the OD. Five hundred and fifty images were used to test the algorithm accuracy as well as the agreement among the manual markings of six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid was 83.9%, and the best agreement was observed between the results of the algorithm and manual markings in 379 images.
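
    As a rough illustration of the region-based level set idea (not the authors' exact formulation, which also involves vessel inpainting and a double level set for low-quality images), a reduced two-phase Chan-Vese-style iteration can be sketched in a few lines of Python. The image img is assumed to be a grayscale OD sub-image scaled to [0, 1], and the curvature term is replaced by simple Laplacian smoothing for brevity.

    ```python
    import numpy as np

    def chan_vese_like(img, n_iter=200, dt=0.5, mu=0.2):
        """Reduced two-phase active-contour (level-set) iteration."""
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        # initial level set: a disc centred in the localized OD image
        phi = min(h, w) / 4.0 - np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
        for _ in range(n_iter):
            inside, outside = phi > 0, phi <= 0
            c1 = img[inside].mean() if inside.any() else 0.0
            c2 = img[outside].mean() if outside.any() else 0.0
            force = -(img - c1) ** 2 + (img - c2) ** 2   # region competition
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
            phi += dt * (force + mu * lap)
        return phi > 0                                   # binary OD mask
    ```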

  13. A Programming Environment for Parallel Vision Algorithms

    DTIC Science & Technology

    1990-04-11

    ... industrial arm on the market, while the unique head was designed by Rochester's Computer Science and Mechanical Engineering Departments. ... Constraining-Unification and the Programming Language Unicorn. In Logic Programming, Functions, Relations, and Equations, DeGroot and Lindstrom ...

  14. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low-cost magnetometers. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
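
    A bias-and-scale magnetometer calibration via PSO can be sketched as follows. This is an illustrative Python toy, not the paper's formulation: the particle counts, search bounds, and the constant-field-magnitude cost (the corrected field norm should not vary as the device rotates) are our assumptions.

    ```python
    import numpy as np

    def pso_calibrate(mags, n_particles=40, n_iter=200):
        """PSO sketch estimating per-axis magnetometer bias and scale.

        mags : (n, 3) raw readings collected while rotating the device.
        Each particle encodes [bx, by, bz, sx, sy, sz].
        """
        rng = np.random.default_rng(0)
        lo = np.array([-50, -50, -50, 0.5, 0.5, 0.5])
        hi = np.array([50, 50, 50, 1.5, 1.5, 1.5])
        x = rng.uniform(lo, hi, (n_particles, 6))
        v = np.zeros_like(x)

        def cost(p):
            corrected = (mags - p[:3]) * p[3:]
            return np.var(np.linalg.norm(corrected, axis=1))

        pbest = x.copy()
        pbest_cost = np.array([cost(p) for p in x])
        gbest = pbest[pbest_cost.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, 6))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            improved = c < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], c[improved]
            gbest = pbest[pbest_cost.argmin()].copy()
        return gbest   # [bias, scale] estimate
    ```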

  15. Linear functional minimization for inverse modeling

    DOE PAGES

    Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...

    2015-06-01

    In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
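
    For concreteness, a TV-regularized MAP objective for a 2-D parameter field can be written compactly. The sketch below assumes a user-supplied data-misfit callable and an isotropic TV term with a small smoothing constant; the exact discretization may differ from the authors'.

    ```python
    import numpy as np

    def tv_map_objective(k, data_misfit, lam=1e-2, eps=1e-8):
        """Objective of a TV-regularized MAP estimate on a 2-D field.

        k           : (ny, nx) parameter field (e.g. log-conductivity).
        data_misfit : callable returning the weighted squared misfit between
                      simulated and observed heads/conductivities for k.
        lam         : TV regularization weight.
        """
        dx = np.diff(k, axis=1)          # horizontal finite differences
        dy = np.diff(k, axis=0)          # vertical finite differences
        tv = np.sum(np.sqrt(dx[:-1, :] ** 2 + dy[:, :-1] ** 2 + eps))
        return data_misfit(k) + lam * tv
    ```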

  16. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ~600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ~0.25 s/excitation source.

  17. Game playing.

    PubMed

    Rosin, Christopher D

    2014-03-01

    Game playing has been a core domain of artificial intelligence research since the beginnings of the field. Game playing provides clearly defined arenas within which computational approaches can be readily compared to human expertise through head-to-head competition and other benchmarks. Game playing research has identified several simple core algorithms that provide successful foundations, with development focused on the challenges of defeating human experts in specific games. Key developments include minimax search in chess, machine learning from self-play in backgammon, and Monte Carlo tree search in Go. These approaches have generalized successfully to additional games. While computers have surpassed human expertise in a wide variety of games, open challenges remain and research focuses on identifying and developing new successful algorithmic foundations. WIREs Cogn Sci 2014, 5:193-205. doi: 10.1002/wcs.1278 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. © 2014 John Wiley & Sons, Ltd.
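
    The minimax search mentioned as a foundation for chess programs can be stated generically. The following Python sketch (without alpha-beta pruning, for brevity) assumes user-supplied evaluate, moves, and apply_move callables; it is a textbook illustration, not any specific engine's implementation.

    ```python
    def minimax(state, depth, maximizing, evaluate, moves, apply_move):
        """Plain depth-limited minimax over a generic two-player game.

        evaluate(state)      -> heuristic score from the maximizer's view
        moves(state)         -> iterable of legal moves
        apply_move(state, m) -> successor state
        Returns (best score, best move).
        """
        legal = list(moves(state))
        if depth == 0 or not legal:
            return evaluate(state), None
        best_move = None
        if maximizing:
            best = float('-inf')
            for m in legal:
                score, _ = minimax(apply_move(state, m), depth - 1, False,
                                   evaluate, moves, apply_move)
                if score > best:
                    best, best_move = score, m
            return best, best_move
        best = float('inf')
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               evaluate, moves, apply_move)
            if score < best:
                best, best_move = score, m
        return best, best_move
    ```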

  18. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation Systems (INS) based on the inertial reference frame are discussed in this paper. Both are based on gravity vector integration; therefore, the performance of these algorithms is determined by the integration time. In previous works, the integration time was selected by experience. In order to provide a criterion for the selection process and make the selection of the integration time more accurate, an optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on an analysis of the error characteristics of the two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that under different operational conditions different coarse alignment algorithms should be adopted for FOG INS in order to achieve better performance. Lastly, the experimental results validate the effectiveness of the proposed algorithm.

  19. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. This work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
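
    A log-linear behavior selection step of the kind described can be sketched as a softmax over weighted behavior features. The feature set and weights below are invented for illustration and are not the paper's model.

    ```python
    import numpy as np

    def select_behavior(features, weights, rng=np.random.default_rng()):
        """Sample one behavior from a log-linear (softmax) distribution.

        features : (n_behaviors, n_features) feature values per candidate
                   behavior (e.g. expected fitness gain, swarm diversity).
        weights  : (n_features,) learned log-linear weights.
        """
        scores = features @ weights              # log-linear scores
        p = np.exp(scores - scores.max())        # numerically stable softmax
        p /= p.sum()
        return rng.choice(len(p), p=p)

    # Hypothetical behaviors: prey, swarm, follow, random
    features = np.array([[0.8, 0.1],
                         [0.5, 0.6],
                         [0.6, 0.4],
                         [0.1, 0.9]])
    weights = np.array([1.2, 0.5])
    print(select_behavior(features, weights))
    ```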

  20. Anticipation of the Impact of Human Papillomavirus on Clinical Decision Making for the Head and Neck Cancer Patient.

    PubMed

    Gillison, Maura L; Restighini, Carlo

    2015-12-01

    Human papillomavirus (HPV) is the cause of a distinct subset of oropharyngeal cancer rising in incidence in the United States and other developed countries. This increased incidence, combined with the strong effect of tumor HPV status on survival, has had a profound effect on the head and neck cancer discipline. The multidisciplinary field of head and neck cancer is in the midst of re-evaluating evidence-based algorithms for clinical decision making, developed from clinical trials conducted in an era when HPV-negative cancer predominated. This article reviews relationships between tumor HPV status and gender, cancer incidence trends, overall survival, treatment response, racial disparities, tumor staging, risk stratification, survival post disease progression, and clinical trial design. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Optimization of Smart Structure for Improving Servo Performance of Hard Disk Drive

    NASA Astrophysics Data System (ADS)

    Kajiwara, Itsuro; Takahashi, Masafumi; Arisaka, Toshihiro

    Head positioning accuracy of the hard disk drive should be improved to meet today's increasing performance demands. Vibration suppression of the arm in the hard disk drive is very important to enhance the servo bandwidth of the head positioning system. In this study, smart structure technology is introduced into the hard disk drive to suppress the vibration of the head actuator. It is expected that smart structure technology will contribute to the development of small and lightweight mechatronic devices with the required performance. First, modeling of the system is conducted with the finite element method and modal analysis. Next, the actuator location and the control system are simultaneously optimized using a genetic algorithm. The vibration control effect of the proposed vibration control mechanism has been evaluated in simulations.

  2. Network intrusion detection by the coevolutionary immune algorithm of artificial immune systems with clonal selection

    NASA Astrophysics Data System (ADS)

    Salamatova, T.; Zhukov, V.

    2017-02-01

    The paper presents the application of the artificial immune system apparatus as a heuristic method for network intrusion detection within the algorithmic provision of intrusion detection systems. A coevolutionary immune algorithm with clonal selection was elaborated. Empirical results on the algorithm's effectiveness were obtained by testing on different datasets. To identify its degree of efficiency, the algorithm was compared with analogs. The fundamental rule base of the solutions generated by this algorithm is described in the article.

  3. Experience from the in-flight calibration of the Extreme Ultraviolet Explorer (EUVE) and Upper Atmosphere Research Satellite (UARS) fixed head star trackers (FHSTs)

    NASA Technical Reports Server (NTRS)

    Lee, Michael

    1995-01-01

    Since the original post-launch calibration of the FHSTs (Fixed Head Star Trackers) on EUVE (Extreme Ultraviolet Explorer) and UARS (Upper Atmosphere Research Satellite), the Flight Dynamics task has continued to analyze the FHST performance. The algorithm used for in-flight alignment of spacecraft sensors is described and the equations for the errors in the relative alignment for the simple two-star-tracker case are shown. Simulated data and real data are used to compute the covariance of the relative alignment errors. Several methods for correcting the alignment are compared and the results analyzed. The specific problems seen on orbit with UARS and EUVE are then discussed. UARS has experienced anomalous tracker performance on an FHST, resulting in continuous variation in apparent tracker alignment. On EUVE, the FHST residuals from the attitude determination algorithm showed a dependence on the direction of roll during survey mode. This dependence was traced back to time-tagging errors, and the original post-launch alignment was found to be in error due to the impact of the time-tagging errors on the alignment algorithm. The methods used by the FDF (Flight Dynamics Facility) to correct for these problems are described.

  4. Optimization of the transition path of the head hardening with using the genetic algorithms

    NASA Astrophysics Data System (ADS)

    Wróbel, Joanna; Kulawik, Adam

    2016-06-01

    An automated method for choosing the transition path of the head hardening in the heat treatment process of a plane steel element is proposed in this communication. The method determines the points on the path of the moving heat source using genetic algorithms. The fitness function of the algorithm is determined on the basis of effective stresses and the yield point, which depends on the phase composition. The path of the hardening tool and also the area of the heat-affected zone are determined on the basis of the obtained points. A numerical model of thermal phenomena, phase transformations in the solid state and mechanical phenomena for the hardening process is implemented in order to verify the presented method. The finite element method (FEM) was used for solving the heat transfer equation and obtaining the required temperature fields. The moving heat source is modeled with a Gaussian distribution and water cooling is also included. A macroscopic model based on the analysis of the CCT and CHT diagrams of a medium-carbon steel is used to determine the phase transformations in the solid state. The finite element method is also used for solving the equilibrium equations, yielding the stress field. Thermal and structural strains are taken into account in the constitutive relations.

  5. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.

  6. A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng

    2009-11-01

    Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model, chance constrained programming model and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, some comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large-size problems.

  7. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  8. Divergent Hd1, Ghd7, and DTH7 Alleles Control Heading Date and Yield Potential of Japonica Rice in Northeast China.

    PubMed

    Ye, Jing; Niu, Xiaojun; Yang, Yaolong; Wang, Shan; Xu, Qun; Yuan, Xiaoping; Yu, Hanyong; Wang, Yiping; Wang, Shu; Feng, Yue; Wei, Xinghua

    2018-01-01

    The heading date is a vital factor in achieving a full rice yield. Cultivars with particular flowering behaviors have been artificially selected to survive in the long-day and low-temperature conditions of Northeast China. To dissect the genetic mechanism responsible for heading date in rice populations from Northeast China, association mapping was performed to identify major controlling loci. A genome-wide association study (GWAS) identified three genetic loci, Hd1 , Ghd7 , and DTH7 , using general and mixed linear models. The three genes were sequenced to analyze natural variations and identify their functions. Loss-of-function alleles of these genes contributed to early rice heading dates in the northern regions of Northeast China, while functional alleles promoted late rice heading dates in the southern regions of Northeast China. Selecting environmentally appropriate allele combinations in new varieties is recommended during breeding. Introducing the early indica rice's genetic background into Northeast japonica rice is a reasonable strategy for improving genetic diversity.

  9. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, S; Chinese PLA General Hospital, Beijing, 100853 China; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms were utilized to perform deformable dose accumulation for head and neck tomotherapy treatment, and the differences in the accumulated doses were evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from the planning CTs to the daily CTs and accumulate the fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared, and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids under the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p<0.0001; B-spline vs MIMvista, p=0.002). The differences between those of the contralateral parotids (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were also significant (B-spline vs Demons, p=0.009; B-spline vs MIMvista, p=0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of the parotids show significant differences between the different DIR algorithms applied between kVCT and MVCT. Therefore, a volume-based criterion (i.e., the DSI) as a quantitative evaluation of registration accuracy is essential besides the visual assessment by the treating physician. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11105225).
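
    The Dice similarity index used here to score the propagated contours is straightforward to compute; a minimal Python sketch over binary masks:

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice similarity index between two binary contour masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```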

  10. [Combining speech sample and feature bilateral selection algorithm for classification of Parkinson's disease].

    PubMed

    Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei

    2018-02-01

    Diagnosis of Parkinson's disease (PD) based on speech data has been proved to be an effective approach in recent years. However, current research focuses on feature extraction and classifier design, and does not consider instance selection. Previous work by the authors showed that instance selection can lead to an improvement in classification accuracy. However, no attention has been paid to the relationship between speech samples and features until now. Therefore, a new diagnosis algorithm for PD is proposed in this paper that simultaneously selects speech samples and features, based on a relevant feature weighting algorithm and a multiple kernel method, so as to find their synergy effects and thereby improve classification accuracy. Experimental results showed that the proposed algorithm obtained an apparent improvement in classification accuracy. It achieved a mean classification accuracy of 82.5%, which was 30.5% higher than that of the related algorithm. Besides, the proposed algorithm detected the synergy effects of speech samples and features, which is valuable for speech marker extraction.

  11. Impact of exercise selection on hamstring muscle activation.

    PubMed

    Bourne, Matthew N; Williams, Morgan D; Opar, David A; Al Najjar, Aiman; Kerr, Graham K; Shield, Anthony J

    2017-07-01

    To determine which strength training exercises selectively activate the biceps femoris long head (BFLongHead) muscle. We recruited 24 recreationally active men for this two-part observational study. Part 1: We explored the amplitudes and the ratios of lateral (BF) to medial hamstring (MH) normalised electromyography (nEMG) during the concentric and eccentric phases of 10 common strength training exercises. Part 2: We used functional MRI (fMRI) to determine the spatial patterns of hamstring activation during the two exercises which (1) most selectively and (2) least selectively activated the BF in part 1. Eccentrically, the largest BF/MH nEMG ratio occurred in the 45° hip-extension exercise; the lowest was in the Nordic hamstring (Nordic) and bent-knee bridge exercises. Concentrically, the highest BF/MH nEMG ratio occurred during the lunge and 45° hip extension; the lowest was during the leg curl and bent-knee bridge. fMRI revealed a greater BFLongHead to semitendinosus activation ratio in the 45° hip extension than the Nordic (p<0.001). The T2 increase after hip extension for the BFLongHead, semitendinosus and semimembranosus muscles was greater than that for the BFShortHead (p<0.001). During the Nordic, the T2 increase was greater for the semitendinosus than for the other hamstring muscles (p≤0.002). We highlight the heterogeneity of hamstring activation patterns in different tasks. Hip-extension exercise selectively activates the long hamstrings, and the Nordic exercise preferentially recruits the semitendinosus. These findings have implications for strategies to prevent hamstring injury as well as potentially for clinicians targeting specific hamstring components for treatment (mechanotherapy). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  12. Backset and cervical retraction capacity among occupants in a modern car.

    PubMed

    Jonsson, Bertil; Stenlund, Hans; Svensson, Mats Y; Björnstig, Ulf

    2007-03-01

    The horizontal distance between the back of the head and the front of the head restraint (backset) and rearward head movement relative to the torso (cervical retraction) were studied in different occupant postures and positions in a modern car. A stratified randomized population of 154 test subjects was studied in a model year 2003 Volvo V70 car, in the driver, front passenger, and rear passenger positions. In each position, the subjects adopted (i) a self-selected posture, (ii) a sagging posture, and (iii) an erect posture. Cervical retraction, backset, and the vertical distance from the top of the head restraint to the occipital protuberance at the back of the test subject's head were measured. These data were analyzed using repeated measures ANOVA and linear regression analysis with a significance level set to p < 0.05. In the self-selected posture, the average backset was 61 mm for drivers, 29 mm for front passengers, and 103 mm for rear passengers (p < 0.001). Women had a lower mean backset (40 mm) than men (81 mm), particularly in the self-selected driving position. Backset was larger and cervical retraction capacity lower in the sagging posture than in the self-selected posture for occupants in all three occupant positions. Rear passengers had the largest backset values. Backset values decreased with increased age. The average cervical retraction capacity in the self-selected posture was 35 mm for drivers, 30 mm for front passengers, and 33 mm for rear passengers (p < 0.001). Future design of rear-end impact protection may take these study results into account when trying to reduce backset before impact. Our results might be used for future development and use of BioRID manikins and rear-end tests in consumer rating test programs such as Euro-NCAP.

  13. Behavioral and Psychological Issues in Long Duration Head-down Bed Rest

    NASA Technical Reports Server (NTRS)

    Seaton, Kimberly A.; Bowie, Kendra; Sipes, Walter A.

    2008-01-01

    Behavioral health services, similar to those offered to the U.S. astronauts who complete six-month missions on board the International Space Station, were provided to 13 long-duration head-down bed rest participants. Issues in psychological screening, selection, and support are discussed as they relate to other isolated and confined environments. Psychological services offered to participants are described, and challenges in subject selection and retention are discussed. Psychological support and training provided to both subjects and study personnel have successfully improved the well-being of study participants. Behavioral health services are indispensable to long-duration head-down tilt bed rest studies.

  14. The Effects of Head Start Enrollment Duration on Migrant Children's Dental Health

    ERIC Educational Resources Information Center

    Lee, Kyunghee

    2017-01-01

    The purpose of this study was to identify factors affecting Migrant Head Start (MHS) children's dental health. Enrollment duration (number of years and weeks enrolled) and individual and family factors were considered. Children (N = 931) who enrolled in Michigan Migrant Head Start during 2012-2013 were selected for the study sample and classified…

  15. The Anti-Resonance Criterion in Selecting Pick Systems for Fully Operational Cutting Machinery Used in Mining

    NASA Astrophysics Data System (ADS)

    Cheluszka, Piotr

    2017-12-01

    This article discusses the selection of a pick system for mining cutting machinery with a view to reducing vibrations in the cutting system, particularly in the load-carrying structure, during operation. Numerical analysis was performed on a telescopic roadheader boom equipped with transverse heads. The frequency range of the boom's free vibrations, for a given structure and dynamic properties, was determined based on a dynamic model. The main components of the excitation of boom vibrations, generated through the process of cutting rock, were identified; these were closely associated with the stereometry of the cutting heads. The impact of the pick system (the number of picks and their arrangement along the side of the cutting head) on the intensity of the external boom load components, especially in resonance zones, was determined. In terms of the anti-resonance criterion, an advantageous arrangement of cutting head picks was determined as a result of the analysis undertaken. The correctness of the pick system selection was ascertained based on a computer simulation of the dynamic loads and vibrations of the roadheader's telescopic boom.

  16. Materials for a Stirling engine heater head

    NASA Technical Reports Server (NTRS)

    Noble, J. E.; Lehmann, G. A.; Emigh, S. G.

    1990-01-01

    Work done on the 25-kW advanced Stirling conversion system (ASCS) terrestrial solar program in establishing criteria and selecting materials for the engine heater head and heater tubes is described. Various mechanisms contributing to incompatibility between materials are identified and discussed. Large thermal gradients, coupled with requirements for long life (60,000 h at temperature) and a large number of heatup and cooldown cycles (20,000), drive the design from a structural standpoint. The pressurized cylinder is checked for creep rupture, localized yielding, reverse plasticity, creep and fatigue damage, and creep ratcheting, in addition to the basic requirements for burst and proof pressure. In general, creep rupture and creep and fatigue interaction are the dominant factors in the design. A wide range of materials for the heater head and tubes was evaluated. Factors involved in the assessment were strength and effect on engine efficiency, reliability, and cost. A preliminary selection of Inconel 713LC for the heater head is based on acceptable structural properties but driven mainly by low cost. The criteria for failure, the structural analysis, and the material characteristics with the basis for selection are discussed.

  17. Development of a two-stage gene selection method that incorporates a novel hybrid approach using the cuckoo optimization algorithm and harmony search for cancer classification.

    PubMed

    Elyasigomari, V; Lee, D A; Screen, H R C; Shaheed, M H

    2017-03-01

    For each cancer type, only a few genes are informative. Due to the so-called 'curse of dimensionality' problem, the gene selection task remains a challenge. To overcome this problem, we propose a two-stage gene selection method called MRMR-COA-HS. In the first stage, minimum redundancy and maximum relevance (MRMR) feature selection is used to select a subset of relevant genes. The selected genes are then fed into a wrapper setup that combines a new hybrid algorithm, COA-HS, with a support vector machine as the classifier. The method was applied to four microarray datasets, and the performance was assessed by the leave-one-out cross-validation method. Comparative performance assessment of the proposed method with other evolutionary algorithms suggested that the proposed algorithm significantly outperforms other methods in selecting a smaller number of genes while maintaining the highest classification accuracy. The functions of the selected genes were further investigated, and it was confirmed that the selected genes are biologically relevant to each cancer type. Copyright © 2017. Published by Elsevier Inc.
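
    The first (filter) stage can be illustrated with a greedy MRMR selection. The sketch below uses mutual information for relevance and mean absolute correlation for redundancy, a common approximation that may differ in detail from the authors' MRMR variant.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def mrmr(X, y, n_select=20):
        """Greedy minimum-redundancy maximum-relevance gene selection.

        X : (n_samples, n_genes) expression matrix; y : class labels.
        Returns the indices of the selected genes.
        """
        relevance = mutual_info_classif(X, y, random_state=0)
        corr = np.abs(np.corrcoef(X, rowvar=False))
        selected = [int(np.argmax(relevance))]
        while len(selected) < n_select:
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            # relevance minus mean redundancy with already-selected genes
            scores = [relevance[j] - corr[j, selected].mean()
                      for j in candidates]
            selected.append(candidates[int(np.argmax(scores))])
        return selected
    ```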

  18. A Power-Efficient Clustering Protocol for Coal Mine Face Monitoring with Wireless Sensor Networks Under Channel Fading Conditions

    PubMed Central

    Ren, Peng; Qian, Jiansheng

    2016-01-01

    This study proposes a novel power-efficient, anti-fading, cross-layer clustering scheme tailored to the time-varying fading characteristics of channels in the monitoring of coal mine faces with wireless sensor networks. The number of active sensor nodes and a sliding window are set up such that the optimal number of cluster heads (CHs) is selected in each round. Based on a stable expected number of CHs, the channel efficiency between nodes and the base station is explored using a probe frame and combined with the nodes' surplus energy in assessing CH selection. Moreover, the sending power of a node in different periods is regulated by the signal fade margin method. The simulation results demonstrate that, compared with several common algorithms, the power-efficient and fading-aware clustering with a cross-layer (PEAFC-CL) protocol features a stable network topology and adaptability under time-varying signal fading, which effectively prolongs the lifetime of the network and reduces network packet loss, thus making it more applicable to the complex and variable environment characteristic of a coal mine face. PMID:27338380
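
    A joint energy/channel cluster-head scoring of the kind described might look like the following Python sketch; the multiplicative score and the normalizations are our illustrative assumptions, not the PEAFC-CL protocol's exact rule.

    ```python
    import numpy as np

    def select_cluster_heads(energy, channel_gain, n_heads):
        """Pick cluster heads by a joint energy/channel score (sketch).

        energy       : (n,) residual energy of each active node.
        channel_gain : (n,) estimated node-to-base-station channel
                       efficiency from the probe-frame exchange.
        n_heads      : expected optimal number of cluster heads this round.
        """
        score = (energy / energy.max()) * (channel_gain / channel_gain.max())
        return np.argsort(score)[-n_heads:]      # highest-scoring nodes
    ```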

  19. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
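
    Among the label-fusion strategies compared, majority voting is the simplest; a minimal Python sketch over binary skull masks propagated from the registered CT atlases:

    ```python
    import numpy as np

    def majority_vote(label_maps):
        """Fuse skull segmentations propagated from multiple CT atlases.

        label_maps : list of equally shaped arrays (0 = background,
                     1 = skull), one per registered atlas.
        Returns the per-voxel majority label (ties go to background).
        """
        stack = np.stack(label_maps)             # (n_atlases, *volume_shape)
        votes = stack.sum(axis=0)
        return (votes * 2 > len(label_maps)).astype(np.uint8)
    ```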

  20. CT-detected intracranial hemorrhage among patients with head injury in Lagos, Nigeria.

    PubMed

    Eze, Cletus Uche; Abonyi, Livinus Chibuzo; Olowoyeye, Omodele; Njoku, Jerome; Ohagwu, Christopher; Babalola, Sherifat

    2013-01-01

    To evaluate the computed tomography (CT) findings of intracranial hemorrhage among patients with head trauma in Lagos, Nigeria. In this retrospective, cross-sectional study, a convenience sample of 500 patients with head trauma who had diagnostic cranial CT scans was selected. All the radiological reports and CT scans of patients with head trauma were retrieved in the hospitals selected as study sites. The reports were sorted into 2 groups - normal findings and intracranial bleeding. The reports of intracranial bleeding were sorted again into different classes of intracranial bleeding as identified by the radiologist who reported it. All data were analyzed using the Epi Info public domain software package. The chi-square test was used to measure the statistical significance of study results at P < .05. Most of the study subjects (68%) were men. Traffic accidents accounted for 44% of all the head traumas found in the study, and 58% of the head traumas resulted in intracranial bleeding. Among the hemorrhages found, 37% were intracerebral, 25% were subdural, 16% were intraventricular, 15% were subarachnoid, and 7% were epidural. Intracranial hemorrhage was a common consequence of acute head trauma sustained from traffic accidents in the population studied, with intracerebral hemorrhage being the most prevalent type. Traffic accidents are the main cause of acute head trauma in Lagos, Nigeria. The use of CT for early diagnosis of intracranial hemorrhage appears justifiable.

  1. Adaptive control for eye-gaze input system

    NASA Astrophysics Data System (ADS)

    Zhao, Qijie; Tu, Dawei; Yin, Hairong

    2004-01-01

    The characteristics of a vision-based human-computer interaction system are analyzed, and its practical applications and current limiting factors are also discussed. Information processing methods are put forward. In order to make the communication flexible and spontaneous, algorithms for adaptive control of the user's head movement have been designed, and event-based methods and an object-oriented computer language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms can meet the needs of the HCI.

  2. SU-E-J-97: Quality Assurance of Deformable Image Registration Algorithms: How Realistic Should Phantoms Be?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Stathakis, S; Kirby, N

    Purpose: Deformable image registration (DIR) has widespread uses in radiotherapy for applications such as dose accumulation studies, multi-modality image fusion, and organ segmentation. The quality assurance (QA) of such algorithms, however, remains largely unimplemented. This work aims to determine how detailed a physical phantom needs to be to accurately perform QA of a DIR algorithm. Methods: Virtual prostate and head-and-neck phantoms, made from patient images, were used for this study. Both sets consist of an undeformed and deformed image pair. The images were processed to create additional image pairs with one through five homogeneous tissue levels using Otsu’s method. Realistic noise was then added to each image. The DIR algorithms from MIM and Velocity (Deformable Multipass) were applied to the original phantom images and the processed ones. The resulting deformations were then compared to the known warping. A higher number of tissue levels creates more contrast in an image and enables DIR algorithms to produce more accurate results. For this reason, error (distance between predicted and known deformation) is utilized as a metric to evaluate how many levels are required for a phantom to be a realistic patient proxy. Results: For the prostate image pairs, the mean error decreased from 1–2 tissue levels and remained constant for 3+ levels. The mean error reduction was 39% and 26% for Velocity and MIM respectively. For head and neck, mean error fell similarly through 2 levels and flattened with a total reduction of 16% and 49% for Velocity and MIM. For Velocity, 3+ levels produced comparable accuracy to the actual patient images, whereas MIM showed further accuracy improvement. Conclusion: The number of tissue levels needed to produce an accurate patient proxy depends on the algorithm. For Velocity, three levels were enough, whereas five was still insufficient for MIM.
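
    The phantom-generation step (quantizing an image into a few homogeneous tissue levels with Otsu's method, then adding noise) can be sketched as follows. The scikit-image threshold_multiotsu call and the noise level are our assumptions about a reasonable reimplementation, not the study's exact pipeline.

    ```python
    import numpy as np
    from skimage.filters import threshold_multiotsu

    def quantize_tissues(volume, n_levels=3):
        """Collapse an image into n homogeneous tissue levels via multi-Otsu
        thresholding, then add Gaussian noise (phantom-generation sketch)."""
        thresholds = threshold_multiotsu(volume, classes=n_levels)
        levels = np.digitize(volume, bins=thresholds)
        # replace each class by its mean intensity
        simplified = np.zeros_like(volume, dtype=float)
        for c in range(n_levels):
            simplified[levels == c] = volume[levels == c].mean()
        noise = np.random.normal(0, 0.01 * np.ptp(volume), volume.shape)
        return simplified + noise
    ```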

  3. Gain-Scheduled Complementary Filter Design for a MEMS Based Attitude and Heading Reference System

    PubMed Central

    Yoo, Tae Suk; Hong, Sung Kyung; Yoon, Hyok Min; Park, Sungsu

    2011-01-01

    This paper describes a robust and simple algorithm for an attitude and heading reference system (AHRS) based on low-cost MEMS inertial and magnetic sensors. The proposed approach relies on a gain-scheduled complementary filter, augmented by an acceleration-based switching architecture to yield robust performance, even when the vehicle is subject to strong accelerations. Experimental results are provided for a road captive test during which the vehicle dynamics are in high-acceleration mode and the performance of the proposed filter is evaluated against the output from a conventional linear complementary filter. PMID:22163824
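
    One step of a gain-scheduled complementary filter for a tilt angle can be sketched as below. The scheduling law (linearly down-weighting the accelerometer as the measured specific force departs from 1 g) and the gain constants are illustrative assumptions, not the paper's tuned design.

    ```python
    import numpy as np

    def complementary_update(angle, gyro_rate, accel_angle, dt,
                             accel_norm, g=9.81):
        """One gain-scheduled complementary-filter step for a tilt angle.

        The gyro integral is trusted at high frequency; the accelerometer
        tilt at low frequency. The blending gain is scheduled down when the
        specific force departs from 1 g (vehicle accelerating).
        """
        deviation = abs(accel_norm - g) / g
        alpha = np.clip(0.02 * (1.0 - 5.0 * deviation), 0.0, 0.02)
        predicted = angle + gyro_rate * dt       # high-frequency path
        return (1.0 - alpha) * predicted + alpha * accel_angle
    ```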

  4. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    The Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only a minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics, including the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective, and NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study on NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well along the number of objectives.
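
    The crowding-distance values that drive NNIA's proportional cloning are the standard NSGA-II quantity; a compact Python sketch:

    ```python
    import numpy as np

    def crowding_distance(objs):
        """Crowding distance of each nondominated solution (rows of objs)."""
        n, m = objs.shape
        dist = np.zeros(n)
        for k in range(m):
            order = np.argsort(objs[:, k])
            span = objs[order[-1], k] - objs[order[0], k]
            dist[order[0]] = dist[order[-1]] = np.inf   # boundary points
            if span > 0:
                dist[order[1:-1]] += (objs[order[2:], k] -
                                      objs[order[:-2], k]) / span
        return dist
    ```

    Cloning in proportion to these values biases the search toward the sparse, less-crowded regions of the trade-off front.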

  5. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples, as well as the non-linearity of the problem. It is difficult to obtain satisfying results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of the aforementioned algorithm implemented with Gaussian-kernel SVMs, using a genetic algorithm to search for the pair of optimal parameters, as a better alternative to the common practice of selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, for hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
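
    The parameter search itself can be sketched as a tiny GA over (log2 C, log2 γ) scored by cross-validation. The population size, operators, and bounds below are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def ga_svm_params(X, y, pop=20, gens=15, rng=np.random.default_rng(0)):
        """Tiny GA searching (log2 C, log2 gamma) for a Gaussian-kernel SVM."""
        lo, hi = np.array([-5.0, -15.0]), np.array([15.0, 3.0])
        P = rng.uniform(lo, hi, (pop, 2))

        def fitness(ind):
            clf = SVC(C=2.0 ** ind[0], gamma=2.0 ** ind[1])
            return cross_val_score(clf, X, y, cv=3).mean()

        for _ in range(gens):
            f = np.array([fitness(ind) for ind in P])
            order = np.argsort(f)[::-1]
            parents = P[order[: pop // 2]]                 # truncation selection
            kids = parents[rng.integers(0, len(parents), pop - len(parents))]
            kids = kids + rng.normal(0, 0.5, kids.shape)   # Gaussian mutation
            P = np.vstack([parents, np.clip(kids, lo, hi)])
        best = P[np.argmax([fitness(ind) for ind in P])]
        return 2.0 ** best[0], 2.0 ** best[1]
    ```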

  6. Selection, Training and Simulation

    DTIC Science & Technology

    2000-03-01

    Neck training, altitude chamber, PBG, gas mixtures, trampoline, statoergometer ... most important in flying. In years to come we will have a superagile world ... the neck, more so if extra head-worn equipment is used, is put under a lot of stress. In addition stress will be ... Pilot selection criteria like body type, heart-cerebral distance, vagal and sympathetic nerve ... acceleration forces, mainly head to foot (Gz). The heart itself is ...

  7. Multiple Drosophila Tracking System with Heading Direction

    PubMed Central

    Sirigrivatanawong, Pudith; Arai, Shogo; Thoma, Vladimiros; Hashimoto, Koichi

    2017-01-01

    Machine vision systems have been widely used for image analysis, especially that which is beyond human ability. In biology, studies of behavior help scientists to understand the relationship between sensory stimuli and animal responses. This typically requires the analysis and quantification of animal locomotion. In our work, we focus on the analysis of the locomotion of the fruit fly Drosophila melanogaster, a widely used model organism in biological research. Our system consists of two components: fly detection and tracking. Our system provides the ability to extract a group of flies as the objects of concern and furthermore determines the heading direction of each fly. As each fly moves, the system states are refined with a Kalman filter to obtain the optimal estimation. For the tracking step, combining information such as position and heading direction with assignment algorithms gives a successful tracking result. The use of heading direction increases the system efficiency when dealing with identity loss and flies swapping situations. The system can also operate with a variety of videos with different light intensities. PMID:28067800
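
    The combination of position and heading direction in the data-association step can be sketched with a Hungarian assignment over a weighted cost. The cost weights are invented for illustration, and the Kalman prediction step is omitted.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_flies(tracks, detections, w_pos=1.0, w_head=0.5):
        """Assign detected flies to existing tracks by position and heading.

        tracks, detections : (n, 3) arrays of [x, y, heading_rad]
        (Kalman predictions for tracks, measurements for detections).
        Returns (track_index, detection_index) pairs.
        """
        dxy = np.linalg.norm(tracks[:, None, :2] - detections[None, :, :2],
                             axis=2)
        dth = np.abs(tracks[:, None, 2] - detections[None, :, 2])
        dth = np.minimum(dth, 2 * np.pi - dth)    # wrap angular difference
        cost = w_pos * dxy + w_head * dth
        row, col = linear_sum_assignment(cost)    # Hungarian algorithm
        return list(zip(row, col))
    ```

    Including the heading term makes the assignment robust when two flies pass close to each other, which is exactly the identity-swap situation described above.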

  8. High-resolution single photon planar and SPECT imaging of brain and neck employing a system of two co-registered opposed gamma imaging heads

    DOEpatents

    Majewski, Stanislaw [Yorktown, VA; Proffitt, James [Newport News, VA

    2011-12-06

    A compact, mobile, dedicated SPECT brain imager that can be easily moved to the patient to provide in-situ imaging, especially when the patient cannot be moved to the Nuclear Medicine imaging center. As a result of the widespread availability of single-photon-labeled biomarkers, the SPECT brain imager can be used in many locations, including remote locations away from medical centers. The SPECT imager improves the detection of gamma emission from the patient's head and neck area with a large field of view. Two identical lightweight gamma imaging detector heads are mounted on a rotating gantry and precisely mechanically co-registered to each other at 180 degrees. A unique imaging algorithm combines the co-registered images from the detector heads and provides several SPECT tomographic reconstructions of the imaged object, thereby improving the diagnostic quality, especially for imaging that requires higher spatial resolution and sensitivity at the same time.

  9. Status Report on the First Round of the Development of the Advanced Encryption Standard

    PubMed Central

    Nechvatal, James; Barker, Elaine; Dodson, Donna; Dworkin, Morris; Foti, James; Roback, Edward

    1999-01-01

    In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST’s statutory responsibilities. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST has reviewed the results of this research and selected five algorithms (MARS, RC6™, Rijndael, Serpent and Twofish) as finalists. The research results and rationale for the selection of the finalists are documented in this report. The five finalists will be the subject of further study before the selection of one or more of these algorithms for inclusion in the Advanced Encryption Standard.

  10. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.

  11. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented in the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
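
    The level-wise least-squares selection can be paraphrased in a few lines (a sketch assuming the beamlets are stored as columns of a dose matrix; variable names are ours):

      import numpy as np

      def optimize_fluence(beamlets, target, n_levels=7):
          # beamlets: (n_voxels, n_beamlets) dose of each unit-weight beamlet
          # target:   (n_voxels,) prescribed dose distribution
          active = np.arange(beamlets.shape[1])
          for _ in range(n_levels):
              sol, *_ = np.linalg.lstsq(beamlets[:, active], target, rcond=None)
              keep = sol > 0
              active = active[keep]           # only positively weighted beamlets survive
              if keep.all() or active.size == 0:
                  break
          w = np.zeros(beamlets.shape[1])
          if active.size:
              sol, *_ = np.linalg.lstsq(beamlets[:, active], target, rcond=None)
              w[active] = np.maximum(sol, 0)  # final weights = optimal fluence
          return w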

  12. Geomagnetic matching navigation algorithm based on robust estimation

    NASA Astrophysics Data System (ADS)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and severely degrade its reliability. A novel algorithm that can eliminate the influence of outliers is investigated in this paper. First, a weight function is designed and the principle of robust estimation behind it is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with a Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this expression, and the geomagnetic matching problem is then converted to the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm and to 18.39% of that of the conventional iterative contour matching algorithm when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.
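
    The paper's exact weight function is not reproduced here, so the sketch below substitutes the standard Huber weights inside an iteratively reweighted estimate of a constant geomagnetic bias; it shows the robust-estimation mechanism in miniature, not the full matching algorithm:

      import numpy as np

      def huber_weights(r, k=1.345):
          # Unit weight inside +-k robust sigmas, decaying weight outside.
          s = 1.4826 * np.median(np.abs(r - np.median(r)))
          s = s if s > 0 else 1.0
          a = np.abs(r) / s
          return np.where(a <= k, 1.0, k / a)

      def robust_offset(measured, reference, n_iter=10):
          # IRLS estimate of a constant bias between the measured profile
          # and the reference map, insensitive to spike outliers.
          offset = 0.0
          for _ in range(n_iter):
              r = measured - reference - offset
              w = huber_weights(r)
              offset += float((w * r).sum() / w.sum())
          return offset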

  13. 9 CFR 51.24 - Maximum per-head indemnity amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... BECAUSE OF BRUCELLOSIS Indemnity for Sheep, Goats, and Horses § 51.24 Maximum per-head indemnity amounts... maximum of $20,000 per animal in the case of horses. An independent appraiser selected by the...

  14. SU-E-J-238: Monitoring Lymph Node Volumes During Radiotherapy Using Semi-Automatic Segmentation of MRI Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veeraraghavan, H; Tyagi, N; Riaz, N

    2014-06-01

    Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and a FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images each were evaluated. Two patients had level 2 LN drawn and one patient had levels N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 min for cases with only N2 LN and about 15 min for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
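
    For reference, the DICE similarity score used for the evaluation reduces to a few lines over binary masks:

      import numpy as np

      def dice(seg, ref):
          # DICE similarity between two binary masks (1 = inside the lymph node).
          seg, ref = seg.astype(bool), ref.astype(bool)
          denom = seg.sum() + ref.sum()
          return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0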

  15. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove design and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
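
    The final convolution step can be sketched as below; the weights and sigmas are placeholders, since in the paper the kernel parameters are commissioned against measured cross-beam profiles:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def planar_dose(in_air_fluence, weights=(0.7, 0.25, 0.05),
                      sigmas_mm=(1.0, 3.0, 9.0), pixel_mm=1.0):
          # Convolve the (FFF-modified) in-air fluence with a dose deposition
          # kernel built as the sum of three 2D Gaussians.
          return sum(w * gaussian_filter(in_air_fluence, s / pixel_mm)
                     for w, s in zip(weights, sigmas_mm))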

  16. Selection method of terrain matching area for TERCOM algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper studies the terrain adaptability of the TERCOM algorithm: it analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. On this basis, we propose a selection method of terrain matching areas for the TERCOM algorithm and verify its correctness with real terrain data by simulation experiment. Experimental results show that the matching area obtained by the proposed method has good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.
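
    Terrain characteristic parameters of the kind correlated with matching probability can be computed directly from the elevation grid; the particular set below (elevation standard deviation, roughness, lag-1 correlation) is our illustrative choice, not necessarily the paper's:

      import numpy as np

      def terrain_parameters(elev):
          # elev: 2D grid of terrain heights over a candidate matching area.
          sigma = elev.std()                          # elevation standard deviation
          dx, dy = np.diff(elev, axis=1), np.diff(elev, axis=0)
          roughness = 0.5 * (np.abs(dx).mean() + np.abs(dy).mean())
          e = elev - elev.mean()
          var = e.var() if e.var() > 0 else 1.0
          corr = (e[:, :-1] * e[:, 1:]).mean() / var  # lag-1 autocorrelation
          return sigma, roughness, corr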

  17. Assessment of Leadership Training of Head Teachers and Secondary School Performance in Mubende District, Uganda

    ERIC Educational Resources Information Center

    Benson, Kayiwa

    2011-01-01

    The purpose of the study was to establish the relationship between leadership training of head teachers and school performance in secondary schools in Mubende district, Uganda. A descriptive-correlational research design was used. Six schools out of 32 were selected, and the sample size of head teachers, teachers and student leaders was 287 out of…

  18. Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers

    NASA Astrophysics Data System (ADS)

    Lin, Chao-Chih; Chang, Ya-Chi; Yeh, Hund-Der

    2018-04-01

    Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model for describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrocks and two nearly fully penetrating streams. A similar scenario for numerical studies was reported in Kihm et al. (2007). The water level of the streams is assumed to be linearly varying with distance. The aquifer is divided into two subregions and the continuity conditions of the hydraulic head and flux are imposed at the interface of the subregions. The steady-state solution describing the head distribution for the model without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived based on the steady-state solution as initial condition and the methods of finite Fourier transform and Laplace transform. Moreover, the solution for stream depletion rate (SDR) from each of the two streams is also developed based on the head solution and Darcy's law. Both head and SDR solutions in the real time domain are obtained by a numerical inversion scheme called the Stehfest algorithm. The software MODFLOW is chosen to compare with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
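
    The Stehfest inversion used to bring the Laplace-domain head and SDR solutions back to the time domain is a short, standard routine; a minimal version with the usual even number of weights N is:

      import math

      def stehfest(F, t, N=12):
          # Numerical inversion of a Laplace-domain function F(p) at time t.
          ln2 = math.log(2.0)
          total = 0.0
          for k in range(1, N + 1):
              v = 0.0
              for j in range((k + 1) // 2, min(k, N // 2) + 1):
                  v += (j ** (N // 2) * math.factorial(2 * j)
                        / (math.factorial(N // 2 - j) * math.factorial(j)
                           * math.factorial(j - 1) * math.factorial(k - j)
                           * math.factorial(2 * j - k)))
              total += (-1) ** (k + N // 2) * v * F(k * ln2 / t)
          return ln2 / t * total

      # Check: F(p) = 1/(p + 1) inverts to exp(-t); at t = 1 this gives ~0.3679.
      print(stehfest(lambda p: 1.0 / (p + 1.0), t=1.0))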

  19. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    PubMed Central

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
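
    A log-linear behavior choice is essentially a softmax over behavior scores; a minimal sketch (the feature and weight definitions are ours, not the paper's):

      import numpy as np

      def select_behavior(features, weights, rng=None):
          # features: (n_behaviors, n_features) descriptors of each candidate
          # behavior (e.g., expected fitness gain, local crowding);
          # weights: learned log-linear parameters.
          rng = rng or np.random.default_rng()
          scores = features @ weights
          p = np.exp(scores - scores.max())  # softmax, numerically stable
          p /= p.sum()
          # Higher-scoring behaviors are chosen more often, but exploration remains.
          return rng.choice(len(p), p=p)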

  20. From AAA to Acuros XB: clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium.

    PubMed

    Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long

    2016-06-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse Treatment Planning System, one is faced with the dilemma of reporting either dose to medium (AXB-Dm) or dose to water (AXB-Dw). To assist with the decision of selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast and lung is presented. Ten patients previously treated using AAA plans were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm for all sites also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of the prescription dose) for target volumes containing high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw and AAA and between AXB-Dw and AXB-Dm; however, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, no significant change to existing prescription protocols is expected when transitioning from AAA to AXB. Since most clinical experience is dose-to-water based, calibration protocols and clinical trials are also dose-to-water based, and uncertainties remain in converting CT numbers to medium, selecting AXB-Dw is strongly recommended.

  1. EBG Based Microstrip Patch Antenna for Brain Tumor Detection via Scattering Parameters in Microwave Imaging System.

    PubMed

    Inum, Reefat; Rana, Md Masud; Shushama, Kamrun Nahar; Quader, Md Anwarul

    2018-01-01

    A microwave brain imaging system model is envisaged to detect and visualize tumors inside the human brain. A compact and efficient microstrip patch antenna is used in the imaging technique to transmit signals toward, and receive backscattered signals from, the stratified human head model. An electromagnetic band gap (EBG) structure is incorporated on the antenna ground plane to enhance performance. Rectangular and circular EBG structures are proposed to investigate the antenna performance. Incorporation of the circular EBG on the antenna ground plane provides an improvement of 22.77% in return loss, 5.84% in impedance bandwidth, and 16.53% in antenna gain with respect to the patch antenna with the rectangular EBG. The simulation results obtained from CST are compared to those obtained from HFSS to validate the design. The specific absorption rate (SAR) of the modeled head tissue for the proposed antenna is determined, and the SAR values are compared with the established standard SAR limit to provide a safety regulation for the imaging system. A monostatic radar-based confocal microwave imaging algorithm is applied to generate the image of a tumor inside a six-layer human head phantom model. S-parameter signals obtained from the circular-EBG-loaded patch antenna in different scanning modes are utilized in the imaging algorithm to produce a high-resolution image which reliably indicates the presence of a tumor inside the human brain.

  2. EBG Based Microstrip Patch Antenna for Brain Tumor Detection via Scattering Parameters in Microwave Imaging System

    PubMed Central

    Rana, Md. Masud; Shushama, Kamrun Nahar; Quader, Md. Anwarul

    2018-01-01

    A microwave brain imaging system model is envisaged to detect and visualize tumors inside the human brain. A compact and efficient microstrip patch antenna is used in the imaging technique to transmit signals toward, and receive backscattered signals from, the stratified human head model. An electromagnetic band gap (EBG) structure is incorporated on the antenna ground plane to enhance performance. Rectangular and circular EBG structures are proposed to investigate the antenna performance. Incorporation of the circular EBG on the antenna ground plane provides an improvement of 22.77% in return loss, 5.84% in impedance bandwidth, and 16.53% in antenna gain with respect to the patch antenna with the rectangular EBG. The simulation results obtained from CST are compared to those obtained from HFSS to validate the design. The specific absorption rate (SAR) of the modeled head tissue for the proposed antenna is determined, and the SAR values are compared with the established standard SAR limit to provide a safety regulation for the imaging system. A monostatic radar-based confocal microwave imaging algorithm is applied to generate the image of a tumor inside a six-layer human head phantom model. S-parameter signals obtained from the circular-EBG-loaded patch antenna in different scanning modes are utilized in the imaging algorithm to produce a high-resolution image which reliably indicates the presence of a tumor inside the human brain. PMID:29623087

  3. Signal Estimation, Inverse Scattering, and Problems in One and Two Dimensions.

    DTIC Science & Technology

    1982-11-01

    attention to implications for new estimation algorithms and signal processing and, to a lesser extent, for system theory. The publications resulting...from the work are listed by category and date. They are briefly organized and reviewed under five major headings: (1) Two-Dimensional System Theory; (2)...

  4. Binocular disparity tuning and visual-vestibular congruency of multisensory neurons in macaque parietal cortex

    PubMed Central

    Yang, Yun; Liu, Sheng; Chowdhury, Syed A.; DeAngelis, Gregory C.; Angelaki, Dora E.

    2012-01-01

    Many neurons in the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas of the macaque brain are multisensory, responding to both optic flow and vestibular cues to self-motion. The heading tuning of visual and vestibular responses can be either congruent or opposite, but only congruent cells have been implicated in cue integration for heading perception. Because of the geometric properties of motion parallax, however, both congruent and opposite cells could be involved in coding self-motion when observers fixate a world-fixed target during translation, if congruent cells prefer near disparities and opposite cells prefer far disparities. We characterized the binocular disparity selectivity and heading tuning of MSTd and VIP cells using random-dot stimuli. Most (70%) MSTd neurons were disparity-selective with monotonic tuning, and there was no consistent relationship between depth preference and congruency of visual and vestibular heading tuning. One-third of disparity-selective MSTd cells reversed their depth preference for opposite directions of motion (direction-dependent disparity tuning, DDD), but most of these cells were unisensory with no tuning for vestibular stimuli. Inconsistent with previous reports, the direction preferences of most DDD neurons do not reverse with disparity. By comparison to MSTd, VIP contains fewer disparity-selective neurons (41%) and very few DDD cells. On average, VIP neurons also preferred higher speeds and nearer disparities than MSTd cells. Our findings are inconsistent with the hypothesis that visual/vestibular congruency is linked to depth preference, and also suggest that DDD cells are not involved in multisensory integration for heading perception. PMID:22159105

  5. Evaluation of a computer-aided detection algorithm for timely diagnosis of small acute intracranial hemorrhage on computed tomography in a critical care environment

    NASA Astrophysics Data System (ADS)

    Lee, Joon K.; Chan, Tao; Liu, Brent J.; Huang, H. K.

    2009-02-01

    Detection of acute intracranial hemorrhage (AIH) is a primary task in the interpretation of computed tomography (CT) brain scans of patients suffering from acute neurological disturbances or after head trauma. Interpretation can be difficult, especially when the lesion is inconspicuous or the reader is inexperienced. We have previously developed a computer-aided detection (CAD) algorithm to detect small AIH. One hundred and thirty-five small AIH CT studies from the Los Angeles County (LAC)+USC Hospital were identified and matched by age and sex with one hundred and thirty-five normal studies. These cases were then processed using our AIH CAD system to evaluate the efficacy and constraints of the algorithm.

  6. Dual energy computed tomography for the head.

    PubMed

    Naruto, Norihito; Itoh, Toshihide; Noguchi, Kyo

    2018-02-01

    Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects. DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium.

  7. Comprehensible knowledge model creation for cancer treatment decision making.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali Khan, Wajahat; Ali, Taqdir; Lee, Sungyoung; Huh, Eui-Nam; Farooq Ahmad, Hafiz; Jamshed, Arif; Iqbal, Hassan; Irfan, Muhammad; Abbas Hydari, Manzar

    2017-03-01

    A wealth of clinical data exists in clinical documents in the form of electronic health records (EHRs). These data can be used for developing knowledge-based recommendation systems that assist clinicians in clinical decision making and education. One of the big hurdles in developing such systems is the lack of automated mechanisms for knowledge acquisition to enable and educate clinicians in informed decision making. An automated knowledge acquisition methodology with a comprehensible knowledge model for cancer treatment (CKM-CT) is proposed. With the CKM-CT, clinical data are acquired automatically from documents. Quality of data is ensured by correcting errors and transforming various formats into a standard data format. Data preprocessing involves dimensionality reduction and missing-value imputation. Predictive algorithm selection is performed on the basis of the ranking score of a weighted sum model. The knowledge builder prepares knowledge for knowledge-based services: clinical decision and education support. Data were acquired from 13,788 head and neck cancer (HNC) documents for 3447 patients, including 1526 patients of the oral cavity site. In the data quality task, 160 staging values were corrected. In the preprocessing task, 20 attributes and 106 records were eliminated from the dataset. The Classification and Regression Trees (CRT) algorithm was selected and provides 69.0% classification accuracy in predicting HNC treatment plans, consisting of 11 decision paths that yield 11 decision rules. The proposed methodology, CKM-CT, is helpful for finding hidden knowledge in clinical documents. In CKM-CT, prediction models are developed to assist and educate clinicians in informed decision making. The methodology is generalizable to data from other domains, such as breast cancer, with the similar objective of assisting clinicians in decision making and education.
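
    The weighted-sum-model ranking used for predictive-algorithm selection can be illustrated in a few lines (the criteria names and weights below are hypothetical):

      def rank_algorithms(metrics, weights):
          # metrics: {algorithm: {criterion: normalized score in [0, 1]}}
          # weights: {criterion: importance}; returns algorithms best-first.
          score = lambda m: sum(weights[c] * m.get(c, 0.0) for c in weights)
          return sorted(metrics, key=lambda a: score(metrics[a]), reverse=True)

      print(rank_algorithms(
          {"CRT": {"accuracy": 0.69, "interpretability": 0.9},
           "SVM": {"accuracy": 0.72, "interpretability": 0.4}},
          {"accuracy": 0.6, "interpretability": 0.4}))  # -> ['CRT', 'SVM']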

  8. The virtual morphology and the main movements of the human neck simulations used for car crash studies

    NASA Astrophysics Data System (ADS)

    Ciunel, St.; Tica, B.

    2016-08-01

    The paper presents studies made on a biomechanical system composed of the neck, head and thorax bones. The models were defined in a CAD environment that includes the Adams algorithm for dynamic simulations. The virtual models and the entire morphology were obtained starting from CT images of a living human subject. The main movements analyzed were axial rotation (left-right), lateral bending (left-right) and the flexion-extension movement. After simulation, the entire biomechanical behavior was obtained in the form of data tables and diagrams. The virtual model composed of neck and head can be included in a complex system (such as a car system) and subjected to several impact simulations (virtual crash tests). Our research team also built the main components of a testing device for the dummy neck-head system used in car crash tests, based on anatomical data.

  9. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data.

    PubMed

    Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from the large number of gene expression data. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification due to the huge number of genes (high dimension) compared to the small number of samples, as well as noisy and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes.

  10. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data

    PubMed Central

    Aorigele; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from the large number of gene expression data. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification due to the huge number of genes (high dimension) compared to the small number of samples, as well as noisy and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes. PMID:27579323

  11. Comparison of the Performance of the Warfarin Pharmacogenetics Algorithms in Patients with Surgery of Heart Valve Replacement and Heart Valvuloplasty.

    PubMed

    Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong

    2015-09-01

    A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients who underwent heart valve replacement or heart valvuloplasty, during the initial and stable phases of anticoagulation treatment. Ten pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. The predicted dose was compared to the therapeutic dose using the percentage of predicted doses that fall within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05±1.23 mg/day for initial treatment and 3.45±1.18 mg/day for stable treatment. The percentages of predicted doses within 20% of the therapeutic dose were 44.0±8.8% and 44.6±9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85±0.18 mg/day and 0.93±0.19 mg/day, respectively. All algorithms had better performance in the ideal-dose group than in the low-dose and high-dose groups, with the exception of the Wadelius et al. algorithm, which performed better in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase.

  12. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    PubMed

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in the identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated the second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) cohort and 225 women from the Women's Health Initiative cohort, and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms, one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV, and used MR information to resolve discrepancies between the algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. The recurrence-specific algorithms performed more poorly than published, except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over the two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with MR review of only 10.6% of cases. Triangulation also performed well in survival risk factor analyses versus analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs.
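
    The triangulation logic reduces to a small decision rule (function and variable names are ours):

      def triangulate(high_sens_call, high_spec_call, review_medical_record):
          # Combine a high-sensitivity and a high-specificity algorithm;
          # only discordant cases are sent to medical-record review.
          if high_sens_call == high_spec_call:
              return high_sens_call          # algorithms agree: accept the call
          return review_medical_record()     # discordant: resolve by MR review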

  13. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of direct great-circle routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds, which corresponds to a rate of 25 optimal routes per second; the closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from the forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts, where thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.

  14. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. ModelPROT and ModelPHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH-prediction accuracy was analyzed by comparing predicted versus achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if the predicted ModelPHOT mean dose minus the predicted ModelPROT mean dose (ΔPrediction) for the combined OARs was ≥6 Gy, and this selection was benchmarked using achieved KBP doses. The R² between achieved and predicted ModelPROT/ModelPHOT mean doses was 0.95/0.98. Generally, achieved mean doses for ModelPHOT/ModelPROT KBPs were respectively lower/higher than predicted. Comparing ModelPROT/ModelPHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results.
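
    The illustrative selection rule can be written down directly; how the per-OAR predictions are combined into a single number is our assumption here (a plain average):

      def select_for_protons(pred_photon_oar_means, pred_proton_oar_means, threshold_gy=6.0):
          # pred_*_oar_means: per-OAR predicted mean doses (Gy) from the
          # photon and proton knowledge-based models.
          delta = (sum(pred_photon_oar_means) / len(pred_photon_oar_means)
                   - sum(pred_proton_oar_means) / len(pred_proton_oar_means))
          return delta >= threshold_gy       # True: refer the patient for protons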

  15. Evaluating motion processing algorithms for use with functional near-infrared spectroscopy data from young children.

    PubMed

    Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P

    2018-04-01

    Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task, and wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, it sometimes produced unstable HRFs. The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics, and when compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.

  16. Head-to-Head Comparison and Evaluation of 92 Plasma Protein Biomarkers for Early Detection of Colorectal Cancer in a True Screening Setting.

    PubMed

    Chen, Hongda; Zucknick, Manuela; Werner, Simone; Knebel, Phillip; Brenner, Hermann

    2015-07-15

    Novel noninvasive blood-based screening tests are strongly desirable for early detection of colorectal cancer. We aimed to conduct a head-to-head comparison of the diagnostic performance of 92 plasma-based tumor-associated protein biomarkers for early detection of colorectal cancer in a true screening setting. Among all 35 available carriers of colorectal cancer and a representative sample of 54 men and women free of colorectal neoplasms, recruited in a cohort of screening colonoscopy participants in 2005-2012 (N = 5,516), the plasma levels of 92 protein biomarkers were measured. ROC analyses were conducted to evaluate diagnostic performance. A multimarker algorithm was developed through the Lasso logistic regression model and validated in an independent validation set. The .632+ bootstrap method was used to adjust for potential overestimation of diagnostic performance. Seventeen protein markers showed statistically significant differences in plasma levels between colorectal cancer cases and controls. The adjusted areas under the ROC curve (AUC) of these 17 individual markers ranged from 0.55 to 0.70. An eight-marker classifier was constructed that increased the adjusted AUC to 0.77 [95% confidence interval (CI), 0.59-0.91]. When validating this algorithm in an independent validation set, the AUC was 0.76 (95% CI, 0.65-0.85), and sensitivities at cutoff levels yielding 80% and 90% specificities were 65% (95% CI, 41-80%) and 44% (95% CI, 24-72%), respectively. The identified profile of protein biomarkers could contribute to the development of a powerful multimarker blood-based test for early detection of colorectal cancer.

  17. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

    This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.

  18. Postburn Head and Neck Reconstruction: An Algorithmic Approach.

    PubMed

    Heidekrueger, Paul Immanuel; Broer, Peter Niclas; Tanna, Neil; Ninkovic, Milomir

    2016-01-01

    Optimizing functional and aesthetic outcomes in postburn head and neck reconstruction remains a surgical challenge. Recurrent contractures, impaired range of motion, and disfigurement because of disruption of the aesthetic subunits of the face, can result in poor patient satisfaction and ultimately, contribute to social isolation of the patient. In an effort to improve the quality of life of these patients, this study evaluates different surgical approaches with an emphasis on tissue expansion of free and regional flaps. Regional and free-flap reconstruction was performed in 20 patients (26 flaps) with severe postburn head and neck contractures. To minimize donor site morbidity and obtain large amounts of thin and pliable tissue, pre-expansion was performed in all patients treated with locoregional flap reconstructions (12/12), and 62% (8/14) of patients with free-flap reconstructions. Algorithms regarding pre- and intraoperative decision-making are discussed, and complications between the techniques as well as long-term (mean follow-up 3 years) results are analyzed. Complications, including tissue expander infection with need for removal or exchange, partial or full flap loss, were evaluated and occurred in 25% (3/12) of patients with locoregional and 36% (5/14) of patients receiving free-flap reconstructions. Secondary revision surgery was performed in 33% (4/12) of locoregional flaps and 93% (13/14) of free flaps. Both locoregional as well as distant tissue transfers have their role in postburn head and neck reconstruction, whereas pre-expansion remains an invaluable tool. Paying attention to the presented principles and keeping the importance of aesthetic facial subunits in mind, range of motion, aesthetics, and patient satisfaction were improved long term in all our patients, while minimizing donor site morbidity.

  19. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    PubMed Central

    Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan

    2017-01-01

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515

  20. Application of machine vision in inspecting stem and shape of fruits

    NASA Astrophysics Data System (ADS)

    Ying, Yibin; Jing, Hansong; Tao, Yang; Jin, Juanqin; Ibarra, Juan G.; Chen, Zhikuan

    2000-12-01

    The shape and the condition of the stem are important features in the classification of Huanghua pears. As the commonly used thinning and erosion-dilation algorithms for judging the presence of the stem are too slow, a new fast algorithm was put forward. Compared with other parts of the pear, the stem is distinctly thin and long; with the help of various-sized templates, the judgment of whether the stem is present was easily made, and meanwhile the stem head and the intersection point of the stem bottom and the pear were labeled. Furthermore, after the slopes of the tangent lines at the stem head and the stem bottom were found, the included angle of these two lines was calculated. It was found that the included angle of a broken stem is clearly different from that of a good stem. In an analysis of 53 pictures of pears, the accuracy in judging whether the stem is present was 100%, and whether the stem is good reached 93%. The algorithm is also robust and can be made invariant to translation and rotation. In addition, a method to describe the shape of irregular fruits was studied. The Fourier transform and inverse Fourier transform pair were adopted to describe the shape of Huanghua pears, and an algorithm for shape identification based on an artificial neural network was developed. The first sixteen harmonic components of the Fourier descriptor were enough to represent the primary shape of the pear, and the identification accuracy reached 90% when the Fourier descriptor was applied in combination with the artificial neural network.
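
    A truncated Fourier descriptor of a closed boundary, as used here with the first sixteen harmonics, can be sketched as:

      import numpy as np

      def fourier_descriptors(boundary_xy, n_harmonics=16):
          # boundary_xy: (n, 2) ordered points of a closed contour.
          z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]  # encode as complex numbers
          spec = np.fft.fft(z)
          mags = np.abs(spec[1:n_harmonics + 1])  # drop DC term: translation invariance
          return mags / mags[0] if mags[0] > 0 else mags  # scale-normalize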

  1. Comparison of proton therapy treatment planning for head tumors with a pencil beam algorithm on dual and single energy CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudobivnik, Nace; Dedes, George; Parodi, Katia

    2016-01-15

    Purpose: Dual energy CT (DECT) has recently been proposed as an improvement over single energy CT (SECT) for stopping power ratio (SPR) estimation in proton therapy treatment planning (TP), thereby potentially reducing range uncertainties. The published literature has investigated phantoms. This study aims at performing proton therapy TP on SECT and DECT head images of the same patients and at evaluating whether the reported improved DECT SPR accuracy translates into clinically relevant range shifts in clinical head treatment scenarios. Methods: Two phantoms were scanned on a latest-generation dual source DECT scanner at 90 and 150 kVp with Sn filtration. The first phantom (Gammex phantom) was used to calibrate the scanner in terms of SPR while the second served as evaluation (CIRS phantom). DECT images of five head trauma patients were used as surrogate cancer patient images for TP of proton therapy. Pencil beam algorithm based TP was performed on SECT and DECT images, and the dose distributions corresponding to the optimized proton plans were calculated with a Monte Carlo (MC) simulation platform, using the same patient geometry for both plans obtained from conversion of the 150 kVp images. Range shifts between the MC dose distributions from SECT and DECT plans were assessed using 2D range maps. Results: SPR root mean square errors (RMSEs) for the inserts of the Gammex phantom were 1.9%, 1.8%, and 1.2% for SECT phantom calibration (SECT_phantom), SECT stoichiometric calibration (SECT_stoichiometric), and DECT calibration, respectively. For the CIRS phantom, these were 3.6%, 1.6%, and 1.0%. When investigating patient anatomy, group median range differences of up to −1.4% were observed for head cases when comparing SECT_stoichiometric with DECT. For this calibration the 25th and 75th percentiles varied from −2% to 0% across the five patients. The group median was found to be limited to 0.5% when using SECT_phantom, and the 25th and 75th percentiles varied from −1% to 2%. Conclusions: Proton therapy TP using a pencil beam algorithm and DECT images was performed for the first time. Given that the DECT accuracy as evaluated by the two phantoms was 1.2% and 1.0% RMSE, it is questionable whether the range differences reported here are significant.

  2. Selection of floating-point or fixed-point for adaptive noise canceller in somatosensory evoked potential measurement.

    PubMed

    Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong

    2007-01-01

    An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For efficient application of the ANC in a hardware system, a fixed-point implementation allows a fast, cost-efficient construction with low power consumption in an FPGA design. However, it is questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that of the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANCs applied to SEP signals. The appropriate selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed greater distortion relative to the real SEP signals than those of the floating-point ANC; however, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
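
    A minimal LMS adaptive noise canceller with optional weight quantization, to illustrate the floating- versus fixed-point comparison (the quantization scheme and all parameters are our assumptions, not the paper's FPGA design):

      import numpy as np

      def lms_anc(primary, reference, mu=0.01, taps=16, qbits=None):
          # primary   = SEP + noise; reference = correlated noise channel.
          # If qbits is given, weights are rounded to a 2**-qbits grid after
          # each update to mimic fixed-point arithmetic; note that the useful
          # step size mu differs between the two arithmetics.
          w = np.zeros(taps)
          out = np.zeros_like(primary, dtype=float)
          for n in range(taps, len(primary)):
              x = reference[n - taps:n][::-1]
              e = primary[n] - w @ x          # error = cleaned SEP estimate
              w = w + mu * e * x
              if qbits is not None:
                  w = np.round(w * 2**qbits) / 2**qbits
              out[n] = e
          return out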

  3. Selective epidemic vaccination under the performant routing algorithms

    NASA Astrophysics Data System (ADS)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the case where the shortest-path-based algorithm is used. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeds in eradicating the virus better than a purely random intervention under the performant routing strategies.

  4. Identification of Genes Involved in Breast Cancer Metastasis by Integrating Protein-Protein Interaction Information with Expression Data.

    PubMed

    Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran

    2017-02-01

    The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures using different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has been limited so far. This article proposes a new Random Forest (RF)-based algorithm to identify important variables highly related to breast cancer metastasis, based on the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The new gene selection algorithm is called PPIRF. The improved prediction accuracy fully illustrates the reliability and high interpretability of the gene list selected by the PPIRF approach.
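
    The abstract does not spell out how the two importance scores are fused; the snippet below is a hypothetical sketch in which min-max-normalized MDG and GeneRank scores are blended with a weight alpha. All names and the weighting rule are illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    def ppirf_scores(mdg, generank, alpha=0.5):
        """Blend RF mean-decrease-Gini scores with GeneRank scores
        (hypothetical weighting; the paper's exact rule may differ)."""
        rescale = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
        return (alpha * rescale(np.asarray(mdg, float))
                + (1 - alpha) * rescale(np.asarray(generank, float)))

    mdg = np.array([0.8, 0.1, 0.4])                      # toy per-gene scores
    generank = np.array([0.2, 0.9, 0.5])
    ranking = np.argsort(-ppirf_scores(mdg, generank))   # best genes first
    ```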

  5. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Davis, Kristan D.; Faraj, Daniel A.

    2014-07-22

    Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and ranges of message sizes so that each algorithm is associated with a separate range of message sizes; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint, the data communications message characterized by a message size; selecting, from among the associated algorithms and ranges, a data communications algorithm in dependence upon the message size; and transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.
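
    The size-range dispatch described in the claim can be pictured as a small table keyed on message size. This is an illustrative sketch only, not the patented implementation; the protocol names and the assumption of ranges registered in ascending order are invented for the example.

    ```python
    from bisect import bisect_left

    class AlgorithmTable:
        """Map message-size ranges to data communications algorithms."""
        def __init__(self):
            self.bounds, self.algos = [], []   # inclusive upper bounds, ascending

        def associate(self, upper_bound, algo):
            self.bounds.append(upper_bound)    # caller registers ranges in order
            self.algos.append(algo)

        def select(self, message_size):
            i = bisect_left(self.bounds, message_size)
            return self.algos[min(i, len(self.algos) - 1)]

    table = AlgorithmTable()
    table.associate(512, "eager")              # short messages
    table.associate(1 << 20, "rendezvous")     # everything larger
    print(table.select(4096))                  # -> rendezvous
    ```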

  6. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Davis, Kristan D; Faraj, Daniel A

    2013-07-09

    Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and ranges of message sizes so that each algorithm is associated with a separate range of message sizes; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint, the data communications message characterized by a message size; selecting, from among the associated algorithms and ranges, a data communications algorithm in dependence upon the message size; and transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.

  7. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.

  8. Low-Complexity User Selection for Rate Maximization in MIMO Broadcast Channels with Downlink Beamforming

    PubMed Central

    Silva, Adão; Gameiro, Atílio

    2014-01-01

    We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem, and its main goal is to create a set of quasi-orthogonal users. The proposed algorithm exploits easily computed physical metrics of the wireless channels, such that the null-space projection power can be approximated efficiently. Based on the derived metrics, we present a mathematical model that describes the dynamics of the user selection process and casts the user selection problem as an integer linear program. Numerical results show that our approach is highly efficient at forming groups of quasi-orthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion (90%) of the optimal user selection sum rate for a moderate number of active users. PMID:24574928
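
    The null-space projection idea behind such schemes can be sketched with a generic greedy selection: at each step, pick the user whose channel retains the most power after projection onto the null space of the already selected users. This is a common heuristic sketched under assumed inputs, not the paper's exact metric or its integer-linear-program formulation.

    ```python
    import numpy as np

    def greedy_quasi_orthogonal(H, K):
        """Greedily build a set of K quasi-orthogonal users.
        H: (n_users, n_tx) array with one channel row vector per user."""
        n_users, n_tx = H.shape
        selected = []
        for _ in range(K):
            if selected:
                Bh = H[selected].conj().T                     # chosen channels
                P = np.eye(n_tx) - Bh @ np.linalg.pinv(Bh)    # null-space projector
            else:
                P = np.eye(n_tx)
            gains = np.linalg.norm(H @ P, axis=1)             # residual channel power
            gains[selected] = -np.inf                         # never re-select a user
            selected.append(int(np.argmax(gains)))
        return selected
    ```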

  9. Selective head cooling during neonatal seizures prevents postictal cerebral vascular dysfunction without reducing epileptiform activity

    PubMed Central

    Harsono, Mimily; Pourcyrous, Massroor; Jolly, Elliott J.; de Jongh Curry, Amy; Fedinec, Alexander L.; Liu, Jianxiong; Basuroy, Shyamali; Zhuang, Daming; Leffler, Charles W.

    2016-01-01

    Epileptic seizures in neonates cause cerebrovascular injury and impairment of cerebral blood flow (CBF) regulation. In the bicuculline model of seizures in newborn pigs, we tested the hypothesis that selective head cooling prevents deleterious effects of seizures on cerebral vascular functions. Preventive or therapeutic ictal head cooling was achieved by placing two head ice packs during the preictal and/or ictal states, respectively, for the ∼2-h period of seizures. Head cooling lowered the brain and core temperatures to 25.6 ± 0.3 and 33.5 ± 0.1°C, respectively. Head cooling had no anticonvulsant effects, as it did not affect the bicuculline-evoked electroencephalogram parameters, including amplitude, duration, spectral power, and spike frequency distribution. Acute and long-term cerebral vascular effects of seizures in the normothermic and head-cooled groups were tested during the immediate (2–4 h) and delayed (48 h) postictal periods. Seizure-induced cerebral vascular injury during the immediate postictal period was detected as terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling-positive staining of cerebral arterioles and a surge of brain-derived circulating endothelial cells in peripheral blood in the normothermic group, but not in the head-cooled groups. During the delayed postictal period, endothelium-dependent cerebral vasodilator responses were greatly reduced in the normothermic group, indicating impaired CBF regulation. Preventive or therapeutic ictal head cooling mitigated the endothelial injury and greatly reduced loss of postictal cerebral vasodilator functions. Overall, head cooling during seizures is a clinically relevant approach to protecting the neonatal brain by preventing cerebrovascular injury and the loss of the endothelium-dependent control of CBF without reducing epileptiform activity. PMID:27591217

  10. Prevalence of head lice infestation and pediculicidal effect of permethrin shampoo in primary school girls in a low-income area in southeast of Iran.

    PubMed

    Soleimani-Ahmadi, Moussa; Jaberhashemi, Seyed Aghil; Zare, Mehdi; Sanei-Dehkordi, Alireza

    2017-07-24

    Head lice infestation is a common public health problem that is most prevalent in primary school children throughout the world, especially in developing countries, including different parts of Iran. This study aimed to determine the prevalence of and risk factors associated with head lice infestation, and the pediculicidal effect of 1% permethrin shampoo, in primary school girls of Bashagard County, one of the low socioeconomic areas in southeast Iran. In this interventional study, six villages with similar demographic situations were selected and randomly assigned to intervention and control areas. In each area, 150 girl students aged 7-12 years were selected randomly and screened for head lice infestation by visual scalp examination. In the intervention area, the treatment efficacy of 1% permethrin shampoo was evaluated via re-examination for infestation after one, two, and three weeks. A pre-tested structured questionnaire was used to collect data on socio-demographic factors associated with head lice infestation. The prevalence of head lice infestation was 67.3%. There was a significant association between head lice infestation and school grade, family size, parents' literacy, bathing facilities, frequency of hair washing, and use of shared articles (p < 0.05). The effectiveness of 1% permethrin shampoo for head lice treatment was 29.2%, 68.9%, and 90.3% after the first, second, and third weeks, respectively. Head lice infestation is a health problem in primary school girls of Bashagard County. Improvement of socioeconomic status and appropriate educational programs about head lice risk factors and prevention can be effective in reducing infestation in this area. This trial has been registered and approved by the Hormozgan University of Medical Sciences ethical committee (Trial No. 764). Trial registration date: March 17, 2014.

  11. Flying Drosophila melanogaster maintain arbitrary but stable headings relative to the angle of polarized light.

    PubMed

    Warren, Timothy L; Weir, Peter T; Dickinson, Michael H

    2018-05-11

    Animals must use external cues to maintain a straight course over long distances. In this study, we investigated how the fruit fly Drosophila melanogaster selects and maintains a flight heading relative to the axis of linearly polarized light, a visual cue produced by the atmospheric scattering of sunlight. To track flies' headings over extended periods, we used a flight simulator that coupled the angular velocity of dorsally presented polarized light to the stroke amplitude difference of the animals' wings. In the simulator, most flies actively maintained a stable heading relative to the axis of polarized light for the duration of 15 min flights. We found that individuals selected arbitrary, unpredictable headings relative to the polarization axis, which demonstrates that D. melanogaster can perform proportional navigation using a polarized light pattern. When flies flew in two consecutive bouts separated by a 5 min gap, the two flight headings were correlated, suggesting individuals retain a memory of their chosen heading. We found that adding a polarized light pattern to a light intensity gradient enhanced flies' orientation ability, suggesting D. melanogaster use a combination of cues to navigate. For both polarized light and intensity cues, flies' capacity to maintain a stable heading gradually increased over several minutes from the onset of flight. Our findings are consistent with a model in which each individual initially orients haphazardly but then settles on a heading which is maintained via a self-reinforcing process. This may be a general dispersal strategy for animals with no target destination. © 2018. Published by The Company of Biologists Ltd.

  12. 48 CFR 36.602-4 - Selection authority.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    (a) The final selection decision shall be made by the agency head or a designated selection authority. (b) The selection authority shall review the recommendations of the evaluation board...

  13. An Automatic Image Processing System for Glaucoma Screening

    PubMed Central

    Alodhayb, Sami; Lakshminarayanan, Vasudevan

    2017-01-01

    Horizontal and vertical cup-to-disc ratios are the most crucial parameters used clinically to detect glaucoma or monitor its progress, and are manually evaluated from retinal fundus images of the optic nerve head. Due to the scarcity of glaucoma experts and the growing glaucoma population, automatically calculated horizontal and vertical cup-to-disc ratios (HCDR and VCDR, respectively) can be useful for glaucoma screening. We report on two algorithms to calculate the HCDR and VCDR. In the algorithms, level set and inpainting techniques were developed for segmenting the disc, while thresholding using a Type-II fuzzy approach was developed for segmenting the cup. The results from the algorithms were verified against the manual markings of images from a dataset of glaucomatous images (retinal fundus images for glaucoma analysis, the RIGA dataset) by six ophthalmologists. The algorithm's accuracy for HCDR and VCDR combined was 74.2%. Only the accuracy of manual markings by one ophthalmologist was higher than the algorithm's accuracy. The algorithm's best agreement was with the markings by ophthalmologist number 1, in 230 images (41.8%) of the total tested images. PMID:28947898
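
    Once the cup and disc are segmented, the two ratios reduce to simple extent measurements. The sketch below is a bounding-box simplification of the clinical measurement, assuming binary masks as input; it is not the paper's level set or fuzzy thresholding pipeline.

    ```python
    import numpy as np

    def cup_to_disc_ratios(cup_mask, disc_mask):
        """HCDR and VCDR from binary segmentation masks, using
        bounding-box extents as a simplified stand-in."""
        def extent(mask):
            ys, xs = np.nonzero(mask)
            return xs.ptp() + 1, ys.ptp() + 1      # width, height in pixels
        cup_w, cup_h = extent(cup_mask)
        disc_w, disc_h = extent(disc_mask)
        return cup_w / disc_w, cup_h / disc_h      # HCDR, VCDR
    ```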

  14. Involvement of glutamatergic N-methyl-d-aspartate receptors in the expression of increased head-dipping behaviors in the hole-board tests of olfactory bulbectomized mice.

    PubMed

    Hirose, Noritaka; Saitoh, Akiyoshi; Kamei, Junzo

    2016-10-01

    Olfactory bulbectomized (OB) mice show agitated anxiety-like behaviors in the hole-board test, expressed as an increase in head-dipping counts and a decrease in head-dipping latencies. However, the associated mechanisms remain unclear. In the present study, MK-801 (10 and 100 μg/kg), a selective N-methyl-d-aspartate (NMDA) receptor antagonist, significantly and dose-dependently suppressed the increased head-dipping behaviors in OB mice, without affecting sham mice. Similar results were obtained with treatment with another selective NMDA receptor antagonist, D-AP5, in OB mice. On the other hand, muscimol, a selective γ-aminobutyric acid type A (GABAA) receptor agonist, had no effect on these hyperemotional behaviors in OB mice at a dose (100 μg/kg) that produced anxiolytic-like effects in sham mice. Interestingly, glutamine contents and glutamine/glutamate ratios were significantly increased in the amygdala and frontal cortex of OB mice compared to sham mice. Based on these results, we conclude that glutamatergic NMDA receptors are involved in the expression of increased head-dipping behaviors in the hole-board tests of OB mice. Accordingly, changes in glutamatergic transmission in the frontal cortex and amygdala may play important roles in the expression of these abnormal behaviors in OB mice. Copyright © 2016. Published by Elsevier B.V.

  15. MULTIPLE SHAFT TOOL HEAD

    DOEpatents

    Colbert, H.P.

    1962-10-23

    An improved tool head arrangement is designed for the automatic expanding of a plurality of ferruled tubes simultaneously. A plurality of output shafts of a multiple spindle drill head are driven in unison by a hydraulic motor. A plurality of tube expanders are respectively coupled to the shafts through individual power train arrangements. The axial or thrust force required for the rolling operation is provided by a double acting hydraulic cylinder having a hollow through shaft, with the shaft cooperating with an internally rotatable splined shaft slidably coupled to a coupling rigidly attached to the respective output shaft of the drill head, thereby transmitting rotary motion and axial thrust simultaneously to the tube expander. A hydraulic power unit supplies power to each of the double acting cylinders through respective two-position, four-way valves, under control of respective solenoids for each of the cylinders. The solenoids are in turn selectively controlled by a tool selection control unit, which is itself controlled by signals received from a programmed, coded tape via a tape reader. The number of expanders extended in a rolling operation, which may be up to 42, is determined by a predetermined program of operations depending upon the arrangement of the ferruled tubes to be expanded in the tube bundle. The tape reader also supplies dimensional information to a machine tool servo control unit for imparting selected horizontal and/or vertical movement to the tool head assembly. (AEC)

  16. Extremum Seeking Control of Smart Inverters for VAR Compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Daniel; Negrete-Pincetic, Matias; Stewart, Emma

    2015-09-04

    Reactive power compensation is used by utilities to ensure customer voltages are within pre-defined tolerances and to reduce system resistive losses. While much attention has been paid to model-based control algorithms for reactive power support and Volt Var Optimization (VVO), these strategies typically require relatively large communications capabilities and accurate models. In this work, a non-model-based control strategy for smart inverters is considered for VAR compensation. An Extremum Seeking control algorithm is applied to modulate the reactive power output of inverters based on real power information from the feeder substation, without an explicit feeder model. Simulation results using utility demand information confirm the ability of the control algorithm to inject VARs to minimize feeder-head real power consumption. In addition, we show that the algorithm is capable of improving feeder voltage profiles and reducing the reactive power supplied by the distribution substation.
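
    The core of extremum seeking is a perturb-demodulate-integrate loop that needs only a scalar measurement. The sketch below illustrates the idea on a toy quadratic plant; the gains, time step, and plant are illustrative assumptions, not values from the paper or a real feeder.

    ```python
    import numpy as np

    def extremum_seeking(measure_p, steps=2000, dt=1.0, a=0.05, omega=0.4, k=-0.2):
        """Tune an inverter VAR setpoint q to minimize feeder-head real
        power without a feeder model. measure_p(q) is the plant response."""
        q_hat = 0.0
        hp = measure_p(q_hat)                      # low-pass state seeded at DC level
        for n in range(steps):
            t = n * dt
            q = q_hat + a * np.sin(omega * t)      # sinusoidal probing perturbation
            p = measure_p(q)                       # measured feeder-head real power
            hp += 0.1 * (p - hp)                   # strip the slowly varying DC part
            grad = (p - hp) * np.sin(omega * t)    # demodulated gradient estimate
            q_hat += dt * k * grad                 # k < 0 drives q toward the minimum
        return q_hat

    # toy plant whose losses are minimized at q = 0.3
    print(extremum_seeking(lambda q: 1.0 + (q - 0.3) ** 2))   # ≈ 0.3
    ```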

  17. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.

  18. The association between head and cervical posture and temporomandibular disorders: a systematic review.

    PubMed

    Olivo, Susan Armijo; Bravo, Jaime; Magee, David J; Thie, Norman M R; Major, Paul W; Flores-Mir, Carlos

    2006-01-01

    To carry out a systematic review to assess the evidence concerning the association between head and cervical posture and temporomandibular disorders (TMD). A search of Medline, Pubmed, Embase, Web of Science, Lilacs, and Cochrane Library databases was conducted in all languages with the help of a health sciences librarian. Key words used in the search were posture, head posture, cervical spine or neck, vertebrae, cervical lordosis, craniomandibular disorders or temporomandibular disorders, temporomandibular disorders, and orofacial pain or facial pain. Abstracts which appeared to fulfill the initial selection criteria were selected by consensus. The original articles were retrieved and evaluated to ensure they met the inclusion criteria. A methodological checklist was used to evaluate the quality of the selected articles and their references were hand-searched for possible missing articles. Twelve studies met all inclusion criteria and were analyzed in detail for their methodology and information quality. Nine articles that analyzed the association between head posture and TMD included patients with mixed TMD diagnosis; 1 article differentiated among muscular, articular, and mixed symptomatology; and 3 articles analyzed information from patients with only articular problems. Finally, 2 studies evaluated the association between head posture and TMD in patients with muscular TMD. Several methodological defects were noted in the 12 studies. Since most of the studies included in this systematic review were of poor methodological quality, the findings of the studies should be interpreted with caution. The association between intra-articular and muscular TMD and head and cervical posture is still unclear, and better controlled studies with comprehensive TMD diagnoses, greater sample sizes, and objective posture evaluation are necessary.

  19. Empirical Modelation of Runoff in Small Watersheds Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Lopatin, J.; Hernández, J.; Galleguillos, M.; Mancilla, G.

    2013-12-01

    Hydrological models allow the simulation of natural water processes and the quantification and prediction of the effects of human impacts on runoff behavior. However, obtaining the information needed to apply these models can be costly in both time and resources, especially in large and difficult-to-access areas. The objective of this research was to integrate LiDAR data into the hydrological modeling of runoff in small watersheds, using derived hydrologic, vegetation and topographic variables. The study area includes 10 small forested headwater watersheds, between 2 and 16 ha, located in the south-central coastal range of Chile. In each watershed, instantaneous rainfall and runoff flow were measured for a total of 15 rainfall events between August 2012 and July 2013, yielding 79 observations. In March 2011, a Harrier 54/G4 Dual System was used to obtain a discrete-pulse LiDAR point cloud with an average of 4.64 points per square meter. A Digital Terrain Model (DTM) of 1 meter resolution was obtained from the point cloud, and subsequently 55 topographic variables were derived, such as physical watershed parameters and morphometric features. In addition, 30 vegetation descriptive variables were obtained directly from the point cloud and from a Digital Canopy Model (DCM). The classification and regression "Random Forest" (RF) algorithm was used to select the most important variables in predicting water height (liters), and the "Partial Least Squares Path Modeling" (PLS-PM) algorithm was used to fit a model using the selected set of variables. Four latent variables related to climate, topography, vegetation and runoff were selected (outer model), and each was assigned a group of the predictor variables selected by RF (inner model). The coefficient of determination (R2) and Goodness-of-Fit (GoF) of the final model were obtained. The best results were found when modeling using only the upper 50th percentile of rainfall events. The best variables selected by the RF algorithm were three topographic variables and three vegetation-related ones. We obtained an R2 of 0.82 and a GoF of 0.87 with a 95% confidence interval. This study shows that it is possible to predict the water harvested during a rainstorm event in a forest environment using only LiDAR data. However, this type of methodology does not yield good results for flows produced by low-magnitude rainfall events, as these are more influenced by the initial conditions of soil, vegetation and climate, which make their behavior slower and more erratic.

  20. Tractable Goal Selection with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2009-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
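
    The flavor of incremental, oversubscribed goal selection can be conveyed with a greedy admission loop: goals are considered in priority order and admitted only while resources remain. This is a simplified stand-in under assumed data structures, not the onboard algorithm described in the abstract.

    ```python
    def select_goals(requests, capacity):
        """Admit goal requests in priority order while resources last."""
        plan, used = [], 0.0
        for goal in sorted(requests, key=lambda g: -g["priority"]):
            if used + goal["cost"] <= capacity:
                plan.append(goal["name"])
                used += goal["cost"]
        return plan

    requests = [{"name": "image_A", "priority": 3, "cost": 5.0},
                {"name": "image_B", "priority": 2, "cost": 4.0},
                {"name": "downlink", "priority": 1, "cost": 3.0}]
    print(select_goals(requests, capacity=8.0))   # -> ['image_A', 'downlink']
    ```

    Because the loop is cheap and incremental, it can be re-run "just in time" whenever requests change, which is the property the abstract emphasizes.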

  1. The IBM HeadTracking Pointer: improvements in vision-based pointer control.

    PubMed

    Kjeldsen, Rick

    2008-07-01

    Vision-based head trackers have been around for some years and are even beginning to be commercialized, but problems remain with respect to usability. Users without the ability to use traditional pointing devices--the intended audience of such systems--have no alternative if the automatic bootstrapping process fails. There is room for improvement in face tracking, and the pointer movement dynamics do not support accurate and efficient pointing. This paper describes the IBM HeadTracking Pointer, a system which attempts to directly address some of these issues. Head gestures are used to provide the end user a greater level of autonomous control over the system. A novel face-tracking algorithm reduces drift under variable lighting conditions, allowing the use of absolute, rather than relative, pointer positioning. Most importantly, the pointer dynamics have been designed to take into account the constraints of head-based pointing, with a non-linear gain which allows stability in fine pointer movement, high speed on long transitions and adjustability to support users with different movement dynamics. User studies have identified some difficulties with training the system and some characteristics of the pointer motion that take time to get used to, but also good user feedback and very promising performance results.

  2. Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom

    PubMed Central

    Yu, Zhicong; Noo, Frédéric; Dennerlein, Frank; Wunderlich, Adam; Lauritsch, Günter; Hornegger, Joachim

    2012-01-01

    Mathematical phantoms are essential for the development and early-stage evaluation of image reconstruction algorithms in x-ray computed tomography (CT). This note offers tools for computer simulations using a two-dimensional (2D) phantom that models the central axial slice through the FORBILD head phantom. Introduced in 1999, in response to a need for a more robust test, the FORBILD head phantom is now seen by many as the gold standard. However, the simple Shepp-Logan phantom is still heavily used by researchers working on 2D image reconstruction. Universal acceptance of the FORBILD head phantom may have been prevented by its significantly-higher complexity: software that allows computer simulations with the Shepp-Logan phantom is not readily applicable to the FORBILD head phantom. The tools offered here address this problem. They are designed for use with Matlab®, as well as open-source variants, such as FreeMat and Octave, which are all widely used in both academia and industry. To get started, the interested user can simply copy and paste the codes from this PDF document into Matlab® M-files. PMID:22713335

  3. Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom.

    PubMed

    Yu, Zhicong; Noo, Frédéric; Dennerlein, Frank; Wunderlich, Adam; Lauritsch, Günter; Hornegger, Joachim

    2012-07-07

    Mathematical phantoms are essential for the development and early stage evaluation of image reconstruction algorithms in x-ray computed tomography (CT). This note offers tools for computer simulations using a two-dimensional (2D) phantom that models the central axial slice through the FORBILD head phantom. Introduced in 1999, in response to a need for a more robust test, the FORBILD head phantom is now seen by many as the gold standard. However, the simple Shepp-Logan phantom is still heavily used by researchers working on 2D image reconstruction. Universal acceptance of the FORBILD head phantom may have been prevented by its significantly higher complexity: software that allows computer simulations with the Shepp-Logan phantom is not readily applicable to the FORBILD head phantom. The tools offered here address this problem. They are designed for use with Matlab®, as well as open-source variants, such as FreeMat and Octave, which are all widely used in both academia and industry. To get started, the interested user can simply copy and paste the codes from this PDF document into Matlab® M-files.

  4. A Gender Based Study on Job Satisfaction among Higher Secondary School Heads in Khyber Pakhtunkhwa, (Pakistan)

    ERIC Educational Resources Information Center

    Mumtaz, Safina; Suleman, Qaiser; Ahmad, Zubair

    2016-01-01

    The purpose of the study was to analyze and compare job satisfaction along twenty dimensions for male and female higher secondary school heads in Khyber Pakhtunkhwa. A total of 108 higher secondary school heads were selected from eleven districts as a sample through a multi-stage sampling technique, of whom 66 were male and 42 were female. The study…

  5. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
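
    For reference, classic roulette-wheel (fitness-proportionate) selection looks like the sketch below; the paper's fuzzy variant would derive the wheel slots from fuzzy membership grades rather than raw fitness, which is an assumption on my part about how the modification slots in.

    ```python
    import random

    def roulette_wheel_select(population, fitness):
        """Pick one individual with probability proportional to fitness."""
        pick = random.uniform(0.0, sum(fitness))
        acc = 0.0
        for individual, f in zip(population, fitness):
            acc += f
            if acc >= pick:
                return individual
        return population[-1]   # guard against floating-point round-off

    parent = roulette_wheel_select(["s1", "s2", "s3"], [0.2, 0.5, 0.3])
    ```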

  6. History-based route selection for reactive ad hoc routing protocols

    NASA Astrophysics Data System (ADS)

    Medidi, Sirisha; Cappetto, Peter

    2007-04-01

    Ad hoc networks rely on cooperation in order to operate, but in a resource-constrained environment not all nodes behave altruistically. Selfish nodes preserve their own resources and do not forward packets that are not in their own self-interest. These nodes degrade the performance of the network, but judicious route selection can help maintain performance despite this behavior. Many route selection algorithms place importance on the shortness of a route rather than its reliability. We introduce a light-weight route selection algorithm that uses past behavior to judge the quality of a route rather than relying solely on its length. It draws information from the underlying routing layer at no extra cost and selects routes with a simple algorithm. This technique maintains its data in a small table, which does not place a high cost on memory. History-based route selection's minimalism suits the needs of portable wireless devices and is easy to implement. We implemented our algorithm and tested it in the ns2 environment. Our simulation results show that history-based route selection achieves higher packet delivery and improved stability compared with its length-based counterpart.

  7. Online feature selection with streaming features.

    PubMed

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
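
    The shape of such a streaming selector can be sketched as follows. This is a toy version in which plain correlation stands in for the paper's conditional-independence tests, and the thresholds and pruning rule are illustrative assumptions.

    ```python
    import numpy as np

    def streaming_feature_select(features, y, rel_thresh=0.1, red_thresh=0.9):
        """Toy online streaming feature selection: keep relevant arrivals,
        prune features a stronger newcomer makes redundant."""
        corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
        selected = []
        for f in features:                        # features arrive one by one
            if corr(f, y) < rel_thresh:
                continue                          # discard irrelevant feature
            selected = [g for g in selected
                        if not (corr(f, g) > red_thresh and corr(g, y) <= corr(f, y))]
            selected.append(f)
        return selected
    ```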

  8. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.

  9. The admissible portfolio selection problem with transaction costs and an improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Zhang, Wei-Guo

    2010-05-01

    In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors in the expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm, because traditional optimization algorithms fail to work efficiently on the proposed problem. Finally, we offer a numerical example to illustrate the effectiveness of the proposed approaches and compare the admissible portfolio efficient frontiers under different constraints.

  10. Does the hair influence heat extraction from the head during head cooling under heat stress?

    PubMed Central

    SHIN, Sora; PARK, Joonhee; LEE, Joo-Young

    2015-01-01

    The purpose of this study was to investigate the effects of head hair on thermoregulatory responses when cooling the head under heat stress. Eight young males participated in six experimental conditions: normal hair (100–130 mm length) and cropped hair (5 mm length), with three water inlet temperatures of 10, 15, and 20°C. The head and neck of subjects were cooled by a liquid-perfused hood while the legs were immersed in 42°C water for 60 min in a sitting position, at an air temperature of 28°C with 30% RH. The results showed that heat removal in the normal hair condition was not significantly different from the cropped hair condition. Rectal and mean skin temperatures and sweat rate showed no significant differences between the normal and cropped hair conditions. Heat extraction from the head was significantly greater with 10°C than with 15 or 20°C cooling (p<0.05) for both normal and cropped hair, whereas subjects preferred the 15°C regimen over the 10 or 20°C cooling regimens. These results indicate that the selection of an effective cooling temperature is more crucial than the length of workers' hair during head cooling under heat stress, and that such selection should take subjective perceptions into account along with physiological responses. PMID:26165361

  11. Matrix Multiplication Algorithm Selection with Support Vector Machines

    DTIC Science & Technology

    2015-05-01

    Only citation fragments of this report survived extraction. The recoverable content describes libraries that could intelligently choose the optimal matrix multiplication algorithm for a particular set of inputs, leaving users oblivious to the underlying algorithmic choice; the surviving references include work on algorithm selection for SAT, algorithm selection using reinforcement learning, and the LIBSVM support vector machine library.

  12. Clinical evaluation of multi-atlas based segmentation of lymph node regions in head and neck and prostate cancer patients.

    PubMed

    Sjöberg, Carl; Lundmark, Martin; Granberg, Christoffer; Johansson, Silvia; Ahnesjö, Anders; Montelius, Anders

    2013-10-03

    Semi-automated segmentation using deformable registration of selected atlas cases, consisting of expert-segmented patient images, has been proposed to facilitate the delineation of lymph node regions for three-dimensional conformal and intensity-modulated radiotherapy planning of head and neck and prostate tumours. Our aim is to investigate whether fusion of multiple atlases leads to clinical workload reductions and more accurate segmentation proposals compared to the use of a single atlas segmentation, due to a more complete representation of the anatomical variations. Atlases for lymph node regions were constructed using 11 head and neck patients and 15 prostate patients, based on published recommendations for segmentation. A commercial registration software (Velocity AI) was used to create individual segmentations through deformable registration. Ten head and neck patients and ten prostate patients, all different from the atlas patients, were randomly chosen for the study from retrospective data. Each patient was first delineated three times: (a) manually by a radiation oncologist, (b) automatically using a single atlas segmentation proposal from a chosen atlas, and (c) automatically by fusing the atlas proposals from all cases in the database using the probabilistic weighting fusion algorithm. In a subsequent step, a radiation oncologist corrected the segmentation proposals obtained from methods (b) and (c) without using the result from method (a) as reference. The time spent editing the segmentations was recorded separately for each method and for each individual structure. Finally, the Dice Similarity Coefficient and the volume of the structures were used to evaluate the similarity between the structures delineated with the different methods. For the single atlas method, the time reduction compared to manual segmentation was 29% and 23% for head and neck and pelvis lymph nodes, respectively, while editing the fused atlas proposal resulted in time reductions of 49% and 34%. The average volume of the fused atlas proposals was only 74% of the manual segmentation for the head and neck cases and 82% for the prostate cases, due to a blurring effect from the fusion process. After editing of the proposals, the resulting volume differences were no longer statistically significant, although a slight influence of the proposals could be noticed, since the average edited volume was still slightly smaller than the manual segmentation (by 9% and 5%, respectively). Segmentation based on fusion of multiple atlases reduces the time needed for delineation of lymph node regions compared to the use of a single atlas segmentation. Even though the time saving is large, the quality of the segmentation is maintained compared to manual segmentation.
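
    The Dice Similarity Coefficient used for evaluation here, DSC = 2|A ∩ B| / (|A| + |B|), is straightforward to compute from binary masks; a minimal sketch:

    ```python
    import numpy as np

    def dice_similarity(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```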

  13. A novel method of comparing mating success and survival reveals similar sexual and viability selection for mobility traits in female tree crickets.

    PubMed

    Ercit, K; Gwynne, D T

    2016-06-01

    The relationship between sexual and viability selection in females is necessarily different from that in males, as investment in sexual traits potentially comes at the expense of both fecundity and survival. Accordingly, females do not usually invest in sexually selected traits. However, direct benefits obtained from mating, such as nuptial gifts, may encourage competition among females and subsidize investment in sexually selected traits. We compared sexual and viability selection on female tree crickets Oecanthus nigricornis, a species where females mate frequently to obtain nuptial gifts and sexual selection on females is likely. If male choice determines female mating success in this species, we expect sexual selection for fecundity traits, as males of many species prefer more fecund females. Alternatively, intrasexual scramble or combat competition among females may select for larger jumping legs or wider heads, respectively. We estimated mating success in wild-caught crickets using microsatellite analysis of stored sperm and estimated relative viability by comparing surviving female O. nigricornis to those captured by a common wasp predator. In support of the scramble competition hypothesis, we found sexual selection for females with larger hind legs and narrower heads. We also found stabilizing viability selection for intermediate head width and hind leg size. As predicted, the traits under viability and sexual selection were very similar, and the direction of selection was not opposing. However, because the shapes of sexual and viability selection differ, these episodes of selection may favour slightly different trait sizes. © 2016 European Society For Evolutionary Biology.

  14. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different-sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  15. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data

    PubMed Central

    Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J.

    2016-01-01

    Background: The utility of data-based algorithms in research has been questioned because of errors in the identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. Methods: We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) cohort and 225 women from the Women's Health Initiative cohort, and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms (one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV), using MR information to resolve discrepancies between the two and properly classify events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. Results: The SBCE algorithms performed well in identifying SBCEs and recurrences. The recurrence-specific algorithms performed more poorly than published, except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over the two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with MR review of only 10.6% of cases. Triangulation also performed well in survival risk factor analyses compared with analyses using MR-identified recurrences. Conclusions: Use of multiple recurrence algorithms on administrative data, combined with selective examination of MR data, may improve recurrence data quality and reduce research costs. PMID:26582243
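
    The triangulation logic itself is simple enough to state in a few lines. The sketch below is a schematic reading of the idea, not the authors' code; the function and parameter names are illustrative.

    ```python
    def triangulate(flag_sensitive, flag_specific, review_medical_record):
        """Combine a high-sensitivity and a high-specificity recurrence
        algorithm; consult the medical record only on disagreement."""
        if flag_sensitive == flag_specific:
            return flag_sensitive              # concordant calls are accepted
        return review_medical_record()         # discordant cases go to MR review
    ```

    Because the two algorithms agree on most cases, chart review is needed only for the small discordant fraction, which is how the reported 10.6% MR review rate arises.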

  16. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a given feature. Finally, it evaluates the degree of distinctiveness of each feature according to the result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features can be reduced, so as to reduce the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has advantages over other selection algorithms.
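
    The leave-one-feature-out scoring step can be sketched as below. This is a hypothetical reading of the abstract using scikit-learn's KMeans with k-means++ initialization and cluster inertia as a stand-in quality metric; the paper's actual distinctiveness measure may differ.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def feature_distinctiveness(X, k=2):
        """Score each feature by how much removing it changes k-means++
        clustering quality (inertia used as a simple stand-in metric)."""
        base = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(X).inertia_
        scores = []
        for j in range(X.shape[1]):
            Xj = np.delete(X, j, axis=1)                 # drop feature j
            inertia = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(Xj).inertia_
            scores.append(abs(base - inertia))           # bigger change = more distinctive
        return np.array(scores)
    ```

    Features whose score exceeds a chosen threshold would then be kept, mirroring the thresholding step described in the abstract.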

  17. Diversified models for portfolio selection based on uncertain semivariance

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, the future security returns are sometimes represented mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subject to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.

  18. Towards adaptive radiotherapy for head and neck patients: validation of an in-house deformable registration algorithm

    NASA Astrophysics Data System (ADS)

    Veiga, C.; McClelland, J.; Moinuddin, S.; Ricketts, K.; Modat, M.; Ourselin, S.; D'Souza, D.; Royle, G.

    2014-03-01

    The purpose of this work is to validate an in-house deformable image registration (DIR) algorithm for adaptive radiotherapy of head and neck patients. We aim to use the registrations to estimate the "dose of the day" and assess the need to replan. NiftyReg is an open-source implementation of the B-splines deformable registration algorithm, developed in our institution. We registered a planning CT to a CBCT acquired midway through treatment for 5 HN patients that required replanning. We investigated 16 different parameter settings that previously showed promising results. To assess the registrations, structures delineated on the CT were warped and compared with contours manually drawn by the same clinical expert on the CBCT. This structure set contained vertebral bodies and soft tissue. The Dice similarity coefficient (DSC), overlap index (OI), centroid position, and distance between structure surfaces were calculated for every registration, and a set of parameters that produces good results for all datasets was found. We achieve a median value of 0.845 in DSC, 0.889 in OI, an error smaller than 2 mm in centroid position, and over 90% of the warped surface pixels lie within 2 mm of the manually drawn ones. By using appropriate DIR parameters, we are able to register the planning geometry (pCT) to the daily geometry (CBCT).

  19. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural networks and support vector regression, applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out. The discussion revealed the superiority of this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found among the models regarding their predictive ability. This ensures that simpler and faster models were obtained without any deterioration in the quality of the calibration.

  20. Strategies for Pre-Emptive Mid-Air Collision Avoidance in Budgerigars

    PubMed Central

    Schiffner, Ingo; Srinivasan, Mandyam V.

    2016-01-01

    We have investigated how birds avoid mid-air collisions during head-on encounters. Trajectories of birds flying towards each other in a tunnel were recorded using high speed video cameras. Analysis and modelling of the data suggest two simple strategies for collision avoidance: (a) each bird veers to its right and (b) each bird changes its altitude relative to the other bird according to a preset preference. Both strategies suggest simple rules by which collisions can be avoided in head-on encounters by two agents, be they animals or machines. The findings are potentially applicable to the design of guidance algorithms for automated collision avoidance on aircraft. PMID:27680488
