Science.gov

Sample records for algorithm results show

  1. Storing CO2 underground shows promising results

    NASA Astrophysics Data System (ADS)

    Zweigel, Peter; Gale, John

    Long-term underground storage of CO2 is an important element in concepts to reduce atmospheric CO2 emissions as the use of fossil fuels continues. The first results of a multinational research project evaluating the injection of CO2 into a saline aquifer in the North Sea are validating this method of CO2 reduction, and are serving to further define the research needed to develop the technology for large-scale applicability. Reducing the emission of substances that have potentially harmful effects on global climate—for example, CO2—has become a central issue of environmental policy at least since the 1997 Kyoto conference on climate change.

  2. Wake Vortex Algorithm Scoring Results

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.

  3. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
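
    A minimal sketch of the Pareto-dominance test at the core of such a genetic algorithm (Python; the objective pairs are hypothetical, e.g. flight time and propellant mass, both minimized):

      def dominates(a, b):
          """True if objective vector a is no worse than b everywhere and
          strictly better somewhere (minimization)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(population):
          """Non-dominated subset of a list of objective vectors."""
          return [p for p in population
                  if not any(dominates(q, p) for q in population if q is not p)]

      # (flight time [days], propellant mass [kg]) per candidate trajectory
      candidates = [(900, 450), (850, 500), (900, 430), (1000, 400)]
      print(pareto_front(candidates))   # -> [(850, 500), (900, 430), (1000, 400)]

    A Pareto genetic algorithm carries such non-dominated sets across generations instead of collapsing the objectives into a single fitness value.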

  4. 14. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF INADEQUATE TAMPING. THE SIZE OF THE GRANITE AGGREGATE USED IN THE DAM'S CONCRETE IS CLEARLY SHOWN. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  5. 13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  6. Emerging Trends in Contextual Learning Show Positive Results for Students.

    ERIC Educational Resources Information Center

    WorkAmerica, 2001

    2001-01-01

    This issue focuses on contextual learning (CL), in which students master rigorous academic content in real-world or work-based learning experiences. "Emerging Trends in CL Show Positive Results for Students" discusses CL as an important strategy for improving student achievement. It describes: how CL raises the bar for all students, challenging…

  7. Breast vibro-acoustography: initial results show promise

    PubMed Central

    2012-01-01

    Introduction Vibro-acoustography (VA) is a recently developed imaging modality that is sensitive to the dynamic characteristics of tissue. It detects low-frequency harmonic vibrations in tissue that are induced by the radiation force of ultrasound. Here, we have investigated applications of VA for in vivo breast imaging. Methods A recently developed combined mammography-VA system for in vivo breast imaging was tested on female volunteers, aged 25 years or older, with suspected breast lesions on their clinical examination. After mammography, a set of VA scans was acquired by the experimental device. In a masked assessment, VA images were evaluated independently by 3 reviewers who identified mass lesions and calcifications. The diagnostic accuracy of this imaging method was determined by comparing the reviewers' responses with clinical data. Results We collected images from 57 participants: 7 were used for training and 48 for evaluation of diagnostic accuracy (images from 2 participants were excluded because of unexpected imaging artifacts). In total, 16 malignant and 32 benign lesions were examined. Specificity for diagnostic accuracy was 94% or higher for all 3 reviewers, but sensitivity varied (69% to 100%). All reviewers were able to detect 97% of masses, but sensitivity for detection of calcification was lower (≤ 72% for all reviewers). Conclusions VA can be used to detect various breast abnormalities, including calcifications and benign and malignant masses, with relatively high specificity. VA technology may lead to a new clinical tool for breast imaging applications. PMID:23021305

  8. The Aquarius Salinity Retrieval Algorithm: Early Results

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

    2012-01-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last process and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparing versus in situ measurements, an extensive calibration and validation
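
    The correction chain described above can be caricatured in a toy, runnable sketch (Python); every constant below is an invented placeholder, not the operational Aquarius processing:

      def retrieve_salinity_vpol(ta_v):
          """Toy v-pol TA -> TB -> salinity chain; all numbers invented."""
          tb = ta_v
          tb -= 0.15                      # solar/lunar/galactic intrusion
          tb = (tb - 0.02 * 80.0) / 0.96  # APC: spillover and cross-pol (placeholder gains)
          tb -= 0.30                      # Faraday rotation, via the 3rd Stokes parameter
          tb += 2.5                       # restore atmospheric O2 attenuation (NWP fields)
          tb -= 1.0                       # wind-roughness excess emission (scatterometer)
          # Invert a fictitious linearized flat-sea emission model TB = a + b*SSS.
          # The sensitivity b really is negative: saltier water is radiometrically
          # colder at L-band, though the magnitude here is only a placeholder.
          a, b = 140.0, -0.5
          return (tb - a) / b             # sea surface salinity, psu

      print(round(retrieve_salinity_vpol(118.0), 2))   # -> 35.41 for this toy input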

  9. Experimental Results in the Comparison of Search Algorithms Used with Room Temperature Detectors

    SciTech Connect

    Guss, P.; Yuan, D.; Cutler, M.; Beller, D.

    2010-11-01

    Time-sequence data were analyzed for several higher-resolution scintillation detectors using a variety of search algorithms, and results were obtained predicting the relative performance of these detectors, with CeBr₃ performing slightly better than the others. Analysis of several search algorithms shows that inclusion of the RSPRT methodology can improve sensitivity.
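
    A sequential probability ratio test of the kind the RSPRT methodology builds on can be sketched as follows (Python; count rates and error targets are invented):

      import math

      def sprt(counts, lam0=10.0, lam1=14.0, alpha=0.01, beta=0.05):
          """Poisson SPRT: decide 'source' (rate lam1) vs 'background'
          (rate lam0) from per-interval gamma-ray counts."""
          upper = math.log((1 - beta) / alpha)    # cross above: declare source
          lower = math.log(beta / (1 - alpha))    # cross below: declare background
          llr = 0.0
          for n, k in enumerate(counts, start=1):
              llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
              if llr >= upper:
                  return "source", n
              if llr <= lower:
                  return "background", n
          return "undecided", len(counts)

      print(sprt([12, 15, 16, 14, 17]))   # -> ('source', 5)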

  10. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  11. The Effect of Pansharpening Algorithms on the Resulting Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.

    2016-06-01

    This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation is that, for automatically generated Digital Surface Models, an image correlation step is employed to extract correspondences between the overlapping images. Thus, their accuracy and reliability are strictly related to image quality, while pansharpening may result in lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and to check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor that of the resulting orthoimagery. Although some residuals in the orthoimages were observed, their magnitude cannot adversely affect the accuracy of the final orthoimagery.

  12. Shuttle Entry Air Data System (SEADS) - Optimization of preflight algorithms based on flight results

    NASA Technical Reports Server (NTRS)

    Wolf, H.; Henry, M. W.; Siemers, Paul M., III

    1988-01-01

    The SEADS pressure model algorithm results were tested against other sources of air data, in particular the Shuttle Best Estimated Trajectory (BET). The algorithm basis was also tested through a comparison of flight-measured pressure distributions versus the wind tunnel database. It is concluded that the successful flight of SEADS and the subsequent analysis of the data show good agreement between BET and SEADS air data.

  13. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
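
    One simple reading of variance-driven resizing, as a hedged sketch (Python; the constants c and d are invented tuning knobs, not the published schema-variance theory):

      import random, statistics

      def resize(pop, fitness, c=4.0, d=0.5, n_min=10, n_max=500):
          """Grow the population when observed fitness variance (sampling
          noise) is large relative to the signal d to resolve; shrink when small."""
          var = statistics.pvariance([fitness(x) for x in pop])
          n_new = int(min(max(c * var / d ** 2, n_min), n_max))
          if n_new <= len(pop):                   # shrink: keep the fittest
              return sorted(pop, key=fitness, reverse=True)[:n_new]
          extra = [random.choice(pop) for _ in range(n_new - len(pop))]
          return pop + extra                      # grow: clone random members

      fitness = lambda x: -(x - 3.0) ** 2         # toy 1-D fitness
      pop = [random.uniform(-10, 10) for _ in range(50)]
      print(len(resize(pop, fitness)))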

  14. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms

    PubMed Central

    2014-01-01

    Background Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, each possessed its own merits and claimed to outperform the others. However, the claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms. Based on the proposed performance indices, we then conducted a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs may be strong by one measure but weak by another. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to

  15. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2007-01-01

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered…

  16. Gun shows and gun violence: fatally flawed study yields misleading results.

    PubMed

    Wintemute, Garen J; Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A

    2010-10-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled "The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas" outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors' prior research. The study should not be used as evidence in formulating gun policy.

  17. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383
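
    The core idea lends itself to a short sketch (Python/NumPy; the sampling rate and region-of-interest handling are simplified stand-ins for the published pipeline):

      import numpy as np

      def respiratory_rate_bpm(chest_depth_mm, fs_hz):
          """Dominant breathing frequency of the mean chest-ROI depth signal."""
          x = chest_depth_mm - chest_depth_mm.mean()   # remove the static distance
          spec = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(x.size, d=1.0 / fs_hz)
          band = (freqs >= 0.1) & (freqs <= 1.0)       # 6-60 breaths/min
          return 60.0 * freqs[band][np.argmax(spec[band])]

      fs = 30.0                                        # assumed 30 frames/s
      t = np.arange(0, 60, 1 / fs)
      depth = 800 + 3 * np.sin(2 * np.pi * 0.25 * t)   # ~3 mm chest sway at 0.25 Hz
      print(respiratory_rate_bpm(depth, fs))           # ~15 breaths/min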

  18. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question of whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685

  19. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  20. Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening

    Cancer.gov

    Thirteen-year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination.

  1. Comparison of some results of program SHOW with other solar hot water computer programs

    NASA Astrophysics Data System (ADS)

    Young, M. F.; Baughn, J. W.

    Subroutines and the driver program for the simulation code SHOW (solar hot water) for solar thermosyphon systems are discussed, and simulations are compared with predictions by the F-CHART and TRNSYS codes. SHOW has the driver program MAIN, which defines the system control logic for choosing the appropriate system subroutine for analysis. Ten subroutines are described, which account for the solar system physical parameters, the weather data, the manufacturer-supplied system specifications, mass flow rates, pumped systems, total transformed radiation, load use profiles, stratification in storage, an electric water heater, and economic analyses. The three programs are employed to analyze a thermosyphon installation in Sacramento with two storage tanks. TRNSYS and SHOW agreed with each other and were lower than F-CHART for annual predictions, although significantly more computer time was necessary to make TRNSYS converge.

  2. Algorithms for personalized therapy of type 2 diabetes: results of a web-based international survey

    PubMed Central

    Gallo, Marco; Mannucci, Edoardo; De Cosmo, Salvatore; Gentile, Sandro; Candido, Riccardo; De Micheli, Alberto; Di Benedetto, Antonino; Esposito, Katherine; Genovese, Stefano; Medea, Gerardo; Ceriello, Antonio

    2015-01-01

    Objective In recent years increasing interest in the issue of treatment personalization for type 2 diabetes (T2DM) has emerged. This international web-based survey aimed to evaluate opinions of physicians about tailored therapeutic algorithms developed by the Italian Association of Diabetologists (AMD) and available online, and to get suggestions for future developments. Another aim of this initiative was to assess whether the online advertising and the survey would have increased the global visibility of the AMD algorithms. Research design and methods The web-based survey, which comprised five questions, has been available from the homepage of the web-version of the journal Diabetes Care throughout the month of December 2013, and on the AMD website between December 2013 and September 2014. Participation was totally free and responders were anonymous. Results Overall, 452 physicians (M=58.4%) participated in the survey. Diabetologists accounted for 76.8% of responders. The results of the survey show wide agreement (>90%) by participants on the utility of the algorithms proposed, even if they do not cover all possible needs of patients with T2DM for a personalized therapeutic approach. In the online survey period and in the months after its conclusion, a considerable and durable increase in the number of unique users who visited the websites was registered, compared to the period preceding the survey. Conclusions Patients with T2DM are heterogeneous, and there is interest toward accessible and easy to use personalized therapeutic algorithms. Responders' opinions probably reflect the peculiar organization of diabetes care in each country. PMID:26301097

  3. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement for global observational data of ET can be satisfied neither by our sparse global in-situ networks nor by the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal
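
    As one concrete example of the physics-based family named above, the Priestley and Taylor formulation reduces to a few lines (Python; inputs illustrative, with the FAO-56 form of the saturation-slope curve):

      import math

      def priestley_taylor_le(rn, g, t_air_c, alpha=1.26, gamma=0.066):
          """Latent heat flux (W m-2): alpha * Delta/(Delta+gamma) * (Rn - G);
          gamma is the psychrometric constant in kPa K-1."""
          es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))  # kPa
          delta = 4098.0 * es / (t_air_c + 237.3) ** 2                 # kPa K-1
          return alpha * delta / (delta + gamma) * (rn - g)

      print(round(priestley_taylor_le(rn=400.0, g=40.0, t_air_c=25.0), 1))  # ~336 W m-2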

  4. Data for behavioral results and brain regions showing a time effect during pair-association retrieval.

    PubMed

    Jimura, Koji; Hirose, Satoshi; Wada, Hiroyuki; Yoshizawa, Yasunori; Imai, Yoshio; Akahane, Masaaki; Machida, Toru; Shirouzu, Ichiro; Koike, Yasuharu; Konishi, Seiki

    2016-09-01

    The current data article provides behavioral and neuroimaging data for the research article "Relatedness-dependent rapid development of brain activity in anterior temporal cortex during pair-association retrieval" (Jimura et al., 2016) [1]. Behavioral performance is provided in a table; Fig. 2 of the article is based on this table. Brain regions showing a time effect are provided in a second table; a statistical activation map for the time effect is shown in Fig. 3C of the article. PMID:27508239

  5. Stem cells show promising results for lymphoedema treatment--a literature review.

    PubMed

    Toyserkani, Navid Mohamadpour; Christensen, Marlene Louise; Sheikh, Søren Paludan; Sørensen, Jens Ahm

    2015-04-01

    Lymphoedema is a debilitating condition, manifesting in excess lymphatic fluid and swelling of subcutaneous tissues. Lymphoedema is as yet an incurable condition, and current treatment modalities are not satisfactory. The capacity of mesenchymal stem cells to promote angiogenesis, secrete growth factors, regulate the inflammatory process, and differentiate into multiple cell types makes them a potentially ideal therapy for lymphoedema. Adipose tissue is the richest and most accessible source of mesenchymal stem cells, and they can be harvested, isolated, and used for therapy in a single-stage procedure as an autologous treatment. The aim of this paper was to review all studies using mesenchymal stem cells for lymphoedema treatment with a special focus on the potential use of adipose-derived stem cells. A systematic search was performed and five preclinical and two clinical studies were found. Different stem cell sources and lymphoedema models were used in the described studies. Most studies showed a decrease in lymphoedema and an increased lymphangiogenesis when treated with stem cells, and this treatment modality has so far shown great potential. The present studies are, however, subject to bias, and more preclinical studies and large-scale high quality clinical trials are needed to show if this emerging therapy can satisfy expectations.

  6. Aortic emboli show surprising size dependent predilection for cerebral arteries: Results from computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Carr, Ian; Schwartz, Robert; Shadden, Shawn

    2012-11-01

    Cardiac emboli can have devastating consequences if they enter the cerebral circulation, and are the most common cause of embolic stroke. Little is known about relationships of embolic origin/density/size to cerebral events, as these relationships are difficult to observe. To better understand stroke risk from cardiac and aortic emboli, we developed a computational model to track emboli from the heart to the brain. Patient-specific models of the human aorta and arteries to the brain were derived from CT angiography from 10 MHIF patients. Blood flow was modeled by the Navier-Stokes equations using pulsatile inflow at the aortic valve, and physiologic Windkessel models at the outlets. Particulate was injected at the aortic valve and tracked using modified Maxey-Riley equations with a wall collision model. Results demonstrate aortic emboli that entered the cerebral circulation through the carotid or vertebral arteries were localized to specific locations of the proximal aorta. The percentage of released particles embolic to the brain markedly increased with particle size from 0 to ~1-1.5 mm in all patients. Larger particulate became less likely to traverse the cerebral vessels. These findings are consistent with sparse literature based on transesophageal echo measurements. This work was supported in part by the National Science Foundation, award number 1157041.

  7. Animation shows promise in initiating timely cardiopulmonary resuscitation: results of a pilot study.

    PubMed

    Attin, Mina; Winslow, Katheryn; Smith, Tyler

    2014-04-01

    Delayed responses during cardiac arrest are common. Timely interventions during cardiac arrest have a direct impact on patient survival. Integration of technology in nursing education is crucial to enhance teaching effectiveness. The goal of this study was to investigate the effect of animation on nursing students' response time to cardiac arrest, including initiation of timely chest compression. Nursing students were randomized into experimental and control groups prior to practicing in a high-fidelity simulation laboratory. The experimental group was educated, by discussion and animation, about the importance of starting cardiopulmonary resuscitation upon recognizing an unresponsive patient. Afterward, a discussion session allowed students in the experimental group to gain more in-depth knowledge about the most recent changes in the cardiac resuscitation guidelines from the American Heart Association. A linear mixed model was run to investigate differences in time of response between the experimental and control groups while controlling for differences in those with additional degrees, prior code experience, and basic life support certification. The experimental group had a faster response time compared with the control group and initiated timely cardiopulmonary resuscitation upon recognition of deteriorating conditions (P < .0001). The results demonstrated the efficacy of combined teaching modalities for timely cardiopulmonary resuscitation. Providing opportunities for repetitious practice when a patient's condition is deteriorating is crucial for teaching safe practice.

  8. Potential for false positive HIV test results with the serial rapid HIV testing algorithm

    PubMed Central

    2012-01-01

    Background Rapid HIV tests provide same-day results and are widely used in HIV testing programs in areas with limited personnel and laboratory infrastructure. The Uganda Ministry of Health currently recommends the serial rapid testing algorithm with Determine, STAT-PAK, and Uni-Gold for diagnosis of HIV infection. Using this algorithm, individuals who test positive on Determine, negative to STAT-PAK and positive to Uni-Gold are reported as HIV positive. We conducted further testing on this subgroup of samples using qualitative DNA PCR to assess the potential for false positive tests in this situation. Results Of the 3388 individuals who were tested, 984 were HIV positive on two consecutive tests, and 29 were considered positive by a tiebreaker (positive on Determine, negative on STAT-PAK, and positive on Uni-Gold). However, when the 29 samples were further tested using qualitative DNA PCR, 14 (48.2%) were HIV negative. Conclusion Although this study was not primarily designed to assess the validity of rapid HIV tests and thus only a subset of the samples were retested, the findings show a potential for false positive HIV results in the subset of individuals who test positive when a tiebreaker test is used in serial testing. These findings highlight a need for confirmatory testing for this category of individuals. PMID:22429706
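
    The serial decision logic under evaluation can be written down directly (Python sketch of the algorithm as described in the abstract; test results are booleans supplied by the caller):

      def serial_hiv_status(determine, stat_pak=None, uni_gold=None):
          if not determine:
              return "negative"
          if stat_pak:                   # positive on two consecutive tests
              return "positive"
          # Determine-positive / STAT-PAK-negative: the tiebreaker decides
          return "positive (tiebreaker)" if uni_gold else "negative"

      print(serial_hiv_status(True, stat_pak=False, uni_gold=True))
      # -> 'positive (tiebreaker)': the subgroup the study retested by DNA PCR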

  9. Corroboration of mechanoregulatory algorithms for tissue differentiation during fracture healing: Comparison with in vivo results.

    PubMed

    Isaksson, Hanna; van Donkelaar, Corrinus C; Huiskes, Rik; Ito, Keita

    2006-05-01

    Several mechanoregulation algorithms proposed to control tissue differentiation during bone healing have been shown to accurately predict temporal and spatial tissue distributions during normal fracture healing. Because these algorithms differ in nature and in their biophysical parameters, the question arises as to which best reflects the actual mechanobiological processes. The aim of this study was to resolve this issue by corroborating the mechanoregulatory algorithms with more extensive in vivo bone healing data from animal experiments. A poroelastic three-dimensional finite element model of an ovine tibia with a 2.4 mm gap and external callus was used to simulate the course of tissue differentiation during fracture healing in an adaptive model. The mechanical conditions applied were similar to those used experimentally, with axial compression or torsional rotation as two distinct cases. Histological data at 4 and 8 weeks, and weekly radiographs, were used for comparison. By applying new mechanical conditions, torsional rotation, the predictions of the algorithms were distinguished successfully. In torsion, the algorithms regulated by strain and hydrostatic pressure failed to predict healing and bone formation as seen in experimental data. The algorithm regulated by deviatoric strain and fluid velocity predicted bridging and healing in torsion, as observed in vivo. The predictions of the algorithm regulated by deviatoric strain alone did not agree with in vivo data. None of the algorithms predicted patterns of healing entirely similar to those observed experimentally for both loading modes. However, the patterns predicted by the algorithm based on deviatoric strain and fluid velocity were closest to experimental results. It was the only algorithm able to predict healing with torsional loading as seen in vivo.
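
    The "deviatoric strain and fluid velocity" rule corroborated here is of the Prendergast type, sketched below (Python; the constants follow the commonly cited form, a = 3.75% strain and b = 3 um/s with phenotype boundaries at S = 1 and S = 3, and should be checked against the original papers before use):

      def tissue_phenotype(gamma, v_um_s, a=0.0375, b=3.0):
          """gamma: octahedral shear (deviatoric) strain, dimensionless;
          v_um_s: interstitial fluid velocity, micrometres per second."""
          s = gamma / a + v_um_s / b      # combined biophysical stimulus
          if s > 3.0:
              return "fibrous tissue"
          if s > 1.0:
              return "cartilage"
          return "bone"

      print(tissue_phenotype(gamma=0.09, v_um_s=6.0))   # high stimulus -> fibrous tissue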

  10. Algorithmic and complexity results for decompositions of biological networks into monotone subsystems.

    PubMed

    DasGupta, Bhaskar; Enciso, German Andres; Sontag, Eduardo; Zhang, Yi

    2007-01-01

    A useful approach to the mathematical analysis of large-scale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated with finding decompositions which are optimal in an appropriate sense. In graph-theoretic language, the problems can be recast in terms of maximal sign-consistent subgraphs. The theoretical results include polynomial-time approximation algorithms as well as constant-ratio inapproximability results. One of the algorithms, which has a worst-case guarantee of 87.9% from optimality, is based on the semidefinite programming relaxation approach of Goemans-Williamson [Goemans, M., Williamson, D., 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42 (6), 1115-1145]. The algorithm was implemented and tested on a Drosophila segmentation network and an Epidermal Growth Factor Receptor pathway model, and it was found to perform close to optimally.
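
    The combinatorial core, maximal sign-consistent subgraphs, can be stated as a brute-force sketch that only works for tiny networks (Python); the semidefinite-programming algorithm in the paper approximates the same objective at scale:

      from itertools import product

      def max_sign_consistent(n_nodes, signed_edges):
          """Label each node +1/-1; an edge (i, j, s) is consistent when
          s equals the product of its endpoint labels. The inconsistent
          edges are those removed to obtain a monotone decomposition."""
          best, best_labels = -1, None
          for labels in product((1, -1), repeat=n_nodes):
              ok = sum(1 for i, j, s in signed_edges if labels[i] * labels[j] == s)
              if ok > best:
                  best, best_labels = ok, labels
          return best, best_labels

      edges = [(0, 1, 1), (1, 2, -1), (0, 2, 1), (2, 3, -1)]   # (i, j, sign)
      print(max_sign_consistent(4, edges))   # -> (3, (1, 1, 1, -1))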

  11. Sensitivity of Structural Results to Initial Configurations and Quench Algorithms of Lead Silicate Glass

    SciTech Connect

    Hemesath, Eric R.; Corrales, Louis R.

    2005-06-15

    The sensitivity of the resulting structures to starting configurations and quench algorithms was characterized using molecular dynamics (MD) simulations. The classical potential model introduced by Damodaran, Rao, and Rao (DRR), Phys. Chem. Glasses 31, 212 (1990), for lead silicate glass was used. Glasses were prepared using five distinct initial configurations and four glass forming algorithms. In previous MD work on bulk lead silicate glasses, the ability of this potential model to provide good structural results was established by comparison to experimental results. Here, the sensitivity of the results to the simulation methodology and the persistence of clustering are determined, with attention to details of molecular structure.

  12. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.
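
    A generic parity-space sketch conveys the detection/isolation idea (Python/NumPy; an arbitrary six-sensor geometry stands in for the skewed two-degree-of-freedom array, and this is not the flight algorithm):

      import numpy as np
      from scipy.linalg import null_space

      H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [0.577, 0.577, 0.577], [0.707, -0.707, 0], [0, 0.707, -0.707]])
      V = null_space(H.T).T                 # parity matrix: V @ H == 0

      def detect_isolate(y, threshold=0.05):
          p = V @ y                          # parity vector, blind to the true rate
          if p @ p < threshold:
              return None                    # no failure declared
          scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
          return int(np.argmax(scores))      # most likely failed sensor index

      omega = np.array([0.1, -0.2, 0.05])    # true body rate
      y = H @ omega
      y[4] += 0.5                            # inject a bias failure on sensor 4
      print(detect_isolate(y))               # -> 4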

  13. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  14. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  15. A class of collinear scaling algorithms for bound-constrained optimization: Derivation and computational results

    NASA Astrophysics Data System (ADS)

    Ariyawansa, K. A.; Tabor, Wayne L.

    2009-08-01

    A family of algorithms for the approximate solution of the bound-constrained minimization problem is described. These algorithms employ the standard barrier method, with the inner iteration based on trust region methods. Local models are conic functions rather than the usual quadratic functions, and are required to match first and second derivatives of the barrier function at the current iterate. The various members of the family are distinguished by the choice of a vector-valued parameter, which is the zero vector in the degenerate case that quadratic local models are used. Computational results are used to compare the efficiency of various members of the family on a selection of test functions.

  16. Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers

    SciTech Connect

    Seifert, Carolyn E.; He, Zhong

    2005-10-01

    For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.
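
    For two observed events, the kinematic consistency test at the heart of such sequencing is short (Python; energies in keV, and the geometry and cross-section criteria that resolve remaining ambiguities are omitted):

      import math

      ME_C2 = 511.0   # electron rest energy, keV

      def allowed_orderings(e1, e2):
          """Orderings of two deposits consistent with Compton kinematics:
          an ordering is physically possible only if |cos(theta)| <= 1."""
          e_total, out = e1 + e2, []
          for first in (e1, e2):
              c = 1.0 + ME_C2 * (1.0 / e_total - 1.0 / (e_total - first))
              if -1.0 <= c <= 1.0:
                  out.append((first, e_total - first, math.degrees(math.acos(c))))
          return out   # (first deposit, second deposit, scatter angle in deg)

      print(allowed_orderings(150.0, 512.0))
      # -> only one ordering of this 662 keV photon survives (~39 deg scatter)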

  17. OPTIMAL DNA TIER FOR THE IRT/DNA ALGORITHM DETERMINED BY CFTR MUTATION RESULTS OVER 14 YEARS OF NEWBORN SCREENING

    PubMed Central

    Baker, Mei W.; Groose, Molly; Hoffman, Gary; Rock, Michael; Levy, Hara; Farrell, Philip M.

    2011-01-01

    Background There has been great variation and uncertainty about how many and which CFTR mutations to include in cystic fibrosis (CF) newborn screening algorithms, and very little research on this topic using large populations of newborns. Methods We reviewed Wisconsin screening results for 1994–2008 to identify an ideal panel. Results Upon analyzing approximately 1 million screening results, we found it optimal to use a 23 CFTR mutation panel as a second tier when an immunoreactive trypsinogen (IRT)/DNA algorithm was applied for CF screening. This panel in association with a 96th percentile IRT cutoff gave a sensitivity of 97.3%, whereas restricting the DNA tier to F508del was associated with a sensitivity of 90% (P<.0001). Conclusions Although CFTR panel selection has been challenging, our data show that a 23 mutation method optimizes sensitivity and is practically advantageous. The IRT cutoff value, however, is actually more critical than DNA in determining CF newborn screening sensitivity. PMID:21388895
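
    The two-tier decision rule under study reduces to a few lines (Python sketch; cutoff values and panel handling are simplified stand-ins for the screening protocol):

      def cf_screen(irt_ng_ml, irt_96th_percentile, mutations_found):
          """mutations_found: CFTR panel mutations detected (0, 1, or 2)."""
          if irt_ng_ml < irt_96th_percentile:
              return "screen negative"                 # DNA tier never run
          if mutations_found == 0:
              return "screen negative (high IRT, no panel mutation)"
          return "screen positive: refer for sweat chloride testing"

      print(cf_screen(irt_ng_ml=75.0, irt_96th_percentile=62.0, mutations_found=1))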

  18. A few results for using genetic algorithms in the design of electrical machines

    SciTech Connect

    Wurtz, F.; Richomme, M.; Bigeon, J.; Sabonnadiere, J.C.

    1997-03-01

    Genetic algorithms (GAs) seem attractive for the design of electrical machines, but their main difficulty is finding a configuration that makes them efficient. This paper presents a criterion and a methodology the authors devised for finding efficient configurations. The first configuration they obtained is then detailed, and results based on this configuration are presented with an example of a design problem.

  19. Performance analysis results of a battery fuel gauge algorithm at multiple temperatures

    NASA Astrophysics Data System (ADS)

    Balasingam, B.; Avvari, G. V.; Pattipati, K. R.; Bar-Shalom, Y.

    2015-01-01

    Evaluating a battery fuel gauge (BFG) algorithm is a challenging problem because there are no reliable mathematical models to represent the complex features of a Li-ion battery, such as hysteresis and relaxation effects, temperature effects on parameters, aging, power fade (PF), and capacity fade (CF) with respect to the chemical composition of the battery. The existing literature is largely focused on developing different BFG strategies, and BFG validation has received little attention. In this paper, using hardware-in-the-loop (HIL) data collected from three Li-ion batteries at nine different temperatures ranging from -20 °C to 40 °C, we demonstrate detailed validation results for a BFG algorithm. The validation is based on three different metrics; we provide implementation details for these metrics by proposing three BFG validation load profiles that satisfy varying levels of user requirements.

  20. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
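
    The nesting the report describes can be caricatured in a double loop (Python; plain Monte Carlo stands in for the stochastic expansions, and a grid search for the interval optimization):

      import random

      def response(x, e):
          return (x + e) ** 2                  # toy model: aleatory x, epistemic e

      def mean_response(e, n=20000):
          rng = random.Random(0)               # common random numbers across e
          return sum(response(rng.gauss(0.0, 1.0), e) for _ in range(n)) / n

      # epistemic parameter known only to lie in the interval [-0.5, 1.0]
      grid = [-0.5 + i * 0.05 for i in range(31)]
      stats = [mean_response(e) for e in grid]
      print("bounds on the mean: [%.2f, %.2f]" % (min(stats), max(stats)))
      # analytically the mean is 1 + e^2, so the bounds approach [1.0, 2.0]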

  1. Photometric redshifts with the quasi Newton algorithm (MLPQNA): Results in the PHAT1 contest

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Brescia, M.; Longo, G.; Mercurio, A.

    2012-10-01

    Context. Since the advent of modern multiband digital sky surveys, photometric redshifts (photo-z's) have become relevant if not crucial to many fields of observational cosmology, such as the characterization of cosmic structures and weak and strong lensing. Aims: We describe an application to an astrophysical context, namely the evaluation of photometric redshifts, of MLPQNA, which is a machine-learning method based on the quasi Newton algorithm. Methods: Theoretical methods for photo-z evaluation are based on the interpolation of a priori knowledge (spectroscopic redshifts or SED templates), and they represent an ideal comparison ground for neural network-based methods. The MultiLayer Perceptron with quasi Newton learning rule (MLPQNA) described here is an effective computing implementation of neural networks exploited for the first time to solve regression problems in the astrophysical context. It is offered to the community through the DAMEWARE (DAta Mining & Exploration Web Application REsource) infrastructure. Results: The PHAT contest (Hildebrandt et al. 2010, A&A, 523, A31) provides a standard dataset to test old and new methods for photometric redshift evaluation and with a set of statistical indicators that allow a straightforward comparison among different methods. The MLPQNA model has been applied on the whole PHAT1 dataset of 1984 objects after an optimization of the model performed with the 515 available spectroscopic redshifts as training set. When applied to the PHAT1 dataset, MLPQNA obtains the best bias accuracy (0.0006) and very competitive accuracies in terms of scatter (0.056) and outlier percentage (16.3%), scoring as the second most effective empirical method among those that have so far participated in the contest. MLPQNA shows better generalization capabilities than most other empirical methods especially in the presence of underpopulated regions of the knowledge base.
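
    A generic stand-in for the idea, not the DAMEWARE MLPQNA implementation: a multilayer perceptron regressor trained with a quasi-Newton (L-BFGS) solver on synthetic photometry (Python/scikit-learn):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      mags = rng.uniform(18, 25, size=(2000, 5))        # 5 toy photometric bands
      z = 0.35 + 0.05 * (mags[:, 0] - mags[:, 4]) + 0.02 * rng.standard_normal(2000)

      model = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                           max_iter=2000, random_state=0)
      model.fit(mags[:1500], z[:1500])                  # 'spectroscopic' training set
      pred = model.predict(mags[1500:])
      print("scatter:", np.std(pred - z[1500:]).round(3))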

  2. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point in time during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) Jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  3. A new algorithm and results of ionospheric delay correction for satellite-based augmentation system

    NASA Astrophysics Data System (ADS)

    Huang, Z.; Yuan, H.

    Ionospheric delay, resulting from radio signals traveling through the ionosphere, is the largest source of errors for single-frequency users of the Global Positioning System (GPS). In order to improve users' position accuracy, satellite-based augmentation systems have been developed since the nineties to provide accurate calibration. A famous one is the Wide Area Augmentation System (WAAS), which is aimed at efficient navigation over the conterminous United States and has been operating successfully so far. The main idea of the ionospheric correction algorithm for WAAS is to establish an ionospheric grid model; that is, the ionosphere is discretized into a set of regularly-spaced intervals in latitude and longitude at an altitude of 350 km above the earth's surface. Users correct their pseudoranges by interpolating estimates of vertical ionospheric delay modeled at the ionospheric grid points. The Chinese crust deformation monitoring network has been established since the eighties and is now in good operation with 25 permanent GPS stations, which makes it feasible to construct a similar satellite-based augmentation system (SBAS) in China. In the western region of China, however, the distribution of stations is relatively sparse and does not ensure sufficient data. If the ionospheric grid correction algorithm is followed, some grid points cannot obtain their estimates and lose availability. Consequently, the ionospheric correction for users situated in that region cannot be estimated, which constitutes a serious threat to navigation users. In this paper, we presented a new algorithm that
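
    The user-side computation of such a grid model can be sketched briefly (Python; the grid values, cell coordinates and thin-shell height are illustrative):

      import math

      RE, H_IONO = 6378.0, 350.0      # km; shell height used by the grid model

      def slant_delay(x, y, corner_delays, elevation_rad):
          """x, y: fractional pierce-point position (0..1) inside the cell;
          corner_delays: vertical delays (m) at (SW, SE, NW, NE)."""
          sw, se, nw, ne = corner_delays
          vertical = (sw * (1 - x) * (1 - y) + se * x * (1 - y)
                      + nw * (1 - x) * y + ne * x * y)       # bilinear weights
          cos_el = math.cos(elevation_rad)
          obliquity = 1.0 / math.sqrt(1.0 - (RE * cos_el / (RE + H_IONO)) ** 2)
          return vertical * obliquity                        # slant-path delay

      print(round(slant_delay(0.3, 0.6, (2.1, 2.4, 2.0, 2.6), math.radians(35)), 2))
      # -> 3.47 m for this invented cell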

  4. Surface reflectance retrieval from satellite and aircraft sensors - Results of sensors and algorithm comparisons during FIFE

    NASA Technical Reports Server (NTRS)

    Markham, B. L.; Halthore, R. N.; Goetz, S. J.

    1992-01-01

    Visible to shortwave infrared radiometric data collected by a number of remote sensing instruments on aircraft and satellite platforms were compared over common areas in the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site on August 4, 1989, to assess their radiometric consistency and the adequacy of atmospheric correction algorithms. The instruments in the study included the Landsat 5 Thematic Mapper (TM), the SPOT 1 high-resolution visible (HRV) 1 sensor, the NS001 Thematic Mapper simulator, and the modular multispectral radiometers (MMRs). Atmospheric correction routines analyzed were an algorithm developed for FIFE, LOWTRAN 7, and 5S. A comparison between corresponding bands of the SPOT 1 HRV 1 and the Landsat 5 TM sensors indicated that the two instruments were radiometrically consistent to within about 5 percent. Retrieved surface reflectance factors using the FIFE algorithm over one site under clear atmospheric conditions indicated a capability to determine near-nadir surface reflectance factors to within about 0.01 at a reflectance of 0.06 in the visible (0.4-0.7 microns) and about 0.30 in the near infrared (0.7-1.2 microns) for all but the NS001 sensor. All three atmospheric correction procedures produced absolute reflectances to within 0.005 in the visible and near infrared. In the shortwave infrared (1.2-2.5 microns) region the three algorithms differed in the retrieved surface reflectances primarily owing to differences in predicted gaseous absorption. Although uncertainties in the measured surface reflectance in the shortwave infrared precluded definitive results, the 5S code appeared to predict gaseous transmission marginally more accurately than LOWTRAN 7.
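
    In the spirit of these correction schemes (though not the FIFE, LOWTRAN 7, or 5S code), a simplified retrieval removes the path radiance and normalizes by the transmitted solar irradiance (Python; all inputs illustrative):

      import math

      def surface_reflectance(l_sensor, l_path, esun, sza_deg, t_down, t_up, d_au=1.0):
          """l_sensor, l_path: radiances (W m-2 sr-1 um-1); esun: exoatmospheric
          solar irradiance (W m-2 um-1); sza_deg: solar zenith angle."""
          mu_s = math.cos(math.radians(sza_deg))
          return math.pi * (l_sensor - l_path) * d_au ** 2 / (esun * mu_s * t_down * t_up)

      print(round(surface_reflectance(l_sensor=36.0, l_path=8.0, esun=1850.0,
                                      sza_deg=30.0, t_down=0.85, t_up=0.90), 3))
      # -> 0.072, a plausible visible-band grassland reflectance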

  5. Surface reflectance retrieval from satellite and aircraft sensors: Results of sensor and algorithm comparisons during FIFE

    NASA Astrophysics Data System (ADS)

    Markham, B. L.; Halthore, R. N.; Goetz, S. J.

    1992-11-01

    Visible to shortwave infrared radiometric data collected by a number of remote sensing instruments on aircraft and satellite platforms were compared over common areas in the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site on August 4, 1989, to assess their radiometric consistency and the adequacy of atmospheric correction algorithms. The instruments in the study included the Landsat 5 thematic mapper (TM), the SPOT 1 high-resolution visible (HRV) 1 sensor, the NS001 thematic mapper simulator, and the modular multispectral radiometers (MMRs). Atmospheric correction routines analyzed were an algorithm developed for FIFE, LOWTRAN 7, and 5S. A comparison between corresponding bands of the SPOT 1 HRV 1 and the Landsat 5 TM sensors indicated that the two instruments were radiometrically consistent to within about 5%. Retrieved surface reflectance factors using the FIFE algorithm over one site under clear atmospheric conditions indicated a capability to determine near-nadir surface reflectance factors to within about 0.01 at a reflectance of 0.06 in the visible (0.4-0.7 μm) and about 0.30 in the near infrared (0.7-1.2 μm) for all but the NS001 sensor. All three atmospheric correction procedures produced absolute reflectances to within 0.005 in the visible and near infrared. In the shortwave infrared (1.2-2.5 μm) region the three algorithms differed in the retrieved surface reflectances primarily owing to differences in predicted gaseous absorption. Although uncertainties in the measured surface reflectance in the shortwave infrared precluded definitive results, the 5S code appeared to predict gaseous transmission marginally more accurately than LOWTRAN 7.

  6. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared the results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plan is to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  7. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared the results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plan is to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  8. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves obtained by processing the multilook data from the high-resolution Veridian X-band SAR. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
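
    For context, a hedged sketch of how a ROC curve is computed from detector confidence scores; the scores below are synthetic stand-ins, since the ED/PLFA outputs and ground truth are not reproduced in this record:

        import numpy as np

        rng = np.random.default_rng(0)
        s_mine = rng.normal(1.0, 1.0, 500)        # detector scores on mines
        s_clut = rng.normal(0.0, 1.0, 5000)       # detector scores on clutter

        scores = np.concatenate([s_mine, s_clut])
        labels = np.concatenate([np.ones(500), np.zeros(5000)])

        order = np.argsort(-scores)               # sweep the threshold high to low
        tp = np.cumsum(labels[order])             # true positives at each cut
        fp = np.cumsum(1 - labels[order])         # false positives at each cut
        pd = tp / labels.sum()                    # probability of detection
        pfa = fp / (1 - labels).sum()             # false-alarm rate

        print(f"AUC = {np.trapz(pd, pfa):.3f}")   # area under the ROC curve

    The abstract's sample-size point shows up directly here: with few target samples the staircase `pd` is coarse, and conclusions drawn from the curve are dominated by chance.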

  9. Horizon Acquisition for Attitude Determination Using Image Processing Algorithms- Results of HORACE on REXUS 16

    NASA Astrophysics Data System (ADS)

    Barf, J.; Rapp, T.; Bergmann, M.; Geiger, S.; Scharf, A.; Wolz, F.

    2015-09-01

    The aim of the Horizon Acquisition Experiment (HORACE) was to prove a new concept for a two-axis horizon sensor using algorithms processing ordinary images, which is also operable at the high spinning rates occurring during emergencies. The difficulty of coping with image distortions, which is avoided by conventional horizon sensors, was introduced on purpose, as we envision a system capable of using any optical data. During the flight on REXUS 16, which provided a suitable platform similar to the future application scenario, a malfunction of the payload cameras caused severe degradation of the collected scientific data. Nevertheless, with the aid of simulations we could show that the concept is accurate (±0.6°), fast (~100 ms/frame) and robust enough for coarse attitude determination during emergencies and also applicable to small satellites. Besides, technical knowledge regarding the design of REXUS experiments, including the detection of interferences between SATA and GPS, was gained.
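
    A toy version of the underlying image-processing idea, assuming a bright Earth against dark space and simple per-column thresholding; the real HORACE system additionally copes with image distortion and high spin rates:

        import numpy as np

        def horizon_attitude(img):
            """Fit a line to the space/Earth boundary; return the roll angle
            (deg) and the boundary's mean row as a crude pitch proxy."""
            binary = img > img.mean()             # True = Earth, False = space
            cols = np.arange(img.shape[1])
            rows = binary.argmax(axis=0)          # first Earth pixel per column
            slope, _ = np.polyfit(cols, rows, 1)
            return np.degrees(np.arctan(slope)), rows.mean()

        # synthetic 200x300 frame with a horizon sloping at atan(0.1) ~ 5.7 deg
        r, c = np.mgrid[0:200, 0:300]
        img = (r > 80 + 0.1 * c).astype(float)    # Earth below a tilted line
        print(horizon_attitude(img))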

  10. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
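
    A hedged sketch of the model-fitting idea: estimate the angle of attack and dynamic pressure by least-squares matching of measured port pressures to a modeled pressure distribution. The port layout, the modified-Newtonian pressure model, and all numbers are illustrative, not the MEADS flight implementation:

        import numpy as np
        from scipy.optimize import least_squares

        theta = np.radians([-40, -20, 0, 20, 40, 60, 80])   # port angles from nose

        def model(params, theta):
            alpha, qbar = params                   # angle of attack (rad), Pa
            cp = 2.0 * np.cos(theta - alpha) ** 2  # modified-Newtonian Cp
            return qbar * cp                       # modeled gauge pressure per port

        truth = (np.radians(11.0), 850.0)
        noise = np.random.default_rng(1).normal(0.0, 2.0, theta.size)
        p_meas = model(truth, theta) + noise       # "measured" port pressures

        fit = least_squares(lambda x: model(x, theta) - p_meas, x0=[0.0, 500.0])
        alpha_hat, qbar_hat = fit.x
        print(np.degrees(alpha_hat), qbar_hat)     # recovers ~11 deg and ~850 Pa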

  11. MUlti-Dimensional Spline-Based Estimator (MUSE) for motion estimation: algorithm development and initial results.

    PubMed

    Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F

    2008-12-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 x 10(-4) samples in range and 2.2 x 10(-3) samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 x 10(-3) samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE is
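
    A one-dimensional analogue of the spline-based estimation idea, assuming a cubic-spline reference and a bounded scalar search; MUSE itself generalizes this to multi-dimensional displacement and strain fields:

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import minimize_scalar

        n = np.arange(64)
        ref = np.sin(2 * np.pi * 0.08 * n) * np.exp(-((n - 32) / 12.0) ** 2)
        true_delay = 0.37                              # samples (sub-sample shift)
        delayed = np.interp(n - true_delay, n, ref)    # shifted copy of the signal

        spline = CubicSpline(n, ref)                   # continuous reference
        cost = lambda d: np.sum((spline(n[4:-4] - d) - delayed[4:-4]) ** 2)
        est = minimize_scalar(cost, bounds=(-2, 2), method="bounded").x
        print(f"estimated delay: {est:.4f} samples")   # close to 0.37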

  12. Genomic and Enzymatic Results Show Bacillus cellulosilyticus Uses a Novel Set of LPXTA Carbohydrases to Hydrolyze Polysaccharides

    PubMed Central

    Mead, David; Drinkwater, Colleen; Brumm, Phillip J.

    2013-01-01

    Background Alkaliphilic Bacillus species are intrinsically interesting due to the bioenergetic problems posed by growth at high pH and high salt. Three alkaline cellulases have been cloned, sequenced and expressed from Bacillus cellulosilyticus N-4 (Bcell) making it an excellent target for genomic sequencing and mining of biomass-degrading enzymes. Methodology/Principal Findings The genome of Bcell is a single chromosome of 4.7 Mb with no plasmids present and three large phage insertions. The most unusual feature of the genome is the presence of 23 LPXTA membrane anchor proteins; 17 of these are annotated as involved in polysaccharide degradation. These two values are significantly higher than seen in any other Bacillus species. This high number of membrane anchor proteins is seen only in pathogenic Gram-positive organisms such as Listeria monocytogenes or Staphylococcus aureus. Bcell also possesses four sortase D subfamily 4 enzymes that incorporate LPXTA-bearing proteins into the cell wall; three of these are closely related to each other and unique to Bcell. Cell fractionation and enzymatic assay of Bcell cultures show that the majority of polysaccharide degradation is associated with the cell wall LPXTA-enzymes, an unusual feature in Gram-positive aerobes. Genomic analysis and growth studies both strongly argue against Bcell being a truly cellulolytic organism, in spite of its name. Preliminary results suggest that fungal mycelia may be the natural substrate for this organism. Conclusions/Significance Bacillus cellulosilyticus N-4, in spite of its name, does not possess any of the genes necessary for crystalline cellulose degradation, demonstrating the risk of classifying microorganisms without the benefit of genomic analysis. Bcell is the first Gram-positive aerobic organism shown to use predominantly cell-bound, non-cellulosomal enzymes for polysaccharide degradation. The LPXTA-sortase system utilized by Bcell may have applications both in anchoring

  13. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communications CyTerra Corporation, the University of Florida, Duke University and the University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and was tested for performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  14. One-year results of an algorithmic approach to managing failed back surgery syndrome

    PubMed Central

    Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia

    2014-01-01

    BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573

  15. Different quantification algorithms may lead to different results: a comparison using proton MRS lipid signals.

    PubMed

    Mosconi, E; Sima, D M; Osorio Garcia, M I; Fontanella, M; Fiorini, S; Van Huffel, S; Marzola, P

    2014-04-01

    Proton magnetic resonance spectroscopy (MRS) is a sensitive method for investigating the biochemical compounds in a tissue. The interpretation of the data relies on the quantification algorithms applied to MR spectra. Each of these algorithms has certain underlying assumptions and may allow one to incorporate prior knowledge, which could influence the quality of the fit. The most commonly considered types of prior knowledge include the line-shape model (Lorentzian, Gaussian, Voigt), knowledge of the resonating frequencies, modeling of the baseline, constraints on the damping factors and phase, etc. In this article, we study whether the statistical outcome of a biological investigation can be influenced by the quantification method used. We chose to study lipid signals because of their emerging role in the investigation of metabolic disorders. Lipid spectra, in particular, are characterized by peaks that are in most cases not Lorentzian, because measurements are often performed in difficult body locations, e.g. in visceral fats close to peristaltic movements in humans or very small areas close to different tissues in animals. This leads to spectra with several peak distortions. Linear combination of Model spectra (LCModel), Advanced Method for Accurate Robust and Efficient Spectral fitting (AMARES), quantitation based on QUantum ESTimation (QUEST), Automated Quantification of Short Echo-time MRS (AQSES)-Lineshape and Integration were applied to simulated spectra, and area under the curve (AUC) values, which are proportional to the quantity of the resonating molecules in the tissue, were compared with true values. A comparison between techniques was also carried out on lipid signals from obese and lean Zucker rats, for which the polyunsaturation value expressed in white adipose tissue should be statistically different, as confirmed by high-resolution NMR measurements (considered the gold standard) on the same animals. LCModel, AQSES-Lineshape, QUEST and Integration
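
    A toy experiment illustrating the paper's central point, that the lineshape assumption changes the recovered area; the blended peak and the two fitting models below are synthetic and are not the authors' implementations of LCModel, AMARES, QUEST or AQSES:

        import numpy as np
        from scipy.optimize import curve_fit

        x = np.linspace(-50, 50, 1001)                 # frequency axis (arbitrary)
        lorentz = lambda x, a, w: a * (w / np.pi) / (x ** 2 + w ** 2)
        gauss = lambda x, a, s: a * np.exp(-x ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

        # "measured" peak: a 50/50 Lorentzian/Gaussian blend with unit total area
        y = 0.5 * lorentz(x, 1.0, 4.0) + 0.5 * gauss(x, 1.0, 4.0)

        (aL, _), _ = curve_fit(lorentz, x, y, p0=[1.0, 3.0])
        (aG, _), _ = curve_fit(gauss, x, y, p0=[1.0, 3.0])
        print(f"area, Lorentzian model: {aL:.3f}; Gaussian model: {aG:.3f}")

    Neither fitted area equals the true value of 1.0, and the two models disagree with each other, which is the same algorithm-dependent bias the paper quantifies on simulated and in vivo lipid spectra.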

  16. A collaborative accountable care model in three practices showed promising early results on costs and quality of care.

    PubMed

    Salmon, Richard B; Sanderson, Mark I; Walters, Barbara A; Kennedy, Karen; Flores, Robert C; Muney, Alan M

    2012-11-01

    Cigna's Collaborative Accountable Care initiative provides financial incentives to physician groups and integrated delivery systems to improve the quality and efficiency of care for patients in commercial open-access benefit plans. Registered nurses who serve as care coordinators employed by participating practices are a central feature of the initiative. They use patient-specific reports and practice performance reports provided by Cigna to improve care coordination, identify and close care gaps, and address other opportunities for quality improvement. We report interim quality and cost results for three geographically and structurally diverse provider practices in Arizona, New Hampshire, and Texas. Although not statistically significant, these early results revealed favorable trends in total medical costs and quality of care, suggesting that a shared-savings accountable care model and collaborative support from the payer can enable practices to take meaningful steps toward full accountability for care quality and efficiency.

  17. Approximation of HRPITS results for SI GaAs by large scale support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Jankowski, Stanisław; Wojdan, Konrad; Szymański, Zbigniew; Kozłowski, Roman

    2006-10-01

    For the first time, large-scale support vector machine algorithms are used to extract defect parameters in semi-insulating (SI) GaAs from high-resolution photoinduced transient spectroscopy experiments. By smart decomposition of the data set, the SVMTorch algorithm made it possible to obtain a good approximation of the analyzed correlation surface with a parsimonious model (i.e., a small number of support vectors). The parameters of deep-level defect centers extracted from the SVM approximation are of good quality compared to the reference data.
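
    A hedged sketch of the idea using scikit-learn's SVR as a stand-in for the large-scale solver used in the paper; the surface and all parameters are illustrative:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, size=(400, 2))          # normalized surface coordinates
        y = np.exp(-4 * (X[:, 0] ** 2 + X[:, 1] ** 2)) + rng.normal(0, 0.01, 400)

        # A wider epsilon-tube trades fit accuracy for a more parsimonious
        # model, i.e., fewer support vectors.
        model = SVR(kernel="rbf", C=10.0, epsilon=0.02).fit(X, y)
        resid = model.predict(X) - y
        print("support vectors:", model.support_.size, "of", len(X))
        print("train RMSE:", float(np.sqrt(np.mean(resid ** 2))))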

  18. Presentation Showing Results of a Hydrogeochemical Investigation of the Standard Mine Vicinity, Upper Elk Creek Basin, Colorado

    USGS Publications Warehouse

    Manning, Andrew H.; Verplanck, Philip L.; Mast, M. Alisa; Wanty, Richard B.

    2008-01-01

    PREFACE This Open-File Report consists of a presentation given in Crested Butte, Colorado on December 13, 2007 to the Standard Mine Advisory Group. The presentation was paired with another presentation given by the Colorado Division of Reclamation, Mining, and Safety on the physical features and geology of the Standard Mine. The presentation in this Open-File Report summarizes the results and conclusions of a hydrogeochemical investigation of the Standard Mine performed by the U.S. Geological Survey (Manning and others, in press). The purpose of the investigation was to aid the U.S. Environmental Protection Agency in evaluating remediation options for the Standard Mine site. Additional details and supporting data related to the information in this presentation can be found in Manning and others (in press).

  19. Deriving rules from activity diary data: A learning algorithm and results of computer experiments

    NASA Astrophysics Data System (ADS)

    Arentze, Theo A.; Hofman, Frank; Timmermans, Harry J. P.

    Activity-based models consider travel as a derived demand from the activities households need to conduct in space and time. Over the last 15 years, computational or rule-based models of activity scheduling have gained increasing interest in time-geography and transportation research. This paper argues that a lack of techniques for deriving rules from empirical data hinders the further development of rule-based systems in this area. To overcome this problem, this paper develops and tests an algorithm for inductively deriving rules from activity-diary data. The decision table formalism is used to exhaustively represent the theoretically possible decision rules that individuals may use in sequencing a given set of activities. Actual activity patterns of individuals are supplied to the system as examples. In an incremental learning process, the system progressively improves on the selection of rules used for reproducing the examples. Computer experiments based on simulated data are performed to fine-tune rule selection and rule value update functions. The results suggest that the system is effective and fairly robust to parameter settings. It is concluded, therefore, that the proposed approach opens up possibilities to derive empirically tested rule-based models of activity scheduling. Follow-up research will be concerned with testing the system on empirical data.

  20. Low-frequency ac electroporation shows strong frequency dependence and yields comparable transfection results to dc electroporation.

    PubMed

    Zhan, Yihong; Cao, Zhenning; Bao, Ning; Li, Jianbo; Wang, Jun; Geng, Tao; Lin, Hao; Lu, Chang

    2012-06-28

    Conventional electroporation has been conducted by employing short direct current (dc) pulses for the delivery of macromolecules such as DNA into cells. The use of alternating current (ac) fields for electroporation has mostly been explored in the frequency range of 10 kHz-1 MHz. Based on the Schwan equation, it was thought that at low ac frequencies (10 Hz-10 kHz) the transmembrane potential does not vary with the frequency. In this report, we utilized a flow-through electroporation technique that employed a continuous 10 Hz-10 kHz ac field (based on either sine waves or square waves) for electroporation of cells with defined duration and intensity. Our results reveal that electropermeabilization becomes weaker with increased frequency in this range. In contrast, transfection efficiency with DNA reaches its maximum at medium frequencies (100-1000 Hz) in the range. We postulate that the relationship between the transfection efficiency and the ac frequency is determined by combined effects from electrophoretic movement of DNA in the ac field, dependence of the DNA/membrane interaction on the ac frequency, and variation of transfection under different electropermeabilization intensities. The fact that ac electroporation in this frequency range yields high efficiency for transfection (up to ~71% for Chinese hamster ovary cells) and permeabilization suggests its potential for gene delivery.
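
    The first-order Schwan-type model referred to above, showing why the induced transmembrane potential was expected to be nearly flat at low frequencies; the membrane charging time, field strength and cell radius are assumed typical values, not the study's parameters:

        import math

        def transmembrane_potential(E, r, f, tau=1e-6, theta=0.0):
            """Peak induced TMP (V) for field E (V/m), cell radius r (m),
            frequency f (Hz) and membrane charging time tau (s)."""
            return 1.5 * E * r * math.cos(theta) / math.sqrt(1 + (2 * math.pi * f * tau) ** 2)

        for f in (10, 100, 1e3, 1e4, 1e5, 1e6):
            print(f"{f:>9.0f} Hz : {transmembrane_potential(5e4, 5e-6, f):.3f} V")

    With tau on the order of 1 us, the roll-off only begins well above 10 kHz, so the frequency dependence observed below 10 kHz must come from the DNA electrophoresis and membrane-interaction effects the authors postulate, not from this first-order membrane model.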

  1. Volar locking distal radius plates show better short-term results than other treatment options: A prospective randomised controlled trial

    PubMed Central

    Drobetz, Herwig; Koval, Lidia; Weninger, Patrick; Luscombe, Ruth; Jeffries, Paula; Ehrendorfer, Stefan; Heal, Clare

    2016-01-01

    AIM To compare the outcomes of displaced distal radius fractures treated with volar locking plates and immediate postoperative mobilisation with the outcomes of these fractures treated with modalities that necessitate 6 wk of wrist immobilisation. METHODS A prospective, randomised, controlled single-centre trial was conducted in which 56 patients with a displaced distal radius fracture were randomised to treatment either with a volar locking plate (n = 29) or with another treatment modality (n = 27; cast immobilisation with or without wires or an external fixator). Outcomes were measured at 12 wk. Functional outcome scores measured were the Patient-Rated Wrist Evaluation (PRWE) score; the Disabilities of the Arm, Shoulder and Hand (DASH) score; and activities of daily living (ADLs). Clinical outcomes were wrist range of motion and grip strength. Radiographic parameters were volar inclination and ulnar variance. RESULTS Patients in the volar locking plate group had significantly better PRWE scores, ADL scores, grip strength and range of extension at three months compared with the control group. All radiological parameters were significantly better in the volar locking plate group at 3 mo. CONCLUSION The present study suggests that volar locking plates produced significantly better functional and clinical outcomes at 3 mo compared with other treatment modalities. Anatomical reduction was significantly more likely to be preserved in the plating group. Level of evidence: II. PMID:27795951

  2. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle

    PubMed Central

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008

  3. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control-level and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.
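
    A minimal parity-space sketch of vector-based failure detection and isolation for a redundant sensor array; the five-gyro geometry, the injected bias and the threshold are illustrative, not the flight configuration:

        import numpy as np
        from scipy.linalg import null_space

        # Measurement matrix for five single-axis gyros, three orthogonal
        # plus two skewed (rows are unit sensing axes).
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.577, 0.577, 0.577],
                      [0.267, 0.535, 0.802]])
        V = null_space(H.T).T                     # 2x5 parity projection matrix

        omega = np.array([0.02, -0.01, 0.03])     # true body rates (rad/s)
        m = H @ omega                             # noise-free measurements
        m[3] += 0.05                              # inject a bias failure on gyro 4

        p = V @ m                                 # parity vector (0 if all healthy)
        if np.linalg.norm(p) > 0.01:              # detection threshold
            # isolation: the parity vector lines up with the failed
            # sensor's column of V
            scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
            print("failed sensor index:", int(np.argmax(scores)))   # -> 3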

  4. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results.

    PubMed

    Otero, Fernando E B; Freitas, Alex A

    2016-01-01

    Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-Miner[Formula: see text] algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-Miner[Formula: see text] algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretation of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure to evaluate the interpretability of the discovered rules to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-Miner[Formula: see text] producing ordered rules are also presented.

  5. Philippines: decentralized approach shows results.

    PubMed

    1983-01-01

    In the Philippines several steps have been taken to meet the challenge of increasing population growth. Commencing with Republic Act 6365, known as the Population Act (1971), program directives focus on achieving and maintaining population levels most conducive to the national welfare. In 1978 a Special Committee was constituted by the President to review the population program. Pursuant to the Committee's findings certain changes were adopted. The thrust is now towards long-term planning to ensure a more significant and perceptible demographic impact of development programs and policies. Increasing attention is paid to regional development and spatial distribution in the country. The 1978-82 Development Plan states more clearly the interaction between population and development. The National Economic and Development Authority, the central policy and planning agency of the government, takes charge of formulating and coordinating the broader aspects of population policy and integrating population with socioeconomic plans and policies. At present the National Economic and Development Authority (NEDA) is implementing a project known as the Population/Development Planning and Research (PDPR) project with financial support from the UN Fund for Population Activities (UNFPA). This project promotes and facilitates the integration of the population dimension in the planning process. It does this by maintaining linkages and instituting collaborative mechanisms with the different NEDA regional offices and sectoral ministries. It also trains government planners in ways of integrating population concerns into the development plan. PDPR promotes the use of population and development research for planning purposes and policy formation. The Philippine Development Plan, 1978-82, recognized that an improvement in the level of one sector reinforces the performance of the other sectors. Since the establishment of the National Population Program 12 years ago, population and family planning have been successfully integrated with various development sectors, notably labor, health, and education. Through the policies of integration, multiagency participation, and partnership of the public and private sectors, the Commission on Population uses existing development programs of government and private organizations as vehicles for family planning information and services and shares the responsibility of implementing all facets of the population program with various participating agencies in the government and private sector.

  6. An algorithm for tailoring pharmacotherapy for smoking cessation: results from a Delphi panel of international experts

    PubMed Central

    Bader, P; McDonald, P; Selby, P

    2009-01-01

    Background: Evidence-based smoking cessation guidelines recommend nicotine replacement therapy (NRT), bupropion SR and varenicline as first-line therapy in combination with behavioural interventions. However, there are limited data to guide clinicians in recommending one form over another, using combinations, or matching individual smokers to particular forms. Objective: To develop decision rules for clinicians to guide differential prescribing practices and tailoring of pharmacotherapy for smoking cessation. Methods: A Delphi approach was used to build consensus among a panel of 37 international experts from various health disciplines. Through an iterative process, panellists responded to three rounds of questionnaires. Participants identified and ranked “best practices” used by them to tailor pharmacotherapy to aid smoking cessation. An independent panel of 10 experts provided cross-validation of findings. Results: There was a 100% response rate to all three rounds. A high level of consensus was achieved in determining the most important priorities: (1) factors to consider in prescribing pharmacotherapy: evidence, patient preference, patient experience; (2) combinations based on: failed attempt with monotherapy, patients with breakthrough cravings, level of tobacco dependence; (3) specific combinations, main categories: (a) two or more forms of NRT, (b) bupropion + form of NRT; (4) specific combinations, subcategories: (1a) patch + gum, (1b) patch + inhaler, (1c) patch + lozenge; (2a) bupropion + patch, (2b) bupropion + gum; (5) impact of comorbidities on selection of pharmacotherapy: contraindications, specific pharmacotherapy useful for certain comorbidities, dual purpose medications; (6) frequency of monitoring determined by patient needs and type of pharmacotherapy. Conclusion: An algorithm and guide were developed to assist clinicians in prescribing pharmacotherapy for smoking cessation. There appears to be good justification for “off-label” use such

  7. Atomic hydrogen in the mesopause region derived from SABER: Algorithm theoretical basis, measurement uncertainty, and results

    NASA Astrophysics Data System (ADS)

    Mlynczak, Martin G.; Hunt, Linda A.; Marshall, B. Thomas; Mertens, Christopher J.; Marsh, Daniel R.; Smith, Anne K.; Russell, James M.; Siskind, David E.; Gordley, Larry L.

    2014-03-01

    Atomic hydrogen (H) is a fundamental component in the photochemistry and energy balance of the terrestrial mesopause region (80-100 km). H is generated primarily by photolysis of water vapor and participates in a highly exothermic reaction with ozone. This reaction is a significant source of heat in the mesopause region and also creates highly vibrationally excited hydroxyl (OH) from which the Meinel band radiative emission features originate. Concentrations (cm-3) and volume mixing ratios of H are derived from observations of infrared emission from the OH (υ = 9 + 8, Δυ = 2) vibration-rotation bands near 2.0 µm made by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on the NASA Thermosphere Ionosphere Mesosphere Energetics and Dynamics satellite. The algorithms for deriving day and night H are described herein. Day and night concentrations exhibit excellent agreement between 87 and 95 km. SABER H results also exhibit good agreement with observations from the Solar Mesosphere Explorer made nearly 30 years ago. An apparent inverse dependence on the solar cycle is observed in the SABER H concentrations, with the H increasing as solar activity decreases. This increase is shown to be primarily due to the temperature dependence of various reaction rate coefficients for H photochemistry. The SABER H data, coupled with SABER atomic oxygen, ozone, and temperature, enable tests of mesospheric photochemistry and energetics in atmospheric models, studies of formation of polar mesospheric clouds, and studies of atmospheric evolution via escape of hydrogen. These data and studies are made possible by the wide range of parameters measured simultaneously by the SABER instrument.

  8. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed descent schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
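
    A back-of-the-envelope version of the time-mode planning problem, assuming constant ground speeds and descent rate; the actual algorithm also handles the Mach/airspeed schedule, wind gradients and nonstandard temperature:

        def plan_descent(range_to_fix_nm, cruise_gs_kt, descent_gs_kt,
                         cruise_alt_ft, fix_alt_ft, descent_rate_fpm):
            """Return the distance before the fix at which to start the
            descent, and the resulting time to the metering fix (min)."""
            descent_time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
            descent_dist_nm = descent_gs_kt * descent_time_min / 60.0
            cruise_dist_nm = range_to_fix_nm - descent_dist_nm
            eta_min = cruise_dist_nm / cruise_gs_kt * 60.0 + descent_time_min
            return descent_dist_nm, eta_min

        dist, eta = plan_descent(120, 450, 380, 35000, 10000, 2200)
        print(f"start descent {dist:.1f} nm before the fix; ETA {eta:.1f} min")

    In the time mode, a calculation of this kind would be iterated over the descent speed schedule until the predicted arrival time matches the time assigned by air traffic control.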

  9. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed descent schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  10. Remote sensing of gases by hyperspectral imaging: algorithms and results of field measurements

    NASA Astrophysics Data System (ADS)

    Sabbah, Samer; Rusch, Peter; Eichmann, Jens; Gerhard, Jörn-Hinnrich; Harig, Roland

    2012-09-01

    Remote gas detection and visualization provide vital information in scenarios involving chemical accidents, terrorist attacks or gas leaks. Previous work showed how imaging infrared spectroscopy can be used to assess the location, the dimensions, and the dispersion of a potentially hazardous cloud. In this work the latest developments of an infrared hyperspectral imager based on a Michelson interferometer in combination with a focal plane array detector are presented. The performance of the system is evaluated by laboratory measurements. The system was deployed in field measurements to identify industrial gas emissions. Excellent results were obtained by successfully identifying released gases from relatively long distances.

  11. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

  12. Results from CrIS/ATMS obtained using an "AIRS Version-6 like" retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-09-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

  13. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.
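
    For orientation, the simplest transport-delay compensator is a first-order lead predictor; the McFarland, adaptive and state-space predictors evaluated in this report are substantially more sophisticated, so this sketch only illustrates the basic idea:

        import numpy as np

        dt, tau = 0.008, 0.096                # 8 ms frame, 96 ms transport delay
        t = np.arange(0.0, 5.0, dt)
        x = np.sin(2 * np.pi * 0.5 * t)       # a 0.5 Hz vehicle-state signal

        x_pred = x + tau * np.gradient(x, dt) # predict tau seconds ahead

        ds = int(round(tau / dt))             # delay in whole samples
        err_raw = np.abs(x[ds:] - x[:-ds]).max()        # uncompensated error
        err_comp = np.abs(x[ds:] - x_pred[:-ds]).max()  # compensated error
        print(f"peak error: {err_raw:.3f} uncompensated, {err_comp:.3f} compensated")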

  14. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

    AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. Monthly mean August 2014 Version-6.22 AIRS and CrIS products agree reasonably well with OMPS, CERES, and with each other. JPL plans to process AIRS and CrIS for many months and compare interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. We are also working with JPL to develop a joint AIRS/CrIS level-1 to level-3 processing system using a still-to-be-finalized Version-7 retrieval algorithm. The NASA Goddard DISC will eventually use this system to reprocess all AIRS and recalibrated CrIS/ATMS data.

  15. An Algorithm Approach to Determining Smoking Cessation Treatment for Persons Living with HIV/AIDS: Results of a Pilot Trial

    PubMed Central

    Cropsey, Karen L.; Jardin, Bianca; Burkholder, Greer; Clark, C. Brendan; Raper, James L.; Saag, Michael

    2015-01-01

    Background Smoking now represents one of the biggest modifiable risk factors for disease and mortality in PLHIV. To produce significant changes in smoking rates among this population, treatments will need to be both acceptable to the larger segment of PLHIV smokers as well as feasible to implement in busy HIV clinics. The purpose of this study was to evaluate the feasibility and effects of a novel proactive algorithm-based intervention in an HIV/AIDS clinic. Methods PLHIV smokers (N = 100) were proactively identified via their electronic medical records and were subsequently randomized at baseline to receive a 12-week pharmacotherapy-based algorithm treatment or treatment as usual. Participants were tracked in person for 12 weeks. Participants provided information on smoking behaviors and associated constructs of cessation at each follow-up session. Results The findings revealed that many smokers reported utilizing prescribed medications when provided with a supply of cessation medication as determined by an algorithm. Compared to smokers receiving treatment as usual, PLHIV smokers prescribed these medications reported more quit attempts and greater reduction in smoking. Proxy measures of cessation readiness (e.g., motivation, self-efficacy) also favored participants receiving algorithm treatment. Conclusions This algorithm-derived treatment produced positive changes across a number of important clinical markers associated with smoking cessation. Given these promising findings coupled with the brief nature of this treatment, the overall pattern of results suggests strong potential for dissemination into clinical settings as well as significant promise for further advancing clinical health outcomes in this population. PMID:26181705

  16. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

    NASA Technical Reports Server (NTRS)

    Swartz, W. H.; Bucsela, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

    2012-01-01

    Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2004-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.
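
    A schematic of the stratosphere-troposphere separation step described above; the function and all numbers are illustrative, and the operational retrieval derives the stratospheric field and air mass factors from far richer information:

        def tropospheric_vcd(slant_total, vcd_strat, amf_strat, amf_trop):
            """Columns in molecules/cm^2; air mass factors dimensionless."""
            slant_trop = slant_total - vcd_strat * amf_strat
            return slant_trop / amf_trop

        vcd = tropospheric_vcd(slant_total=9.0e15, vcd_strat=3.0e15,
                               amf_strat=2.5, amf_trop=1.3)
        print(f"tropospheric NO2 column: {vcd:.2e} molec/cm^2")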

  17. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

    PubMed

    Pancheliuga, V A; Pancheliuga, M S

    2013-01-01

    In the present work a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed on the basis of short segments of a time series of fluctuations and the fractal dimension of the segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on it, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows more precise determination of the fractal dimension by using the "all possible combinations" method. The application of the method to noise-like time series analysis leads to results which could previously be obtained only by means of the histogram method based on human expert comparisons of histogram shapes. PMID:23755565

  18. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
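
    A minimal genetic algorithm of the kind the report introduces (bit-string individuals, fitness-proportional selection, one-point crossover and mutation), written here as an illustrative sketch maximizing the number of ones:

        import random

        random.seed(0)
        L, POP, GENS, PMUT = 32, 40, 60, 1.0 / 32

        def fitness(ind):
            return sum(ind)

        def select(pop):                          # roulette-wheel selection
            r, acc = random.uniform(0, sum(map(fitness, pop))), 0.0
            for ind in pop:
                acc += fitness(ind)
                if acc >= r:
                    return ind
            return pop[-1]

        pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
        for _ in range(GENS):
            nxt = []
            while len(nxt) < POP:
                a, b = select(pop), select(pop)
                cut = random.randrange(1, L)      # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ (random.random() < PMUT) for g in child]  # mutate
                nxt.append(child)
            pop = nxt
        print("best fitness:", max(map(fitness, pop)), "of", L)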

  19. First results from the COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Mestre, Olivier

    2010-05-01

    As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach) a dataset was generated that serves as a benchmark for homogenisation algorithms. Members of the Action and third parties have been invited to homogenise this dataset. The results of this exercise are analysed by the HOME Working Groups (WG) on detection (WG2) and correction (WG3) algorithms to obtain recommendations for a standard homogenisation procedure for climate data. This talk will briefly describe this benchmark dataset and present first results comparing the quality of the roughly 25 contributions. Based upon a survey among homogenisation experts we chose to work with monthly values for temperature and precipitation. Temperature and precipitation were selected because most participants consider these elements the most relevant for their studies. Furthermore, they represent two important types of statistics (additive and multiplicative). The benchmark has three different types of datasets: real data, surrogate data and synthetic data. The real datasets allow comparing the different homogenisation methods with the most realistic type of data and inhomogeneities. Thus this part of the benchmark is important for a faithful comparison of algorithms with each other. However, as in this case the truth is not known, it is not possible to quantify the improvements due to homogenisation. Therefore, the benchmark also has two datasets with artificial data to which we inserted known inhomogeneities: surrogate and synthetic data. The aim of surrogate data is to reproduce the structure of measured data accurately enough that it can be used as a substitute for measurements. The surrogate climate networks have the spatial and temporal auto- and cross-correlation functions of real homogenised networks as well as the exact (non-Gaussian) distribution for each station. The idealised synthetic data is based on the surrogate networks. The change is that the difference

  20. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January, 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  1. Deriving Arctic Cloud Microphysics at Barrow, Alaska. Algorithms, Results, and Radiative Closure

    SciTech Connect

    Shupe, Matthew D.; Turner, David D.; Zwink, Alexander; Thieman, Mandana M.; Mlawer, Eli J.; Shippert, Timothy

    2015-07-01

    Cloud phase and microphysical properties control the radiative effects of clouds in the climate system and are therefore crucial to characterize in a variety of conditions and locations. An Arctic-specific, ground-based, multi-sensor cloud retrieval system is described here and applied to two years of observations from Barrow, Alaska. Over these two years, clouds occurred 75% of the time, with cloud ice and liquid each occurring nearly 60% of the time. Liquid water occurred at least 25% of the time even in the winter, and existed up to heights of 8 km. The vertically integrated mass of liquid was typically larger than that of ice. While it is generally difficult to evaluate the overall uncertainty of a comprehensive cloud retrieval system of this type, radiative flux closure analyses were performed where flux calculations using the derived microphysical properties were compared to measurements at the surface and top-of-atmosphere. Radiative closure biases were generally smaller for cloudy scenes relative to clear skies, while the variability of flux closure results was only moderately larger than under clear skies. The best closure at the surface was obtained for liquid-containing clouds. Radiative closure results were compared to those based on a similar, yet simpler, cloud retrieval system. These comparisons demonstrated the importance of accurate cloud phase classification, and specifically the identification of liquid water, for determining radiative fluxes. Enhanced retrievals of liquid water path for thin clouds were also shown to improve radiative flux calculations.

  2. Minimal Sign Representation of Boolean Functions: Algorithms and Exact Results for Low Dimensions.

    PubMed

    Sezener, Can Eren; Oztop, Erhan

    2015-08-01

    Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domain they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation to study this is the so-called polynomial sign representation where [Formula: see text] and 1 are associated with true and false, respectively. The polynomial is treated as a real-valued function and evaluated at its parameters, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF and more generally aims to find the maximum over this quantity for all BFs in a given dimension. With this work, for the first time exact results for four- and five-variable BFs are obtained, and strong bounds for six-variable BFs are derived. In addition, some connections between the sign representation framework and bent functions are derived, which are generally studied for their desirable cryptographic properties.
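
    A small checker for the sign-representation concept, using the letter's convention that -1 and 1 encode true and false; the AND example and its three-monomial polynomial are illustrative:

        from itertools import product

        def monomial(x, subset):                  # product of selected variables
            out = 1
            for i in subset:
                out *= x[i]
            return out

        def represents(f, terms, n):
            """terms: list of (coefficient, variable-index tuple) monomials.
            True iff sign(p(x)) = f(x) on all of {-1, +1}^n."""
            for x in product((-1, 1), repeat=n):
                p = sum(c * monomial(x, s) for c, s in terms)
                if p == 0 or (p < 0) != (f(x) < 0):
                    return False
            return True

        AND = lambda x: -1 if x == (-1, -1) else 1    # true only when both true
        # p(x) = 1 + x1 + x2 uses three monomials, so the threshold density
        # of AND is at most 3
        print(represents(AND, [(1, ()), (1, (0,)), (1, (1,))], n=2))  # True

    Exhaustive search over monomial subsets with a checker like this is feasible only for very small n, which is consistent with the letter reporting exact results up to five variables and only bounds for six.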

  3. The equation of state for stellar envelopes. II - Algorithm and selected results

    NASA Technical Reports Server (NTRS)

    Mihalas, Dimitri; Dappen, Werner; Hummer, D. G.

    1988-01-01

    A free-energy-minimization method for computing the dissociation and ionization equilibrium of a multicomponent gas is discussed. The adopted free energy includes terms representing the translational free energy of atoms, ions, and molecules; the internal free energy of particles with excited states; the free energy of a partially degenerate electron gas; and the configurational free energy from shielded Coulomb interactions among charged particles. Internal partition functions are truncated using an occupation probability formalism that accounts for perturbations of bound states by both neutral and charged perturbers. The entire theory is analytical and differentiable to all orders, so it is possible to write explicit analytical formulas for all derivatives required in a Newton-Raphson iteration; these are presented to facilitate future work. Some representative results for both Saha and free-energy-minimization equilibria are presented for a hydrogen-helium plasma with N(He)/N(H) = 0.10. These illustrate nicely the phenomena of pressure dissociation and ionization, and also demonstrate vividly the importance of choosing a reliable cutoff procedure for internal partition functions.

  4. Automated analysis of Kokee-Wettzell Intensive VLBI sessions—algorithms, results, and recommendations

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2015-11-01

    The time-dependent variations in the rotation and orientation of the Earth are represented by a set of Earth Orientation Parameters (EOP). Currently, Very Long Baseline Interferometry (VLBI) is the only technique able to measure all EOP simultaneously and to provide direct observation of universal time, usually expressed as UT1-UTC. To produce estimates for UT1-UTC on a daily basis, 1-h VLBI experiments involving two or three stations are organised by the International VLBI Service for Geodesy and Astrometry (IVS), the IVS Intensive (INT) series. There is an ongoing effort to minimise the turn-around time for the INT sessions in order to achieve near real-time and high quality UT1-UTC estimates. As a step further towards true fully automated real-time analysis of UT1-UTC, we carry out an extensive investigation with INT sessions on the Kokee-Wettzell baseline. Our analysis starts with the first versions of the observational files in S- and X-band and includes an automatic group delay ambiguity resolution and ionospheric calibration. Several different analysis strategies are investigated. In particular, we focus on the impact of external information, such as meteorological and cable delay data provided in the station log-files, and a priori EOP information. The latter is studied by extensive Monte Carlo simulations. Our main findings are that it is easily possible to analyse the INT sessions in a fully automated mode to provide UT1-UTC with very low latency. The information found in the station log-files is important for the accuracy of the UT1-UTC results, provided that the data in the station log-files are reliable. Furthermore, to guarantee UT1-UTC with an accuracy of less than 20 μs, it is necessary to use predicted a priori polar motion data in the analysis that are not older than 12 h.

  6. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

  6. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
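
    To make the flavor of such a check concrete, here is a minimal numpy sketch (not the authors' formalism): it runs Grover iterations on a three-qubit register and computes the entanglement entropy across a one-qubit cut; a nonzero value indicates the intermediate states are entangled. The marked item and iteration count are illustrative.

      import numpy as np

      n = 3                                   # qubits
      N = 2 ** n
      marked = 5                              # oracle flips the sign of |101>

      psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition (a product state)
      oracle = np.eye(N); oracle[marked, marked] = -1.0
      diffusion = 2.0 / N * np.ones((N, N)) - np.eye(N)   # 2|s><s| - I

      def entropy_qubit0(state):
          """Von Neumann entropy of qubit 0 after tracing out the other qubits."""
          m = state.reshape(2, N // 2)
          evals = np.linalg.eigvalsh(m @ m.conj().T)
          evals = evals[evals > 1e-12]
          return float(-(evals * np.log2(evals)).sum())

      for it in range(1, 3):
          psi = diffusion @ (oracle @ psi)    # one Grover iteration
          print(f"iteration {it}: entropy across the qubit-0 cut = {entropy_qubit0(psi):.3f}")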

  7. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
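
    A toy illustration of the hyper-heuristic idea, heavily hedged: HEAD-DT evolves entire algorithmic components, whereas this sketch only evolves a few design choices of scikit-learn's decision tree, scored by cross-validated accuracy. Dataset, search space, and loop sizes are illustrative.

      import random
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_breast_cancer(return_X_y=True)

      # Each individual is one tree "design"; HEAD-DT's genome is far richer.
      SPACE = {
          "criterion": ["gini", "entropy"],
          "max_depth": [3, 5, 8, None],
          "min_samples_leaf": [1, 5, 20],
      }

      def random_design():
          return {k: random.choice(v) for k, v in SPACE.items()}

      def fitness(design):
          tree = DecisionTreeClassifier(random_state=0, **design)
          return cross_val_score(tree, X, y, cv=5).mean()

      def mutate(design):
          child = dict(design)
          k = random.choice(list(SPACE))
          child[k] = random.choice(SPACE[k])
          return child

      pop = [random_design() for _ in range(10)]
      for gen in range(5):                  # tiny evolutionary loop
          pop.sort(key=fitness, reverse=True)
          pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(5)]

      best = max(pop, key=fitness)
      print("best evolved design:", best, "accuracy: %.3f" % fitness(best))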

  8. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
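
    The heart of such an algorithm is a path prediction that converts the altitude to lose into distance and time. The sketch below is a deliberately crude stand-in (all performance numbers are invented) for the linearized B-737 performance model used in the flight tests; it only shows how a top-of-descent point and timing slack fall out of the geometry.

      # Illustrative top-of-descent planner; every number here is made up.
      cruise_alt_ft    = 35000.0
      fix_alt_ft       = 10000.0
      descent_rate_fpm = 2200.0   # idle-thrust rate for the chosen Mach/CAS schedule
      gs_cruise_kt     = 450.0    # cruise ground speed (wind already folded in)
      gs_descent_kt    = 330.0    # mean ground speed on the descent

      descent_time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
      descent_dist_nm  = gs_descent_kt * descent_time_min / 60.0

      def top_of_descent(dist_to_fix_nm, time_to_fix_min):
          """Minutes to hold cruise before the idle descent; slack < 0 means late."""
          cruise_dist = dist_to_fix_nm - descent_dist_nm
          cruise_time = cruise_dist / gs_cruise_kt * 60.0
          slack = time_to_fix_min - (cruise_time + descent_time_min)
          return cruise_time, slack

      cruise_time, slack = top_of_descent(dist_to_fix_nm=120.0, time_to_fix_min=25.0)
      print(f"start descent after {cruise_time:.1f} min; timing slack {slack:+.1f} min")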

  9. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
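
    The single-channel core of the veto algorithm is short enough to sketch. This is a generic textbook version with a toy rate f(t), not the competing variants the paper analyzes: trial scales are drawn from an overestimate g whose integral is invertible, and each trial is accepted with probability f/g.

      import math, random

      # Sample the next-emission scale with rate f(t) <= g(t), where
      # G(t) = integral of g is known in closed form and invertible.
      f = lambda t: 2.0 / (1.0 + t) ** 2            # true emission rate (toy choice)
      g = lambda t: 2.0                             # simple overestimate, f <= g
      G_inv = lambda t, r: t - math.log(r) / 2.0    # solves int_t^t' g = -log r

      def next_emission(t, t_max=50.0):
          """Scale of the next accepted emission above t, or None (no emission)."""
          while True:
              t = G_inv(t, 1.0 - random.random())   # trial scale from the overestimate
              if t > t_max:
                  return None
              if random.random() < f(t) / g(t):     # accept with ratio f/g, else veto
                  return t

      samples = [next_emission(0.0) for _ in range(100000)]
      accepted = [s for s in samples if s is not None]
      print("emission fraction: %.3f, mean scale: %.3f"
            % (len(accepted) / len(samples), sum(accepted) / len(accepted)))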

  10. Analysis of conservative tracer measurement results using the Frechet distribution at planted horizontal subsurface flow constructed wetlands filled with coarse gravel and showing the effect of clogging processes.

    PubMed

    Dittrich, Ernő; Klincsik, Mihály

    2015-11-01

    A mathematical process, developed in a Maple environment, has been successful in decreasing the error of measurement results and in the precise calculation of the moments of corrected tracer functions. It was proved that with this process the measured tracer results of horizontal subsurface flow constructed wetlands filled with coarse gravel (HSFCW-C) can be fitted more accurately than with the conventionally used distribution functions (Gaussian, Lognormal, Fick (Inverse Gaussian), and Gamma). This statement is true only for planted HSFCW-Cs; the analysis of unplanted HSFCW-Cs needs more research. The analysis shows that the conventional solutions (the completely stirred series tank reactor (CSTR) model and the convection-dispersion transport (CDT) model) cannot describe these types of transport processes with sufficient accuracy. These outcomes can help in developing better descriptions of the very complex transport processes in HSFCW-Cs. Furthermore, a new mathematical process can be developed for the calculation of real hydraulic residence time (HRT) and dispersion coefficient values. The presented method can be generalized to other kinds of hydraulic environments.
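
    A hedged Python analogue of the described workflow (the authors worked in Maple): fit a Fréchet distribution, available in scipy as invweibull, to residence-time samples and read off its moments. The data below are synthetic stand-ins for a measured tracer curve, and the units are assumed.

      from scipy import stats

      # Synthetic residence-time samples standing in for a measured tracer curve.
      samples = stats.invweibull(c=4.0, loc=2.0, scale=20.0).rvs(size=500, random_state=1)

      c, loc, scale = stats.invweibull.fit(samples)        # MLE fit of the Fréchet
      frechet = stats.invweibull(c, loc=loc, scale=scale)

      print(f"shape={c:.2f}  loc={loc:.2f} h  scale={scale:.2f} h")
      print(f"mean HRT = {frechet.mean():.1f} h, variance = {frechet.var():.1f} h^2")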

  11. How often do German children and adolescents show signs of common mental health problems? Results from different methodological approaches – a cross-sectional study

    PubMed Central

    2014-01-01

    Background Child and adolescent mental health problems are ubiquitous and burdensome. Their impact on functional disability, the high rates of accompanying medical illnesses and the potential to last until adulthood make them a major public health issue. While methodological factors cause variability of the results from epidemiological studies, there is a lack of prevalence rates of mental health problems in children and adolescents according to ICD-10 criteria from nationally representative samples. International findings suggest only a small proportion of children with function impairing mental health problems receive treatment, but information about the health care situation of children and adolescents is scarce. The aim of this epidemiological study was a) to classify symptoms of common mental health problems according to ICD-10 criteria in order to compare the statistical and clinical case definition strategies using a single set of data and b) to report ICD-10 codes from health insurance claims data. Methods a) Based on a clinical expert rating, questionnaire items were mapped on ICD-10 criteria; data from the Mental Health Module (BELLA study) were analyzed for relevant ICD-10 and cut-off criteria; b) Claims data were analyzed for relevant ICD-10 codes. Results According to parent report 7.5% (n = 208) met the ICD-10 criteria of a mild depressive episode and 11% (n = 305) showed symptoms of depression according to cut-off score; Anxiety is reported in 5.6% (n = 156) and 11.6% (n = 323), conduct disorder in 15.2% (n = 373) and 14.6% (n = 357). Self-reported symptoms in 11 to 17 year olds resulted in 15% (n = 279) reporting signs of a mild depression according to ICD-10 criteria (vs. 16.7% (n = 307) based on cut-off) and 10.9% (n = 201) reported symptoms of anxiety (vs. 15.4% (n = 283)). Results from routine data identify 0.9% (n = 1,196) with a depression diagnosis, 3.1% (n = 6,729) with anxiety and 1.4% (n

  12. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for the generation of long-term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  13. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS CDRs using a retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  14. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects.

    PubMed

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás; Koskela, Antti; Tuukkanen, Juha; Ohlsson, Claes; Rozell, Björn; Eriksson, Maria

    2015-08-01

    Hutchinson-Gilford progeria syndrome (HGPS) is a rare premature aging disorder that is most commonly caused by a de novo point mutation in exon 11 of the LMNA gene, c.1824C>T, which results in an increased production of a truncated form of lamin A known as progerin. In this study, we used a mouse model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant progerin splicing give hope to patients who are affected by HGPS.

  15. Magnetic Sphincter Augmentation for Gastroesophageal Reflux at 5 Years: Final Results of a Pilot Study Show Long-Term Acid Reduction and Symptom Improvement

    PubMed Central

    Saino, Greta; Bonavina, Luigi; Lipham, John C.; Dunn, Daniel

    2015-01-01

    Abstract Background: As previously reported, the magnetic sphincter augmentation device (MSAD) preserves gastric anatomy and results in less severe side effects than traditional antireflux surgery. The final 5-year results of a pilot study are reported here. Patients and Methods: A prospective, multicenter study evaluated safety and efficacy of the MSAD for 5 years. Prior to MSAD placement, patients had abnormal esophageal acid and symptoms poorly controlled by proton pump inhibitors (PPIs). Patients served as their own control, which allowed comparison between baseline and postoperative measurements to determine individual treatment effect. At 5 years, gastroesophageal reflux disease (GERD)-Health Related Quality of Life (HRQL) questionnaire score, esophageal pH, PPI use, and complications were evaluated. Results: Between February 2007 and October 2008, 44 patients (26 males) had an MSAD implanted by laparoscopy, and 33 patients were followed up at 5 years. Mean total percentage of time with pH <4 was 11.9% at baseline and 4.6% at 5 years (P < .001), with 85% of patients achieving pH normalization or at least a 50% reduction. Mean total GERD-HRQL score improved significantly from 25.7 to 2.9 (P < .001) when comparing baseline and 5 years, and 93.9% of patients had at least a 50% reduction in total score compared with baseline. Complete discontinuation of PPIs was achieved by 87.8% of patients. No complications occurred in the long term, including no device erosions or migrations at any point. Conclusions: Based on long-term reduction in esophageal acid, symptom improvement, and no late complications, this study shows the relative safety and efficacy of magnetic sphincter augmentation for GERD. PMID:26437027

  16. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. The algorithm reuses existing sequential algorithms without any internal parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 Hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.
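
    The gist of multisplitting, in a hedged serial sketch (the paper runs the block solves in parallel on the hypercube): split the variables into overlapping blocks, minimize each block subproblem with an unchanged off-the-shelf sequential solver, and combine the candidate points with weights. The objective, splitting, and equal weights below are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      # Toy objective: ill-conditioned quadratic; any smooth f works.
      n = 8
      A = np.diag(np.linspace(1.0, 50.0, n))
      f = lambda x: 0.5 * x @ A @ x - x.sum()

      def multisplit_step(x, blocks):
          """One sweep: solve each block subproblem independently (other
          variables held fixed), then average the candidate points."""
          candidates = []
          for idx in blocks:
              def sub(z, idx=idx):
                  y = x.copy(); y[idx] = z
                  return f(y)
              res = minimize(sub, x[idx], method="BFGS")   # unchanged sequential solver
              y = x.copy(); y[idx] = res.x
              candidates.append(y)
          return np.mean(candidates, axis=0)               # equal-weight combination

      x = np.zeros(n)
      blocks = [np.arange(0, 5), np.arange(3, 8)]          # overlapping splitting
      for sweep in range(30):
          x = multisplit_step(x, blocks)
      print("f(x) =", f(x), " optimum =", f(np.linalg.solve(A, np.ones(n))))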

  17. Rapamycin and Chloroquine: The In Vitro and In Vivo Effects of Autophagy-Modifying Drugs Show Promising Results in Valosin Containing Protein Multisystem Proteinopathy

    PubMed Central

    Nalbandian, Angèle; Llewellyn, Katrina J.; Nguyen, Christopher; Yazdi, Puya G.; Kimonis, Virginia E.

    2015-01-01

    Mutations in the valosin containing protein (VCP) gene cause hereditary Inclusion body myopathy (hIBM) associated with Paget disease of bone (PDB), frontotemporal dementia (FTD), more recently termed multisystem proteinopathy (MSP). Affected individuals exhibit scapular winging and die from progressive muscle weakness, and cardiac and respiratory failure, typically in their 40s to 50s. Histologically, patients show the presence of rimmed vacuoles and TAR DNA-binding protein 43 (TDP-43)-positive large ubiquitinated inclusion bodies in the muscles. We have generated a VCPR155H/+ mouse model which recapitulates the disease phenotype and impaired autophagy typically observed in patients with VCP disease. Autophagy-modifying agents, such as rapamycin and chloroquine, at pharmacological doses have previously shown to alter the autophagic flux. Herein, we report results of administration of rapamycin, a specific inhibitor of the mechanistic target of rapamycin (mTOR) signaling pathway, and chloroquine, a lysosomal inhibitor which reverses autophagy by accumulating in lysosomes, responsible for blocking autophagy in 20-month old VCPR155H/+ mice. Rapamycin-treated mice demonstrated significant improvement in muscle performance, quadriceps histological analysis, and rescue of ubiquitin, and TDP-43 pathology and defective autophagy as indicated by decreased protein expression levels of LC3-I/II, p62/SQSTM1, optineurin and inhibiting the mTORC1 substrates. Conversely, chloroquine-treated VCPR155H/+ mice revealed progressive muscle weakness, cytoplasmic accumulation of TDP-43, ubiquitin-positive inclusion bodies and increased LC3-I/II, p62/SQSTM1, and optineurin expression levels. Our in vitro patient myoblasts studies treated with rapamycin demonstrated an overall improvement in the autophagy markers. Targeting the mTOR pathway ameliorates an increasing list of disorders, and these findings suggest that VCP disease and related neurodegenerative multisystem proteinopathies can

  18. Modeling upward brine migration through faults as a result of CO2 storage in the Northeast German Basin shows negligible salinization in shallow aquifers

    NASA Astrophysics Data System (ADS)

    Kuehn, M.; Tillner, E.; Kempka, T.; Nakaten, B.

    2012-12-01

    The geological storage of CO2 in deep saline formations may cause salinization of shallower freshwater resources by upward flow of displaced brine from the storage formation into potable groundwater. In this regard, permeable faults or fractures can serve as potential leakage pathways for upward brine migration. The present study uses a regional-scale 3D model based on real structural data of a prospective CO2 storage site in Northeastern Germany to determine the impact of compartmentalization and fault permeability on upward brine migration as a result of pressure elevation by CO2 injection. To evaluate the degree of salinization in the shallower aquifers, different fault leakage scenarios were carried out using a newly developed workflow in which the model grid from the software package Petrel applied for pre-processing is transferred to the reservoir simulator TOUGH2-MP/ECO2N. A discrete fault description is achieved by using virtual elements. A static 3D geological model of the CO2 storage site with an areal size of 40 km x 40 km and a thickness of 766 m was implemented. Subsequently, large-scale numerical multi-phase multi-component (CO2, NaCl, H2O) flow simulations were carried out on a high performance computing system. The prospective storage site, located in the Northeast German Basin, is part of an anticline structure characterized by a saline multi-layer aquifer system. The NE and SW boundaries of the study area are confined by the Fuerstenwalde Gubener and the Lausitzer Abbruch fault zones represented by four discrete faults in the model. Two formations of the Middle Bunter were chosen to assess brine migration through faults triggered by an annual injection rate of 1.7 Mt CO2 into the lowermost formation over a time span of 20 years. In addition to varying fault permeabilities, different boundary conditions were applied to evaluate the effects of reservoir compartmentalization. Simulation results show that the highest pressurization within the storage

  19. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  1. ICPES analyses using full image spectra and astronomical data fitting algorithms to provide diagnostic and result information

    SciTech Connect

    Spencer, W.A.; Goode, S.R.

    1997-10-01

    ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.
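
    A minimal sketch of the "reference image, scale, subtract, inspect residuals" reduction described, using astropy for the FITS round-trip (the study predates astropy and used MIDAS/DAOPHOT-era tools). The frames, file names, and the Ar-line pixel window are all invented for illustration.

      import numpy as np
      from astropy.io import fits

      # Synthetic frames stand in for real echelle images.
      rng = np.random.default_rng(0)
      ref = rng.poisson(100.0, size=(256, 512)).astype(float)
      sample = 1.17 * ref + rng.normal(0.0, 5.0, ref.shape)   # drifted power level

      fits.writeto("reference_echelle.fits", ref, overwrite=True)
      fits.writeto("sample_echelle.fits", sample, overwrite=True)

      ref = fits.getdata("reference_echelle.fits")
      sample = fits.getdata("sample_echelle.fits")

      ar_404 = (slice(100, 110), slice(200, 220))   # assumed window of the Ar 404.4 nm line

      scale = sample[ar_404].sum() / ref[ar_404].sum()   # flag/correct power drift
      residual = sample - scale * ref

      print(f"scale factor: {scale:.3f}")
      print(f"residual RMS: {np.sqrt((residual ** 2).mean()):.2f} counts")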

  2. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the…

  3. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  4. Mathematical modelling in Matlab of the experimental results shows the electrochemical potential difference - temperature of the WC coatings immersed in a NaCl solution

    NASA Astrophysics Data System (ADS)

    Benea, M. L.; Benea, O. D.

    2016-02-01

    The method used for assessing the corrosion behaviour of the WC coatings deposited by plasma spraying on a martensitic stainless steel substrate consists in measuring the electrochemical potential of the coating, and that of the substrate, immersed in a NaCl solution as the corrosive agent. The mathematical processing of the obtained experimental results in Matlab allowed us to establish correlations between the electrochemical potential of the coating and the solution temperature, which are well described by curves whose equations were obtained by fourth-order interpolation.
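
    The described fit is easy to reproduce outside Matlab; a hedged numpy sketch with invented sample values (the paper's measured data are not reproduced here):

      import numpy as np

      # Invented data standing in for the measured potential-temperature pairs.
      temp_c    = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60], dtype=float)
      potential = np.array([-310, -322, -331, -345, -350,
                            -362, -366, -379, -381], dtype=float)   # mV

      coeffs = np.polyfit(temp_c, potential, deg=4)   # fourth-order interpolation
      fit = np.poly1d(coeffs)

      for t in (22.5, 37.5, 52.5):
          print(f"T = {t:.1f} C -> fitted potential = {fit(t):.1f} mV")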

  5. Transitioning from preclinical to clinical chemopreventive assessments of lyophilized black raspberries: interim results show berries modulate markers of oxidative stress in Barrett's esophagus patients.

    PubMed

    Kresty, Laura A; Frankel, Wendy L; Hammond, Cynthia D; Baird, Maureen E; Mele, Jennifer M; Stoner, Gary D; Fromkes, John J

    2006-01-01

    Increased fruit and vegetable consumption is associated with decreased risk of a number of cancers of epithelial origin, including esophageal cancer. Dietary administration of lyophilized black raspberries (LBRs) has significantly inhibited chemically induced oral, esophageal, and colon carcinogenesis in animal models. Likewise, berry extracts added to cell cultures significantly inhibited cancer-associated processes. Positive results in preclinical studies have supported further investigation of berries and berry extracts in high-risk human cohorts, including patients with existing premalignancy or patients at risk for cancer recurrence. We are currently conducting a 6-mo chemopreventive pilot study administering 32 or 45 g (female and male, respectively) of LBRs to patients with Barrett's esophagus (BE), a premalignant esophageal condition in which the normal stratified squamous epithelium changes to a metaplastic columnar-lined epithelium. BE's importance lies in the fact that it confers a 30- to 40-fold increased risk for the development of esophageal adenocarcinoma, a rapidly increasing and extremely deadly malignancy. This is a report on interim findings from 10 patients. To date, the results support that daily consumption of LBRs promotes reductions in the urinary excretion of two markers of oxidative stress, 8-epi-prostaglandin F2alpha (8-Iso-PGF2) and, to a lesser more-variable extent, 8-hydroxy-2'-deoxyguanosine (8-OHdG), among patients with BE.

  6. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
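
    A hedged sketch of one such variant: the standard firefly update with the attractiveness coefficient driven by a logistic map (one of the 12 maps the study tests). Objective, parameter values, and the damping schedule are illustrative, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(0)
      sphere = lambda X: (X ** 2).sum(axis=1)       # benchmark objective

      n, dim = 20, 5
      beta0, gamma = 1.0, 1.0
      X = rng.uniform(-5.0, 5.0, (n, dim))
      chaos = 0.7                                   # logistic-map state in (0, 1)

      for t in range(200):
          chaos = 4.0 * chaos * (1.0 - chaos)       # chaotic map replaces a fixed beta
          alpha = 0.3 * 0.98 ** t                   # slowly damped random-step size
          f = sphere(X)                             # brightness, fixed for this sweep
          for i in range(n):
              for j in range(n):
                  if f[j] < f[i]:                   # firefly i moves toward brighter j
                      r2 = ((X[i] - X[j]) ** 2).sum()
                      beta = beta0 * chaos * np.exp(-gamma * r2)
                      X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)

      print("best objective value:", sphere(X).min())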

  7. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  8. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.

  9. The Operational MODIS Cloud Optical and Microphysical Property Product: Overview of the Collection 6 Algorithm and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas

    2012-01-01

    Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 μm bands (instead of differences relative to the standard 2.1 μm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially-complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources including the MODIS L1B uncertainty index for assessing band- and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low cloud non-unity emissivity temperature adjustments.

  10. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbits and satellite clocks were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either handled with the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products for satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved, at around 6 minutes.
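
    Not the authors' GNSS processor: just a hedged sketch of the Kalman-filtering backbone such algorithms tune, reduced to a linear constant-velocity filter fed by synthetic position fixes. All matrices and noise levels are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      dt = 1.0

      # State (E, N, U, vE, vN, vU); each epoch yields a noisy position fix.
      F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity transition
      Q = np.eye(6) * 0.01                          # process noise
      H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
      R = np.eye(3) * 0.04                          # (0.2 m)^2 per coordinate

      x, P = np.zeros(6), np.eye(6) * 100.0
      truth = np.array([0.0, 0.0, 0.0, 1.0, 0.5, 0.0])   # moving platform

      for epoch in range(120):
          truth = F @ truth
          z = H @ truth + rng.normal(0.0, 0.2, 3)

          x, P = F @ x, F @ P @ F.T + Q             # predict
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)                   # update
          P = (np.eye(6) - K @ H) @ P

      print("final position error (m):", np.round(x[:3] - truth[:3], 3))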

  11. Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

    PubMed Central

    2011-01-01

    Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549

  12. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm, and included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analysis of experimental data collected in preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the data-analysis methodology in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  13. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include not only earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
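
    A hedged miniature of the described pipeline using scikit-learn's RandomForestClassifier: a few waveform and spectral attributes are computed from labeled signals and fed to the forest. The two synthetic signal classes below merely mimic impulsive versus emergent events and stand in for a real catalog.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      def make_signal(kind, n=512):
          """Synthetic stand-ins: class 0 is impulsive, class 1 is emergent."""
          t = np.arange(n)
          env = np.exp(-t / 60.0) if kind == 0 else np.exp(-((t - 200) / 120.0) ** 2)
          return env * rng.normal(size=n)

      def attributes(sig):
          """A few of the waveform/spectral attributes such classifiers use."""
          spec = np.abs(np.fft.rfft(sig))
          freqs = np.fft.rfftfreq(sig.size)
          return [
              sig.max() / (np.abs(sig).mean() + 1e-9),   # impulsiveness
              (freqs * spec).sum() / spec.sum(),         # spectral centroid
              np.argmax(np.abs(sig)) / sig.size,         # relative peak position
              np.log(np.sum(sig ** 2)),                  # energy
          ]

      X = np.array([attributes(make_signal(k % 2)) for k in range(600)])
      y = np.array([k % 2 for k in range(600)])
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
      print("held-out accuracy:", clf.score(Xte, yte))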

  14. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  15. Performance simulation of a combustion engine charged by a variable geometry turbocharger. I - Prerequirements, boundary conditions and model development. II - Simulation algorithm, computed results

    NASA Astrophysics Data System (ADS)

    Malobabic, M.; Buttschardt, W.; Rautenberg, M.

    The paper presents a theoretical derivation of the relationship between a variable geometry turbocharger and the combustion engine, using simplified boundary conditions and model restraints and taking into account the combustion process itself as well as the nonadiabatic operating conditions for the turbine and the compressor. The simulation algorithm is described, and the results computed using this algorithm are compared with measurements performed on a test engine in combination with a controllable turbocharger with adjustable turbine inlet guide vanes. In addition, the results of theoretical parameter studies are presented, which include the simulation of a given turbocharger with variable geometry in combination with different sized combustion engines and the simulation of different sized variable-geometry turbochargers in combination with a given combustion engine.

  16. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's strengths in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration against exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  17. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.

  18. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) reconstruction algorithms converge slowly and are therefore difficult to run on a PC. To address this, we implement a widely used CS algorithm, the linearized Bregman algorithm, on a parallel GPU. The linearized Bregman iteration is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, it involves only vector and matrix multiplications and a thresholding operation, making it simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm, as well as with other CS reconstruction algorithms such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm requires less time and is thus better suited to real-time object reconstruction.
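
    The iteration referred to here really is just two lines of linear algebra plus a shrinkage, which is why it maps well onto a GPU. Below is a hedged CPU/numpy sketch of linearized Bregman on a toy compressed-sensing problem; the problem sizes and the mu/delta values are illustrative choices, not the paper's.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy sparse-recovery problem: observe b = A @ x_true with x_true k-sparse.
      m, n, k = 60, 200, 8
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
      b = A @ x_true

      soft = lambda v, mu: np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)   # thresholding

      mu, delta = 5.0, 0.1          # threshold and step size (delta kept small for stability)
      v = np.zeros(n)
      x = np.zeros(n)
      for _ in range(5000):
          v += A.T @ (b - A @ x)    # residual back-projection: pure mat-vec work,
          x = delta * soft(v, mu)   # which is the part a GPU parallelizes

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))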

  19. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  20. Solutions of the Two Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    Leblanc, James

    In this talk we present numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit we employ numerous methods including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.

  1. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a coherence-optimized SAR imaging algorithm built on existing SAR imaging algorithms. The premise of conventional SAR image formation is that the output signal attains the maximum signal-to-noise ratio (SNR) when each scene is focused with its own optimal imaging parameters. Such processing achieves the best focus, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed here instead focuses all SAR echoes with a consistent set of imaging parameters: although the SNR of the output signal is reduced slightly, interferometric coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is better suited to SAR interferometry (InSAR) research and applications. PMID:26871446
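
    The quantity the algorithm protects is the sample coherence between two co-registered single-look complex (SLC) images. A hedged numpy sketch of its standard windowed estimator on synthetic data (decoherence from mismatched focusing would push the mean toward zero); image sizes and noise levels are invented.

      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(0)

      # Two synthetic co-registered SLC images: shared scene response plus
      # independent noise and a constant interferometric phase offset.
      shape = (128, 128)
      scene = rng.normal(size=shape) + 1j * rng.normal(size=shape)
      noise = lambda: 0.4 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
      s1 = scene + noise()
      s2 = scene * np.exp(1j * 0.3) + noise()

      def coherence(a, b, w=5):
          """Sample coherence magnitude estimated over a w x w sliding window."""
          k = np.ones((w, w))
          num = convolve2d(a * np.conj(b), k, mode="same")
          den = np.sqrt(convolve2d(np.abs(a) ** 2, k, mode="same") *
                        convolve2d(np.abs(b) ** 2, k, mode="same"))
          return np.abs(num) / den

      print(f"mean coherence: {coherence(s1, s2).mean():.3f}")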

  2. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a coherence-optimized SAR imaging algorithm built on existing SAR imaging algorithms. The premise of conventional SAR image formation is that the output signal attains the maximum signal-to-noise ratio (SNR) when each scene is focused with its own optimal imaging parameters. Such processing achieves the best focus, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed here instead focuses all SAR echoes with a consistent set of imaging parameters: although the SNR of the output signal is reduced slightly, interferometric coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is better suited to SAR interferometry (InSAR) research and applications. PMID:26871446

  3. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  4. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  5. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed, and a fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate of the cognitive radio system and has a faster convergence speed.
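    The abstract does not give the coupling scheme; a common construction couples logistic maps over a ring, x_i <- (1-eps)*f(x_i) + eps*f(x_{i-1}), and uses the resulting chaotic sequences, for example, to seed the GA population. A minimal sketch under that assumption (all parameter values are illustrative):

```python
import numpy as np

def coupled_logistic_population(n_chrom, n_genes, mu=4.0, eps=0.1,
                                burn=100, seed=0):
    """Generate a GA initial population in (0, 1) from a ring of coupled
    logistic maps: x_i <- (1-eps)*f(x_i) + eps*f(x_{i-1})."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_chrom)
    f = lambda v: mu * v * (1.0 - v)          # logistic map, chaotic at mu=4
    pop = np.empty((n_genes + burn, n_chrom))
    for t in range(n_genes + burn):
        x = (1.0 - eps) * f(x) + eps * f(np.roll(x, 1))
        x = np.clip(x, 1e-9, 1 - 1e-9)        # keep the map inside (0, 1)
        pop[t] = x
    return pop[burn:].T                       # one chaotic sequence per chromosome
```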

  7. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
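    Of the surveyed techniques, the shingle approach is the easiest to sketch: slide a fixed-length window over a document's tag sequence and compare the resulting shingle sets. The window length and the use of Jaccard similarity below are illustrative assumptions, not details from the survey:

```python
def shingles(tags, k=4):
    """Set of k-shingles (length-k windows) over a document's tag sequence."""
    return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

doc1 = ["html", "body", "div", "p", "a", "p"]
doc2 = ["html", "body", "div", "p", "p", "a"]
print(jaccard(shingles(doc1), shingles(doc2)))   # structural similarity score
```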

  8. Parallelized dilate algorithm for remote sensing image.

    PubMed

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important morphological operation, the dilate algorithm can give a more connected view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data volumes have become very large. This leads to a decrease in algorithm running speed, or to results that cannot be obtained within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm.
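    For reference, the serial operation being parallelized can be written as an OR of shifted copies of the image; a distributed version would split the image into row strips and exchange one-row halos between processes. This sketch is illustrative only, not the paper's implementation:

```python
import numpy as np

def dilate(img, iters=1):
    """3x3 binary dilation by OR-ing shifted copies of a padded image.
    A parallel version would split rows across processes with 1-row halos."""
    out = img.astype(bool)
    for _ in range(iters):
        p = np.pad(out, 1)                    # zero (False) border
        acc = np.zeros_like(p)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc |= np.roll(np.roll(p, dy, 0), dx, 1)
        out = acc[1:-1, 1:-1]                 # crop the border back off
    return out
```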

  9. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, especially for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.
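    The natural baseline for DNA compression is plain 2-bit packing of the four bases (exactly 2 bits/base); the unique bit codes that DNABIT Compress assigns to repeat fragments are what push the ratio below that. A minimal sketch of the 2-bit baseline only (not of DNABIT's repeat coding):

```python
# Plain 2-bit packing of DNA bases: the baseline any DNA-specific
# compressor must beat. DNABIT's repeat codes go below 2 bits/base.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    bits = 0
    for base in seq.upper():
        bits = (bits << 2) | CODE[base]
    return bits, 2 * len(seq)            # packed value and its bit length

packed, nbits = pack("ACGTAC")
print(f"{packed:0{nbits}b}")             # -> 000110110001
```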

  10. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.

  11. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
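    The chain of steps described above can be sketched compactly: mix ice and water emissivities at 6 GHz using a first-guess ice concentration, infer an effective surface temperature, then convert the higher-frequency brightness temperatures to emissivities. The emissivity constants below are rough illustrative values, not the algorithm's tuned coefficients:

```python
import numpy as np

# Illustrative values only: assumed emissivities of ice and water at 6 GHz.
E_ICE6, E_WATER6 = 0.92, 0.55

def corrected_emissivity(tb6, tb18, tb37, ice_conc):
    """Sketch of the abstract's chain: mix 6 GHz emissivities with the
    first-guess ice concentration, infer an effective surface temperature,
    then convert 18/37 GHz brightness temperatures to emissivities."""
    e_eff6 = ice_conc * E_ICE6 + (1.0 - ice_conc) * E_WATER6
    t_surf = tb6 / e_eff6                # effective physical temperature (K)
    return tb18 / t_surf, tb37 / t_surf  # emissivities at 18 and 37 GHz
```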

  12. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  13. A constraint consensus memetic algorithm for solving constrained optimization problems

    NASA Astrophysics Data System (ADS)

    Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.

    2014-11-01

    Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanism used for constraint handling with evolutionary algorithms mainly assists the selection process, but not the actual search process. In this article, first a genetic algorithm is combined with a class of search methods, known as constraint consensus methods, that assist infeasible individuals to move towards the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to solve a practical economic load dispatch problem, where it also shows superior performance over other algorithms.

  14. A synthesized heuristic task scheduling algorithm.

    PubMed

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task-prioritizing phase, the algorithm uses three levels of priority to choose tasks: first, critical tasks have the highest priority; second, tasks with a longer path to the exit task are selected; and finally, tasks with fewer predecessors are chosen for scheduling. In the resource-selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions when selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance.
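    The abstract does not give the exact priority formulas; as a rough illustration, HEFT-family schedulers rank tasks by "upward rank", the length of the longest compute-plus-communication path from a task to the exit task, and schedule in decreasing rank order. A minimal sketch with a hypothetical four-task DAG:

```python
from functools import lru_cache

# Hypothetical DAG as {task: [(child, comm_cost), ...]}; wcost = compute cost.
graph = {"A": [("B", 2), ("C", 4)], "B": [("D", 3)], "C": [("D", 1)], "D": []}
wcost = {"A": 5, "B": 6, "C": 4, "D": 3}

@lru_cache(maxsize=None)
def upward_rank(task):
    """Length of the longest compute+communication path from task to exit."""
    tail = max((c + upward_rank(child) for child, c in graph[task]), default=0)
    return wcost[task] + tail

order = sorted(graph, key=upward_rank, reverse=True)
print(order)   # tasks with longer paths to the exit are scheduled first
```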

  15. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project, four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes, results show that the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
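    The nonlinear gain idea can be illustrated with an odd third-order polynomial chosen so that the small-signal gain is preserved while the maximum expected input maps exactly onto the motion-system limit. The coefficient choices below are assumptions for illustration, not the report's tuned values:

```python
def cubic_gain(x, x_max, y_max, g0=0.6):
    """Odd third-order polynomial y = a1*x + a3*x**3 with small-signal gain
    g0 and y(x_max) = y_max, so cues are maximized without exceeding the
    motion-system limit. (Illustrative coefficients; assumes the resulting
    polynomial stays monotone over the input range.)"""
    a1 = g0
    a3 = (y_max - a1 * x_max) / x_max ** 3
    return a1 * x + a3 * x ** 3
```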

  16. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A hybrid artificial fish swarm algorithm (HAFSA), which combines the first suite heuristic algorithm (FSHA) with the artificial fish swarm algorithm (AFSA), is proposed to solve the problem after analyzing it based on the theory of AFSA. Experimental results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with the ant colony optimization algorithm, it shows good performance with broad application prospects.

  17. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen-processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
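    For concreteness, the linear (central-counter) style of barrier can be sketched as a sense-reversing barrier; the Python below is illustrative only, since the original work used custom synchronization on the Flex/32 shared-memory machine:

```python
import threading

class CentralBarrier:
    """Linear (central-counter) sense-reversing barrier, the simplest of
    the compared schemes; tree barriers replace the single counter with a
    logarithmic-depth combining tree."""
    def __init__(self, n):
        self.n, self.count, self.sense = n, n, True
        self.lock = threading.Condition()

    def wait(self):
        with self.lock:
            my_sense = self.sense
            self.count -= 1
            if self.count == 0:            # last arrival releases everyone
                self.count = self.n
                self.sense = not self.sense
                self.lock.notify_all()
            else:
                while self.sense == my_sense:
                    self.lock.wait()
```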

  18. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony memory and the probabilities of their candidate values, iterating to convergence on an optimal solution. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value obtained using the improved HS algorithm was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm, and its MRI image segmentation was superior to that of the original fuzzy clustering method. PMID:27403428
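    The abstract does not reproduce the update rules, but the core HS improvisation step it refers to (drawing each variable from harmony memory with some probability, occasionally pitch-adjusting it, and otherwise sampling at random) can be sketched as follows; parameter names and defaults are the usual textbook ones, not values from the paper:

```python
import numpy as np

def improvise(memory, hmcr=0.9, par=0.3, bw=0.05, bounds=(0.0, 1.0), rng=None):
    """One harmony search improvisation: each variable is drawn from the
    harmony memory with probability hmcr (then pitch-adjusted with
    probability par), otherwise sampled uniformly at random."""
    rng = rng or np.random.default_rng()
    hms, dim = memory.shape
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:
            new[j] = memory[rng.integers(hms), j]   # memory consideration
            if rng.random() < par:
                new[j] += bw * rng.uniform(-1, 1)   # pitch adjustment
        else:
            new[j] = rng.uniform(*bounds)           # random selection
    return np.clip(new, *bounds)
```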

  19. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony memory and the probabilities of their candidate values, iterating to convergence on an optimal solution. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value obtained using the improved HS algorithm was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm, and its MRI image segmentation was superior to that of the original fuzzy clustering method. PMID:27403428

  20. A Revision of the NASA Team Sea Ice Algorithm

    NASA Technical Reports Server (NTRS)

    Markus, T.; Cavalieri, Donald J.

    1998-01-01

    In a recent paper, two operational algorithms to derive ice concentration from satellite multichannel passive microwave sensors were compared. Although the results of these, known as the NASA Team algorithm and the Bootstrap algorithm, have been validated and are generally in good agreement, there are areas where the ice concentrations differ by up to 30%. These differences can be explained by shortcomings in one or the other algorithm. Here, we present an algorithm which, in addition to the 19 and 37 GHz channels used by both the Bootstrap and NASA Team algorithms, makes use of the 85 GHz channels as well. Atmospheric effects, particularly at 85 GHz, are reduced by using a forward atmospheric radiative transfer model. Comparisons with the NASA Team and Bootstrap algorithms show that the individual shortcomings of these algorithms are not apparent in this new approach. The results further show better quantitative agreement with ice concentrations derived from NOAA AVHRR infrared data.

  1. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems: the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
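    The PDD algorithm partitions the system into blocks and couples them through a small reduced system; the serial building block each processor runs is an ordinary tridiagonal solve. A sketch of that serial solve (the standard Thomas algorithm, not the PDD algorithm itself):

```python
import numpy as np

def thomas(a, b, c, d):
    """Serial tridiagonal solve (Thomas algorithm): a = sub-, b = main-,
    c = super-diagonal, d = right-hand side; a[0] and c[-1] are unused.
    PDD runs a solve like this on each block, then couples the blocks
    through a small reduced interface system."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```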

  2. New formulations of monotonically convergent quantum control algorithms

    NASA Astrophysics Data System (ADS)

    Maday, Yvon; Turinici, Gabriel

    2003-05-01

    Most numerical simulations in quantum (bilinear) control have used one of the monotonically convergent algorithms of Krotov (introduced by Tannor et al.) or of Zhu and Rabitz. However, until now no explicit relationship has been revealed between the two algorithms that would clarify their common properties. Within this framework, we propose in this paper a unified formulation that comprises both algorithms and that extends to a new class of monotonically convergent algorithms. Numerical results show that the newly derived algorithms perform as well as (and sometimes better than) the well-known algorithms cited above.

  3. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the food chain algorithm performs better than the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  4. Fourth Order Algorithms for Solving Diverse Many-Body Problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Forbert, Harald A.; Chen, Chia-Rong; Kidwell, Donald W.; Ciftja, Orion

    2001-03-01

    We show that the method of factorizing an evolution operator of the form e^{ε(A+B)} to fourth order with purely positive coefficients yields new classes of symplectic algorithms for solving classical dynamical problems, unitary algorithms for solving the time-dependent Schrödinger equation, norm-preserving algorithms for solving the Langevin equation, and large-time-step convergent diffusion Monte Carlo algorithms. Results for each class of problems will be presented and discussed.

  5. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three-degree-of-freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani-type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.

  6. An incremental clustering algorithm based on Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance of classical fuzzy c-means clustering with the Mahalanobis distance, and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm not only remedies the defect of the fuzzy c-means algorithm but also increases training accuracy.
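    The substitution the paper describes is compact: replace the squared Euclidean distance in the fuzzy c-means update with the squared Mahalanobis distance computed from a cluster's covariance. A minimal sketch (the regularization term is an added assumption for numerical safety):

```python
import numpy as np

def mahalanobis_sq(X, center, cov):
    """Squared Mahalanobis distance of each row of X to a cluster center,
    the drop-in replacement for squared Euclidean distance in the fuzzy
    c-means membership and center updates."""
    diff = X - center
    inv = np.linalg.inv(cov + 1e-9 * np.eye(cov.shape[0]))  # regularized
    return np.einsum("ij,jk,ik->i", diff, inv, diff)
```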

  7. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  8. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  9. Profiling wind and greenhouse gases by infrared-laser occultation: algorithm and results from end-to-end simulations in windy air

    NASA Astrophysics Data System (ADS)

    Plach, A.; Proschek, V.; Kirchengast, G.

    2015-01-01

    The new mission concept of microwave and infrared-laser occultation between low-Earth-orbit satellites (LMIO) is designed to provide accurate and long-term stable profiles of atmospheric thermodynamic variables, greenhouse gases (GHGs), and line-of-sight (l.o.s.) wind speed with focus on the upper troposphere and lower stratosphere (UTLS). While the unique quality of GHG retrievals enabled by LMIO over the UTLS has been recently demonstrated based on end-to-end simulations, the promise of l.o.s. wind retrieval, and of joint GHG and wind retrieval, has not yet been analyzed in any realistic simulation setting so far. Here we describe a newly developed l.o.s. wind retrieval algorithm, which we embedded in an end-to-end simulation framework that also includes the retrieval of thermodynamic variables and GHGs, and analyze the performance of both standalone wind retrieval and joint wind and GHG retrieval. The wind algorithm utilizes LMIO laser signals placed on the inflection points at the wings of the highly symmetric C18OO absorption line near 4767 cm^-1 and exploits transmission differences from the wind-induced Doppler shift. Based on realistic example cases for a diversity of atmospheric conditions, ranging from tropical to high-latitude winter, we find that the retrieved l.o.s. wind profiles are of high quality over the lower stratosphere under all conditions, i.e., unbiased and accurate to within about 2 m s^-1 over about 15 to 35 km. The wind accuracy degrades into the upper troposphere due to the decreasing signal-to-noise ratio of the wind-induced differential transmission signals. The GHG retrieval in windy air is not vulnerable to wind speed uncertainties up to about 10 m s^-1 but is found to benefit in case of higher speeds from the integrated wind retrieval that enables correction of the wind-induced Doppler shift of GHG signals. Overall, both the l.o.s. wind and GHG retrieval results are strongly encouraging towards further development and implementation of an LMIO mission.

  10. Mortality prediction in the ICU: can we do better? Results from the Super ICU Learner Algorithm (SICULA) project, a population-based study

    PubMed Central

    Pirracchio, Romain; Petersen, Maya L.; Carone, Marco; Resche Rigon, Matthieu; Chevret, Sylvie; van der Laan, Mark J.

    2015-01-01

    Background Improved mortality prediction for patients in intensive care units (ICU) remains an important challenge. Many severity scores have been proposed, but validation studies have concluded that they are not adequately calibrated. Many flexible algorithms are available, yet none of these individually outperforms all others regardless of context. In contrast, the Super Learner (SL), an ensemble machine learning technique that leverages multiple learning algorithms to obtain better prediction performance, has been shown to perform at least as well as the optimal member of its library. It might provide an ideal opportunity to construct a novel severity score with an improved performance profile. The aim of the present study was to provide a new mortality prediction algorithm for ICU patients using an implementation of the Super Learner, and to assess its performance relative to prediction based on the SAPS II, APACHE II and SOFA scores. Methods We used the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database (v26) including all patients admitted to an ICU at Boston’s Beth Israel Deaconess Medical Center from 2001 to 2008. The calibration, discrimination and risk classification of predicted hospital mortality based on SAPS II, on APACHE II, on SOFA and on our Super Learner-based proposal were evaluated. Performance measures were calculated using cross-validation to avoid making biased assessments. Our proposed score was then externally validated on a dataset of 200 randomly selected patients admitted at the ICU of Hôpital Européen Georges-Pompidou in Paris, France between September 2013 and June 2014. The primary outcome was hospital mortality. The explanatory variables were the same as those included in the SAPS II score. Results 24,508 patients were included, with median SAPS II 38 (IQR: 27–51), median SOFA 5 (IQR: 2–8). A total of 3,002/24,508 (12.2%) patients died in the hospital. The two versions of our Super Learner

  11. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  12. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
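    For reference, plain OMP, the baseline being modified, fits in a few lines: greedily pick the dictionary column most correlated with the residual, then re-fit on the accumulated support by least squares. LPMP differs by matching a parametric Lorentzian peak at each iteration instead of a fixed column; the sketch below is generic OMP only:

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit for y ~ A @ x with k nonzeros.
    Assumes the columns of A are (approximately) unit-normalized."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit, update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```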

  13. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    SciTech Connect

    Schmidtlein, CR; Beattie, B; Humm, J; Li, S; Wu, Z; Xu, Y; Zhang, J; Shen, L; Vogelsang, L; Feiglin, D; Krol, A

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce the artificial piece-wise constant regions (“staircase” artifacts, typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for “staircase” artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1 hour post-injection, for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the “staircase” artifact for GPAPA compared to PAPA, and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and less noisy than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved

  14. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, a fetal heart rate baseline correction algorithm is presented to make an existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline is identified and corrected. Then a new baseline is obtained after treatment with smoothing methods. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combines a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and they also demonstrated the effectiveness of the FHR baseline correction algorithm.

  15. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  16. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491

  17. Modified Landweber algorithm for robust particle sizing by using Fraunhofer diffraction.

    PubMed

    Xu, Lijun; Wei, Tianxiao; Zhou, Jiayi; Cao, Zhang

    2014-09-20

    In this paper, a robust modified Landweber algorithm was proposed to retrieve the particle size distributions from Fraunhofer diffraction. Three typical particle size distributions, i.e., Rosin-Rammler, lognormal, and bimodal normal distributions for particles ranging from 4.8 to 96 μm, were employed to verify the performance of the algorithm. To show its merits, the proposed algorithm was compared with the Tikhonov regularization algorithm and the ℓ1-norm-based algorithm. Simulation results showed that, for noise-free data, both the modified Landweber algorithm and the ℓ1-norm-based algorithm were better than the Tikhonov regularization algorithm in terms of accuracy. When the data was noise-contaminated, the modified Landweber algorithm was superior to the other two algorithms in both accuracy and speed. An experimental setup was also established and the results validated the feasibility and effectiveness of the proposed method.
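    The abstract does not spell out the modification; one common robust variant of the Landweber iteration for particle sizing adds a nonnegativity projection after each update, with the relaxation parameter set from the spectral norm to guarantee convergence. The sketch below is that generic projected form, under stated assumptions, and not necessarily the authors' exact method:

```python
import numpy as np

def projected_landweber(A, b, n_iter=200, relax=None):
    """Landweber iteration x <- x + t * A.T @ (b - A @ x) with projection
    onto x >= 0 after each step (size distributions are nonnegative).
    Step size t in (0, 2/sigma_max^2) ensures convergence."""
    if relax is None:
        relax = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * (A.T @ (b - A @ x))
        x = np.maximum(x, 0.0)            # physical nonnegativity constraint
    return x
```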

  18. Damage evaluation on a multi-story framed structures: comparison of results retrieved from algorithms based on modal and non-modal parameters

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria

    2016-04-01

    Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructures and the performance of safety interventions over time. In case of earthquakes, data acquired by means of continuous monitoring systems can be used to localize and quantify possible damage that has occurred on a monitored structure, using appropriate algorithms based on the variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure, whose variations can be correlated with damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method is based on the evaluation of the curvature variation (related to the fundamental mode of vibration) over time and compares the variations before, during and after the earthquake. The Interpolation Method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using both numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC

  19. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  20. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    The ESO Very Large Telescope Interferometer, which allows astronomers to scrutinise objects with a precision equivalent to that of a 130-m telescope, is proving itself an unequalled success every day. One of the latest instruments installed, AMBER, has led to a flurry of scientific results, an anthology of which is being published this week as special features in the research journal Astronomy & Astrophysics. [ESO PR Photo 06a/07: The AMBER Instrument] "With its unique capabilities, the VLT Interferometer (VLTI) has created itself a niche in which it provides answers to many astronomical questions, from the shape of stars, to discs around stars, to the surroundings of the supermassive black holes in active galaxies," says Jorge Melnick (ESO), the VLT Project Scientist. The VLTI has already led to 55 scientific papers and is in fact producing more than half of the interferometric results worldwide. "With the capability of AMBER to combine up to three of the 8.2-m VLT Unit Telescopes, we can really achieve what nobody else can do," added Fabien Malbet, from the LAOG (France) and the AMBER Project Scientist. Eleven articles will appear this week in Astronomy & Astrophysics' special AMBER section. Three of them describe the unique instrument, while the other eight reveal completely new results about the early and late stages in the life of stars. [ESO PR Photo 06b/07: The Inner Winds of Eta Carinae] The first results presented in this issue cover various fields of stellar and circumstellar physics. Two papers deal with very young solar-like stars, offering new information about the geometry of the surrounding discs and associated outflowing winds. Other articles are devoted to the study of hot active stars of particular interest: Alpha Arae, Kappa Canis Majoris, and CPD -57°2874. They provide new, precise information about their rotating gas envelopes. An important new result concerns the enigmatic object Eta Carinae. Using AMBER with

  1. Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?

    NASA Astrophysics Data System (ADS)

    Petković, Dušan

    The article compares two different approaches to the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to order the join operations of large join queries, i.e., joins with more than a dozen join operations. A drawback of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs), as a data mining technique, have been shown to be a promising technique for ordering join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that the use of a genetic algorithm is the better solution for optimizing very large join queries, i.e., such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.

  2. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfections in each magnet piece cause field errors. These field errors can be reduced or compensated by sorting the magnet pieces in the proper order. We have shown that a genetic algorithm has good properties for this sorting scheme. In this paper, the optimization scheme is applied to the case of permanent magnets that have errors in the direction of the field. The results show that the genetic algorithm is superior to other algorithms.

  3. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate.

  4. Coronary CTA using scout-based automated tube potential and current selection algorithm, with breast displacement results in lower radiation exposure in females compared to males

    PubMed Central

    Vadvala, Harshna; Kim, Phillip; Mayrhofer, Thomas; Pianykh, Oleg; Kalra, Mannudeep; Hoffmann, Udo

    2014-01-01

    Purpose To evaluate the effect of automatic tube potential selection and automatic exposure control combined with female breast displacement during coronary computed tomography angiography (CCTA) on radiation exposure in women versus men of the same body size. Materials and methods Consecutive clinical exams between January 2012 and July 2013 at an academic medical center were retrospectively analyzed. All examinations were performed using ECG-gating and an automated tube potential and tube current selection algorithm (APS-AEC), with breast displacement in females. Cohorts were stratified by sex and standard World Health Organization body mass index (BMI) ranges. CT dose index volume (CTDIvol), dose length product (DLP), median effective dose (ED), and size specific dose estimate (SSDE) were recorded. Univariable and multivariable regression analyses were performed to evaluate the effect of gender on radiation exposure per BMI. Results A total of 726 exams were included, of which 343 (47%) were females; mean BMI was similar by gender (28.6±6.9 kg/m2 females vs. 29.2±6.3 kg/m2 males; P=0.168). Median ED was 2.3 mSv (1.4-5.2) for females and 3.6 (2.5-5.9) for males (P<0.001). Females were exposed to less radiation by a difference in median ED of –1.3 mSv, CTDIvol –4.1 mGy, and SSDE –6.8 mGy (all P<0.001). After adjusting for BMI, patient characteristics, and gating mode, females' exposure was lower by a median ED of –0.7 mSv, CTDIvol –2.3 mGy, and SSDE –3.15 mGy, respectively (all P<0.01). Conclusions We observed a difference in radiation exposure to patients undergoing CCTA with the combined use of APS-AEC and breast displacement in female patients as compared to their BMI-matched male counterparts, with female patients receiving one third less exposure. PMID:25610804

  5. Grooming of arbitrary traffic using improved genetic algorithms

    NASA Astrophysics Data System (ADS)

    Jiao, Yueguang; Xu, Zhengchun; Zhang, Hanyi

    2004-04-01

    A genetic algorithm with permutation-based chromosome representation and roulette wheel selection is proposed to solve traffic grooming problems in WDM ring networks. The parameters of the algorithm are evaluated by calculation over a large number of traffic patterns under different conditions. Four methods were developed to improve the algorithm, and they can be used in combination with one another. Their effects on the algorithm are studied via computer simulations. The results show that they all make the algorithm more powerful in reducing the number of add-drop multiplexers or wavelengths required in a network.

  6. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is translated into an equivalent optimization problem that minimizes an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, compensating for their respective disadvantages, improving the efficiency of the algorithm, and avoiding the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
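    The translation of SAT into a minimization problem can be made concrete with a natural objective, the number of unsatisfied clauses, which is zero exactly at a satisfying assignment. This particular encoding is an assumption for illustration; the paper's objective function is not given in the abstract:

```python
def unsatisfied(clauses, assignment):
    """Objective for the evolutionary search: number of unsatisfied clauses.
    Clauses use DIMACS-style signed integers; assignment[i] is the truth
    value of variable i+1. A value of 0 means the formula is satisfied."""
    def lit_true(lit):
        val = assignment[abs(lit) - 1]
        return val if lit > 0 else not val
    return sum(not any(lit_true(l) for l in c) for c in clauses)

# (x1 or not x2) and (x2 or x3)
print(unsatisfied([[1, -2], [2, 3]], [False, False, True]))  # -> 0
```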

  7. Stretched View Showing 'Victoria'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This pair of images from the panoramic camera on NASA's Mars Exploration Rover Opportunity served as initial confirmation that the two-year-old rover is within sight of 'Victoria Crater,' which it has been approaching for more than a year. Engineers on the rover team were unsure whether Opportunity would make it as far as Victoria, but scientists hoped for the chance to study such a large crater with their roving geologist. Victoria Crater is 800 meters (nearly half a mile) in diameter, about six times wider than 'Endurance Crater,' where Opportunity spent several months in 2004 examining rock layers affected by ancient water.

    When scientists using orbital data calculated that they should be able to detect Victoria's rim in rover images, they scrutinized frames taken in the direction of the crater by the panoramic camera. To positively characterize the subtle horizon profile of the crater and some of the features leading up to it, researchers created a vertically-stretched image (top) from a mosaic of regular frames from the panoramic camera (bottom), taken on Opportunity's 804th Martian day (April 29, 2006).

    The stretched image makes mild nearby dunes look like more threatening peaks, but that is only a result of the exaggerated vertical dimension. This vertical stretch technique was first applied to Viking Lander 2 panoramas by Philip Stooke, of the University of Western Ontario, Canada, to help locate the lander with respect to orbiter images. Vertically stretching the image allows features to be more readily identified by the Mars Exploration Rover science team.

    The bright white dot near the horizon to the right of center (barely visible without labeling or zoom-in) is thought to be a light-toned outcrop on the far wall of the crater, suggesting that the rover can see over the low rim of Victoria. In figure 1, the northeast and southeast rims are labeled

  8. A Holographic Road Show.

    ERIC Educational Resources Information Center

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  9. Severe sepsis and septic shock in pre-hospital emergency medicine: survey results of medical directors of emergency medical services concerning antibiotics, blood cultures and algorithms.

    PubMed

    Casu, Sebastian; Häske, David

    2016-06-01

    Delayed antibiotic treatment for patients in severe sepsis and septic shock decreases the probability of survival. In this survey, medical directors of different emergency medical services (EMS) in Germany were asked whether they are prepared for pre-hospital sepsis therapy with antibiotics or special algorithms, in order to evaluate how well the individual rescue areas are prepared for the treatment of patients with this infectious disease. The objective of the survey was to obtain a general picture of the current status of the EMS with respect to rapid antibiotic treatment for sepsis. A total of 166 medical directors were invited to complete a short survey on behalf of the different rescue service districts in Germany via an electronic cover letter. Of the rescue districts, 25.6 % (n = 20) stated that they keep antibiotics on EMS vehicles. In addition, 2.6 % carry blood cultures on the vehicles. The most common antibiotic is ceftriaxone (a third-generation cephalosporin). In total, 8 (10.3 %) rescue districts use an algorithm for patients with sepsis, severe sepsis or septic shock. Although the German EMS is an emergency physician-based rescue system, special provisions in the form of antibiotics on emergency physician vehicles are missing. Simultaneously, only 10.3 % of the rescue districts use a special algorithm for sepsis therapy. Sepsis, severe sepsis and septic shock do not appear to be prioritized as highly as these deadly diseases should be in the pre-hospital setting. PMID:26719078

  10. An Improved Population Migration Algorithm Introducing the Local Search Mechanism of the Leap-Frog Algorithm and Crossover Operator

    PubMed Central

    Zhang, Yanqing; Liu, Xueying

    2013-01-01

    The population migration algorithm (PMA) is an intelligent algorithm that simulates population migration. Given the premature convergence and low precision of the PMA, this paper introduces the local search mechanism of the leap-frog algorithm and a crossover operator to improve the PMA's search speed and global convergence properties. Typical test functions verify the performance of the improved algorithm. Compared with the original PMA and other intelligent algorithms, the results show that the convergence rate of the improved PMA is very high, and its convergence is proved. PMID:23460807

  11. Modeling algorithm execution time on processor arrays

    NASA Technical Reports Server (NTRS)

    Adams, L. M.; Crockett, T. W.

    1984-01-01

    An approach to modelling the execution time of algorithms on parallel arrays is presented. This time is expressed as a function of the number of processors and system parameters. The resulting model has been applied to a parallel implementation of the conjugate-gradient algorithm on NASA's FEM. Results of experiments performed to compare the model predictions against actual behavior show that the floating-point arithmetic, communication, and synchronization components of the parallel algorithm execution time were correctly modelled. The results also show that the overhead caused by the interaction of the system software and the actual parallel hardware must be reflected in the model parameters. The model has been used to predict the performance of the conjugate gradient algorithm on a given problem as the number of processors and machine characteristics varied.

  12. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  13. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  14. Show What You Know

    ERIC Educational Resources Information Center

    Eccleston, Jeff

    2007-01-01

    Big things come in small packages. This saying came to the mind of the author after he created a simple math review activity for his fourth grade students. Though simple, it has proven to be extremely advantageous in reinforcing math concepts. He uses this activity, which he calls "Show What You Know," often. This activity provides the perfect…

  15. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. The Rhode Island's high school graduating class became…

  16. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  17. What Do Maps Show?

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet, appropriate for grades 4-8, features a teaching poster which shows different types of maps (different views of Salt Lake City, Utah), as well as three reproducible maps and reproducible activity sheets which complement the maps. The poster provides teacher background, including step-by-step lesson plans for four geography…

  18. Obesity in show cats.

    PubMed

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation, at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. These findings warrant firm discussions with breeders and cat show judges to arrive at different interpretations of the standards, in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes cats to obesity and requires early nutritional intervention to prevent obese conditions. PMID:24612018

  19. Show Me the Way

    ERIC Educational Resources Information Center

    Dicks, Matthew J.

    2005-01-01

    Because today's students have grown up steeped in video games and the Internet, most of them expect feedback, and usually gratification, very soon after they expend effort on a task. Teachers can get quick feedback to students by showing them videotapes of their learning performances. The author, a 3rd grade teacher, describes how the seemingly…

  20. The Art Show

    ERIC Educational Resources Information Center

    Scolarici, Alicia

    2004-01-01

    This article describes what once was thought to be impossible--a formal art show extravaganza at an elementary school with 1,000 students, a Department of Defense Dependent School (DODDS) located overseas, on RAF Lakenheath, England. The dream of this event involved the transformation of the school cafeteria into an elegant art show…

  1. Honored Teacher Shows Commitment.

    ERIC Educational Resources Information Center

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  2. Developing evidence-based algorithms for negative pressure wound therapy in adults with acute and chronic wounds: literature and expert-based face validation results.

    PubMed

    Beitz, Janice M; van Rijswijk, Lia

    2012-04-01

    Negative pressure wound therapy (NPWT) is used extensively in the management of acute and chronic wounds, but concerns persist about its efficacy, effectiveness, and safety. Available guidelines and algorithms are wound type-specific, not evidence-based, and many lack clearly described relative and absolute contraindications and stop criteria. The purpose of this research was to: (1) develop evidence-based algorithms for the safe use of NPWT in adults with acute and chronic wounds by nonwound expert clinicians, and (2) obtain face validity for the algorithms. Using NPWT meta-analyses and systematic reviews (n = 10), NPWT guidelines of care (n = 12), general evidence-based guidelines of wound care (n = 11), and a framework for transitioning between moisture-retentive and NPWT care (n = 1), a set of three algorithms was developed. Literature-based validity for each of the 39 discrete algorithm steps/decision points was obtained by reviewing best available evidence from systematic literature reviews (n = 331 publications) and abstraction of all NPWT-relevant publications (n = 182) using the patient-oriented Strength of Recommendation (SORT) taxonomy. Of the 182 NPWT studies abstracted, 25 met criteria for level 1 and 2 evidence, but only one general assessment step had both level 1 evidence and an "A" strength of recommendation. Next, an Institutional Review Board-approved, cross-sectional mixed-methods survey design face validation pilot study was conducted to solicit comments on, and rate the validity of, the 51 discrete algorithm-related statements, including the 39 decisions/steps. Twelve (12) of the 15 invited interdisciplinary wound experts agreed to participate. The overall algorithm content validity index (CVI) was high (0.96 out of 1). Helpful design suggestions to ensure safe use were made, and participants suggested an examination of commonly used wound definitions in follow-up studies. Results of the literature-based face validation confirm that the

  3. Validation of a treatment algorithm for orthopaedic implant-related infections with device-retention-results from a prospective observational cohort study.

    PubMed

    Tschudin-Sutter, S; Frei, R; Dangel, M; Jakob, M; Balmelli, C; Schaefer, D J; Weisser, M; Elzi, L; Battegay, M; Widmer, A F

    2016-05-01

    Success rates for treatment regimens involving retention of an infected implant are conflicting, and failure rates of up to 80% have been reported. We aimed to validate a proposed treatment algorithm, based on strict selection criteria, by assessing the long-term outcome of treatment for orthopaedic device-related infection (ODRI) with retention. From January 1999 to December 2009, all patients diagnosed with ODRI at the University Hospital Basel, Switzerland were eligible for treatment with open surgical debridement, implant retention and antibiotics, if the duration of clinical symptoms was ≤3 weeks, the implant was stable, the soft tissue had no abscess or sinus tract, and the causative pathogen was susceptible to antimicrobial agents with activity against surface-adhering microorganisms. Antimicrobial treatment was administered according to a predefined algorithm. The primary outcome was treatment failure after 2-year follow-up. A total of 455 patients were diagnosed with an ODRI, of whom 233 (51.2%) were eligible for treatment involving implant retention. Causative pathogens were mainly Staphylococcus aureus (41.6%) and coagulase-negative staphylococci (33.9%). Among patients with ODRIs related to prostheses, failure was documented in 10.8% (12/111), and in patients with ODRIs related to osteosyntheses, failure occurred in 9.8% (12/122) after 2 years of follow-up. In all, 90% of ODRIs were successfully cured with surgical debridement and implant retention in addition to long-term antimicrobial therapy according to the predefined treatment algorithm, provided that patients fulfilled the strict selection criteria and that there was susceptibility to rifampin for Gram-positive pathogens and ciprofloxacin for Gram-negative pathogens. PMID:26806134

  4. [Research on and application of hybrid optimization algorithm in Brillouin scattering spectrum parameter extraction problem].

    PubMed

    Zhang, Yan-jun; Zhang, Shu-guo; Fu, Guang-wei; Li, Da; Liu, Yin; Bi, Wei-hong

    2012-04-01

    This paper presents a novel algorithm that blends the particle swarm optimization (PSO) algorithm and the Levenberg-Marquardt (LM) algorithm according to a switching probability. The algorithm can be applied to pseudo-Voigt Brillouin scattering spectra to improve the goodness of fit and the precision of frequency shift extraction. PSO serves as the main framework: it first performs a global search, and after a fixed number of iterations a random number rand(0, 1) is generated. If rand(0, 1) is less than or equal to a predetermined probability P, the best solution found by PSO is used as the initial value for the LM algorithm, which then performs a deep local search; the LM solution replaces the previous PSO optimum, and PSO resumes its global search. If rand(0, 1) is greater than P, PSO continues searching until the next switching decision. Alternating between the two algorithms in this way yields a near-ideal global optimum. Simulation analysis and experimental results show that the new algorithm overcomes the shortcomings of either algorithm alone and improves the goodness of fit and the precision of frequency shift extraction for Brillouin scattering spectra, fully demonstrating that the new method is practical and feasible.
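
    The switching logic described above is straightforward to sketch. The following is a minimal illustration, not the authors' implementation: a plain PSO loop fits a Lorentzian stand-in for the pseudo-Voigt profile, and with probability P the swarm's best position is handed to a Levenberg-Marquardt least-squares refinement (here via scipy.optimize.least_squares with method='lm'); all parameter values are illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(theta, f, y):
          a, f0, w = theta                     # amplitude, center, width
          return a * w**2 / ((f - f0)**2 + w**2) - y   # Lorentzian model minus data

      def pso_lm(f, y, n=30, iters=200, P=0.3, rng=np.random.default_rng(0)):
          lo = np.array([0.0, f.min(), 1e-3])
          hi = np.array([2.0, f.max(), 1.0])
          x = rng.uniform(lo, hi, (n, 3))      # swarm positions
          v = np.zeros_like(x)                 # swarm velocities
          cost = lambda t: np.sum(residuals(t, f, y) ** 2)
          p_best = x.copy()
          p_cost = np.array([cost(t) for t in x])
          g = p_best[p_cost.argmin()].copy()   # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n, 1))
              v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              c = np.array([cost(t) for t in x])
              better = c < p_cost
              p_best[better], p_cost[better] = x[better], c[better]
              g = p_best[p_cost.argmin()].copy()
              if rng.random() <= P:            # hand off to LM, then resume PSO
                  g = least_squares(residuals, g, args=(f, y), method='lm').x
          return g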

  5. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on more varied test datasets as well as on real weather datasets; this is especially important considering that this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
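
    A minimal sketch of the sampling idea, assuming its simplest form: run Lloyd's iterations on a uniform random sample, then assign the full dataset to the learned centroids in one cheap pass. The sample-size selection via width and confidence level mentioned above is not modeled.

      import numpy as np

      def sample_kmeans(X, k, sample_size, iters=100, rng=np.random.default_rng(1)):
          S = X[rng.choice(len(X), size=sample_size, replace=False)]
          centroids = S[rng.choice(sample_size, size=k, replace=False)]
          for _ in range(iters):               # Lloyd's steps on the sample only
              d = ((S[:, None, :] - centroids[None]) ** 2).sum(-1)
              labels = d.argmin(1)
              new = np.array([S[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
              if np.allclose(new, centroids):
                  break
              centroids = new
          # one pass over the full dataset assigns every point
          full_labels = ((X[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
          return centroids, full_labels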

  6. Taking in a Show.

    PubMed

    Boden, Timothy W

    2016-01-01

    Many medical practices have cut back on education and staff development expenses, especially those costs associated with conventions and conferences. But there are hard-to-value returns on your investment in these live events--beyond the obvious benefits of acquired knowledge and skills. Major vendors still exhibit their services and wares at many events, and the exhibit hall is a treasure-house of information and resources for the savvy physician or administrator. Make and stick to a purposeful plan to exploit the trade show. You can compare products, gain new insights and ideas, and even negotiate better deals with representatives anxious to realize returns on their exhibition investments. PMID:27249887

  8. Obesity in show dogs.

    PubMed

    Corbee, R J

    2013-10-01

    Obesity is an important disease with a growing incidence. Because obesity is related to several other diseases, and decreases life span, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain breeds is often suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, we investigated 1379 dogs of 128 different breeds by determining their body condition score (BCS). Overall, 18.6% of the show dogs had a BCS > 5, and 1.1% of the show dogs had a BCS > 7. There were significant differences between breeds, which could be correlated to the breed standards. These findings warrant firm discussions with breeders and judges to arrive at different interpretations of the standards, in order to prevent overweight conditions from being the standard of beauty. PMID:22882163

  9. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

    In distributed optical fiber sensing systems based on Brillouin scattering, strain and temperature are the main measured parameters, which can be obtained by analyzing the Brillouin center frequency shift. A novel algorithm that combines the cuckoo search (CS) algorithm with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features under different linear weight ratios, linewidth combinations and SNRs. Moreover, a BOTDR temperature measuring system based on electron optical frequency shift is set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show that there is a good linear relationship between the Brillouin center frequency shift and temperature changes.

  10. Not a "reality" show.

    PubMed

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336

  12. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  13. A Multistrategy Optimization Improved Artificial Bee Colony Algorithm

    PubMed Central

    Liu, Wen

    2014-01-01

    To address the artificial bee colony algorithm's tendency toward premature convergence and its slow convergence rate, an improved algorithm is proposed. Chaotic reverse learning strategies are used to initialize the swarm in order to improve the global search ability of the algorithm and keep its diversity; the similarity degree of individuals of the population is used to characterize the diversity of the population; a population diversity measure is set as an indicator to dynamically and adaptively adjust the nectar position, so that premature and local convergence are avoided effectively; and a dual-population search mechanism is introduced into the search stage of the algorithm, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions, compared with other algorithms, showed that the improved algorithm has a faster convergence rate and a greater capacity to jump out of local optima. PMID:24982924

  14. Efficient algorithm to compute mutually connected components in interdependent networks.

    PubMed

    Hwang, S; Choi, S; Lee, Deokjae; Kahng, B

    2015-02-01

    Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
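
    For reference, the quantity being tracked can be computed by brute force: repeatedly split candidate node sets by the connected components of each layer until nothing changes. The sketch below (using networkx, on an illustrative two-layer example) is that O(N^2)-style baseline, not the paper's fully dynamic algorithm.

      import networkx as nx

      def mutually_connected_components(g1, g2):
          """Refine node sets until each is connected within both layers."""
          parts = [set(g1.nodes) & set(g2.nodes)]
          changed = True
          while changed:
              changed = False
              for layer in (g1, g2):
                  refined = []
                  for part in parts:
                      pieces = [set(c) for c in
                                nx.connected_components(layer.subgraph(part))]
                      if len(pieces) > 1:
                          changed = True
                      refined.extend(pieces)
                  parts = refined
          return parts

      g1 = nx.Graph([(1, 2), (2, 3), (3, 4)])
      g2 = nx.Graph([(1, 2), (3, 4)])
      print(mutually_connected_components(g1, g2))  # [{1, 2}, {3, 4}]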

  16. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The algorithms include a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and, finally, an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
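
    A hedged sketch of the first, straight-line idea: fit input power as an affine function of the digital AGC reading and temperature by least squares over calibration data. The affine form and function names are illustrative assumptions; the flight characterization was explicitly nonlinear, which is why the adaptive filter and neural network estimators exist.

      import numpy as np

      def fit_power_model(agc_digital, temperature, true_power_dbm):
          # columns: AGC reading, temperature, constant offset
          A = np.column_stack([agc_digital, temperature, np.ones_like(agc_digital)])
          coef, *_ = np.linalg.lstsq(A, true_power_dbm, rcond=None)
          return coef  # (slope_agc, slope_temp, intercept)

      def estimate_power(coef, agc_digital, temperature):
          return coef[0] * agc_digital + coef[1] * temperature + coef[2]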

  17. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
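
    Two of the named combination rules are simple to state. The sketch below shows majority voting and Boltzmann multiplication over tabular agents represented only by their Q-tables (a stand-in for the five RL algorithms in the paper); the tie-breaking rule and temperature are illustrative choices.

      import numpy as np

      def majority_vote_action(q_tables, state):
          greedy = [int(np.argmax(q[state])) for q in q_tables]  # one vote per agent
          return int(np.argmax(np.bincount(greedy)))  # ties go to the lowest index

      def boltzmann_multiplication(q_tables, state, tau=1.0):
          p = np.ones_like(q_tables[0][state], dtype=float)
          for q in q_tables:
              e = np.exp(q[state] / tau)       # each agent's action distribution
              p *= e / e.sum()                 # multiply the policies elementwise
          return p / p.sum()                   # renormalize into one ensemble policy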

  19. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to unique solutions. Because the obtained algorithm exhibits slow convergence, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in medical applications of emission tomography.

  20. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency will be significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the combined effect of these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if any occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms, both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
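
    A simplified sketch of the central filtering step: a moving average applied everywhere except inside a guard window around each detected peak, so the QRS complexes are not flattened. The threshold peak detector and all window sizes here are illustrative stand-ins, not the paper's method.

      import numpy as np

      def smooth_except_peaks(ecg, window=5, guard=10, thresh=None):
          ecg = np.asarray(ecg, dtype=float)
          thresh = 0.6 * ecg.max() if thresh is None else thresh
          protected = np.zeros(len(ecg), dtype=bool)
          for p in np.flatnonzero(ecg > thresh):       # naive peak detection
              protected[max(0, p - guard):p + guard + 1] = True
          smoothed = np.convolve(ecg, np.ones(window) / window, mode='same')
          return np.where(protected, ecg, smoothed)    # keep raw samples at peaks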

  1. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID

  2. Identifying Risk Factors for Recent HIV Infection in Kenya Using a Recent Infection Testing Algorithm: Results from a Nationally Representative Population-Based Survey

    PubMed Central

    Kim, Andrea A.; Parekh, Bharat S.; Umuro, Mamo; Galgalo, Tura; Bunnell, Rebecca; Makokha, Ernest; Dobbs, Trudy; Murithi, Patrick; Muraguri, Nicholas; De Cock, Kevin M.; Mermin, Jonathan

    2016-01-01

    Introduction A recent infection testing algorithm (RITA) that can distinguish recent from long-standing HIV infection can be applied to nationally representative population-based surveys to characterize and identify risk factors for recent infection in a country. Materials and Methods We applied a RITA using the Limiting Antigen Avidity Enzyme Immunoassay (LAg) on stored HIV-positive samples from the 2007 Kenya AIDS Indicator Survey. The case definition for recent infection included testing recent on LAg and having no evidence of antiretroviral therapy use. Multivariate analysis was conducted to determine factors associated with recent and long-standing infection compared to HIV-uninfected persons. All estimates were weighted to adjust for sampling probability and nonresponse. Results Of 1,025 HIV-antibody-positive specimens, 64 (6.2%) met the case definition for recent infection and 961 (93.8%) met the case definition for long-standing infection. Compared to HIV-uninfected individuals, factors associated with higher adjusted odds of recent infection were living in Nairobi (adjusted odds ratio [AOR] 11.37; confidence interval [CI] 2.64–48.87) and Nyanza (AOR 4.55; CI 1.39–14.89) provinces compared to Western province; being widowed (AOR 8.04; CI 1.42–45.50) or currently married (AOR 6.42; CI 1.55–26.58) compared to being never married; having had ≥ 2 sexual partners in the last year (AOR 2.86; CI 1.51–5.41); not using a condom at last sex in the past year (AOR 1.61; CI 1.34–1.93); reporting a sexually transmitted infection (STI) diagnosis or symptoms of STI in the past year (AOR 1.97; CI 1.05–8.37); and being aged <30 years with: 1) HSV-2 infection (AOR 8.84; CI 2.62–29.85), 2) male genital ulcer disease (AOR 8.70; CI 2.36–32.08), or 3) lack of male circumcision (AOR 17.83; CI 2.19–144.90). Compared to HIV-uninfected persons, factors associated with higher adjusted odds of long-standing infection included living in Coast (AOR 1.55; CI 1.04–2

  3. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise as a prospective operational lightning jump algorithm, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, with a POD of 72%, FAR of 51%, CSI of 41% and HSS of 0.58. Because a more complex algorithm configuration shows the most promise, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
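
    A minimal sketch of a 2-sigma-style jump test, assuming the common formulation: flag a jump when the current change in total flash rate exceeds twice the standard deviation of the recent rate history. The window length and flag logic are illustrative, not the exact operational configuration evaluated above.

      import numpy as np

      def lightning_jumps(flash_rates, sigma_mult=2.0, history=5):
          """flash_rates: total flash rate per fixed interval (e.g., per minute)."""
          dfrdt = np.diff(flash_rates)                 # rate of change of flash rate
          jumps = []
          for t in range(history, len(dfrdt)):
              sigma = np.std(dfrdt[t - history:t])     # variability of recent history
              if sigma > 0 and dfrdt[t] > sigma_mult * sigma:
                  jumps.append(t + 1)                  # index into flash_rates
          return jumps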

  4. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    PubMed

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a wireless decentralized network comprised of nodes that autonomously set up a network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms place requirements on hardware and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error compared to range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves localization accuracy compared with previous algorithms.

  5. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they are working with various computer programs. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string on a character mismatch, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger of the two shifts in case of a character mismatch. The article provides an example that illustrates the operation of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms, and shows the advantage of the latter in solving the string searching problem.
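
    The combination rule can be sketched directly: scan the window forward as in Knuth-Morris-Pratt and, on a mismatch, advance by the larger of the KMP prefix-function shift and a bad-character shift restricted to the already-matched prefix. Both shifts are individually safe, so their maximum is safe. This is an illustrative reconstruction of the idea, not the article's exact algorithm.

      def prefix_function(p):
          pi = [0] * len(p)
          k = 0
          for i in range(1, len(p)):
              while k and p[i] != p[k]:
                  k = pi[k - 1]
              if p[i] == p[k]:
                  k += 1
              pi[i] = k
          return pi

      def combined_search(text, p):
          pi, m, hits, i = prefix_function(p), len(p), [], 0
          while i + m <= len(text):
              j = 0
              while j < m and text[i + j] == p[j]:
                  j += 1
              if j == m:
                  hits.append(i)
                  i += m - pi[m - 1]           # standard KMP restart shift
                  continue
              kmp_shift = 1 if j == 0 else j - pi[j - 1]
              k = p.rfind(text[i + j], 0, j)   # mismatched char, left of position j
              bad_char_shift = j - k if k != -1 else j + 1
              i += max(kmp_shift, bad_char_shift)
          return hits

      print(combined_search("abacabadabacaba", "abad"))  # [4]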

  6. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  7. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.

  8. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717

  9. What Do Blood Tests Show?

    MedlinePlus

    ... shows the ranges for blood glucose levels after 8 to 12 hours of fasting (not eating). It shows the normal range and the abnormal ranges that are a sign of prediabetes or diabetes. [Table: Plasma Glucose Results (mg/dL) and corresponding Diagnosis; first listed range 70 to 99 ...]

  10. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
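
    A minimal memetic-style sketch of the hybrid: a plain GA whose offspring receive a short bit-flip hill climb before entering the population. OneMax (fitness = number of ones) is a toy stand-in for the geometric model matching application, and all operators and rates are illustrative.

      import random

      def hill_climb(bits, fitness, tries=8):
          best, best_f = bits[:], fitness(bits)
          for _ in range(tries):
              i = random.randrange(len(bits))
              bits[i] ^= 1                     # flip one bit
              f = fitness(bits)
              if f >= best_f:
                  best, best_f = bits[:], f
              else:
                  bits[i] ^= 1                 # undo a harmful flip
          return best

      def hybrid_ga(n=40, pop_size=30, gens=50, fitness=sum):
          pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              parents = pop[:pop_size // 2]    # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, n) # one-point crossover
                  children.append(hill_climb(a[:cut] + b[cut:], fitness))
              pop = parents + children
          return max(pop, key=fitness)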

  11. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including JBIG2.

  12. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential evolution (DE) algorithm uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to segmentation should be a good choice owing to its fast computation. In this paper, we first propose a new differential evolution algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with EA-based thresholding methods, the proposed DE algorithm obtains more effective and efficient results, and it shortens the computation time of the traditional Otsu method.
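
    As an illustration of coupling DE with Otsu's criterion, the sketch below uses canonical DE/rand/1/bin to maximize the between-class variance of a two-threshold split of a grayscale histogram. The paper's balance strategy is not reproduced, and all control parameters are illustrative.

      import numpy as np

      def between_class_variance(hist, thresholds):
          t = [0] + sorted(int(x) for x in thresholds) + [len(hist)]
          p = hist / hist.sum()
          levels = np.arange(len(hist))
          mu_total = (p * levels).sum()
          var = 0.0
          for lo, hi in zip(t[:-1], t[1:]):
              w = p[lo:hi].sum()
              if w > 0:
                  mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                  var += w * (mu - mu_total) ** 2
          return var

      def de_otsu(hist, dim=2, pop_n=20, gens=100, F=0.5, CR=0.9,
                  rng=np.random.default_rng(2)):
          """hist: 256-bin histogram, e.g. np.bincount(img.ravel(), minlength=256)."""
          pop = rng.uniform(1, len(hist) - 1, (pop_n, dim))
          fit = np.array([between_class_variance(hist, x) for x in pop])
          for _ in range(gens):
              for i in range(pop_n):
                  a, b, c = pop[rng.choice(pop_n, 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), 1, len(hist) - 1)
                  trial = np.where(rng.random(dim) < CR, mutant, pop[i])
                  f = between_class_variance(hist, trial)
                  if f > fit[i]:               # greedy selection
                      pop[i], fit[i] = trial, f
          return sorted(pop[fit.argmax()].astype(int))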

  13. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the artificial bee colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, proper modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism is presented which is based on an improved ABC algorithm using external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.

  15. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The herein presented mutation-based artificial fish swarm (AFS) algorithm includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.

  16. Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly

    2010-01-01

    In this paper the authors present a heuristic algorithm for precise schedule fulfilment in city traffic conditions, taking into account traffic lights. The algorithm is proposed for a programmable logic controller (PLC), which is to be installed in an electric vehicle to control its motion speed, taking into account the signals of traffic lights. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. Results of the experiments show high precision of public transport schedule fulfilment using the proposed algorithm.

  17. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing process for large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors for the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method of speedup. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data parallel slicing algorithm, the pipeline parallel model adopted in this paper achieves a much higher speedup ratio and efficiency.
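
    The pipeline mode itself is easy to schematize: stages run concurrently and hand work on through queues, so intersecting layer i can overlap contour linking of layer i-1. The stage bodies below are placeholders, not a real STL slicer.

      import threading, queue

      def stage(worker, inbox, outbox):
          while True:
              item = inbox.get()
              if item is None:                 # poison pill shuts the stage down
                  if outbox is not None:
                      outbox.put(None)
                  break
              result = worker(item)
              if outbox is not None:
                  outbox.put(result)

      q1, q2 = queue.Queue(), queue.Queue()
      contours = []
      t1 = threading.Thread(target=stage,
                            args=(lambda z: ('edges_at', z), q1, q2))
      t2 = threading.Thread(target=stage,
                            args=(lambda edges: contours.append(edges), q2, None))
      t1.start(); t2.start()
      for layer_z in range(100):               # feed slice planes into the pipeline
          q1.put(layer_z)
      q1.put(None)
      t1.join(); t2.join()
      print(len(contours))                     # 100 layers processed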

  18. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  19. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.

  20. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, a few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves smaller steady-state estimation errors compared with existing algorithms.
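
    For context, the core update that the grouping and selection procedures feed is the standard regularized affine projection step; in the sketch below, X would hold only the K input vectors surviving the two procedures (the selection logic itself is not reproduced here).

      import numpy as np

      def apa_update(w, X, d, mu=0.5, delta=1e-6):
          """w: L filter taps; X: L x K selected input vectors; d: K desired samples."""
          e = d - X.T @ w                      # a priori error vector
          gram = X.T @ X + delta * np.eye(X.shape[1])
          return w + mu * X @ np.linalg.solve(gram, e), e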

  1. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken to analyze the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Based on experiments done on images from hard disks, physical memory and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with faster execution times for header detection, than the single-byte-marker algorithm.
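
    A plausible reading of the dual-byte-marker idea, sketched as an assumption rather than the paper's exact procedure: search for the two-byte SOI marker FF D8 and confirm the two-byte APP0 marker FF E0 plus the JFIF identifier immediately behind it, instead of testing single marker bytes independently.

      def find_jfif_headers(data: bytes):
          hits, pos = [], data.find(b'\xff\xd8')
          while pos != -1:
              # a JFIF file follows SOI immediately with APP0 and the 'JFIF\0' tag
              if (data[pos + 2:pos + 4] == b'\xff\xe0'
                      and data[pos + 6:pos + 11] == b'JFIF\x00'):
                  hits.append(pos)
              pos = data.find(b'\xff\xd8', pos + 2)
          return hits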

  2. Nios II hardware acceleration of the epsilon quadratic sieve algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Botella, Guillermo; Castillo, Encarnacion; García, Antonio

    2010-04-01

    The quadratic sieve (QS) algorithm is one of the most powerful algorithms for factoring the large composite numbers used to break RSA cryptographic systems. The hardware structure of the QS algorithm seems to be a good fit for FPGA acceleration. Our new ɛ-QS algorithm further simplifies the hardware architecture, making it an even better candidate for C2H acceleration. This paper shows our design results in FPGA resources and performance when implementing very long arithmetic on the Nios microprocessor platform with C2H acceleration for different libraries (GMP, LIP, FLINT, NRMP) and QS architecture choices for factoring 32-2048 bit RSA numbers.

  3. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms to train feed forward neural networks. However, the convergence of this algorithm is slow, mainly because it is based on the gradient descent algorithm. Previous research demonstrated that, in a feed forward algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function. The gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multilayer feed forward neural networks have been assessed. A physical interpretation of the relationship between the gain value, the learning rate and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. In learning the patterns, the simulation results demonstrate that the proposed method converged faster: an improvement ratio of nearly 2.8 on the Wisconsin breast cancer data, 1.76 on the diabetes problem, 65% better on the thyroid data sets and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
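
    A single-neuron sketch of the adaptive-gain idea: the sigmoid slope c is learned alongside the weights by gradient descent rather than being fixed at 1. The update rules follow from differentiating sigmoid(c * a); sharing one learning rate for weights and gain is an illustrative choice, not the paper's exact scheme.

      import numpy as np

      def sigmoid(a, c):
          return 1.0 / (1.0 + np.exp(-c * a))

      def train_step(w, c, x, target, lr=0.5):
          a = x @ w                            # net input
          y = sigmoid(a, c)
          err = y - target
          dy_da = c * y * (1 - y)              # the gain scales the slope
          dy_dc = a * y * (1 - y)              # gradient w.r.t. the gain itself
          w = w - lr * err * dy_da * x         # weight update
          c = c - lr * err * dy_dc             # adaptive gain update
          return w, c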

  4. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    In order to improve multispectral image compression efficiency and further facilitate storage and transmission for applications such as color reproduction, in which high color accuracy is desired, the WF serial methods are proposed and the APWS_RA algorithm is designed. Then the WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability and support for consistent color reproduction across devices, is presented. The conventional MSE-based wavelet embedded coding principle is first studied, and a color perception distortion criterion and a visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above technologies, a new coding method named WF_APWS_RA is designed. A colorimetric error criterion is used in the algorithm, and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image. A two-dimensional wavelet transform is then used to remove the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that, at the same bit rate, the WF serial algorithms have better performance on color retention than classical coding algorithms; APWS_RA preserves the least spectral error, and the WF_APWS_RA algorithm shows clear superiority in color accuracy.

  5. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an ℓ1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
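
    For orientation, a bare-bones version of the kind of solver named in the abstract: subgradient descent on the ℓ1-regularized least-squares objective. The paper's method adds a projection step; here A and b stand for a generic linear blur operator and observed image, and the step size is illustrative.

      import numpy as np

      def l1_least_squares(A, b, lam=0.1, step=1e-3, iters=500):
          # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by subgradient descent;
          # lam * sign(x) is a valid subgradient of the non-smooth ell-1 term.
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              g = A.T @ (A @ x - b) + lam * np.sign(x)
              x -= step * g
          return x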

  6. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than the existing methods, and in many cases, when the redundant ratio of the block is high, it can even be thousands of times faster. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework and using the proposed algorithm as computation kernel, are also developed. PMID:24212035
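
    The abstract does not spell out its method, but the underlying task can be pictured with the usual greedy set-cover heuristic: keep picking the SNP that tags the most still-untagged SNPs under an r² threshold. This is a generic sketch, not the paper's faster algorithm.

      def greedy_tag_snps(r2, threshold=0.8):
          # r2[i][j]: pairwise linkage-disequilibrium r^2; SNP i tags SNP j
          # when r2[i][j] >= threshold.  r2[i][i] == 1, so a pick covers itself.
          n = len(r2)
          untagged = set(range(n))
          tags = []
          while untagged:
              best = max(range(n),
                         key=lambda i: sum(r2[i][j] >= threshold for j in untagged))
              tags.append(best)
              untagged = {j for j in untagged if r2[best][j] < threshold}
          return tags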

  7. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded by the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method, we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can easily be compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
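
    In the same spirit (though not the paper's exact formula), an activity index can be as simple as the mean absolute frame-to-frame intensity change, normalized by the mean intensity so that it reads as a small perturbation:

      import numpy as np

      def activity_index(frames):
          # frames: array of shape (T, H, W) of speckle images.
          frames = np.asarray(frames, dtype=float)
          diffs = np.abs(np.diff(frames, axis=0))         # frame-to-frame changes
          return diffs.mean(axis=(1, 2)) / frames.mean()  # one index per step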

  8. Rigorous estimates for the relegation algorithm

    NASA Astrophysics Data System (ADS)

    Sansottera, Marco; Ceccaroni, Marta

    2016-07-01

    We revisit the relegation algorithm of Deprit et al. (Celest. Mech. Dyn. Astron. 79:157-182, 2001) in the light of rigorous Nekhoroshev-like theory. This relatively recent algorithm is nowadays widely used for implementing closed-form analytic perturbation theories, as it generalises the classical Birkhoff normalisation algorithm. The algorithm, here briefly explained by means of Lie transformations, has so far been introduced and used in a formal way, i.e. without providing any rigorous convergence or asymptotic estimates. The overall aim of this paper is to find such quantitative estimates and to show how the results about stability over exponentially long times can be recovered in a simple and effective way, at least in the non-resonant case.
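
    For reference, the normalisation step that relegation generalises can be written compactly in the Lie transform formalism the abstract mentions (standard notation, not taken from the paper): the transformed Hamiltonian is

      \exp(\mathcal{L}_\chi) H = \sum_{s \ge 0} \frac{1}{s!} \, \mathcal{L}_\chi^{s} H,
      \qquad \mathcal{L}_\chi f = \{ f, \chi \},

    where \chi is the generating function and \{\cdot,\cdot\} the Poisson bracket; relegation chooses \chi so as to push unwanted terms to higher orders.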

  9. New Drug Shows Mixed Results Against Early Alzheimer's

    MedlinePlus

  10. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  11. CFAR detection algorithm for acoustic-seismic landmine detection

    NASA Astrophysics Data System (ADS)

    Matalkah, Ghaith M.; Matalgah, Mustafa M.; Sabatier, James M.

    2007-04-01

    Automating the detection process in acoustic-seismic landmine detection speeds up detection and eliminates the need for a human operator in the minefield. Previous automatic detection algorithms for acoustic landmine detection showed excellent results for detecting landmines in various environments. However, these algorithms use environment-specific noise-removal procedures that rely on training sets acquired over mine-free areas. In this work, we derive a new detection algorithm that adapts to varying conditions and employs environment-independent techniques. The algorithm is based on the generalized likelihood ratio (GLR) test and asymptotically achieves a constant false alarm rate (CFAR). The algorithm processes the magnitude and phase of the vibrational velocity and shows satisfactory results in detecting landmines in gravel and dirt lanes.
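
    For context, the classic cell-averaging CFAR detector below shows how a constant false-alarm rate is kept: the threshold scales with a local noise estimate. This is generic textbook CA-CFAR, not the paper's GLR detector, and the alpha formula assumes exponentially distributed noise power.

      import numpy as np

      def ca_cfar(x, guard=2, train=8, pfa=1e-3):
          # Estimate the noise level from training cells around each cell under
          # test, skipping guard cells, and scale it so the false-alarm rate is
          # fixed regardless of the absolute noise power.
          n = 2 * train
          alpha = n * (pfa ** (-1.0 / n) - 1.0)
          hits = []
          for i in range(guard + train, len(x) - guard - train):
              left = x[i - guard - train : i - guard]
              right = x[i + guard + 1 : i + guard + train + 1]
              noise = (left.sum() + right.sum()) / n
              if x[i] > alpha * noise:
                  hits.append(i)
          return hits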

  12. Dynamic programming algorithm for detecting dim infrared moving targets

    NASA Astrophysics Data System (ADS)

    He, Lisha; Mao, Liangjing; Xie, Lijun

    2009-10-01

    Infrared (IR) target detection is a key part of airborne infrared weapon systems, especially the detection of dim moving IR targets embedded in complex backgrounds. This paper presents an improved Dynamic Programming (DP) algorithm for low signal-to-noise ratio (SNR) dim moving infrared targets in cluttered backgrounds. The algorithm brings the dim target to prominence by accumulating the energy of pixels across the image sequence, after suppressing the background noise with a mathematical morphology preprocessor. By considering the continuity and stability of the target's energy and heading, the algorithm solves the energy-scattering problem of the original DP algorithm. An effective energy segmentation threshold is given by a Contrast-Limited Adaptive Histogram Equalization (CLAHE) filter with a regional peak extraction algorithm. Simulation results show that the improved DP tracking algorithm performs well in detecting dim targets.
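
    The energy-accumulation core of such DP (track-before-detect) schemes fits in a few lines; this sketch assumes a maximum inter-frame motion of `reach` pixels and, for brevity, uses wrap-around shifts at the borders.

      import numpy as np

      def dp_accumulate(frames, reach=1):
          # E holds the best accumulated energy ending at each pixel; each new
          # frame adds its intensity to the best neighbor in the previous frame.
          E = np.asarray(frames[0], dtype=float)
          for frame in frames[1:]:
              best = np.full_like(E, -np.inf)
              for dy in range(-reach, reach + 1):
                  for dx in range(-reach, reach + 1):
                      best = np.maximum(best, np.roll(E, (dy, dx), axis=(0, 1)))
              E = best + frame
          return E  # threshold E to declare dim-target detections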

  13. A danger-theory-based immune network optimization algorithm.

    PubMed

    Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms suffer from a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. Danger theory emphasizes that danger signals generated by changes in the environment guide different levels of immune response, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibody concentrations through their own danger signals and then triggers immune responses of self-regulation, so population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions meeting the required accuracies within the specified number of function evaluations.

  14. A novel clustering algorithm inspired by membrane computing.

    PubMed

    Peng, Hong; Luo, Xiaohui; Gao, Zhisheng; Wang, Jun; Pei, Zheng

    2015-01-01

    P systems are a class of distributed parallel computing models; this paper presents a novel clustering algorithm inspired by the mechanism of a tissue-like P system with a loop structure of cells, called the membrane clustering algorithm. The objects of the cells express the candidate cluster centers and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the co-evolution of the objects and improves the diversity of objects in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or competitive to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature.

  15. Asymmetric intimacy and algorithm for detecting communities in bipartite networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Qin, Xiaomeng

    2016-11-01

    In this paper, an algorithm for choosing a good partition in bipartite networks is proposed. Bipartite networks have considerable theoretical significance and broad application prospects. In view of the distinctive structure of bipartite networks, our method defines two parameters to describe the relationships between nodes of the same type and between heterogeneous nodes, respectively. Moreover, our algorithm employs a new method of finding and expanding core communities in bipartite networks: the two kinds of nodes are handled separately and merged, the sub-communities are obtained, and the final communities are then found according to the merging rule. The proposed algorithm has been tested on real-world and artificial networks, and the results verify the accuracy and reliability of the intimacy parameters. Finally, comparisons with similar algorithms show that the proposed algorithm has better performance.

  16. A novel fitness evaluation method for evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji-feng; Tang, Ke-zong

    2013-03-01

    Fitness evaluation is a crucial task in evolutionary algorithms because it affects both the convergence speed and the quality of the final solution, yet these algorithms may require huge computational power when solving nonlinear programming problems. This paper proposes a novel fitness evaluation approach that embeds similarity-based learning in a classical differential evolution (SDE) to evaluate all new individuals. Each individual consists of three elements: a parameter vector (v), a fitness value (f), and a reliability value (r). The f is calculated using the novel fitness evaluation approach, and only when the r is below a threshold is the f calculated using the true fitness function. Moreover, applying an error compensation system to the proposed algorithm further enhances its performance, bringing the estimated fitness much closer to the true fitness value for each new child. Simulation results over a comprehensive set of benchmark functions show that the convergence rate of the proposed algorithm is much faster than that of the compared algorithms.

  17. Decoherence in optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-08-01

    This paper investigates the effects of decoherence generated by broken-link-type noise in the hypercube on an optimized quantum random-walk search algorithm. When random broken links occur in the hypercube, the optimized quantum random-walk search algorithm with decoherence is described by defining a shift operator that includes the possibility of broken links. For a given database size, we obtain the maximum success rate of the algorithm and the required number of iterations through numerical simulation and analysis of the algorithm in the presence of decoherence. The computational complexity of the algorithm with decoherence is then obtained. The results show that the ultimate effect of broken-link-type decoherence on the optimized quantum random-walk search algorithm is negative. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  18. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems for many applications; however, in some cases they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells, and it successfully avoids the local optima problem. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  1. A New Optimized GA-RBF Neural Network Algorithm

    PubMed Central

Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network with a new hybrid encoding: binary encoding encodes the number of hidden-layer neurons, real encoding encodes the connection weights, and the two are optimized simultaneously. Because the connection-weight optimization is not complete, the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests on two UCI standard data sets show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid. PMID:25371666

  2. Efficient Record Linkage Algorithms Using Complete Linkage Clustering

    PubMed Central

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records of the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. Many available record linkage algorithms are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods; specifically, we employ complete-linkage hierarchical clustering. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy, and parallel implementations achieve almost linear speedups. The time complexities of these algorithms do not exceed those of the previous best-known algorithms, and our proposed algorithms outperform them in accuracy while consuming reasonable run times. PMID:27124604

  3. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  4. A motif extraction algorithm based on hashing and modulo-4 arithmetic.

    PubMed

    Sheng, Huitao; Mehrotra, Kishan; Mohan, Chilukuri; Raina, Ramesh

    2008-01-01

    We develop an algorithm to identify cis-elements in promoter regions of coregulated genes. This algorithm searches for subsequences of desired length whose frequency of occurrence is relatively high, while accounting for slightly perturbed variants using a hash table and modulo arithmetic. Motifs are evaluated using profile matrices and a higher-order Markov background model. Simulation results show that our algorithm discovers more of the motifs present in the test sequences when compared with two well-known motif-discovery tools (MDScan and AlignACE). The algorithm produces very promising results on a real data set; the output of the algorithm contained many known motifs. PMID:20058489
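
    The "modulo-4" idea can be pictured as base-4 arithmetic on DNA: each base is a digit 0-3, a k-mer is a k-digit base-4 number, and a rolling hash slides the window in O(1). A minimal sketch of exact k-mer counting follows (the paper additionally buckets slightly perturbed variants; an A/C/G/T alphabet is assumed).

      CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

      def kmer_counts(seq, k):
          # Rolling base-4 hash: multiplying by 4 and reducing mod 4^k drops
          # the oldest digit while appending the newest one.
          mod = 4 ** k
          counts, h = {}, 0
          for i, base in enumerate(seq):
              h = (h * 4 + CODE[base]) % mod
              if i >= k - 1:
                  counts[h] = counts.get(h, 0) + 1
          return counts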

  5. Comparison of swarm intelligence algorithms in atmospheric compensation for free space optical communication

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui

    2015-03-01

    Conventional adaptive optics systems compensate atmospheric turbulence in free space optical (FSO) communication systems, but under strong scintillation wave-front measurements based on a Shack-Hartmann sensor (SH) are unreliable. Since wavefront sensor-less adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO, and mainly discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration and improve both the convergence rate and the coupling efficiency of the receiver to a large extent.
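
    As a reference point, the SPGD baseline mentioned above is only a few lines: perturb all control channels at once and step along the perturbation in proportion to the measured change in the metric J. This is a generic sketch; measure_J stands in for the system's coupling-efficiency readout.

      import numpy as np

      def spgd(u, measure_J, gain=0.5, sigma=0.05, iters=200):
          # Two-sided stochastic parallel gradient descent that maximizes J(u).
          for _ in range(iters):
              du = sigma * np.random.choice([-1.0, 1.0], size=u.shape)
              dJ = measure_J(u + du) - measure_J(u - du)
              u = u + gain * dJ * du
          return u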

  6. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtration and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.

  7. Topics in Randomized Algorithms for Numerical Linear Algebra

    NASA Astrophysics Data System (ADS)

    Holodnak, John T.

    In this dissertation, we present results for three topics in randomized algorithms, each related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product; in addition, we show probabilistic bounds on the two-norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix, and they are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
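
    The sampled matrix product described in the first part has a compact standard form: sample c outer products with probabilities proportional to the column/row norm products and rescale so the estimator is unbiased. This is textbook randomized matrix multiplication; the dissertation's exact-product conditions and error bounds are not reproduced here.

      import numpy as np

      def sampled_matmul(A, B, c):
          # Approximate A @ B by c sampled, rescaled outer products; dividing by
          # (c * p[i]) makes the estimator unbiased.
          norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
          p = norms / norms.sum()
          idx = np.random.choice(A.shape[1], size=c, p=p)
          return sum(np.outer(A[:, i], B[i, :]) / (c * p[i]) for i in idx)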

  8. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez Sheri Raborn, CPA; Tom Baker

    2008-03-31

    The National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's electrical power demands, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to serve its community.

  9. GOES-R Space Environment In-Situ Suite: instruments overview, calibration results, and data processing algorithms, and expected on-orbit performance

    NASA Astrophysics Data System (ADS)

    Galica, G. E.; Dichter, B. K.; Tsui, S.; Golightly, M. J.; Lopate, C.; Connell, J. J.

    2016-05-01

    The space weather instruments (Space Environment In-Situ Suite - SEISS) on the soon-to-be-launched NOAA GOES-R series spacecraft offer significant space weather measurement performance advances over the previous GOES N-P series instruments. The specifications require that the instruments ensure proper operation under the most stressful high-flux conditions, corresponding to the largest solar particle event expected during the program, while maintaining high sensitivity at low flux levels. Since the performance of remote sensing instruments is sensitive to local space weather conditions, the SEISS data will be of use to a broad community of users. The SEISS suite comprises five individual sensors and a data processing unit: Magnetospheric Particle Sensor-Low (0.03-30 keV electrons and ions), Magnetospheric Particle Sensor-High (0.05-4 MeV electrons, 0.08-12 MeV protons), two Solar And Galactic Proton Sensors (1 to >500 MeV protons), and the Energetic Heavy Ion Sensor (10-200 MeV for H, H to Fe with single-element resolution). We present comparisons between the enhanced GOES-R instruments and the current GOES space weather measurement capabilities, and we provide an overview of the sensor configurations and performance. Results of extensive sensor modeling with GEANT, FLUKA and SIMION are compared with calibration data measured over nearly the entire energy range of the instruments. The calibration results and models are combined to calculate the geometric factors of the various energy channels; these calibrated geometric factors, together with typical and extreme space weather environments, are used to calculate the expected on-orbit performance.

  10. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties, and shortest path search is one of their most demanding applications. Several studies of shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited to large graphs, which is why various modifications to Dijkstra's algorithm have been proposed that use heuristics to reduce the run time. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm operating on reduced graphs. It shows that the cost of the path found in this way is equal to the cost of the path found using Dijkstra's algorithm on the original graph. The results of finding the shortest path with the proposed algorithm, Dijkstra's algorithm, and the A* algorithm are compared, and the comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
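
    The baseline being improved on is textbook Dijkstra, shown here over an adjacency-dict graph with a binary heap (the reduced-graph variant itself is not reproduced):

      import heapq

      def dijkstra(graph, source):
          # graph: {u: [(v, weight), ...]}; returns shortest costs from source.
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float('inf')):
                  continue  # stale heap entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float('inf')):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist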

  11. Comparison of ice-sheet satellite altimeter retracking algorithm

    SciTech Connect

    Davis, C.H.

    1996-01-01

    The NASA and ESA retracking algorithms are compared with an algorithm based upon a combined surface and volume (S/V) scattering model. First, the S/V, NASA, and ESA algorithms were used to retrack over 1.3 million altimeter return waveforms from the Greenland and Antarctic ice sheets. The surface elevations from the S/V algorithm were compared with the elevations produced by the NASA and ESA algorithms to determine the relative accuracy of these algorithms when subsurface volume scattering occurs. The results show that the ESA 25% algorithm produced slightly higher surface elevations than the S/V algorithm. The NASA retracking algorithm produced lower surface elevations than the S/V retracking algorithm, with average differences ranging from −0.3 to −0.9 m. The lower NASA elevations can only account for a portion of previously reported differences between altimeter and geoceiver surface elevations, suggesting that the remainder is probably due to orbital differences. Next, by analyzing several thousand satellite crossover points from the Greenland and Antarctic ice sheets, the author estimated the repeatability of the surface elevations derived from the different retracking algorithms. The elevations derived from the ESA 25% and S/V algorithms had the smallest standard deviations of the crossover differences for a time period in which no significant change in surface elevation should occur. The NASA standard deviations were approximately 0.2 m larger than those from the ESA 25% and S/V algorithms, which represents an average increase in error of approximately 0.5 m in the datasets.

  12. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm in which the bandwidth (BW) is one of the deciding factors for the time complexity and performance of the algorithm. The BW needs to have both explorative and exploitative characteristics: the idea is to use a large BW to search the full domain and to adjust the BW dynamically as the search closes in on the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.
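
    A compact sketch of harmony search with a dynamically shrinking bandwidth; the exponential decay below is a simple stand-in for the paper's low-pass-filter-inspired schedule, and all parameter names are ours.

      import random

      def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                         bw_max=1.0, bw_min=1e-3, iters=2000):
          # Memory of hms harmonies; each new value comes from memory (prob.
          # hmcr) with optional pitch adjustment within bw, else at random.
          memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
          for t in range(iters):
              bw = bw_max * (bw_min / bw_max) ** (t / iters)  # large early, small late
              new = []
              for d in range(dim):
                  if random.random() < hmcr:
                      v = random.choice(memory)[d]
                      if random.random() < par:
                          v += random.uniform(-bw, bw) * (hi - lo)
                  else:
                      v = random.uniform(lo, hi)
                  new.append(min(hi, max(lo, v)))
              worst = max(range(hms), key=lambda i: f(memory[i]))
              if f(new) < f(memory[worst]):
                  memory[worst] = new       # replace the worst harmony
          return min(memory, key=f)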

  13. A semantic characterization of an algorithm for estimating others' beliefs from observation

    SciTech Connect

    Isozaki, Hideki; Katsuno, Hirofumi

    1996-12-31

    Human beings often estimate others' beliefs and intentions when they interact with others. Estimation of others' beliefs will also be useful in controlling the behavior and utterances of artificial agents, especially when lines of communication are unstable or slow. But devising such estimation algorithms and background theories for them is difficult, because of the many factors affecting one's beliefs. We have proposed an algorithm that estimates others' beliefs from observation in a changing world. Experimental results show that this algorithm returns natural answers to various queries. However, the algorithm is only heuristic, and how it deals with beliefs and their changes is not entirely clear. We propose a semantics based on a nonstandard structure for modal logic. Using this semantics, we shed light on the logical meaning of the belief estimation that the algorithm performs. We also discuss how the semantics and the algorithm can be generalized.

  14. Simple, fast codebook training algorithm by entropy sequence for vector quantization

    NASA Astrophysics Data System (ADS)

    Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde

    2001-09-01

    Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. We present a novel training algorithm for vector quantization in which the convergence of the entropy sequence of each region sequence is employed as the stopping condition instead. Compared with the well-known LBG algorithm, it is simple, fast, and easy to comprehend and control. We test the performance of the algorithm on the typical test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, while its running time is far shorter than that of LBG.

  15. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the ''chains-on-chains partitioning'' problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the ''hope'' of good decompositions and the ''myth'' that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocode of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem, propose improvements on these algorithms as well as efficient implementation tips, and also introduce novel algorithms that are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average, and they verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
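
    The exact approach referred to is easy to reproduce in outline: a greedy probe decides whether a bottleneck B is feasible, and a binary search over integer B finds the optimum. This is a didactic version assuming integer task weights; the paper's algorithms are faster.

      def probe(tasks, P, B):
          # Can the task chain be split into at most P consecutive parts,
          # each with load <= B?  Greedy packing is optimal for a fixed B.
          parts, load = 1, 0
          for t in tasks:
              if t > B:
                  return False
              if load + t > B:
                  parts, load = parts + 1, 0
              load += t
          return parts <= P

      def optimal_bottleneck(tasks, P):
          # Binary search for the smallest feasible bottleneck (integer loads).
          lo, hi = max(tasks), sum(tasks)
          while lo < hi:
              mid = (lo + hi) // 2
              if probe(tasks, P, mid):
                  hi = mid
              else:
                  lo = mid + 1
          return lo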

  16. New convergence estimates for multigrid algorithms

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.

    1987-10-01

    In this paper, new convergence estimates are proved for both symmetric and nonsymmetric multigrid algorithms applied to symmetric positive definite problems. Our theory relates the convergence of multigrid algorithms to a ''regularity and approximation'' parameter α ∈ (0, 1) and the number of relaxations m. We show that for the symmetric and nonsymmetric ν-cycles, the multigrid iteration converges for any positive m at a rate which deteriorates no worse than 1 − c·j^(−(1−α)/α), where j is the number of grid levels. We then define a generalized ν-cycle algorithm which involves exponentially increasing (for example, doubling) the number of smoothings on successively coarser grids. We show that the resulting symmetric and nonsymmetric multigrid iterations converge for any α with rates that are independent of the mesh size. The theory is presented in an abstract setting which can be applied to finite element multigrid and finite difference multigrid methods.

  17. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of a system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtime. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess the key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms, as compared to other conventional metrics. Specifically, four algorithms are compared: Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR). These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently, and depending on the requirements and constraints suitable metrics may be chosen. Beyond these results, the metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.

  18. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image, the computational cost of this method is low, and image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms; these fitting algorithms are too complex to be used in real-time systems. However, target tracking often requires high real-time performance, so based on this consideration we put forward a paraboloidal fitting algorithm that is simple and easily realized in real-time systems. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. In order to study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The camera, built around a CMOS detector, was fixed to an arc pendulum table, and pictures were taken at different rotation angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The results show that the matching error grows approximately linearly with the target rotation angle. Finally, the influence of noise on matching precision was studied: Gaussian noise and salt-and-pepper noise were added to the image respectively, and the effect on matching precision was evaluated.
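
    A sketch of the two pieces discussed: brute-force SAD matching for the integer-pixel position, then a one-dimensional parabolic fit per axis for the sub-pixel correction (the paper's paraboloidal fit is the 2-D analogue of this).

      import numpy as np

      def sad_surface(template, search):
          # SAD at every feasible placement of the template inside the search image.
          template = np.asarray(template, dtype=float)
          search = np.asarray(search, dtype=float)
          th, tw = template.shape
          H, W = search.shape[0] - th + 1, search.shape[1] - tw + 1
          sad = np.empty((H, W))
          for y in range(H):
              for x in range(W):
                  sad[y, x] = np.abs(search[y:y + th, x:x + tw] - template).sum()
          return sad  # argmin gives the integer-pixel match position

      def parabolic_offset(s_m1, s_0, s_p1):
          # Vertex of the parabola through the SAD minimum and its two neighbors;
          # the sub-pixel correction lies in (-0.5, 0.5).
          denom = s_m1 - 2.0 * s_0 + s_p1
          return 0.0 if denom == 0 else 0.5 * (s_m1 - s_p1) / denom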

  19. Nonlinear physical segmentation algorithm for determining the layer boundary from lidar signal.

    PubMed

    Mao, Feiyue; Li, Jun; Li, Chen; Gong, Wei; Min, Qilong; Wang, Wei

    2015-11-30

    Layer boundary (base and top) detection is a basic problem in lidar data processing, the results of which are used as inputs for optical property retrieval. However, traditional algorithms not only require manual intervention but also rely heavily on the signal-to-noise ratio. Therefore, we propose a robust and automatic algorithm for layer detection based on a novel algorithm for lidar signal segmentation and representation. Our algorithm is based on the lidar equation and avoids most of the limitations of the traditional algorithms. Testing on simulated and real signals shows that the algorithm is able to position the base and top accurately even at a low signal-to-noise ratio. Furthermore, the results of the classification are accurate and satisfactory. The experimental results confirm that our algorithm can be used for automatic detection, retrieval, and analysis of lidar data sets.

  1. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  2. Atmospheric compensation in free space optical communication with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Cao, Jingtai; Zhao, Xiaohui; Liu, Wei

    2015-03-01

    As we know, conventional adaptive optics (AO) systems can compensate atmospheric turbulence in free space optical (FSO) communication systems, but in strong scintillation conditions wave-front measurements based on the phase-conjugation principle are unreliable. A novel global-optimization simulated annealing (SA) algorithm is proposed in this paper to compensate the wave-front aberration. With its global optimization characteristics, the SA algorithm improves on stochastic parallel gradient descent (SPGD) and other existing algorithms. Simulations are conducted, and the results show that the SA algorithm can significantly improve performance in FSO communication systems and outperforms the SPGD algorithm in coupling efficiency.
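
    A generic simulated annealing loop of the kind described, with geometric cooling; mapping u to deformable-mirror controls and cost to negative coupling efficiency is our illustrative assumption, not the paper's exact setup.

      import math, random

      def simulated_annealing(u, cost, neighbor, T0=1.0, cooling=0.95, iters=1000):
          # Always accept improvements; accept a worse candidate with
          # probability exp(-delta / T); cool the temperature geometrically.
          best, best_c = u, cost(u)
          cur_c, T = best_c, T0
          for _ in range(iters):
              cand = neighbor(u)
              cand_c = cost(cand)
              delta = cand_c - cur_c
              if delta < 0 or random.random() < math.exp(-delta / T):
                  u, cur_c = cand, cand_c
                  if cur_c < best_c:
                      best, best_c = u, cur_c
              T *= cooling
          return best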

  3. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). First, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: the whole scheduling problem is divided into many sub-scheduling problems, and the NEH heuristic is introduced to solve them. Second, some subsequences are operated on with certain probability in the pulse emission and loudness phases, and an intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220

  4. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels which play a vital role in reducing the proliferative diabetic retinopathy and for preventing the loss of visual capability. The proposed algorithm which takes advantage of the powerful preprocessing techniques such as the contrast enhancement and thresholding offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm being simple and easy to implement, is best suited for fast processing applications.

  5. Multi-path planning algorithm based on fitness sharing and species evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jing-Juan; Li, Xue-Lian; Hao, Yan-Ling

    2003-06-01

    A new algorithm is proposed for underwater vehicle multi-path planning. The algorithm is based on a fitness-sharing genetic algorithm with clustering and evolution of multiple populations, which preserves the diversity of the solution paths and decreases the operating time through the independent evolution of each subpopulation. The multi-path planning algorithm is demonstrated on a number of two-dimensional path planning problems. The results show that the algorithm has high searching capability, rapid convergence, and high reliability.

  6. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments, testing specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes: by seeding values into the diamond-square algorithm, one can control the shape of the landscape, while Perlin noise and Simplex noise simulate moisture and temperature, and the smooth gradients created by coherent noise allow more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit include the geophysical impact of flash floods or drought on a particular region, and the regional impact on low-lying areas of global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence they can be used to assist science education. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
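
    The diamond-square procedure mentioned above is short enough to show in full: corners are seeded, then the noise amplitude halves at each scale, which is what produces the fractal look. This is a standard implementation, with the roughness and seed values as free parameters.

      import random

      def diamond_square(n, roughness=1.0, corner=0.0):
          # Heightmap of size (2^n + 1) x (2^n + 1).
          size = 2 ** n + 1
          h = [[0.0] * size for _ in range(size)]
          for y in (0, size - 1):
              for x in (0, size - 1):
                  h[y][x] = corner          # seeded corners steer the landscape
          step, amp = size - 1, roughness
          while step > 1:
              half = step // 2
              # Diamond step: centre of each square = mean of its 4 corners + noise.
              for y in range(half, size, step):
                  for x in range(half, size, step):
                      avg = (h[y - half][x - half] + h[y - half][x + half] +
                             h[y + half][x - half] + h[y + half][x + half]) / 4.0
                      h[y][x] = avg + random.uniform(-amp, amp)
              # Square step: edge midpoints = mean of in-bounds diamond neighbours.
              for y in range(0, size, half):
                  for x in range((y + half) % step, size, step):
                      vals = [h[y + dy][x + dx]
                              for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                              if 0 <= y + dy < size and 0 <= x + dx < size]
                      h[y][x] = sum(vals) / len(vals) + random.uniform(-amp, amp)
              step, amp = half, amp / 2.0   # noise amplitude halves per scale
          return h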

  7. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
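
    For readers unfamiliar with the underlying solver, the serial Thomas algorithm is the pair of dependent sweeps whose ordering the pipelined variants juggle (a standard tridiagonal solve; a, b, c are the sub-, main and super-diagonals):

      def thomas(a, b, c, d):
          # Solve a tridiagonal system: forward elimination, then
          # back-substitution (a[0] and c[-1] are unused).
          n = len(d)
          cp, dp = [0.0] * n, [0.0] * n
          cp[0] = c[0] / b[0]
          dp[0] = d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = [0.0] * n
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x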

  8. Faster Parameterized Algorithms for Minor Containment

    NASA Astrophysics Data System (ADS)

    Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.

    The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^{k^2} · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^{(2k+1)·log k} · h^{2k} · 2^{2h^2} · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing; namely, it runs in time 2^{O(k)} · h^{2k} · 2^{O(h)} · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.

  9. MLSD-OSEM reconstruction algorithm for cosmic ray muon radiography

    NASA Astrophysics Data System (ADS)

    Liu, Yuanyuan; Zhao, Ziran; Chen, Zhiqiang; Zhang, Li; Xing, Yuxiang

    2008-03-01

    Cosmic ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials, and the reconstruction algorithm is the key point of the technique. Currently, there are two main classes of algorithms. One is the Point of Closest Approach (POCA) reconstruction algorithm, which uses the track information to reconstruct; the other is Maximum Likelihood estimation, such as the Maximum Likelihood Scattering (MLS) and the Maximum Likelihood Scattering and Displacement (MLSD) reconstruction algorithms proposed by the Los Alamos National Laboratory (LANL). The performance of MLSD is better than that of MLS, since the MLSD reconstruction algorithm includes both scattering and displacement information while MLS includes only scattering information. In order to compute this Maximum Likelihood estimation, in this paper we propose to use the EM method (MLS-EM and MLSD-EM). Then, in order to save reconstruction time, we use the ordered subsets (OS) technique to accelerate the MLS and MLSD reconstruction algorithms, with the initial value set to the result of the POCA reconstruction: that is, the Maximum Likelihood Scattering-OSEM (MLS-OSEM) and the Maximum Likelihood Scattering and Displacement-OSEM (MLSD-OSEM). Numerical simulations show that MLSD-OSEM is an effective algorithm and that its performance is better than that of MLS-OSEM.

  10. Plan Showing Cross Bracing Under Upper Stringers, Typical Section Showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Plan Showing Cross Bracing Under Upper Stringers, Typical Section Showing End Framing, Plan Showing Cross Bracing Under Lower Stringers, End Elevation - Covered Bridge, Spanning Contoocook River, Hopkinton, Merrimack County, NH

  11. Autonomous photogrammetric network design based on changing environment genetic algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Lu, Nai-Guang; Dong, Mingli

    2008-10-01

    To achieve good accuracy, designers must consider how to place cameras. Camera placement design is usually a multidimensional optimization problem, so genetic algorithms are often used to solve it. However, genetic algorithms can suffer from premature convergence: sometimes they become trapped in a local minimum or exhibit oscillatory behavior, which leads to an inaccurate design. We address this problem using changing-environment genetic algorithms, in which the species groups are exposed to different environments at different stages of evolution to improve performance. Computer simulation results show faster convergence and an improved ability to select good individuals. This approach could also be used in other applications.

  12. Honey Bees Inspired Optimization Method: The Bees Algorithm.

    PubMed

    Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo

    2013-01-01

    Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, satisfying one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex, emerging from the behaviors of the individuals of the swarm. Researchers have developed computational optimization methods based on biology, such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm combines an exploitative neighborhood search with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem. PMID:26462528
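
    A minimal sketch of the basic loop described above (parameter names and values such as n_scouts, n_best and ngh are illustrative, not the paper's): scout bees sample the space at random, the best sites recruit foragers for an exploitative neighborhood search, and the remaining scouts keep exploring.

```python
import numpy as np

def bees_algorithm(f, bounds, n_scouts=20, n_best=5, n_foragers=10,
                   ngh=0.1, iters=100, seed=0):
    """Minimise f over a box using a basic Bees Algorithm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    scouts = rng.uniform(lo, hi, size=(n_scouts, dim))
    for _ in range(iters):
        # rank sites by fitness; the best ones recruit foragers
        scouts = scouts[np.argsort([f(s) for s in scouts])]
        patch = ngh * (hi - lo)
        for i in range(n_best):                       # exploitative search
            foragers = scouts[i] + rng.uniform(-patch, patch,
                                               size=(n_foragers, dim))
            foragers = np.clip(foragers, lo, hi)
            best = min(foragers, key=f)
            if f(best) < f(scouts[i]):
                scouts[i] = best                      # keep the best forager
        # remaining scouts are reassigned to random exploration
        scouts[n_best:] = rng.uniform(lo, hi, size=(n_scouts - n_best, dim))
    return min(scouts, key=f)

# Example: the 2-D sphere function has its minimum at the origin.
print(bees_algorithm(lambda x: float(np.sum(x ** 2)), ([-5, -5], [5, 5])))
```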

  14. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how closely spaced in frequency two spectral components can be and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included, along with some of the actual data and the graphs produced from it.
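
    The report's FORTRAN listings are not reproduced here; as a rough modern illustration, maximum entropy spectral estimation is equivalent to fitting an autoregressive (AR) model, for instance through the Yule-Walker equations (Burg's recursion is the other classical route). A minimal sketch with illustrative parameters:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def mem_spectrum(x, order, n_freq=512):
    """Maximum-entropy (AR) spectral estimate via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # AR coeffs
    sigma2 = r[0] - a @ r[1:order + 1]        # prediction error power
    freqs = np.linspace(0.0, 0.5, n_freq)     # cycles per sample
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1.0 - z @ a) ** 2   # unnormalised AR spectrum
    return freqs, psd

# Two closely spaced sinusoids in noise: a high-order AR fit can resolve them.
rng = np.random.default_rng(0)
t = np.arange(1024)
sig = (np.sin(2 * np.pi * 0.10 * t) + np.sin(2 * np.pi * 0.11 * t)
       + 0.1 * rng.standard_normal(t.size))
f, p = mem_spectrum(sig, order=32)
print(f[np.argmax(p)])   # peak frequency, near 0.10-0.11
```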

  15. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one to use. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  16. Speckle imaging algorithms for planetary imaging

    SciTech Connect

    Johansson, E.

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  17. Statistical pattern recognition algorithms for autofluorescence imaging

    NASA Astrophysics Data System (ADS)

    Kulas, Zbigniew; Bereś-Pawlik, Elżbieta; Wierzbicki, Jarosław

    2009-02-01

    In cancer diagnostics the most important problems are the early identification of the tumor and the estimation of its growth and spread, in order to determine the area to be operated on. The aim of this work was to design statistical algorithms that help doctors objectively delineate pathologically changed areas and assess the advancement of the disease. In the research, algorithms for classifying endoscopic autofluorescence images of the larynx and intestine were used. The results show that statistical pattern recognition offers new possibilities for endoscopic diagnostics and can be of tremendous help in assessing the area of pathological changes.

  18. Optimization algorithm of digital watermarking anti-coalition attacks in DWT-domain based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Que, Dashun; Li, Gang; Yue, Peng

    2007-12-01

    An adaptive optimization watermarking algorithm based on the Genetic Algorithm (GA) and the discrete wavelet transform (DWT) is proposed in this paper. The core of this algorithm is a GA-based fitness function optimization model for digital watermarking. The embedding intensity of the watermark can be modified adaptively, so the algorithm effectively ensures the imperceptibility of the watermark while preserving its robustness. The optimization model may also provide a new idea for resisting coalition attacks on watermarking algorithms. The paper reports many experiments, including embedding and extracting watermarks, studying the influence of the weighting factor, embedding the same watermark into different cover images, embedding different watermarks into the same cover image, and comparing the optimization algorithm with a human visual system (HVS) based algorithm. The simulation results and further analysis show the effectiveness and advantages of the new algorithm, which is also versatile and extensible, and offers better resistance to coalition attacks. Moreover, the robustness and security of the watermarking algorithm are improved by scrambling transformation and chaotic encryption during preprocessing of the watermark.

  19. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks. PMID:24852272
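
    A toy sketch of the viscous friction-like alignment term described above (illustrative parameters; the paper's model additionally handles communication delay, sensor noise and inertia): each agent's velocity is pulled toward its neighbours' velocities within a communication radius, and a self-propulsion term relaxes speeds toward a preferred value.

```python
import numpy as np

def flocking_step(pos, vel, dt=0.05, r_comm=2.0, c_frict=0.5, v0=1.0):
    """One explicit update of a minimal self-propelled, velocity-aligning flock."""
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < r_comm)          # locality of communication
        if nbr.any():
            # viscous friction-like alignment with neighbours' velocities
            acc[i] = c_frict * np.mean(vel[nbr] - vel[i], axis=0)
    vel = vel + dt * acc
    # self-propulsion: relax each speed toward the preferred speed v0
    speed = np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9)
    vel = vel / speed * (speed + dt * (v0 - speed))
    return pos + dt * vel, vel

# 20 agents with random headings; alignment damps neighbour velocity differences.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 4, (20, 2)), rng.normal(0, 1, (20, 2))
for _ in range(2000):
    pos, vel = flocking_step(pos, vel)
```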

  1. Quantum algorithm for an additive approximation of Ising partition functions

    NASA Astrophysics Data System (ADS)

    Matsuo, Akira; Fujii, Keisuke; Imoto, Nobuyuki

    2014-08-01

    We investigate the quantum-computational complexity of calculating partition functions of Ising models. We construct a quantum algorithm for an additive approximation of Ising partition functions on square lattices. To this end, we utilize the overlap mapping developed by M. Van den Nest, W. Dür, and H. J. Briegel [Phys. Rev. Lett. 98, 117207 (2007), 10.1103/PhysRevLett.98.117207] and its interpretation through measurement-based quantum computation (MBQC). We specify an algorithmic domain, on which the proposed algorithm works, and an approximation scale, which determines the accuracy of the approximation. We show that the proposed algorithm performs a nontrivial task, which would be intractable on any classical computer, by showing that the problem solvable by the proposed quantum algorithm is BQP-complete. In the construction of the BQP-complete problem the coupling strengths and magnetic fields take complex values. However, the Ising models that are of central interest in statistical physics and computer science have real coupling strengths and magnetic fields. Thus we extend the algorithmic domain of the proposed algorithm to such a real physical parameter region and calculate the approximation scale explicitly. We find that the overlap mapping and its MBQC interpretation improve the approximation scale exponentially compared to a straightforward constant-depth quantum algorithm. On the other hand, the proposed quantum algorithm also provides partial evidence that no efficient classical algorithm exists for a multiplicative approximation of Ising partition functions, even on the square lattice. This result supports the observation that the proposed quantum algorithm also performs a nontrivial task in the physical parameter region.

  2. Calculation of Computational Complexity for Radix-2 (p) Fast Fourier Transform Algorithms for Medical Signals.

    PubMed

    Amirfattahi, Rassoul

    2013-10-01

    Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2(p) algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, we propose in this paper a method for the exact calculation of the multiplicative complexity of radix-2(p) algorithms. The methodology is described for the radix-2, radix-2(2) and radix-2(3) algorithms. Results show that radix-2(2) and radix-2(3) have significantly lower computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2(3) algorithm is slightly higher than in radix-2(2), the number of real multiplications for radix-2(3) is lower. This is because twiddle factors of a particular form, which require fewer real multiplications, occur more frequently in the radix-2(3) algorithm.

  3. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation.

    PubMed

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms: it encodes the artificial fish with quantum bits and updates them, while searching for the optimal value, through quantum rotation gates and the preying, following, and variation behaviors of the quantum artificial fish. We then apply the proposed algorithm, together with the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA), to simulation experiments on several typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has a higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems; the simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA.

  4. Grid fill algorithm for vector graphics render on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Jixian; Yue, Kun; Yuan, Guowu; Zhang, Binbin

    2015-12-01

    The performance of vector graphics rendering has always been one of the key elements in mobile devices, and the most important step in improving it is to enhance the efficiency of polygon fill algorithms. In this paper, we propose a new and more efficient polygon fill algorithm, the Grid Fill Algorithm (GFA), based on the scan-line algorithm. First, we elaborate the GFA for solid fill. Second, we describe techniques for implementing antialiasing and self-intersecting polygon fill with the GFA. Then, we discuss the implementation of the GFA for gradient fill. Compared to other fill algorithms, the GFA generally performs better and fills faster, which suits the inherent characteristics of mobile devices. Experimental results show that better fill effects can be achieved by using the GFA.
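
    The abstract does not spell out the GFA itself; for reference, here is a minimal version of the classic scan-line fill that it builds on (pure Python, solid fill only, no antialiasing): for each scanline, collect the polygon edge crossings, sort them, and fill between alternate pairs.

```python
def scanline_fill(polygon, width, height):
    """Rasterise a polygon (list of (x, y) vertices) into a 2-D bitmap."""
    img = [[0] * width for _ in range(height)]
    n = len(polygon)
    for row in range(height):
        y = row + 0.5                        # sample at pixel centres
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            if (y1 <= y < y2) or (y2 <= y < y1):   # edge crosses this scanline
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):  # fill between pairs
            for col in range(max(0, int(round(left))),
                             min(width, int(round(right)))):
                img[row][col] = 1
    return img

# Tiny demo: fill a triangle into a 16x16 bitmap and print it.
img = scanline_fill([(2, 1), (13, 3), (7, 12)], 16, 16)
print("\n".join("".join(" #"[p] for p in row) for row in img))
```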

  5. Time optimal route planning algorithm of LBS online navigation

    NASA Astrophysics Data System (ADS)

    Li, Yong; Bao, Shitai; Su, Kui; Fang, Qiushui; Yang, Jingfeng

    2011-02-01

    This paper proposes a time-optimal route planning algorithm for the LBS online navigation mode, based on an improved Dijkstra algorithm. By incorporating the real-time location information returned by online users' handheld terminals, the algorithm satisfies the optimal-time requirement of LBS online navigation. A navigation system was developed and applied in actual navigation operations. Operating results show that the algorithm reasonably balances shortest route and fastest speed under the optimal-time requirement. The algorithm also stores the calculated real-time route information in a cache to improve the efficiency of route planning and to reduce planning time.
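
    At its core, time-optimal routing is Dijkstra's algorithm run over travel-time weights, i.e. edge length divided by current speed. The sketch below uses a hypothetical adjacency format; the paper's system additionally feeds in real-time user locations and caches computed routes.

```python
import heapq

def fastest_route(graph, src, dst):
    """Dijkstra over travel times.

    graph: {node: [(neighbor, length_m, speed_mps), ...]}
    Returns (total_time_s, path). Speeds can be refreshed from
    real-time traffic data before each query.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            break
        if t > dist.get(u, float("inf")):
            continue                           # stale queue entry
        for v, length, speed in graph[u]:
            nt = t + length / speed            # travel-time weight
            if nt < dist.get(v, float("inf")):
                dist[v], prev[v] = nt, u
                heapq.heappush(pq, (nt, v))
    path, node = [], dst
    while node != src:                          # walk back along predecessors
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

g = {"A": [("B", 1000, 10), ("C", 600, 5)],
     "B": [("D", 500, 10)],
     "C": [("D", 400, 5)],
     "D": []}
print(fastest_route(g, "A", "D"))   # (150.0, ['A', 'B', 'D'])
```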

  6. An effective one-dimensional anisotropic fingerprint enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Ye, Zhendong; Xie, Mei

    2012-01-01

    Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so the enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the ridge orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm that combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well in less time.
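
    The orientation-estimation step can be illustrated with a standard structure tensor computation (a common formulation; the paper's exact filters and window sizes are not given in the abstract): smooth products of the image gradients, then take half the angle of the dominant direction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def ridge_orientation(img, sigma=3.0):
    """Per-pixel ridge orientation (radians) via the structure tensor."""
    img = img.astype(float)
    gx = sobel(img, axis=1)                 # horizontal gradient
    gy = sobel(img, axis=0)                 # vertical gradient
    # smoothed second-moment (structure tensor) entries
    gxx = gaussian_filter(gx * gx, sigma)
    gyy = gaussian_filter(gy * gy, sigma)
    gxy = gaussian_filter(gx * gy, sigma)
    # dominant gradient direction; ridges run perpendicular to it
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    return theta + np.pi / 2.0
```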

  8. A Novel Particle Swarm Optimization Algorithm for Global Optimization

    PubMed Central

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
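
    A simplified sketch of a PSO update using the current iteration's global best and each particle's best ring neighbour, in the spirit described above (the abandonment mechanism and chaotic search are omitted; all parameters are illustrative):

```python
import numpy as np

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Simplified PSO driven by each particle's best ring neighbour and the
    best particle of the current iteration."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    fx = np.array([f(p) for p in x])
    idx = np.arange(n)
    for _ in range(iters):
        g = x[np.argmin(fx)]                      # current-iteration best
        # best of the two immediate ring neighbours of each particle
        left, right = idx - 1, (idx + 1) % n
        nb = x[np.where(fx[left] < fx[right], left, right)]
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (nb - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
    return x[np.argmin(fx)]

# Example: minimise the sphere function in 2-D.
print(pso(lambda p: float(np.sum(p ** 2)), [-5.0, -5.0], [5.0, 5.0]))
```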

  9. An algorithm to retrieve precipitation with synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Xie, Ya'nan; Liu, Zhikun; An, Dawei

    2016-06-01

    This paper presents a new type of rainfall retrieval algorithm, called the model-oriented statistical and Volterra integration. It is a combination of the model-oriented statistical (MOS) and Volterra integral equation (VIE) approaches. The steps involved in this new algorithm can be briefly illustrated as follows. Firstly, information such as the start point and width of the rain is obtained through pre-analysis of the data received by synthetic aperture radar (SAR). Secondly, the VIE retrieval algorithm is employed over a short distance to obtain information on the shape of the rain. Finally, the rain rate can be calculated by using the MOS retrieval algorithm. Simulation results show that the proposed algorithm is effective and simple, and can lead to time savings of nearly 50% compared with MOS. An example of application of SAR data is also discussed, involving the retrieval of precipitation information over the South China Sea.

  10. On Learning Algorithms for Nash Equilibria

    NASA Astrophysics Data System (ADS)

    Daskalakis, Constantinos; Frongillo, Rafael; Papadimitriou, Christos H.; Pierrakos, George; Valiant, Gregory

    Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or non-convergence properties of such dynamics may inform our understanding of the applicability of Nash equilibria as a plausible solution concept in some settings. A second reason for asking this question is in the hope of being able to prove an impossibility result, not dependent on complexity assumptions, for computing Nash equilibria via a restricted class of reasonable algorithms. In this work, we begin to answer this question by considering the dynamics of the standard multiplicative weights update learning algorithms (which are known to converge to a Nash equilibrium for zero-sum games). We revisit a 3×3 game defined by Shapley [10] in the 1950s in order to establish that fictitious play does not converge in general games. For this simple game, we show via a potential function argument that in a variety of settings the multiplicative updates algorithm impressively fails to find the unique Nash equilibrium, in that the cumulative distributions of players produced by learning dynamics actually drift away from the equilibrium.
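
    For concreteness, the multiplicative weights dynamics under study can be sketched as self-play on a bimatrix game (a minimal version; for zero-sum games such as matching pennies, the time-averaged strategies approach equilibrium, which is the known positive case the paper contrasts with):

```python
import numpy as np

def mwu_selfplay(A, B, eta=0.05, iters=5000):
    """Multiplicative weights self-play on a bimatrix game.

    A[i, j]: row player's payoff, B[i, j]: column player's payoff.
    Returns both players' time-averaged (cumulative) mixed strategies.
    """
    n, m = A.shape
    wx, wy = np.ones(n), np.ones(m)
    avg_x, avg_y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x, y = wx / wx.sum(), wy / wy.sum()
        avg_x, avg_y = avg_x + x, avg_y + y
        wx = wx * np.exp(eta * (A @ y))      # reward each row action
        wy = wy * np.exp(eta * (x @ B))      # reward each column action
        wx, wy = wx / wx.sum(), wy / wy.sum()  # renormalise for stability
    return avg_x / iters, avg_y / iters

# Matching pennies (zero-sum): averaged play approaches the (0.5, 0.5) mix.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(mwu_selfplay(A, -A))
```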

  11. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO), together with two extensions for improving its performance, is presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalized fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared, on a set of well-known numerical test functions, with existing algorithms based on the intelligent behavior of honey bees. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this study.

  12. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies in clustering their corpses and sorting their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with that of the ant colony clustering algorithm and other methods used in the existing literature. For problems of similar computational difficulty and complexity, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than those of the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  14. Comparative analysis of different variants of the Uzawa algorithm in problems of the theory of elasticity for incompressible materials.

    PubMed

    Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A

    2016-09-01

    Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
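
    The textbook Uzawa iteration being compared can be sketched for a saddle-point system A u + B^T p = f, B u = g, of the kind produced by mixed finite elements for incompressible materials (an illustrative version; the paper's variants differ in the inner solver and step-size handling):

```python
import numpy as np

def uzawa(A, B, f, g, rho=1.0, iters=500, tol=1e-10):
    """Textbook Uzawa iteration for  A u + B^T p = f,  B u = g."""
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)   # inner solve in u
        r = B @ u - g                          # constraint residual
        p = p + rho * r                        # ascent step on the multiplier
        if np.linalg.norm(r) < tol:
            break
    return u, p

# Toy 2-D system: SPD block A with a single incompressibility-like constraint.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0]])
u, p = uzawa(A, B, f=np.array([1.0, 2.0]), g=np.array([0.0]))
print(u, B @ u)   # B u should be ~0
```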

  16. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
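
    A minimal sketch of a creeping random search of this kind (the step-contraction rule below is illustrative; the report's specific modification is not detailed in the abstract): perturb the current estimate, keep improvements, and shrink the step size after repeated failures.

```python
import numpy as np

def creeping_random_search(f, x0, step=1.0, shrink=0.5,
                           max_fail=20, iters=10000, seed=0):
    """Minimise f by creeping (local) random search."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    fails = 0
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx:
            x, fx, fails = cand, fc, 0        # creep to the better point
        else:
            fails += 1
            if fails >= max_fail:             # contract the search radius
                step *= shrink
                fails = 0
    return x, fx

# Toy parameter identification: recover two parameters from a squared loss.
target = np.array([2.0, -3.0])
loss = lambda p: float(np.sum((p - target) ** 2))
print(creeping_random_search(loss, [0.0, 0.0]))
```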

  17. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(\Lambda^{\kappa}(\ell^2 + \kappa)) sample complexity algorithm for the class of order-\kappa delineable problems (problems that can be solved by considering no higher than order-\kappa relations) of size \ell and alphabet size \Lambda. Experimental results are presented to demonstrate the scalability of the GEMGA.

  18. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
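
    YAMPA's thresholding rule is specific to the paper; as a baseline from the same pursuit family, standard orthogonal matching pursuit (OMP) shows the generic structure these methods share: greedily grow a support, then re-fit on it. Note that OMP, unlike YAMPA, assumes the sparsity k is known.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit on the whole support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Example: 20 measurements of a 3-sparse signal in 50 dimensions.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]
print(np.round(omp(A, A @ x_true, 3)[[5, 17, 40]], 3))  # should recover them
```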

  19. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  20. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    PubMed

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    The cuckoo search algorithm is a promising metaheuristic population-based method that has been applied to solve many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines cuckoo search with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying standard cuckoo search for a number of iterations; the best obtained solution is then passed to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of the cuckoo search algorithm with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, comparing it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time. PMID:27217988

  2. Enhanced hybrid search algorithm for protein structure prediction using the 3D-HP lattice model.

    PubMed

    Zhou, Changjun; Hou, Caixia; Zhang, Qiang; Wei, Xiaopeng

    2013-09-01

    The problem of protein structure prediction in the hydrophobic-polar (HP) lattice model is the prediction of protein tertiary structure, usually referred to as the protein folding problem. This paper presents a method for applying an enhanced hybrid search algorithm to the problem of protein folding prediction, using the three-dimensional (3D) HP lattice model. The enhanced hybrid search algorithm is a combination of the particle swarm optimizer (PSO) and tabu search (TS) algorithms. Because the PSO algorithm is easily trapped in local minima during later evolution, we combine PSO with the TS algorithm, which has global optimization properties. Since crossover and mutation are applied many times within the PSO and TS algorithms, the enhanced hybrid search algorithm is called the MCMPSO-TS (multiple crossover and mutation PSO-TS) algorithm. Experimental results show that the MCMPSO-TS algorithm finds the best solutions so far for the listed benchmarks, which will facilitate comparison with future approaches. Moreover, real protein sequences and Fibonacci sequences are verified in the 3D HP lattice model for the first time. Compared with previous evolutionary algorithms, the new hybrid search algorithm is novel, and can be used effectively to predict 3D protein folding structures. With the continuous development of and changes in amino acid sequences, the new algorithm will also make a contribution to the study of new protein sequences. PMID:23824509

  3. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J.

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
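
    The method described here underlies the well-known L-BFGS-B code. A minimal modern usage sketch through SciPy's interface (the bounds and test function are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Minimise the Rosenbrock function subject to simple bounds.
def rosen(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def rosen_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

res = minimize(rosen, x0=np.zeros(5), jac=rosen_grad, method="L-BFGS-B",
               bounds=[(-2.0, 2.0)] * 5,       # the "simple bounds"
               options={"maxcor": 10})         # limited-memory history size
print(res.x)   # near the unconstrained optimum (1, 1, 1, 1, 1)
```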

  4. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    The nonlinear programming problem is an important branch of operations research, and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem. SEOA is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  5. Coevolving memetic algorithms: a review and progress report.

    PubMed

    Smith, Jim E

    2007-02-01

    Coevolving memetic algorithms are a family of metaheuristic search algorithms in which a rule-based representation of local search (LS) is coadapted alongside candidate solutions within a hybrid evolutionary system. Simple versions of these systems have been shown to outperform other nonadaptive memetic and evolutionary algorithms on a range of problems. This paper presents a rationale for such systems and places them in the context of other recent work on adaptive memetic algorithms. It then proposes a general structure within which a population of LS algorithms can be evolved in tandem with the solutions to which they are applied. Previous research started with a simple self-adaptive system before moving on to more complex models. Results showed that the algorithm was able to discover and exploit certain forms of structure and regularities within the problems. This "metalearning" of problem features provided a means of creating highly scalable algorithms. This work is briefly reviewed to highlight some of the important findings and behaviors exhibited. Based on this analysis, new results are then presented from systems with more flexible representations, which, again, show significant improvements. Finally, the current state of, and future directions for, research in this area is discussed.

  6. Subsurface biological activity zone detection using genetic search algorithms

    SciTech Connect

    Mahinthakumar, G.; Gwo, J.P.; Moline, G.R.; Webb, O.F.

    1999-12-01

    The use of genetic search algorithms for the detection of subsurface biological activity zones (BAZ) is investigated through a series of hypothetical numerical biostimulation experiments. Continuous injection of dissolved oxygen and methane with periodically varying concentration stimulates the cometabolism of indigenous methanotrophic bacteria. The observed breakthroughs of methane are used to deduce possible BAZ in the subsurface. The numerical experiments are implemented in a parallel computing environment to make possible the large number of simultaneous transport simulations required by the algorithm. The results show that genetic algorithms are very efficient in locating multiple activity zones, provided the observed signals adequately sample the BAZ.

  7. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  8. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting the speed of matching and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one-dimensional form and then matches and identifies through one-dimensional correlation; moreover, because normalization is performed, it can still match correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves the matching speed while preserving matching accuracy.
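
    A minimal sketch of the projection idea (illustrative, using normalized cross-correlation of row and column sums; the paper's exact formulation may differ): reduce the image and template to 1-D profiles, then locate the template by correlating profiles instead of sliding a 2-D window.

```python
import numpy as np

def normalized_corr(profile, template):
    """1-D normalized cross-correlation of a short template over a profile."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = []
    for i in range(len(profile) - n + 1):
        w = profile[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores.append(float(w @ t) / n)
    return np.array(scores)

def match_by_projection(image, template):
    """Locate template in image via horizontal/vertical projections."""
    # project 2-D grayscale data onto one dimension (row and column sums)
    row_img, col_img = image.sum(axis=1), image.sum(axis=0)
    row_tpl, col_tpl = template.sum(axis=1), template.sum(axis=0)
    y = int(np.argmax(normalized_corr(row_img, row_tpl)))
    x = int(np.argmax(normalized_corr(col_img, col_tpl)))
    return y, x   # top-left corner of the best match

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[16:48, 8:56].copy()
print(match_by_projection(img, tpl))   # expected: (16, 8)
```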

  9. Restart-Based Genetic Algorithm for the Quadratic Assignment Problem

    NASA Astrophysics Data System (ADS)

    Misevicius, Alfonsas

    The power of genetic algorithms (GAs) has been demonstrated in various domains of computer science, including combinatorial optimization. In this paper, we propose a new conceptual modification of the genetic algorithm entitled a "restart-based genetic algorithm" (RGA). An effective implementation of RGA for a well-known combinatorial optimization problem, the quadratic assignment problem (QAP), is discussed. The results obtained from the computational experiments on the QAP instances from the publicly available library QAPLIB show excellent performance of RGA. This is especially true for real-life-like QAPs.

  10. A novel image encryption algorithm based on DNA subsequence operation.

    PubMed

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply uses the idea of DNA subsequence operations (such as elongation, truncation, and deletion), combined with the logistic chaotic map, to scramble the locations and values of the pixels in the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks.

  11. On the nature of control algorithms for space manipulators

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Evangelos; Dubowsky, Steven

    1990-01-01

    A study of the characteristics of control algorithms that can be applied to the motion control of space manipulators is reported. The results obtained show that nearly any control algorithm that can be applied to conventional terrestrial fixed-base manipulators, with a few additional conditions, can be directly applied to free-floating space manipulators. Barycenters are used to formulate efficiently the kinematic and dynamic equations of free-floating space manipulators. A control algorithm for a space manipulator system is designed to demonstrate the value of the analysis.

  12. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    PubMed Central

    Qin, Jin; Ni, Ling-lin; Shi, Feng

    2012-01-01

    A combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the determined locations. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA performs much better than the previous algorithm for the DFLP and offers a reasonable new alternative solution method for it. PMID:23049474

  13. A Flocking Based algorithm for Document Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Gao, Jinzhu; Potok, Thomas E

    2006-01-01

    Social animals and insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel flocking-based approach for document clustering analysis. Our flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks and fish schools. Unlike partition clustering algorithms such as K-means, the flocking-based algorithm does not require initial partition seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy retrieval and visualization of the clustering result. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the flocking clustering algorithm achieves better performance than K-means and the ant clustering algorithm for real document clustering.

  14. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time operation but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real time operation. The method based on linear-feedback-shift-register (LFSR) pseudorandom number generation is a good compromise: it is feasible in real time and yields reasonably well distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
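
    A sketch of the core of the LFSR-based method (using a standard 16-bit Fibonacci LFSR; the paper's exact tap choice and event formatting are not reproduced here): each pixel emits an address event whenever a pseudorandom draw falls below its intensity, so event rates code pixel values.

```python
def lfsr16(state):
    """One step of a maximal-length 16-bit Fibonacci LFSR (taps 16,14,13,11)."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def frame_to_events(frame, slots_per_frame=256, seed=0xACE1):
    """Convert one 8-bit grayscale frame to rate-coded address events.

    frame: list of rows of pixel intensities 0..255.
    Yields (time_slot, x, y) tuples; brighter pixels fire more often.
    """
    state = seed
    for t in range(slots_per_frame):
        for y, row in enumerate(frame):
            for x, intensity in enumerate(row):
                state = lfsr16(state)
                # compare an 8-bit pseudorandom value against the intensity
                if (state & 0xFF) < intensity:
                    yield (t, x, y)

frame = [[0, 128], [64, 255]]          # toy 2x2 image
events = list(frame_to_events(frame, slots_per_frame=100))
# pixel (1,1) fires in nearly every slot; pixel (0,0) never fires
```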

  15. Introduction and validation of a less painful algorithm to estimate the nociceptive flexion reflex threshold.

    PubMed

    Lichtner, Gregor; Golebiewski, Anna; Schneider, Martin H; von Dincklage, Falk

    2015-05-22

    The nociceptive flexion reflex (NFR) is a widely used tool to investigate spinal nociception for scientific and diagnostic purposes, but its clinical use is currently limited due to the painful measurement procedure, especially restricting its applicability for patients suffering from chronic pain disorders. Here we introduce a less painful algorithm to assess the NFR threshold. Application of this new algorithm leads to a reduction of subjective pain ratings by over 30% compared to the standard algorithm. We show that the reflex threshold estimates resulting from application of the new algorithm can be used interchangeably with those of the standard algorithm after adjusting for the constant difference between the algorithms. Furthermore, we show that the new algorithm can be applied at shorter interstimulus intervals than are commonly used with the standard algorithm, since reflex threshold values remain unchanged and no habituation effects occur when reducing the interstimulus interval for the new algorithm down to 3s. Finally we demonstrate the utility of the new algorithm to investigate the modulation of nociception through different states of attention. Taken together, the here presented new algorithm could increase the utility of the NFR for investigation of nociception in subjects who were previously not able to endure the measurement procedure, such as chronic pain patients.

  16. 8. Detail showing concrete abutment, showing substructure of bridge, specifically ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. Detail showing concrete abutment, showing substructure of bridge, specifically west side of arch and substructure. - Presumpscot Falls Bridge, Spanning Presumptscot River at Allen Avenue extension, 0.75 mile west of U.S. Interstate 95, Falmouth, Cumberland County, ME

  17. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  18. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
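
    For reference, one step of the Boris scheme in its standard half-kick/rotation/half-kick form (a textbook formulation; the symbols q, m, dt are illustrative):

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance one charged particle by dt with the Boris algorithm."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnitude-preserving B rotation
    v_new = v_plus + qmdt2 * E               # second half electric kick
    return x + dt * v_new, v_new

# Example: gyration in a uniform B field conserves kinetic energy.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(10000):
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                      q=1.0, m=1.0, dt=0.1)
print(v @ v)   # stays 1.0 up to round-off
```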

  19. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong; et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  20. Higher-order force gradient symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.
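
    For context, a minimal Python sketch of the standard fourth-order Forest-Ruth composition used as the baseline above; the force-gradient variants add an extra gradient term per step and are not reproduced here.

        import numpy as np

        THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
        DRIFT = [THETA / 2, (1 - THETA) / 2, (1 - THETA) / 2, THETA / 2]
        KICK = [THETA, 1 - 2 * THETA, THETA]

        def forest_ruth_step(x, v, accel, dt):
            # Alternate four drifts and three kicks with the FR coefficients.
            for i in range(3):
                x = x + DRIFT[i] * dt * v
                v = v + KICK[i] * dt * accel(x)
            return x + DRIFT[3] * dt * v, v

        # Kepler-like test problem: accel = -x / |x|^3
        accel = lambda x: -x / np.linalg.norm(x) ** 3
        x, v = np.array([1.0, 0.0]), np.array([0.0, 1.2])
        for _ in range(10000):
            x, v = forest_ruth_step(x, v, accel, 1e-3)
        print(x, v)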

  1. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    PubMed Central

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent Component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592
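
    The p-spline density estimation step is specific to this paper, but the blind-separation setting it addresses can be reproduced with any ICA solver. A stand-in sketch using scikit-learn's FastICA (not the authors' algorithm):

        import numpy as np
        from sklearn.decomposition import FastICA

        t = np.linspace(0, 8, 2000)
        S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two independent sources
        A = np.array([[1.0, 0.5], [0.4, 1.0]])             # unknown mixing matrix
        X = S @ A.T                                        # observed mixtures

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)   # recovered sources (up to sign/scale/order)
        A_hat = ica.mixing_            # estimated mixing matrix
        print(S_hat.shape, A_hat.shape)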

  2. WS-BP: An efficient wolf search based back-propagation algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah

    2015-05-01

    Wolf Search (WS) is a heuristic-based optimization algorithm. Inspired by the preying and survival capabilities of wolves, this algorithm is highly capable of searching large candidate-solution spaces. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with the Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, OR and XOR datasets are used for training the network. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.

  3. A hyperspectral images compression algorithm based on 3D bit plane transform

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue

    2010-10-01

    Based on analyses of hyper-spectral images, in which the spectral correlation is higher than the spatial correlation, a new compression algorithm based on a 3-D bit plane transform is proposed. The algorithm is designed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce correlation when neighboring pixels have similar values. The algorithm calculates the horizontal, vertical and spectral bit plane transforms sequentially. As with the spectral bit plane transform, the algorithm can be easily realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized in a parallel computing model, which improves the calculation efficiency and greatly reduces processing time. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyper-spectral image compression system by efficiently reducing the cost of computation and memory usage.

  4. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of an elite solution pool and a block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed improved ABC algorithm can outperform the original ABC algorithm in most of the tested problems.
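
    A minimal Python sketch of the classical ABC solution search equation that the paper modifies; the elite-pool and block-perturbation variants are not reproduced here.

        import numpy as np

        def abc_candidate(X, i, rng):
            # Classical ABC: v_ij = x_ij + phi * (x_ij - x_kj) for one random
            # dimension j and a random neighbour k != i, with phi ~ U(-1, 1).
            n, dim = X.shape
            k = rng.choice([p for p in range(n) if p != i])
            j = rng.integers(dim)
            v = X[i].copy()
            v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            return v  # greedy selection keeps v only if it improves fitness

        rng = np.random.default_rng(1)
        X = rng.uniform(-5, 5, size=(10, 4))  # food sources (candidate solutions)
        print(abc_candidate(X, 0, rng))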

  5. AUV Underwater Positioning Algorithm Based on Interactive Assistance of SINS and LBL.

    PubMed

    Zhang, Tao; Chen, Liping; Li, Yao

    2015-01-01

    This paper studies an underwater positioning algorithm based on the interactive assistance of a strapdown inertial navigation system (SINS) and LBL. The algorithm mainly includes an optimal correlation algorithm with aided tracking of an SINS/Doppler velocity log (DVL)/magnetic compass pilot (MCP), a three-dimensional TDOA positioning algorithm based on Taylor series expansion, and a multi-sensor information fusion algorithm. The final simulation results show that, compared to traditional underwater positioning algorithms, this scheme can not only directly correct the accumulative errors caused by a dead reckoning algorithm, but also solve the problem of ambiguous correlation peaks caused by multipath transmission of underwater acoustic signals. The proposed method can calibrate the accumulative error of the AUV position more directly and effectively, which prolongs the underwater operating duration of the AUV. PMID:26729120

  8. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. One high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or tridimensional targets. Using the algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm satisfies the needs of computer vision and provides a reference for precise measurement of relative position and attitude.
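
    For context, the planar-target (Zhang-style) calibration used as one of the baselines above can be run with OpenCV. A hedged sketch; the checkerboard pattern size and image paths are placeholder assumptions:

        import glob
        import cv2
        import numpy as np

        PATTERN = (9, 6)  # inner-corner grid of the planar target (assumed)
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for path in glob.glob("calib_images/*.png"):  # hypothetical image set
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # Returns RMS reprojection error, intrinsics K, distortion coefficients.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("RMS reprojection error:", rms)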

  9. AUV Underwater Positioning Algorithm Based on Interactive Assistance of SINS and LBL.

    PubMed

    Zhang, Tao; Chen, Liping; Li, Yao

    2015-12-30

    This paper studies an underwater positioning algorithm based on the interactive assistance of a strapdown inertial navigation system (SINS) and LBL, and this algorithm mainly includes an optimal correlation algorithm with aided tracking of an SINS/Doppler velocity log (DVL)/magnetic compass pilot (MCP), a three-dimensional TDOA positioning algorithm of Taylor series expansion and a multi-sensor information fusion algorithm. The final simulation results show that compared to traditional underwater positioning algorithms, this scheme can not only directly correct accumulative errors caused by a dead reckoning algorithm, but also solves the problem of ambiguous correlation peaks caused by multipath transmission of underwater acoustic signals. The proposed method can calibrate the accumulative error of the AUV position more directly and effectively, which prolongs the underwater operating duration of the AUV.

  10. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
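
    A minimal Python sketch of one harmony improvisation step with polynomial mutation as the pitch adjustment, as the DHSPM modification describes; the dynamic HMCR/PAR schedules are reduced to fixed parameters here.

        import numpy as np

        def improvise(HM, hmcr, par, eta, lo, hi, rng):
            # HM: harmony memory, shape (memory_size, dim).
            dim = HM.shape[1]
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                 # memory consideration
                    new[j] = HM[rng.integers(len(HM)), j]
                    if rng.random() < par:              # polynomial mutation
                        u = rng.random()
                        delta = ((2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5
                                 else 1 - (2 * (1 - u)) ** (1 / (eta + 1)))
                        new[j] += delta * (hi - lo)
                else:                                   # random selection
                    new[j] = rng.uniform(lo, hi)
            return np.clip(new, lo, hi)

        rng = np.random.default_rng(0)
        HM = rng.uniform(-10, 10, size=(20, 5))
        print(improvise(HM, hmcr=0.9, par=0.3, eta=20, lo=-10, hi=10, rng=rng))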

  11. Pea Plants Show Risk Sensitivity.

    PubMed

    Dener, Efrat; Kacelnik, Alex; Shemesh, Hagai

    2016-07-11

    Sensitivity to variability in resources has been documented in humans, primates, birds, and social insects, but the fit between empirical results and the predictions of risk sensitivity theory (RST), which aims to explain this sensitivity in adaptive terms, is weak [1]. RST predicts that agents should switch between risk proneness and risk aversion depending on state and circumstances, especially according to the richness of the least variable option [2]. Unrealistic assumptions about agents' information processing mechanisms and poor knowledge of the extent to which variability imposes specific selection in nature are strong candidates to explain the gap between theory and data. RST's rationale also applies to plants, where it has not hitherto been tested. Given the differences between animals' and plants' information processing mechanisms, such tests should help unravel the conflicts between theory and data. Measuring root growth allocation by split-root pea plants, we show that they favor variability when mean nutrient levels are low and the opposite when they are high, supporting the most widespread RST prediction. However, the combination of non-linear effects of nitrogen availability at local and systemic levels may explain some of these effects as a consequence of mechanisms not necessarily evolved to cope with variance [3, 4]. This resembles animal examples in which properties of perception and learning cause risk sensitivity even though they are not risk adaptations [5]. PMID:27374342

  12. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA version of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  13. An Algorithm for Autonomous Formation Obstacle Avoidance

    NASA Astrophysics Data System (ADS)

    Cruz, Yunior I.

    The level of human interaction with Unmanned Aerial Systems varies greatly from remotely piloted aircraft to fully autonomous systems. In the latter end of the spectrum, the challenge lies in designing effective algorithms to dictate the behavior of the autonomous agents. A swarm of autonomous Unmanned Aerial Vehicles requires collision avoidance and formation flight algorithms to negotiate environmental challenges it may encounter during the execution of its mission, which may include obstacles and chokepoints. In this work, a simple algorithm is developed to allow a formation of autonomous vehicles to perform point-to-point navigation while avoiding obstacles and navigating through chokepoints. Emphasis is placed on maintaining formation structures. Rather than breaking formation and individually navigating around the obstacle or through the chokepoint, vehicles are required to assemble into appropriately sized/shaped sub-formations, bifurcate around the obstacle or negotiate the chokepoint, and reassemble into the original formation at the far side of the obstruction. The algorithm receives vehicle and environmental properties as inputs and outputs trajectories for each vehicle from start to the desired ending location. Simulation results show that the algorithm safely routes all vehicles past the obstruction while adhering to the aforementioned requirements. The formation adapts and successfully negotiates the obstacles and chokepoints in its path while maintaining proper vehicle separation.

  14. Connected-Health Algorithm: Development and Evaluation.

    PubMed

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

    Nowadays, there is growing interest in the adoption of novel ICT technologies in the field of medical monitoring and personal health care systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to recommend a specific activity that will improve the user's health, based on the user's health condition and a set of knowledge derived from the history of the user and of users with similar attitudes. The algorithm could help users gain greater confidence in choosing physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that the recommended physical activity, one that contributed towards a weight loss of at least 0.5 kg, is found in the first half of the ordered list of recommendations generated by the algorithm with probability > 0.6 at a 1% level of significance. PMID:26922593

  15. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization problems are challenging to solve due to their nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle to deal with such problems, and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869
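
    A minimal Python sketch of the FA attraction move that HFA runs in parallel with DE; parameter values are illustrative.

        import numpy as np

        def firefly_move(X, fitness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
            # Move each firefly toward every brighter (lower-fitness) one.
            rng = rng or np.random.default_rng()
            X = X.copy()
            f = np.array([fitness(x) for x in X])
            for i in range(len(X)):
                for j in range(len(X)):
                    if f[j] < f[i]:
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)  # attractiveness
                        X[i] += (beta * (X[j] - X[i])
                                 + alpha * (rng.random(X.shape[1]) - 0.5))
                        f[i] = fitness(X[i])
            return X

        sphere = lambda x: np.sum(x ** 2)
        swarm = np.random.default_rng(0).uniform(-5, 5, size=(15, 3))
        for _ in range(50):
            swarm = firefly_move(swarm, sphere)
        print(min(np.sum(swarm ** 2, axis=1)))  # best value shrinks toward 0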

  16. Three hypothesis algorithm with occlusion reasoning for multiple people tracking

    NASA Astrophysics Data System (ADS)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Medina-Carnicer, Rafael

    2015-01-01

    This work proposes a detection-based tracking algorithm able to locate and keep the identity of multiple people, who may be occluded, in uncontrolled stationary environments. Our algorithm builds a tracking graph that models spatio-temporal relationships among attributes of interacting people to predict and resolve partial and total occlusions. When a total occlusion occurs, the algorithm generates various hypotheses about the location of the occluded person considering three cases: (a) the person keeps the same direction and speed, (b) the person follows the direction and speed of the occluder, and (c) the person remains motionless during occlusion. By analyzing the graph, our algorithm can detect trajectories produced by false alarms and estimate the location of missing or occluded people. Our algorithm performs acceptably under complex conditions, such as partial visibility of individuals getting inside or outside the scene, continuous interactions and occlusions among people, wrong or missing information on the detection of persons, as well as variation of the person's appearance due to illumination changes and background-clutter distracters. Our algorithm was evaluated on test sequences in the field of intelligent surveillance achieving an overall precision of 93%. Results show that our tracking algorithm outperforms even trajectory-based state-of-the-art algorithms.

  17. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; image levels above the threshold are converted into intensity values between 0 and 1, and other values are converted to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  18. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.

  19. An improved robust ADMM algorithm for quantum state tomography

    NASA Astrophysics Data System (ADS)

    Li, Kezhi; Zhang, Hui; Kuang, Sen; Meng, Fangfang; Cong, Shuang

    2016-06-01

    In this paper, an improved adaptive-weights alternating direction method of multipliers algorithm is developed to implement the optimization scheme for recovering quantum states that are nearly pure. The proposed approach is superior to many existing methods because it exploits the low-rank property of density matrices, and it can deal with unexpected sparse outliers as well. Numerical experiments are provided to verify our statements by comparing the results to three different optimization algorithms, using both adaptive and fixed weights in the algorithm, in the cases with and without external noise, respectively. The results indicate that the improved algorithm has better performance in both estimation accuracy and robustness to external noise. Further simulation results show that the successful recovery rate increases when more qubits are estimated, which is consistent with compressive sensing theory and makes the proposed approach more promising.

  20. A novel algorithm for circular and noncircular signals without knowing the number of sources with Mrla

    NASA Astrophysics Data System (ADS)

    Zeng, Yaoping; Yang, Yixin; Lu, Guangyue

    2013-07-01

    This paper focuses on direction of arrival (DOA) estimation for mixed circular and noncircular sources with a Minimum-Redundancy Linear Array (MRLA). By exploiting the received signal data and its conjugate, the proposed algorithm can augment the maximum number of detectable sources. Using the weighted MUSIC algorithm over the whole space, the proposed scheme can obtain good estimation performance with an MRLA without knowing the number of sources. Simulation results clearly show the effectiveness of the proposed algorithm.

  1. Adaptive algorithm for cloud cover estimation from all-sky images over the sea

    NASA Astrophysics Data System (ADS)

    Krinitskiy, M. A.; Sinitsyn, A. V.

    2016-05-01

    A new algorithm for cloud cover estimation has been formulated and developed based on a synthetic control index, called the grayness rate index, and an additional algorithmic step of adaptive filtering of the Mie scattering contribution. A setup for automated cloud cover estimation has been designed, assembled, and tested under field conditions. The results show a significant advantage of the new algorithm over commonly used procedures.

  2. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method, and a complexity-reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.

  3. [Super-resolution image reconstruction algorithm based on projection onto convex sets and wavelet fusion].

    PubMed

    Cao, Yuzhen; Liu, Xiaoting; Wang, Wei; Xing, Zhanfeng

    2009-10-01

    In this paper a new super-resolution image reconstruction algorithm was proposed. With an improved version of the classical projection onto convex sets (POCS) algorithm as groundwork, and with the combined use of POCS and wavelet fusion, a high-resolution CT image was restored from a group of low-resolution CT images. The experimental results showed that the proposed algorithm improves the ability to fuse different information: image detail is more prominent and image quality is better.

  4. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.

  5. A highly accurate heuristic algorithm for the haplotype assembly problem

    PubMed Central

    2013-01-01

    Background Single nucleotide polymorphisms (SNPs) are the most common form of genetic variation in human DNA. The sequence of SNPs in each of the two copies of a given chromosome in a diploid organism is referred to as a haplotype. Haplotype information has many applications such as gene disease diagnoses, drug design, etc. The haplotype assembly problem is defined as follows: Given a set of fragments sequenced from the two copies of a chromosome of a single individual, and their locations in the chromosome, which can be pre-determined by aligning the fragments to a reference DNA sequence, the goal here is to reconstruct two haplotypes (h1, h2) from the input fragments. Existing algorithms do not work well when the error rate of fragments is high. Here we design an algorithm that can give accurate solutions, even if the error rate of fragments is high. Results We first give a dynamic programming algorithm that can give exact solutions to the haplotype assembly problem. The time complexity of the algorithm is O(n × 2^t × t), where n is the number of SNPs, and t is the maximum coverage of a SNP site. The algorithm is slow when t is large. To solve the problem when t is large, we further propose a heuristic algorithm on the basis of the dynamic programming algorithm. Experiments show that our heuristic algorithm can give very accurate solutions. Conclusions We have tested our algorithm on a set of benchmark datasets. Experiments show that our algorithm can give very accurate solutions. It outperforms most of the existing programs when the error rate of the input fragments is high. PMID:23445458

  6. Generalized Weiszfeld Algorithms for Lq Optimization.

    PubMed

    Aftab, Khurrum; Hartley, Richard; Trumpf, Jochen

    2015-04-01

    In many computer vision applications, a desired model of some type is computed by minimizing a cost function based on several measurements. Typically, one may compute the model that minimizes the L2 cost, that is the sum of squares of measurement errors with respect to the model. However, the Lq solution which minimizes the sum of the qth power of errors usually gives more robust results in the presence of outliers for some values of q, for example, q = 1. The Weiszfeld algorithm is a classic algorithm for finding the geometric L1 mean of a set of points in Euclidean space. It is provably optimal and requires neither differentiation, nor line search. The Weiszfeld algorithm has also been generalized to find the L1 mean of a set of points on a Riemannian manifold of non-negative curvature. This paper shows that the Weiszfeld approach may be extended to a wide variety of problems to find an Lq mean for 1 ≤ q < 2, while maintaining simplicity and provable convergence. We apply this approach to both single rotation averaging (under which the algorithm provably finds the global Lq optimum) and multiple rotation averaging (for which no such proof exists). Experimental results of Lq optimization for rotations show improved reliability and robustness compared to L2 optimization.
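
    A minimal Python sketch of the Weiszfeld iteration, with the ||x_i - y||^(q-2) weighting that gives the generalized Lq mean for 1 ≤ q < 2 (q = 1 recovers the classic geometric median):

        import numpy as np

        def weiszfeld(points, q=1.0, iters=100, eps=1e-12):
            # Iteratively re-weighted mean with weights w_i = ||x_i - y||^(q-2).
            y = points.mean(axis=0)                  # initial guess: the L2 mean
            for _ in range(iters):
                d = np.linalg.norm(points - y, axis=1)
                w = np.maximum(d, eps) ** (q - 2.0)  # guard the singularity at d=0
                y = (w[:, None] * points).sum(axis=0) / w.sum()
            return y

        pts = np.array([[0.0, 0], [1, 0], [0, 1], [10, 10]])  # one outlier
        print(weiszfeld(pts))    # L1 mean: barely pulled by the outlier
        print(pts.mean(axis=0))  # L2 mean: strongly pulled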

  7. Low complexity interference alignment algorithms for desired signal power maximization problem of MIMO channels

    NASA Astrophysics Data System (ADS)

    Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang

    2012-12-01

    In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. Proper users' precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low-complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power together as the new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has a closed-form solution based on an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. In the first part, the algorithm iterates with Householder transformations to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one-dimensional optimization subproblems. From any initial point, this part obtains precoders and decoders with low leakage interference in a short time. In the second part, to exploit the advantage of MAMA, the algorithm continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that in one iteration both proposed algorithms generally have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve performance similar to the MSP algorithm with less executing time, and perform better than the existing alternating minimization algorithm in terms of sum rate. Besides, in terms of convergence rate, simulation results show that MAMA converges fastest to a given sum rate value, while the hybrid algorithm converges fastest in eliminating interference.

  8. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
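
    The landing behavior described above follows from standard time-to-collision relations; a minimal sketch in our own notation (not the paper's), where h is the height above ground, v_x the horizontal speed, and \omega the measured optical flow rate:

        \tau = \frac{h}{-\dot h}, \qquad
        \dot h = -\frac{h}{\tau} \;\Rightarrow\; h(t) = h_0\, e^{-t/\tau}, \qquad
        \omega = \frac{v_x}{h} \;\Rightarrow\; v_x = \omega\, h.

    Holding the measured \tau (or the flow rate \omega) constant therefore makes speed proportional to the remaining height, which is the smooth exponential approach to the ground described above.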

  9. A Message-Passing Algorithm for Wireless Network Scheduling *

    PubMed Central

    Paschalidis, Ioannis Ch.; Huang, Fuzhuo; Lai, Wei

    2015-01-01

    We consider scheduling in wireless networks and formulate it as a Maximum Weighted Independent Set (MWIS) problem on a "conflict" graph that captures interference among simultaneous transmissions. We propose a novel, low-complexity, and fully distributed algorithm that yields high-quality feasible solutions. Our proposed algorithm consists of two phases, each of which requires only local information and is based on message-passing. The first phase solves a relaxation of the MWIS problem using a gradient projection method. The relaxation we consider is tighter than the simple linear programming relaxation and incorporates constraints on all cliques in the graph. The second phase of the algorithm starts from the solution of the relaxation and constructs a feasible solution to the MWIS problem. We show that our algorithm always outputs an optimal solution to the MWIS problem for perfect graphs. Simulation results compare our policies against Carrier Sense Multiple Access (CSMA) and other alternatives and show excellent performance. PMID:26752942
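
    The two-phase message-passing method itself is involved, but the MWIS objective it targets is simple to state. A greedy baseline on a conflict graph (a classic heuristic, not the authors' algorithm):

        def greedy_mwis(weights, adj):
            # weights: dict node -> weight; adj: dict node -> set of neighbours.
            # Repeatedly take the node maximizing weight / (degree + 1), a
            # classic greedy rule, then delete it and its conflicting neighbours.
            chosen, active = [], set(weights)
            while active:
                u = max(active,
                        key=lambda v: weights[v] / (1 + len(adj[v] & active)))
                chosen.append(u)
                active -= adj[u] | {u}
            return chosen

        w = {0: 3.0, 1: 5.0, 2: 4.0, 3: 2.0}
        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path conflict graph
        print(greedy_mwis(w, adj))  # picks [1, 3]: no two chosen nodes conflict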

  10. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Future developments in cueing algorithms proposed by the authors are then outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  11. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  12. A fast algorithm for sparse matrix computations related to inversion

    NASA Astrophysics Data System (ADS)

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions Gr and G< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  13. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) are playing a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator drivers compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations, and minimising human perception error between real and simulator driver. One of the main limitations of the classical washout filters is that it is attuned by the worst-case scenario tuning method. This is based on trial and error, and is effected by driving and programmers experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. Production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible for designers for this reason. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking vestibular sensation error into account between real and simulated cases, as well as main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  14. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the birds' nest locations. In order to select a reasonable repeat-cycle disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the algorithms of this paper are compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.
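
    A minimal Python sketch of the Lévy-flight step at the core of standard CS, using Mantegna's algorithm; the paper's repeat-cycle disturbance operator is not reproduced here.

        import math
        import numpy as np

        def levy_step(dim, beta=1.5, rng=None):
            # Mantegna's algorithm: step = u / |v|^(1/beta), u ~ N(0, sigma_u^2).
            rng = rng or np.random.default_rng()
            sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                       (math.gamma((1 + beta) / 2) * beta
                        * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0, sigma_u, dim)
            v = rng.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)

        rng = np.random.default_rng(0)
        nest, best = rng.uniform(-5, 5, 10), np.zeros(10)
        # New candidate nest: a Lévy flight biased toward the current best.
        candidate = nest + 0.01 * levy_step(10, rng=rng) * (nest - best)
        print(candidate)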

  17. Ozone differential absorption lidar algorithm intercomparison.

    PubMed

    Godin, S; Carswell, A I; Donovan, D P; Claude, H; Steinbrecht, W; McDermid, I S; McGee, T J; Gross, M R; Nakane, H; Swart, D P; Bergwerff, H B; Uchino, O; von der Gathen, P; Neuber, R

    1999-10-20

    An intercomparison of ozone differential absorption lidar algorithms was performed in 1996 within the framework of the Network for the Detection of Stratospheric Changes (NDSC) lidar working group. The objective of this research was mainly to test the differentiating techniques used by the various lidar teams involved in the NDSC for the calculation of the ozone number density from the lidar signals. The exercise consisted of processing synthetic lidar signals computed from simple Rayleigh scattering and three initial ozone profiles. Two of these profiles contained perturbations in the low and the high stratosphere to test the vertical resolution of the various algorithms. For the unperturbed profiles the results of the simulations show the correct behavior of the lidar processing methods in the low and the middle stratosphere, with biases of less than 1% with respect to the initial profile up to 30 km in most cases. In the upper stratosphere, significant biases reaching 10% at 45 km are obtained for most of the algorithms. This bias is due to the decrease in the signal-to-noise ratio with altitude, which makes it necessary to increase the number of points of the derivative low-pass filter used for data processing. As a consequence, the response of the various retrieval algorithms to perturbations in the ozone profile is much better in the lower stratosphere than in the higher range. These results show the necessity of limiting the vertical smoothing in the ozone lidar retrieval algorithm and question the ability of current lidar systems to detect long-term ozone trends above 40 km. Otherwise, the simulations show in general a correct estimation of the ozone profile random error and, as shown by the tests involving the perturbed ozone profiles, some inconsistency in the estimation of the vertical resolution among the lidar teams involved in this experiment.
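
    The sensitivity to the derivative filter length comes from the standard DIAL retrieval, which differentiates the logarithm of the ratio of the two return signals; a minimal form, with aerosol and Rayleigh correction terms omitted:

        n_{\mathrm{O_3}}(z) = \frac{1}{2\,\Delta\sigma}\,
        \frac{d}{dz}\,\ln\frac{P_{\mathrm{off}}(z)}{P_{\mathrm{on}}(z)}

    where \Delta\sigma is the differential ozone absorption cross section. Lengthening the numerical derivative's low-pass filter to compensate for the falling signal-to-noise ratio is exactly what degrades vertical resolution and introduces the high-altitude bias reported above.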

  18. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can currently perform as well as manual ones.

  19. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  20. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.

  1. Satellite Movie Shows Erika Dissipate

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite from Aug. 27 to 29 shows Tropical Storm Erika move through the Eastern Caribbean Sea and dissipate near eastern Cuba. ...

  2. On the Time Complexity of Dijkstra's Three-State Mutual Exclusion Algorithm

    NASA Astrophysics Data System (ADS)

    Kimoto, Masahiro; Tsuchiya, Tatsuhiro; Kikuno, Tohru

    In this letter we give a lower bound on the worst-case time complexity of Dijkstra's three-state mutual exclusion algorithm by specifying a concrete behavior of the algorithm. We also show that our result is more accurate than the best known bound.

  3. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah

    In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
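
    A minimal sketch of the row-wise master/slave decomposition described, using mpi4py; the image size, the random test image and the one-row halo handling are illustrative assumptions.

        # Run with e.g.: mpiexec -n 4 python sobel_mpi.py
        from mpi4py import MPI
        import numpy as np
        from scipy.signal import convolve2d

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        KY = KX.T

        if rank == 0:  # master: split the image into row blocks with 1-row halos
            img = np.random.rand(512, 512)  # stand-in for a real image
            rows = np.array_split(np.arange(512), size)
            blocks = [img[max(r[0] - 1, 0):min(r[-1] + 2, 512)] for r in rows]
        else:
            blocks = None
        block = comm.scatter(blocks, root=0)

        # Each process applies the same sequential Sobel operator to its block.
        gx = convolve2d(block, KX, mode='same', boundary='symm')
        gy = convolve2d(block, KY, mode='same', boundary='symm')
        mag = np.hypot(gx, gy)

        # Trim the halo rows, then gather the pieces back on the master.
        top = 0 if rank == 0 else 1
        bot = mag.shape[0] if rank == size - 1 else mag.shape[0] - 1
        parts = comm.gather(mag[top:bot], root=0)
        if rank == 0:
            edges = np.vstack(parts)
            print(edges.shape)  # (512, 512) edge-magnitude image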

  4. Application of a new finite difference algorithm for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Acoustic problems have become extremely important in recent years because of research efforts such as the High Speed Civil Transport program. Computational aeroacoustics (CAA) requires a faithful representation of wave propagation over long distances, and needs algorithms that are accurate and boundary conditions that are unobtrusive. This paper applies a new finite difference method and boundary algorithm to the Linearized Euler Equations (LEE). The results demonstrate the ability of a new fourth order propagation algorithm to accurately simulate the genuinely multidimensional wave dynamics of acoustic propagation in two space dimensions with the LEE. The results also show the ability of a new outflow boundary condition and fourth order algorithm to pass the evolving solution from the computational domain with no perceptible degradation of the solution remaining within the domain.

  5. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
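
    The flavor of such a sampling-based stopping rule can be conveyed in a few lines. The sketch below is an illustration under invented assumptions, not Morton's rule: it stops when a one-sided confidence bound on the sampled optimality gap of a candidate solution falls below a tolerance.

        # Illustrative sketch (not Morton's exact rule): stop when a one-sided 95%
        # confidence bound on the sampled optimality gap of a candidate solution
        # falls below a tolerance. The gap sampler is a hypothetical placeholder.
        import numpy as np

        rng = np.random.default_rng(0)
        z_95 = 1.645                             # one-sided 95% normal quantile

        def sampled_gap():
            # stand-in for: objective of the candidate under a sampled scenario
            # minus the optimal objective for that scenario (always >= 0)
            return max(0.0, rng.normal(0.5, 1.0))

        def stop(n=1000, tol=0.8):
            gaps = np.array([sampled_gap() for _ in range(n)])
            upper = gaps.mean() + z_95 * gaps.std(ddof=1) / np.sqrt(n)
            return upper <= tol, upper

        done, bound = stop()
        print("stop:", done, "gap bound:", round(bound, 3))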

  6. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.

    PubMed

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-01-01

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measures three-dimensional position changes at the observation site, and is superior in a variety of deformation monitoring applications. However, because of the influence of various observation errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications the observation stations remain stationary, which can serve as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. First, data from an IGS tracking station were processed using both the traditional and the new PPP algorithms; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts. PMID:27241172
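
    The stationarity constraint at the heart of the method can be caricatured in a few lines of Python. The sketch below uses synthetic numbers throughout (a real implementation would live inside the PPP filter itself): it smooths an epoch-wise height series with a sliding-window mean and flags epochs that deviate strongly, as a crude vibration detector.

        # Synthetic illustration of the sliding-window idea: average the epoch-wise
        # height estimates of a (nominally stationary) station inside a moving
        # window to suppress noise, and flag epochs that deviate strongly.
        import numpy as np

        rng = np.random.default_rng(1)
        up = rng.normal(0.0, 0.03, 600)          # 600 epochs of noisy heights (m)
        up[400:420] += 0.3                       # injected coseismic displacement

        win = 30
        smooth = np.convolve(up, np.ones(win) / win, mode="same")  # window mean
        alarm = np.abs(up - smooth) > 0.12       # crude vibration detector
        print("first flagged epoch:", int(np.argmax(alarm)))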

  7. Raytracing Based upon the Symplectic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C.

    2014-12-01

    Raytracing is a basic problem in seismic imaging: the reliability of the imaging depends on the accuracy of both the spatial trajectory and the traveltime of the ray, and raytracing is used broadly in seismology. A seismic ray traveling through inhomogeneous media follows the eikonal equation, a first-order differential equation for traveltime that satisfies a Hamiltonian system. In a Cartesian coordinate system, we use a separable Hamiltonian function. In this paper, the symplectic algorithm method (SAM) with a bi-cubic convolution algorithm was used to solve the Hamiltonian system for the raytracing problem. Compared with the Fast Marching Method (FMM), the results show that SAM can keep the solution of the eikonal equation stable. Owing to the use of the symplectic algorithm, the method produces a reliable seismic wavefront with an accurate ray trajectory (Fig. 1). Meanwhile, the numerical modeling shows that SAM not only keeps the Hamiltonian system stable with fast computation but also improves the accuracy of the seismic raytracing (Fig. 2).
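
    For a separable ray-tracing Hamiltonian, a symplectic integrator is only a few lines. The sketch below, with an invented smooth velocity model and step size, applies semi-implicit symplectic Euler to H(x, p) = (p·p − s(x)²)/2, where s is slowness, so that dx/dσ = p and dp/dσ = s∇s.

        # Sketch of symplectic (semi-implicit) Euler integration of the separable
        # ray-tracing Hamiltonian H(x, p) = (p.p - s(x)^2)/2, s = slowness:
        #   dx/ds =  dH/dp = p,      dp/ds = -dH/dx = s(x) * grad s(x).
        # The velocity model and step size are illustrative only.
        import numpy as np

        def slowness(x):                  # smooth 2-D model: velocity grows with depth
            return 1.0 / (2.0 + 0.5 * x[1])

        def grad_slowness(x, h=1e-5):     # finite-difference gradient of s
            g = np.zeros(2)
            for i in range(2):
                e = np.zeros(2); e[i] = h
                g[i] = (slowness(x + e) - slowness(x - e)) / (2 * h)
            return g

        x = np.array([0.0, 0.0])                       # source position
        p = slowness(x) * np.array([np.cos(0.3), np.sin(0.3)])  # take-off, |p| = s
        ds, ray = 0.01, [x.copy()]
        for _ in range(2000):                          # symplectic Euler: p first, then x
            p = p + ds * slowness(x) * grad_slowness(x)
            x = x + ds * p
            ray.append(x.copy())
        print("ray endpoint:", ray[-1])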

  8. Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness.

    PubMed

    Zhou, Guoxu; Cichocki, Andrzej; Zhao, Qibin; Xie, Shengli

    2015-12-01

    Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of data. However, as the data tensor often has multiple modes and is large scale, the existing NTD algorithms suffer from a very high computational complexity in terms of both storage and computation time, which has been one major obstacle for practical applications of NTD. To overcome these disadvantages, we show how low (multilinear) rank approximation (LRA) of tensors is able to significantly simplify the computation of the gradients of the cost function, upon which a family of efficient first-order NTD algorithms are developed. Besides dramatically reducing the storage complexity and running time, the new algorithms are quite flexible and robust to noise, because any well-established LRA approaches can be applied. We also show how nonnegativity incorporating sparsity substantially improves the uniqueness property and partially alleviates the curse of dimensionality of the Tucker decompositions. Simulation results on synthetic and real-world data justify the validity and high efficiency of the proposed NTD algorithms.

  9. Dynamic topology multi force particle swarm optimization algorithm and its application

    NASA Astrophysics Data System (ADS)

    Chen, Dongning; Zhang, Ruixing; Yao, Chengyu; Zhao, Zheyu

    2016-01-01

    The particle swarm optimization (PSO) algorithm is an effective bio-inspired algorithm, but it suffers from premature convergence. Researchers have made improvements, especially in force rules and population topologies; however, current algorithms consider only a single kind of force rule and lack a comprehensive treatment of both multiple force rules and population topologies. In this paper, a dynamic topology multi force particle swarm optimization (DTMFPSO) algorithm is proposed in order to obtain better search performance. First, the principle of the presented multi force particle swarm optimization (MFPSO) algorithm is that different force rules are used in different search stages, which balances the abilities of global and local search. Second, a fitness-driven edge-changing (FE) topology based on the probability selection mechanism of the roulette method is designed to cut and add edges between particles, and the DTMFPSO algorithm is obtained by combining the FE topology with the MFPSO algorithm through concurrent evolution of both the algorithm and the structure, further improving the search accuracy. Third, benchmark functions are employed to evaluate the performance of the DTMFPSO algorithm, and test results show that the proposed algorithm is better than well-known PSO algorithms such as µPSO, MPSO, and EPSO. Finally, the proposed algorithm is applied to optimize the process parameters for ultrasonic vibration cutting on a SiC wafer, and the surface quality of the SiC wafer is improved by 12.8% compared with the PSO algorithm in Ref. [25]. This research proposes a DTMFPSO algorithm with multiple force rules and dynamic population topologies evolved simultaneously, and it has better search performance.
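
    For reference, the baseline global-best PSO that such variants build on fits in a short script. The sketch below is the plain scheme on a sphere benchmark (the paper's DTMFPSO adds the multi force rules and dynamic topology on top of this); all parameters are conventional defaults, not the paper's settings.

        # Baseline global-best PSO on the sphere function (numpy only).
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda X: np.sum(X**2, axis=1)        # benchmark: sphere function

        n, dim, w, c1, c2 = 30, 10, 0.7, 1.5, 1.5
        X = rng.uniform(-5, 5, (n, dim))          # positions
        V = np.zeros((n, dim))                    # velocities
        P, pf = X.copy(), f(X)                    # personal bests
        g = P[np.argmin(pf)]                      # global best

        for _ in range(200):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
            X = X + V
            fx = f(X)
            better = fx < pf
            P[better], pf[better] = X[better], fx[better]
            g = P[np.argmin(pf)]

        print("best value:", pf.min())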

  10. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. The Artificial Bee Colony (ABC) algorithm is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm, which combines a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm; the goal is to integrate the advantages of both. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. To test the accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used (colon, leukemia, and lung), along with three multi-class microarray datasets (SRBCT, lymphoma, and leukemia). Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This shows that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  11. Multiprojection algorithms with generalized projections

    SciTech Connect

    Censor, J.; Elfving, T.

    1994-12-31

    Generalized distances give rise to generalized projections onto convex sets. An important question is whether or not one can use, within the same projection algorithm, different types of such generalized projections. This question has practical consequences in the areas of signal detection and image recovery, in situations that can be formulated mathematically as convex feasibility problems. We show here that a simultaneous multiprojection algorithmic scheme converges. Different specific multiprojection algorithms can be derived from our scheme by a judicious choice of the Bregman functions which govern the process. As a by-product of the investigation we also obtain block-iterative schemes for certain kinds of linearly constrained optimization problems.
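
    A minimal concrete instance helps fix ideas: with the squared Euclidean distance as the Bregman function, the generalized projections reduce to ordinary orthogonal projections, and the simultaneous scheme averages them. The sketch below applies that special case to a small half-space feasibility problem (constraints invented for illustration).

        # Simultaneous-projection step for a convex feasibility problem, using
        # ordinary Euclidean projections onto half-spaces {x : a.x <= b} (the
        # special case of Bregman projections induced by the squared distance).
        import numpy as np

        A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # constraint normals
        b = np.array([1.0, 1.0, -0.5])                         # a_i . x <= b_i

        def project_halfspace(x, a, bi):
            viol = np.dot(a, x) - bi
            return x - max(0.0, viol) / np.dot(a, a) * a

        x = np.array([5.0, 5.0])                 # infeasible start
        for _ in range(100):                     # average the individual projections
            x = np.mean([project_halfspace(x, A[i], b[i])
                         for i in range(len(b))], axis=0)
        print("approximately feasible point:", x)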

  12. Mean-shift tracking algorithm based on adaptive fusion of multi-feature

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Xiao, Yanghui; Wang, Ende; Feng, Junhui

    2015-10-01

    The classic mean-shift tracking algorithm has achieved success in the field of computer vision because of its speed and efficiency. However, the classic mean-shift tracker fails in complicated conditions, such as partial occlusion of the target, little color difference between target and background, or sudden changes of illumination. To solve these problems, an improved algorithm is proposed based on the mean-shift tracker and adaptive fusion of features. Color, edges, and corners of the target are used to describe the target in feature space, and a method for measuring the discriminative power of the various features is presented to make feature selection adaptive. The improved mean-shift tracking algorithm is then introduced, based on the fusion of these features. To address the vulnerability of single-color-feature mean-shift tracking to sudden changes of illumination, we eliminate the effects by fusing an affine illumination model with the color feature space, which ensures correct and stable tracking in that condition. Tests on a group of videos show that the tracking correctness and stability of this algorithm are better than those of the mean-shift tracker with a single feature space. Furthermore, the proposed algorithm is more robust than the classic algorithm under occlusion, target-background similarity, and illumination change.
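
    The single-feature baseline that the paper improves on is easy to reproduce with OpenCV. The sketch below tracks a hue-histogram back-projection with cv2.meanShift; the video file name and initial window are placeholders.

        # Baseline single-feature (hue histogram) mean-shift tracker with OpenCV.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("input.avi")          # hypothetical test sequence
        ok, frame = cap.read()
        x, y, w, h = 300, 200, 80, 80                # assumed initial target window
        roi = frame[y:y + h, x:x + w]

        hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            ret, window = cv2.meanShift(prob, window, criteria)  # shift to density peak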

  13. Arches showing UV flaring activity

    NASA Technical Reports Server (NTRS)

    Fontenla, J. M.

    1988-01-01

    The UVSP data obtained in the previous maximum of the activity cycle show the frequent appearance of flaring events in the UV. In many cases these flaring events are characterized by at least two footpoints, which show compact, impulsive, non-simultaneous brightenings, and a fainter but clearly observed arch develops between the footpoints. These arches and footpoints are observed in lines corresponding to different temperatures, such as Lyman alpha, N V, and C IV, and when observed above the limb they display large Doppler shifts at some stages. The size of the arches can be larger than 20 arcsec.

  14. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named the Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distribution algorithms (EDA), which build a statistical model of promising regions of the design space based on sets of good points and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.
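
    The EDA loop itself, stripped of the double-distribution machinery, is compact. The sketch below is a plain Gaussian univariate-marginal EDA on a toy objective (DDOA augments this with a second distribution over auxiliary lamination-parameter variables); all settings are illustrative.

        # Minimal continuous EDA (Gaussian univariate marginals): fit independent
        # marginals to the best individuals, then resample and repeat.
        import numpy as np

        rng = np.random.default_rng(2)
        f = lambda X: np.sum((X - 3.0) ** 2, axis=1)   # toy objective

        n, dim, elite = 100, 5, 25
        mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
        for _ in range(50):
            X = rng.normal(mu, sigma, (n, dim))        # sample from current model
            best = X[np.argsort(f(X))[:elite]]         # select promising points
            mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-3
        print("model mean after search:", np.round(mu, 2))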

  15. Fuzzy evolutionary algorithm to solve chromosome conflicts and its application to lecture schedule problems

    NASA Astrophysics Data System (ADS)

    Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari

    2016-02-01

    A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of two genes in one chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected to form chromosomes; a conflicted schedule thus becomes a chromosome conflict. Implemented in Delphi, the algorithm is shown to solve the conflicted lecture schedule problem.
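
    Under that definition, the conflict test is a simple pairwise check. The sketch below (in Python rather than the paper's Delphi, with a guessed gene encoding) flags two chromosomes as conflicting when at least two gene positions carry identical values, e.g. the same room at the same time.

        # Sketch of the conflict test as defined above; the gene encoding is a
        # guess for illustration, not the paper's actual representation.
        def conflict(c1, c2):
            shared = [i for i, (g1, g2) in enumerate(zip(c1, c2)) if g1 == g2]
            return len(shared) >= 2

        # genes: (time code, room code, lecturer code, lecture code)
        a = ["T1", "R2", "L7", "C4"]
        b = ["T1", "R2", "L9", "C8"]   # same time and room -> schedule clash
        print(conflict(a, b))          # True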

  16. The First Results of Testing Methods and Algorithms for Automatic Real-Time Identification of Waveforms from Local Earthquakes at Increased Levels of Man-Induced Noise for the Purposes of Ultra-Short-Term Warning about an Occurred Earthquake

    NASA Astrophysics Data System (ADS)

    Gravirov, V. V.; Kislov, K. V.

    2009-12-01

    The chief hazard posed by earthquakes consists in their suddenness. The number of earthquakes annually recorded is in excess of 100,000; of these, over 1000 are strong ones. Great human losses usually occur because no devices exist for advance warning of earthquakes. It is therefore high time that mobile automatic information systems were developed for the analysis of seismic information at high levels of man-made noise. The systems should operate in real time with the minimum possible computational delays and be able to make fast decisions. The chief premise of the project is that sufficiently complete information about an earthquake can be obtained in real time by examining its first onset as recorded by a single seismic sensor or a local seismic array. The essential difference from existing systems consists in the following: analysis of local seismic data at high levels of man-made noise (that is, when the noise level may be above the seismic signal level), as well as self-contained operation. The algorithms developed during the execution of the project can be used with success in individual personal protection kits and for warning the population in earthquake-prone areas all over the world. The system being developed in this project uses P and S waves as well. The difference in the velocities of these seismic waves permits a technique to be developed for identifying a damaging earthquake. Real-time analysis of first onsets yields the time that remains before surface waves arrive and the damage potential of these waves. Estimates show that, when the distance between the earthquake epicenter and the monitored site is of order 200 km, the time difference between the arrivals of P waves and surface waves will be about 30 seconds, which is quite sufficient for evacuating people from potentially hazardous spaces, insertion of moderators at nuclear power stations, pipeline interlocking, transportation stoppage, warnings issued to rescue services
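
    The quoted figure is easy to verify with assumed crustal propagation speeds (roughly 6 km/s for P waves and 3 km/s for surface waves; both numbers are assumptions for illustration):

        # Back-of-envelope check of the ~30 s warning-time figure quoted above.
        d = 200.0                        # epicentral distance, km
        v_p, v_surf = 6.0, 3.0           # assumed P and surface wave speeds, km/s
        warning = d / v_surf - d / v_p   # time between P arrival and surface waves
        print(f"warning time ~ {warning:.0f} s")   # ~33 s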

  17. Create a Polarized Light Show.

    ERIC Educational Resources Information Center

    Conrad, William H.

    1992-01-01

    Presents a lesson that introduces students to polarized light using a problem-solving approach. After illustrating the concept using a slinky and poster board with a vertical slot, students solve the problem of creating a polarized light show using Polya's problem-solving methods. (MDH)

  18. Pembrolizumab Shows Promise for NSCLC.

    PubMed

    2015-06-01

    Data from the KEYNOTE-001 trial show that pembrolizumab improves clinical outcomes for patients with advanced non-small cell lung cancer, and is well tolerated. PD-L1 expression in at least 50% of tumor cells correlated with improved efficacy.

  1. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independent of the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of the defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  2. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

    In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd to the present-day study of systolic arrays.

  3. Magic Carpet Shows Its Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.

  4. An adaptive algorithm for noise rejection.

    PubMed

    Lovelace, D E; Knoebel, S B

    1978-01-01

    An adaptive algorithm for the rejection of noise artifact in 24-hour ambulatory electrocardiographic recordings is described. The algorithm is based on increased amplitude distortion or increased frequency of fluctuations associated with an episode of noise artifact. The results of application of the noise rejection algorithm on a high noise population of test tapes are discussed.

  5. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the morphological Haar wavelet transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.

  6. Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem.

    PubMed

    Huan Xu; Caramanis, C; Mannor, S

    2012-01-01

    We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.

  7. Application of watersheds algorithms to train wheel tread check and measure systems

    NASA Astrophysics Data System (ADS)

    Shen, Bangxing; Zhang, Haibo; Wang, Wei

    2006-11-01

    An imaging system and a new kind of image processing algorithm (the watershed algorithm) are introduced. First, raw wheel tread data are acquired with a charge-coupled device (CCD); after pretreatment of the data, the watershed algorithm is used to extract the wheel tread abrasion and peeled-off strips, from which a numerical value of the damage is obtained. Experimental results show that the system can locate the edges of wheel tread abrasion and peeled-off strips; that the algorithm is fast, stable, and resistant to interference; and that the recognition system satisfies the manufacturer's requirements.
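
    The family of algorithms involved is easy to demonstrate. The sketch below runs a generic marker-based watershed segmentation with scikit-image on a synthetic binary defect mask; the paper's actual imaging pipeline and parameters are not reproduced here.

        # Generic marker-based watershed sketch with scikit-image on a
        # synthetic binary "defect" mask (illustration only).
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        mask = np.zeros((80, 80), dtype=bool)     # two overlapping synthetic blobs
        mask[20:40, 20:40] = True
        mask[35:60, 45:70] = True

        distance = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(distance, min_distance=10, labels=mask.astype(int))
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

        labels = watershed(-distance, markers, mask=mask)  # flood from the markers
        print("regions found:", labels.max())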

  8. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that takes into account the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometrical information of the ROI (region of interest), considering the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but the software implementation is also easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.

  9. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented, based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: preprocessing and actual processing. In the preprocessing stage, a sample neighborhood database is created using a locality-sensitive hashing data structure. The CT image noise is then removed by a non-local means algorithm based on the sample neighborhoods, which are accessed quickly through locality-sensitive hashing. The experimental results showed that the proposed algorithm greatly reduces execution time compared with NLM, while effectively preserving image edges and details.

  10. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.

  11. Reduction Algorithm

    2010-12-31

    Conventional methods used for modeling a transmission network have resulted in a high degree of error and instability. This methodology condenses the network for analysis purposes without a loss of precision.

  12. Algorithms to detect multiprotein modularity conserved during evolution.

    PubMed

    Hodgkinson, Luqman; Karp, Richard M

    2012-01-01

    Detecting essential multiprotein modules that change infrequently during evolution is a challenging algorithmic task that is important for understanding the structure, function, and evolution of the biological cell. In this paper, we define a measure of modularity for interactomes and present a linear-time algorithm, Produles, for detecting multiprotein modularity conserved during evolution that improves on the running time of previous algorithms for related problems and offers desirable theoretical guarantees. We present a biologically motivated graph theoretic set of evaluation measures complementary to previous evaluation measures, demonstrate that Produles exhibits good performance by all measures, and describe certain recurrent anomalies in the performance of previous algorithms that are not detected by previous measures. Consideration of the newly defined measures and algorithm performance on these measures leads to useful insights on the nature of interactomics data and the goals of previous and current algorithms. Through randomization experiments, we demonstrate that conserved modularity is a defining characteristic of interactomes. Computational experiments on current experimentally derived interactomes for Homo sapiens and Drosophila melanogaster, combining results across algorithms, show that nearly 10 percent of current interactome proteins participate in multiprotein modules with good evidence in the protein interaction data of being conserved between human and Drosophila.

  13. Comparison and improvement of algorithms for computing minimal cut sets

    PubMed Central

    2013-01-01

    Background: Constrained minimal cut sets (cMCSs) have recently been introduced as a framework to enumerate minimal genetic intervention strategies for targeted optimization of metabolic networks. Two different algorithmic schemes (adapted Berge algorithm and binary integer programming) have been proposed to compute cMCSs from elementary modes. However, in their original formulation both algorithms are not fully comparable. Results: Here we show that by a small extension to the integer program both methods become equivalent. Furthermore, based on well-known preprocessing procedures for integer programming we present efficient preprocessing steps which can be used for both algorithms. We then benchmark the numerical performance of the algorithms in several realistic medium-scale metabolic models. The benchmark calculations reveal (i) that these preprocessing steps can lead to an enormous speed-up under both algorithms, and (ii) that the adapted Berge algorithm outperforms the binary integer approach. Conclusions: Generally, both of our new implementations are at least one order of magnitude faster than other currently available implementations. PMID:24191903

  14. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data. PMID:18348941
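
    The weak-classifier choice described above is easy to reproduce in miniature. The sketch below trains AdaBoost over depth-1 decision stumps on stand-in synthetic data with scikit-learn; the actual work used purpose-built stumps with decision rules for mixed categorical and continuous network features.

        # AdaBoost over decision stumps on synthetic stand-in data (scikit-learn).
        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        stump = DecisionTreeClassifier(max_depth=1)   # the weak classifier
        # (use base_estimator= instead of estimator= on scikit-learn < 1.2)
        clf = AdaBoostClassifier(estimator=stump, n_estimators=100, random_state=0)
        clf.fit(Xtr, ytr)
        print("test accuracy:", clf.score(Xte, yte))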

  15. Graphene Oxides Show Angiogenic Properties.

    PubMed

    Mukherjee, Sudip; Sriram, Pavithra; Barui, Ayan Kumar; Nethi, Susheel Kumar; Veeriah, Vimal; Chatterjee, Suvro; Suresh, Kattimuttathu Ittara; Patra, Chitta Ranjan

    2015-08-01

    Angiogenesis, a process resulting in the formation of new capillaries from the pre-existing vasculature, plays a vital role in the development of therapeutic approaches for cancer, atherosclerosis, wound healing, and cardiovascular diseases. In this report, the synthesis, characterization, and angiogenic properties of graphene oxide (GO) and reduced graphene oxide (rGO) are demonstrated through several in vitro and in vivo angiogenesis assays. The results demonstrate that the intracellular formation of reactive oxygen species and reactive nitrogen species, as well as the activation of phospho-eNOS and phospho-Akt, are plausible mechanisms for GO- and rGO-induced angiogenesis. The results altogether suggest the possibility of developing an alternative angiogenic therapeutic approach for the treatment of cardiovascular-related diseases in which angiogenesis plays a significant role.

  16. A baseline algorithm for face detection and tracking in video

    NASA Astrophysics Data System (ADS)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics, and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier work, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains: broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos, with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a Haar-like feature set and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of the bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for this problem.
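
    The detect-then-greedily-match loop described above can be sketched directly with OpenCV's bundled Haar cascade. The thresholds below are illustrative, not the tuned values found through the authors' experimental design.

        # Haar-cascade face detection plus greedy centroid matching across frames.
        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect(frame):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        def greedy_match(prev, curr):
            # pair detections across frames in order of increasing centroid distance
            cp = [(x + w / 2.0, y + h / 2.0) for x, y, w, h in prev]
            cc = [(x + w / 2.0, y + h / 2.0) for x, y, w, h in curr]
            pairs = sorted((np.hypot(cp[i][0] - cc[j][0], cp[i][1] - cc[j][1]), i, j)
                           for i in range(len(cp)) for j in range(len(cc)))
            used_i, used_j, matches = set(), set(), []
            for _, i, j in pairs:
                if i not in used_i and j not in used_j:
                    matches.append((i, j)); used_i.add(i); used_j.add(j)
            return matches  # track identities follow the matched indices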

  17. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied to detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  18. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  19. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, and to compare the subjective results with predictions from several objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.

  20. The red planet shows off

    NASA Astrophysics Data System (ADS)

    Beish, J. D.; Parker, D. C.; Hernandez, C. E.

    1989-01-01

    Results from observations of Mars between November 1987 and September 1988 are reviewed. The observations were part of a program to provide continuous global coverage of Mars in the period surrounding its opposition on September 28, 1988. Observations of Martian clouds, dust storms, the planet's south pole, and the Martian surface are discussed.

  1. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    SciTech Connect

    Ha, Taeyoung . E-mail: tyha@math.snu.ac.kr; Shin, Changsoo . E-mail: css@model.snu.ac.kr

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  2. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
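
    The core of the spectral method is small enough to sketch. The fragment below (a generic illustration, not the authors' code) reorders a random symmetric sparsity pattern by sorting the Fiedler vector of its graph Laplacian and compares a simple bandwidth-style proxy for envelope size before and after.

        # Generic spectral-reordering sketch: sort vertices by the Fiedler vector
        # of the graph Laplacian of the matrix's sparsity pattern.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import laplacian

        A = sp.random(200, 200, density=0.02, random_state=0)
        A = sp.csr_matrix(((A + A.T) != 0).astype(float))   # symmetric pattern

        L = laplacian(A)
        vals, vecs = np.linalg.eigh(L.toarray())  # dense eigensolve is fine at this size
        perm = np.argsort(vecs[:, 1])             # order by the Fiedler vector

        def envelope_proxy(M):                    # sum of row bandwidths (simple proxy)
            M = sp.csr_matrix(M)
            total = 0
            for i in range(M.shape[0]):
                cols = M[i].indices
                if cols.size:
                    total += int(cols.max() - cols.min())
            return total

        B = A[perm, :][:, perm]
        print("envelope proxy before/after:", envelope_proxy(A), envelope_proxy(B))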

  3. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying-size and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.

  4. Vertical compression algorithms for sequentially processed statistical files

    SciTech Connect

    Batory, D.S.

    1984-01-01

    Horizontal data compression eliminates redundancies or regularities that occur within individual records. Suppression of trailing blanks and leading zeros are examples. Vertical compression eliminates regularities that occur across consecutively stored records. Prefix and suffix compression are examples. Two new vertical compression algorithms, VRE and HVRE, are presented in this paper. They are based on a combination of character matrix transposition (where rows are initially identified with records) and horizontal compression algorithms (run-length encoding and Huffman encoding). Experimental and theoretical results are presented that show the performance of VRE and HVRE is superior to that of some reputable commercial algorithms. Specifically, these are the compression algorithms of the ADABAS, IDMS and SHRINK/2 data management systems. VRE and HVRE are best suited for compressing statistical files which are sequentially processed and batch updated. They may also be used for file archival and for compressing randomly processed files as well.

  5. Naive Bayes-Guided Bat Algorithm for Feature Selection

    PubMed Central

    Taha, Ahmed Majid; Mustapha, Aida; Chen, Soong-Der

    2013-01-01

    With the amount of data and information said to double every 20 months or so, feature selection has become highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, and signal processing. In this work, a bio-inspired method, the Bat Algorithm, hybridized with a Naive Bayes classifier is presented. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and compared with three other well-known feature selection algorithms. The discussion focuses on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a smaller number of features, removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved more stable than the other methods and capable of producing more general feature subsets. PMID:24396295

  6. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract a useful shear-wave velocity profile and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, is robust, highly effective, and capable of global search. The algorithm is shown to be feasible and advantageous for Rayleigh wave inversion with both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
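
    The core firefly move is brief enough to show. The sketch below is the generic Yang-style update on a stand-in misfit function (not the authors' tuned inversion code): each firefly drifts toward brighter, i.e. lower-misfit, neighbors, with an attractiveness that decays with squared distance.

        # Generic firefly-algorithm sketch on a stand-in misfit function.
        import numpy as np

        rng = np.random.default_rng(3)
        f = lambda x: np.sum(x**2)                  # stand-in misfit function

        n, dim, beta0, gamma, alpha = 20, 4, 1.0, 1.0, 0.1
        X = rng.uniform(-2, 2, (n, dim))
        for _ in range(100):
            I = np.array([f(x) for x in X])         # lower misfit = brighter
            for i in range(n):
                for j in range(n):
                    if I[j] < I[i]:                 # move firefly i toward brighter j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                                 + alpha * (rng.random(dim) - 0.5))
                        I[i] = f(X[i])
        print("best misfit:", min(f(x) for x in X))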

  7. Adaptive Flocking of Robot Swarms: Algorithms and Properties

    NASA Astrophysics Data System (ADS)

    Lee, Geunho; Chong, Nak Young

    This paper presents a distributed approach for adaptive flocking of swarms of mobile robots that enables to navigate autonomously in complex environments populated with obstacles. Based on the observation of the swimming behavior of a school of fish, we propose an integrated algorithm that allows a swarm of robots to navigate in a coordinated manner, split into multiple swarms, or merge with other swarms according to the environment conditions. We prove the convergence of the proposed algorithm using Lyapunov stability theory. We also verify the effectiveness of the algorithm through extensive simulations, where a swarm of robots repeats the process of splitting and merging while passing around multiple stationary and moving obstacles. The simulation results show that the proposed algorithm is scalable, and robust to variations in the sensing capability of individual robots.

  8. Alternative learning algorithms for feedforward neural networks

    SciTech Connect

    Vitela, J.E.

    1996-03-01

    The efficiency of the backpropagation algorithm for training feedforward multilayer neural networks has given rise to the erroneous belief among many neural network users that it is the only possible way to obtain the gradient of the error in this type of network. The purpose of this paper is to show how alternative algorithms can be obtained within the framework of ordered partial derivatives. Two alternative forward-propagating algorithms are derived in this work which are mathematically equivalent to the BP algorithm. This systematic way of obtaining learning algorithms, illustrated here with this particular type of neural network, can also be used with other types, such as recurrent neural networks.

  9. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  10. "Show me" bioethics and politics.

    PubMed

    Christopher, Myra J

    2007-10-01

    Missouri, the "Show Me State," has become the epicenter of several important national public policy debates, including abortion rights, the right to choose and refuse medical treatment, and, most recently, early stem cell research. In this environment, the Center for Practical Bioethics (formerly, Midwest Bioethics Center) emerged and grew. The Center's role in these "cultural wars" is not to advocate for a particular position but to provide well researched and objective information, perspective, and advocacy for the ethical justification of policy positions; and to serve as a neutral convener and provider of a public forum for discussion. In this article, the Center's work on early stem cell research is a case study through which to argue that not only the Center, but also the field of bioethics has a critical role in the politics of public health policy.

  11. Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.

    2001-01-01

    An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and the forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can therefore be conveniently incorporated into a substructuring (domain decomposition) formulation to achieve parallel computation, where different substructures are handled by different parallel processors.
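
    The assemble-then-factor pattern discussed above is standard and easy to illustrate. The sketch below builds a small symmetric system from triplet (COO) entries, where duplicates are summed on conversion exactly as in finite-element assembly, then performs the symbolic/numeric factorization and forward/backward solve with SciPy; the element matrix is fabricated for illustration.

        # Triplet (COO) assembly of a symmetric system, then a sparse direct solve.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n = 1000
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]])   # fabricated 2x2 element matrix
        rows, cols, vals = [], [], []
        for e in range(n - 1):                      # assemble element contributions
            dofs = [e, e + 1]
            for a in range(2):
                for b in range(2):
                    rows.append(dofs[a]); cols.append(dofs[b]); vals.append(ke[a, b])
        rows.append(0); cols.append(0); vals.append(1e6)   # pin one DOF

        # duplicate (row, col) entries are summed when the COO matrix is converted
        K = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsc()
        lu = splu(K)                        # symbolic + numeric factorization
        x = lu.solve(np.ones(n))            # forward/backward substitution
        print("residual norm:", np.linalg.norm(K @ x - 1.0))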

  12. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  13. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparing with other well-known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101
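    The linear-model fitting step described above reduces to ordinary least squares. The sketch below is hypothetical, with synthetic features standing in for the VDW, H-bonding, desolvation and chelation terms; it is not LISA's actual training procedure.

```python
import numpy as np

# Fit an empirical linear scoring function:
#   affinity ~ w . [vdw, hbond, desolv, metal] + b
# on synthetic placeholder data (one row per complex).
rng = np.random.default_rng(0)
X = rng.random((492, 4))
w_true = np.array([1.5, 2.0, -0.8, 3.0])
y = X @ w_true + 0.5 + 0.05 * rng.standard_normal(492)

A = np.hstack([X, np.ones((len(X), 1))])      # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("weights:", coef[:-1], "intercept:", coef[-1])
```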

  15. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    The estimation strategy has good performance also under degraded adhesion conditions and could be installed on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig: it includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs, the performance of the innovative localisation algorithm has been evaluated: the HIL test rig permitted testing of the proposed algorithm, avoiding expensive (in terms of time and cost) on-track tests, and produced encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to that obtained with SCMT algorithms, currently in use on the Italian railway network.

  16. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view of the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  17. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    PubMed

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many previous qualitative evaluations of digital breast tomosynthesis used phantoms with unrealistic models whose background and noise are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
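    The CNR figure of merit used above has a standard definition that is easy to compute; a minimal sketch with hypothetical ROI coordinates on a synthetic slice:

```python
import numpy as np

# Contrast-to-noise ratio (CNR) for an in-plane ROI:
#   |mean(signal) - mean(background)| / std(background)
def cnr(image, signal_roi, background_roi):
    sig = image[signal_roi]
    bkg = image[background_roi]
    return abs(sig.mean() - bkg.mean()) / bkg.std()

rng = np.random.default_rng(1)
slice_img = rng.normal(100.0, 5.0, (256, 256))
slice_img[100:120, 100:120] += 25.0          # synthetic mass
print(cnr(slice_img,
          (slice(100, 120), slice(100, 120)),   # signal ROI
          (slice(10, 60), slice(10, 60))))      # background ROI
```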

  18. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
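    The Kappa statistic used above to score agreement between ground-truth and predicted maps can be computed directly from a confusion matrix; a minimal sketch:

```python
import numpy as np

# Cohen's kappa: observed agreement corrected for chance agreement.
def kappa(truth, pred):
    classes = np.union1d(truth, pred)
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)))
    for t, p in zip(truth, pred):
        cm[idx[t], idx[p]] += 1                # confusion matrix
    n = len(truth)
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2        # chance agreement
    return (po - pe) / (1 - pe)

truth = np.array([0, 0, 1, 1, 1, 0, 1, 0])
pred  = np.array([0, 1, 1, 1, 0, 0, 1, 0])
print(kappa(truth, pred))
```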

  19. Novel permutation measures for image encryption algorithms

    NASA Astrophysics Data System (ADS)

    Abd-El-Hafiz, Salwa K.; AbdElHaleem, Sherif H.; Radwan, Ahmed G.

    2016-10-01

    This paper proposes two measures for the evaluation of permutation techniques used in image encryption. First, a general mathematical framework for describing the permutation phase used in image encryption is presented. Using this framework, six different permutation techniques, based on chaotic and non-chaotic generators, are described. The two new measures are then introduced to evaluate the effectiveness of permutation techniques. These measures are (1) Percentage of Adjacent Pixels Count (PAPC) and (2) Distance Between Adjacent Pixels (DBAP). The proposed measures are used to evaluate and compare the six permutation techniques in different scenarios. The permutation techniques are applied to several standard images and the resulting scrambled images are analyzed. Moreover, the new measures are used to compare the permutation algorithms on different matrix sizes irrespective of the actual parameters used in each algorithm. The analysis results show that the proposed measures are good indicators of the effectiveness of the permutation technique.
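    As a rough illustration of a DBAP-style measure, the sketch below averages how far a permutation separates originally adjacent pixels; this is an assumed interpretation for illustration, not the paper's exact formula.

```python
import numpy as np

# For each horizontally adjacent pixel pair in the original image, take
# the Manhattan distance between the pair's new positions under the
# permutation, and average.  Larger values mean neighbours end up
# farther apart, i.e. stronger scrambling.
def dbap(perm):
    """perm: (H, W, 2) array mapping pixel (r, c) to its new coordinates."""
    left, right = perm[:, :-1], perm[:, 1:]
    return np.abs(left - right).sum(axis=-1).mean()

# Toy permutation: transpose a 4x4 image, sending (r, c) to (c, r).
h = w = 4
rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
perm = np.stack([cc, rr], axis=-1)
print(dbap(perm))   # every adjacent pair ends up at Manhattan distance 1
```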

  20. Multikernel least mean square algorithm.

    PubMed

    Tobar, Felipe A; Kung, Sun-Yuan; Mandic, Danilo P

    2014-02-01

    The multikernel least-mean-square algorithm is introduced for adaptive estimation of vector-valued nonlinear and nonstationary signals. This is achieved by mapping the multivariate input data to a Hilbert space of time-varying vector-valued functions, whose inner products (kernels) are combined in an online fashion. The proposed algorithm is equipped with novel adaptive sparsification criteria ensuring a finite dictionary, and is computationally efficient and suitable for nonstationary environments. We also show the ability of the proposed vector-valued reproducing kernel Hilbert space to serve as a feature space for the class of multikernel least-squares algorithms. The benefits of adaptive multikernel (MK) estimation algorithms are illuminated in the nonlinear multivariate adaptive prediction setting. Simulations on nonlinear inertial body sensor signals and nonstationary real-world wind signals of low, medium, and high dynamic regimes support the approach. PMID:24807027
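    A heavily simplified single-kernel relative of the approach, kernel LMS with a naive novelty-based sparsification rule, conveys the online dictionary-plus-update structure; it is not the paper's vector-valued multikernel algorithm, and sigma, eta and delta below are toy values.

```python
import numpy as np

def gauss(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma**2))

def klms(X, d, eta=0.2, delta=0.1):
    centres, weights, preds = [X[0]], [eta * d[0]], []
    for x, target in zip(X[1:], d[1:]):
        y = sum(w * gauss(c, x) for c, w in zip(centres, weights))
        preds.append(y)
        # Grow the dictionary only for sufficiently novel inputs,
        # keeping it finite (the sparsification criterion).
        if min(np.linalg.norm(x - c) for c in centres) > delta:
            centres.append(x)
            weights.append(eta * (target - y))
    return np.array(preds)

t = np.linspace(0, 8 * np.pi, 400)
X = np.stack([np.sin(t), np.cos(t)], axis=1)   # 2-D input trajectory
d = np.sin(t + 0.3)                            # prediction target
print("MSE:", np.mean((klms(X, d) - d[1:]) ** 2))
```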

  1. The Origins of Counting Algorithms

    PubMed Central

    Cantlon, Jessica F.; Piantadosi, Steven T.; Ferrigno, Stephen; Hughes, Kelly D.; Barnard, Allison M.

    2015-01-01

    Humans’ ability to ‘count’ by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that non-human primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. Monkeys saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set approximately outnumbered the first set, monkeys spontaneously moved to choose the second set even before it was completely baited. Using a novel Bayesian analysis, we show that monkeys used an approximate counting algorithm to increment and compare quantities in sequence. This algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949

  2. Adaptive snakes using the EM algorithm.

    PubMed

    Nascimento, Jacinto C; Marques, Jorge S

    2005-11-01

    Deformable models (e.g., snakes) perform poorly in many image analysis problems. The contour model is attracted by edge points detected in the image. However, many edge points do not belong to the object contour, preventing the active contour from converging toward the object boundary. A new algorithm is proposed in this paper to overcome this difficulty. The algorithm is based on two key ideas. First, edge points are associated in strokes. Second, each stroke is classified as valid (inlier) or invalid (outlier), and a confidence degree is associated with each stroke. The expectation maximization algorithm is used to update the confidence degrees and to estimate the object contour. It is shown that this is equivalent to the use of an adaptive potential function which varies during the optimization process. Valid strokes receive high confidence degrees, while the confidence degrees of invalid strokes tend to zero during the optimization process. Experimental results are presented to illustrate the performance of the proposed algorithm in the presence of clutter, showing remarkable robustness.

  3. Evaluation of Algorithms for Compressing Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), which contributes JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and working with NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.

  4. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a ⁸⁷Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  5. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000
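    The Precision and Recall metrics defined above are straightforward to compute once the reconstructed and simulated activity are reduced to sets of active patches; a minimal sketch with hypothetical patch indices:

```python
# Precision: fraction of reconstructed activity matching simulated
# sources.  Recall: fraction of simulated sources that are recovered.
# Sets of cortical-patch indices stand in for thresholded CSD maps.
def precision_recall(reconstructed, simulated):
    tp = len(reconstructed & simulated)
    precision = tp / len(reconstructed) if reconstructed else 0.0
    recall = tp / len(simulated) if simulated else 0.0
    return precision, recall

print(precision_recall({3, 7, 12, 40}, {7, 12, 55}))   # (0.5, 0.667)
```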

  6. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity of two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively: it is repeated until the two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565
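    Of the two criteria, PSNR has a closed form that fits in a few lines (SSIM needs windowed statistics and is omitted here); a minimal sketch:

```python
import numpy as np

# Peak signal-to-noise ratio between a reference image and an estimate.
def psnr(reference, estimate, peak=255.0):
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (128, 128))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```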

  7. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a research hot spot, and the filter-u least mean square (FULMS) algorithm is one of its key methods. But for practical reasons and technical limitations, vibration reference signal extraction has always been a difficult problem for the FULMS algorithm. To solve the vibration reference signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed based on the controller structure and the data in the algorithm process, using a vibration response residual signal extracted directly from the vibration structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical, with good vibration suppression performance.
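    The filtered-x structure underlying this family of algorithms can be sketched compactly. Below is a toy FXLMS loop, the simpler FIR relative of FULMS, with hypothetical primary and secondary paths; it illustrates the principle, not the proposed improved algorithm.

```python
import numpy as np

# In FXLMS the reference is filtered through a secondary-path estimate
# before the LMS weight update.
rng = np.random.default_rng(0)
N, L, mu = 4000, 8, 0.01
x = rng.standard_normal(N)                 # reference signal
primary = np.array([0.8, 0.4, -0.2])       # primary path (disturbance)
secondary = np.array([0.9, 0.3])           # secondary path (actuator -> sensor)

d = np.convolve(x, primary)[:N]            # disturbance at the error sensor
xf = np.convolve(x, secondary)[:N]         # filtered-x signal
w = np.zeros(L)                            # adaptive FIR controller
y_buf = np.zeros(len(secondary))           # recent controller outputs
e_hist = []
for n in range(L, N):
    y = w @ x[n - L + 1:n + 1][::-1]       # controller output
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] - secondary @ y_buf           # residual at the error sensor
    w += mu * e * xf[n - L + 1:n + 1][::-1]   # FXLMS weight update
    e_hist.append(e)
print("MSE first 200:", np.mean(np.square(e_hist[:200])))
print("MSE last 200: ", np.mean(np.square(e_hist[-200:])))
```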

  8. An improved filter-u least mean square vibration control algorithm for aircraft framework

    NASA Astrophysics Data System (ADS)

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a research hot spot, and the filter-u least mean square (FULMS) algorithm is one of its key methods. But for practical reasons and technical limitations, vibration reference signal extraction has always been a difficult problem for the FULMS algorithm. To solve the vibration reference signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed based on the controller structure and the data in the algorithm process, using a vibration response residual signal extracted directly from the vibration structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical, with good vibration suppression performance.

  9. A pegging algorithm for separable continuous nonlinear knapsack problems with box constraints

    NASA Astrophysics Data System (ADS)

    Kim, Gitae; Wu, Chih-Hang

    2012-10-01

    This article proposes an efficient pegging algorithm for solving separable continuous nonlinear knapsack problems with box constraints. A well-known pegging algorithm for solving this problem is the Bitran-Hax algorithm, a preferred choice for large-scale problems. However, at each iteration, it must calculate an optimal dual variable and update all free primal variables, which is time consuming. The proposed algorithm checks the box constraints implicitly using the bounds on the Lagrange multiplier, without explicitly calculating primal variables at each iteration, and updates the dual solution in a more efficient manner. Results of computational experiments show that the proposed algorithm consistently outperforms the Bitran-Hax algorithm in all baseline tests and two real-time application models. The proposed algorithm shows significant potential for many other mathematical models in real-world applications with straightforward extensions.
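    A common way to exploit bounds on the Lagrange multiplier in this problem class is to search for the multiplier directly; the sketch below does so by bisection on a separable quadratic instance. It illustrates the structure the pegging approach exploits, not the paper's algorithm itself.

```python
import numpy as np

# Minimize sum(a_i/2 x_i^2 - b_i x_i) s.t. sum(x) = c, lo <= x <= hi.
# For a fixed multiplier lam, the box-clipped stationary point is
# x_i(lam) = clip((b_i - lam)/a_i, lo_i, hi_i), monotone in lam.
def solve(a, b, lo, hi, c, tol=1e-10):
    def x_of(lam):
        return np.clip((b - lam) / a, lo, hi)
    lam_lo, lam_hi = (b - a * hi).min(), (b - a * lo).max()
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        if x_of(lam).sum() > c:        # sum decreases as lam grows
            lam_lo = lam
        else:
            lam_hi = lam
    return x_of(0.5 * (lam_lo + lam_hi))

a = np.array([1.0, 2.0, 4.0])
b = np.array([3.0, 1.0, 2.0])
x = solve(a, b, lo=np.zeros(3), hi=np.ones(3), c=1.5)
print(x, x.sum())
```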

  10. A novel blinding digital watermark algorithm based on lab color space

    NASA Astrophysics Data System (ADS)

    Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao

    2010-02-01

    A blind digital image watermark algorithm must extract the watermark without any information beyond the watermarked image itself. But most current blind watermark algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermark algorithm based on the Lab color space, which does not have the disadvantage mentioned above. The algorithm first marks the watermark region size and position by embedding some regular blocks, called anchor points, in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can be easily extracted even after the image has been cropped or scaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. This algorithm has already been used in a copyright protection project and works very well.

  11. A simple, pipelined algorithm for large, irregular all-gather problems.

    SciTech Connect

    Traff, J. L.; Ripke, A.; Siebert, C.; Balaji, P.; Thakur, R.; Gropp, W.; Mathematics and Computer Science; NEC Lab.; Univ. of Illinois

    2008-01-01

    We present and evaluate a new, simple, pipelined algorithm for large, irregular all-gather problems, useful for the implementation of the MPI_Allgatherv collective operation of MPI. The algorithm can be viewed as an adaptation of a linear ring algorithm for regular all-gather problems on single-ported, clustered multiprocessors to the irregular problem. Compared to the standard ring algorithm, whose performance is dominated by the largest data size broadcast by a process (times the number of processes), the performance of the new algorithm depends only on the total amount of data over all processes. The new algorithm has been implemented within different MPI libraries. Benchmark results on NEC SX-8, Linux clusters with InfiniBand and Gigabit Ethernet, Blue Gene/P, and SiCortex systems show huge performance gains, in accordance with the expected behavior.
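    The linear ring algorithm that the pipelined variant builds on is easy to simulate in plain code; the sketch below shows the block movement only, without the paper's pipelining of large irregular blocks.

```python
# Ring all-gather: in step s, each process forwards to its right
# neighbour the block it received in step s-1.  After p-1 steps every
# process holds all p blocks.
def ring_allgather(blocks):
    p = len(blocks)
    have = [[blocks[i]] for i in range(p)]   # each rank starts with its own block
    send = list(blocks)                      # block each rank forwards next
    for _ in range(p - 1):
        incoming = [send[(i - 1) % p] for i in range(p)]  # receive from left
        for i in range(p):
            have[i].append(incoming[i])
        send = incoming                      # forward what was just received
    return have

print(ring_allgather(["a", "bb", "ccc", "d"]))   # irregular block sizes
```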

  12. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experimental results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS.

  13. Research on ADV-Hop localization algorithm in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Zhao, Shijun; Xu, Xiulan; Zhang, Zhaohui; Sun, Meiling

    2008-10-01

    Wireless sensor networks (WSNs) support many important applications, including environmental monitoring, military applications and disaster management. In many applications, sensors are assumed to know their absolute locations. Several localization methods for WSNs have been proposed. In these methods, nodes equipped with GPS for precise location information, called anchor nodes, are used to derive the locations of other nodes. Most of the recent work focuses on increasing the accuracy of position estimation. In this paper, to address the high communication cost and average positioning error of the DV-hop algorithm, an advanced algorithm called ADV-hop is proposed. Simulations are performed with the network simulator NS2. The simulation results show that the ADV-hop algorithm has lower communication cost and smaller average positioning error than the DV-hop algorithm, which makes it more suitable for node localization in WSNs.
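    For context, the baseline DV-hop scheme that ADV-hop refines can be sketched in three steps: flood hop counts, convert hops to distances with an average hop size, and multilaterate. The hop counts below are hypothetical flooding results.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
hops_between_anchors = {(0, 1): 4, (0, 2): 4, (1, 2): 5}
hops_to_unknown = np.array([2, 3, 3])   # hops from each anchor to the node

# 1. Average hop size from known inter-anchor distances and hop counts.
total_d = sum(np.linalg.norm(anchors[i] - anchors[j])
              for i, j in hops_between_anchors)
hop_size = total_d / sum(hops_between_anchors.values())

# 2. Hop-count ranges, then multilateration: subtracting the first
#    circle equation |p - a_i|^2 = r_i^2 from the others is linear in p.
r = hop_size * hops_to_unknown
A = 2.0 * (anchors[1:] - anchors[0])
b = (r[0]**2 - r[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", pos)
```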

  14. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal is to find an optimal weight set for the network during training. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, which combines the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed cuckoo search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.

  15. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS

    PubMed Central

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experimental results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048

  16. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O
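    The spirit of the kd-tree acceleration can be conveyed with a k-means-style iteration whose nearest-centre queries go through a kd-tree; note that the filtering algorithm of Kanungo et al. actually prunes candidate centres against tree cells, which is more involved than this stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

# One k-means step with kd-tree-based nearest-centre assignment.
def kmeans_step(points, centers):
    _, assign = cKDTree(centers).query(points)    # nearest centre per point
    new_centers = np.array([points[assign == k].mean(axis=0)
                            if np.any(assign == k) else centers[k]
                            for k in range(len(centers))])
    return new_centers, assign

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
centers = pts[rng.choice(len(pts), 2, replace=False)]
for _ in range(10):
    centers, assign = kmeans_step(pts, centers)
print(centers)
```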

  17. Audio Watermarking Algorithm Based on Centroid and Statistical Features

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Yin, Xiong

    Experimental testing shows that the relative relation in the number of samples among neighboring bins and the audio frequency centroid are two features robust to Time Scale Modification (TSM) attacks. Accordingly, an audio watermark algorithm based on the frequency centroid and the histogram is proposed, embedding the watermark by modifying the frequency coefficients. The audio histogram with equal-sized bins is extracted from a selected frequency coefficient range referred to the audio centroid. The watermarked audio signal is perceptibly similar to the original one. The experimental results show that the algorithm is very robust to resampling, TSM and a variety of common attacks. Subjective quality evaluation of the algorithm shows that the embedded watermark introduces low, inaudible distortion of the host audio signal.

  18. The Loop Algorithm

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd

    1998-03-01

    Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.

  19. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community to combine different modelling results and thereby reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at pixel level. In the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements shows that the ensemble retains a good correlation compared to the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensembles for aerosol optical depth are presented, discussed, and validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis yields better annual mean maps. The benefit of using pixel-level uncertainties is analysed.
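    Inverse-variance weighting is the standard way to let pixel-level uncertainties reduce ensemble uncertainty; a one-pixel sketch with synthetic values:

```python
import numpy as np

aod = np.array([0.21, 0.25, 0.18])   # AOD from three algorithms, one pixel
unc = np.array([0.05, 0.08, 0.04])   # their reported 1-sigma uncertainties

w = 1.0 / unc**2
ens = np.sum(w * aod) / np.sum(w)    # inverse-variance weighted mean
ens_unc = np.sqrt(1.0 / np.sum(w))   # uncertainty of the weighted mean
print(f"ensemble AOD = {ens:.3f} +/- {ens_unc:.3f}")
```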

  20. The effect of sub-surface volume scattering on the accuracy of ice-sheet altimeter retracking algorithms

    NASA Technical Reports Server (NTRS)

    Davis, Curt H.

    1993-01-01

    The NASA and ESA retracking algorithms are compared with an algorithm based upon a combined surface and volume (S/V) scattering model. First, the S/V, NASA, and ESA algorithms were used to retrack over 400,000 altimeter return waveforms from the Greenland and Antarctic ice sheets. The surface elevations from the S/V algorithm were compared with the elevations produced by the NASA and ESA algorithms to determine the relative accuracy of these algorithms when subsurface volume-scattering occurs. The results show that the NASA algorithm produced surface elevations within 35 to 50 cm of the S/V algorithm, while the performance of the ESA algorithm was slightly worse. Next, by analyzing several thousand satellite crossover points from the Antarctic data set, we determined the retracking algorithm that produced the most repeatable surface elevations. The elevations derived from the S/V algorithm had the smallest RMS error for the region of the East Antarctic plateau examined here. The ESA algorithm produced erroneous estimates of elevation change when seasonal variations were present; it measured a 0.7 to 1.6 m change in elevation over a 6-month period on the East Antarctic plateau, where accumulation rates are only 10 cm/year.

  1. Analysis of multigrid algorithms for nonsymmetric and indefinite elliptic problems

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, J.

    1988-10-01

    We prove some new estimates for the convergence of multigrid algorithms applied to nonsymmetric and indefinite elliptic boundary value problems. We provide results for the so-called 'symmetric' multigrid schemes. We show that for the variable V-cycle and the W-cycle schemes, multigrid algorithms with any amount of smoothing on the finest grid converge at a rate that is independent of the number of levels or unknowns, provided that the initial grid is sufficiently fine. We show that the V-cycle algorithm also converges (under appropriate assumptions on the coarsest grid) but at a rate which may deteriorate as the number of levels increases. This deterioration for the V-cycle may occur even in the case of full elliptic regularity. Finally, the results of numerical experiments are given which illustrate the convergence behavior suggested by the theory.

  2. Analysis of exclusive kT jet algorithms in electron-positron annihilation

    NASA Astrophysics Data System (ADS)

    Chay, Junegone; Kim, Chul; Kim, Inchol

    2015-10-01

    We study the factorization of the dijet cross section in e+e- annihilation using the generalized exclusive jet algorithm which includes the cone-type, the JADE, the kT, the anti-kT and the Cambridge/Aachen jet algorithms as special cases. In order to probe the characteristics of the jet algorithms in a unified way, we consider the generalized kT jet algorithm with an arbitrary weight of the energies, in which various types of the kT-type algorithms are included for specific values of the parameter. We show that the jet algorithm respects the factorization property for the parameter α < 2. The factorized jet function and the soft function are well defined and infrared safe for all the jet algorithms except the kT algorithm. The kT algorithm (α = 2) breaks the factorization since the jet and the soft functions are infrared divergent and are not defined for α = 2, though the dijet cross section is infrared finite. In the jet algorithms which enable factorization, we give a phenomenological analysis using the resummed and the fixed-order results.

  3. Active Control of Automotive Intake Noise under Rapid Acceleration using the Co-FXLMS Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Hae-Jin; Lee, Gyeong-Tae; Oh, Jae-Eung

    Methods of reducing automotive intake noise can be classified into passive and active control techniques. Passive control has a limited noise reduction effect in the low frequency range (below 500 Hz) and is constrained by the space of the engine room, whereas active control can overcome these limitations. Active control techniques mostly use the Least-Mean-Square (LMS) algorithm, because the LMS algorithm can easily obtain the complex transfer function in real time, particularly when the Filtered-X LMS (FXLMS) algorithm is applied to an active noise control (ANC) system. However, the convergence performance of the LMS algorithm decreases significantly when the FXLMS algorithm is applied to the active control of intake noise under rapidly accelerating driving conditions. Therefore, in this study, the Co-FXLMS algorithm was proposed to improve the control performance of the FXLMS algorithm during rapid acceleration. The Co-FXLMS algorithm is realized by using an estimate of the cross correlation between the adaptation error and the filtered input signal to control the step size. The performance of the Co-FXLMS algorithm is presented in comparison with that of the FXLMS algorithm. Experimental results show that active noise control using Co-FXLMS is effective in reducing automotive intake noise during rapid acceleration.

  4. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying this influence is complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show that the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.

  5. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
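    Orthogonal matching pursuit, the SC algorithm singled out above for its speed, is available off the shelf; a small usage sketch on synthetic sparse data (assuming scikit-learn):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Recover a 3-sparse linear model from noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w = np.zeros(50)
w[[3, 17, 41]] = [2.0, -1.5, 0.7]            # sparse ground truth
y = X @ w + 0.01 * rng.standard_normal(200)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(X, y)
print("recovered support:", np.flatnonzero(omp.coef_))
```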

  6. PMCR-Miner: parallel maximal confident association rules miner algorithm for microarray data set.

    PubMed

    Zakaria, Wael; Kotb, Yasser; Ghaleb, Fayed F M

    2015-01-01

    The MCR-Miner algorithm is aimed at mining all maximal high-confidence association rules from the microarray up/down-expressed genes data set. This paper introduces two new algorithms: IMCR-Miner and PMCR-Miner. The IMCR-Miner algorithm is an extension of the MCR-Miner algorithm with some improvements. These improvements implement a novel way to store the samples of each gene as a list of unsigned integers in order to benefit from bitwise operations. In addition, the IMCR-Miner algorithm overcomes the drawbacks faced by the MCR-Miner algorithm by setting some restrictions to avoid repeated comparisons. The PMCR-Miner algorithm is a parallel version of the new proposed IMCR-Miner algorithm. The PMCR-Miner algorithm is based on shared-memory systems and task parallelism, where no time is needed in the process of sharing and combining data between processors. The experimental results on real microarray data sets show that the PMCR-Miner algorithm is more efficient and scalable than its counterparts.
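    The bitwise storage idea reduces rule support counting to an AND plus a popcount; a minimal sketch with hypothetical sample indices (int.bit_count requires Python >= 3.10):

```python
# Encode the sample set of each gene as an integer bitset.
gene_a = 0
for s in (0, 2, 3, 7):          # samples where gene A is up-expressed
    gene_a |= 1 << s
gene_b = 0
for s in (0, 3, 5, 7):          # samples where gene B is up-expressed
    gene_b |= 1 << s

both = gene_a & gene_b          # samples supporting the rule A -> B
support = both.bit_count()      # popcount
confidence = support / gene_a.bit_count()
print(f"support={support}, confidence={confidence:.2f}")
```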

  7. A comparative analysis of biclustering algorithms for gene expression data.

    PubMed

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V

    2013-05-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters.

  8. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in

  9. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  10. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  11. Multiple sequence alignment algorithm based on a dispersion graph and ant colony algorithm.

    PubMed

    Chen, Weiyang; Liao, Bo; Zhu, Wen; Xiang, Xuyu

    2009-10-01

    In this article, we describe a representation for the process of multiple sequence alignment (MSA) and use it to solve the MSA problem. With this representation, we take every possible alignment into account by defining the representation of gap insertion, the heuristic information of every candidate path, and the scoring rule. On the basis of the proposed multidimensional graph, we use the ant colony algorithm to find a better path, which denotes a better alignment. We present instances of the three-dimensional and four-dimensional graphs and introduce a special planar representation to analyze MSA. The software is still experimental, and we give an example of finding the best alignment by means of the three-dimensional graph and the ant colony algorithm. Experimental results show that our method can improve solution quality on MSA benchmarks. PMID:19130503

  12. Evaluation of Demons- and FEM-Based Registration Algorithms for Lung Cancer.

    PubMed

    Yang, Juan; Li, Dengwang; Yin, Yong; Zhao, Fen; Wang, Hongjun

    2016-04-01

    We evaluated and compared the accuracy of 2 deformable image registration algorithms in 4-dimensional computed tomography images for patients with lung cancer. Ten patients with non-small cell lung cancer or small cell lung cancer were enrolled in this institutional review board-approved study. The displacement vector fields relative to a specific reference image were calculated by using the diffeomorphic demons (DD) algorithm and the finite element method (FEM)-based algorithm. The registration accuracy was evaluated by using normalized mutual information (NMI), the sum of squared intensity difference (SSD), modified Hausdorff distance (dH_M), and ratio of gross tumor volume (rGTV) difference between reference image and deformed phase image. We also compared the registration speed of the 2 algorithms. Across all patients, the FEM-based algorithm showed a stronger ability to align the 2 images than the DD algorithm. The means (±standard deviation) of NMI were 0.86 (±0.05) and 0.90 (±0.05) using the DD algorithm and the FEM-based algorithm, respectively. The means of SSD were 0.006 (±0.003) and 0.003 (±0.002) using the DD algorithm and the FEM-based algorithm, respectively. The means of dH_M were 0.04 (±0.02) and 0.03 (±0.03) using the DD algorithm and the FEM-based algorithm, respectively. The means of rGTV were 3.9% (±1.01%) and 2.9% (±1.1%) using the DD algorithm and the FEM-based algorithm, respectively. However, the FEM-based algorithm requires more time than the DD algorithm, with an average running time of 31.4 minutes compared to 21.9 minutes across all patients. These preliminary results showed that the FEM-based algorithm was more accurate than the DD algorithm, at the cost of registration speed. PMID:25817713
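    NMI, the first metric above, can be computed from a joint intensity histogram; a minimal sketch using the common convention NMI = (H(A) + H(B)) / H(A, B):

```python
import numpy as np

# Normalized mutual information between two images.
def nmi(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)      # marginal distributions
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return (h(pa) + h(pb)) / h(p)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(nmi(img, img))                      # identical images: NMI = 2
print(nmi(img, rng.random((64, 64))))     # independent images: close to 1
```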

  13. An enhanced algorithm to estimate BDS satellite's differential code biases

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei

    2016-02-01

    This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of ionospheric observables, and a misclosure constraint for different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs: the mean day-to-day scatter is about 0.19 ns, and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm from the Institute of Geodesy and Geophysics, China (IGGDCB) was also used to process the same dataset. Results show that the DCB difference between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the DCB difference between the results of the IGGDCB algorithm and the same products. In addition, we find that the day-to-day scatter of BDS IGSO satellites is noticeably lower than that of GEO and MEO satellites, and that a significant bias exists in the daily DCB values of GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimating the satellite DCBs of multiple GNSS systems.

  14. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by including relevant arithmetical subskills, i.e., arithmetic fact knowledge, which is needed for complex calculation and is itself known to depend on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586

  15. Select algorithm for local certificate repository of self-organized key management scheme in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Liu, Shizhong; Zhang, Zongyun

    2013-07-01

    Based on the Maximum Degree Construction algorithm, a new selection algorithm is proposed in this paper. In the algorithm, each node and its neighbors issue certificates to each other to generate the local in-degree and out-degree certificate repositories. Similar to the ant colony algorithm, it finds the certificate chain between the source node and the destination node by repeatedly selecting, starting from the source, the node that has been certified the most times. The algorithm reduces the complexity of the selection, provides a guarantee of finding the certificate chain, and saves storage space as well. Finally, this paper presents a simulation of the algorithm, and the simulation results show that it is an effective selection algorithm for local certificate repositories.

  16. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are preserved during the evolution of its adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to TSPs with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
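
    For orientation, here is a minimal sketch of the pheromone update that such strategies modify. The multiplicative boost on "critical" edges stands in for the Physarum-derived reinforcement and is an illustrative assumption, not the paper's actual PMM coupling.

    ```python
    def update_pheromone(tau, tours, lengths, rho=0.5, q=1.0, critical=(), boost=1.5):
        """Evaporate, deposit along each ant's tour, then boost critical edges."""
        tau *= (1.0 - rho)                          # evaporation
        for tour, length in zip(tours, lengths):
            for i, j in zip(tour, tour[1:] + tour[:1]):
                tau[i, j] += q / length             # shorter tours deposit more
                tau[j, i] = tau[i, j]               # keep symmetric for symmetric TSP
        for i, j in critical:
            tau[i, j] *= boost                      # extra reinforcement (PMACO idea)
            tau[j, i] = tau[i, j]
        return tau
    ```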

  17. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing fields of logistics and supply chain management, one of the key problems for a decision support system is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be interpreted as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem that cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA), and hierarchical multiplex structure (HIMS). Most of these methods are time consuming and run a high risk of settling in a local optimum. In this paper, a new search algorithm for the MDVSP is proposed based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  18. Adaptive improved natural gradient algorithm for blind source separation.

    PubMed

    Liu, Jian-Qiang; Feng, Da-Zheng; Zhang, Wei-Wei

    2009-03-01

    We propose an adaptive improved natural gradient algorithm for blind separation of independent sources. First, inspired by the well-known backpropagation algorithm, we incorporate a momentum term into the natural gradient learning process to accelerate the convergence rate and improve the stability. Then an estimation function for the adaptation of the separation model is obtained to adaptively control a step-size parameter and a momentum factor. The proposed natural gradient algorithm with variable step-size parameter and variable momentum factor is therefore particularly well suited to blind source separation in a time-varying environment, such as an abruptly changing mixing matrix or signal power. The expected improvement in the convergence speed, stability, and tracking ability of the proposed algorithm is demonstrated by extensive simulation results in both time-invariant and time-varying environments. The ability of the proposed algorithm to separate extremely weak or badly scaled sources is also verified. In addition, simulation results show that the proposed algorithm is suitable for separating mixtures of many sources (e.g., the number of sources is 10) in the complete case.
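
    A minimal sketch of the underlying update rule follows, assuming a fixed tanh nonlinearity (suited to super-Gaussian sources) and a fixed momentum factor; the paper's contribution, the online adaptation of the step size and momentum factor, is not reproduced here.

    ```python
    import numpy as np

    def natural_gradient_step(W, x, mu=0.01, alpha=0.9, dW_prev=None):
        """One natural-gradient BSS update with momentum:
        dW = mu * (I - f(y) y^T) W + alpha * dW_prev."""
        y = W @ x                                   # separated outputs, shape (n, T)
        T = x.shape[1]
        I = np.eye(W.shape[0])
        grad = (I - np.tanh(y) @ y.T / T) @ W       # natural gradient direction
        dW = mu * grad + (alpha * dW_prev if dW_prev is not None else 0.0)
        return W + dW, dW
    ```

    A caller would iterate this over blocks of mixed samples, feeding the returned dW back in as dW_prev.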

  19. Statistical Evaluation of the Performance of Energy Surface Search Algorithms

    NASA Astrophysics Data System (ADS)

    Horoi, Mihai; Jackson, Koblar A.

    2001-03-01

    In the last few years several new energy surface search algorithms have been proposed, including Genetic Algorithms (GA), Basin-Hopping Monte Carlo, and a single-parent GA (see I. Rata et al., Phys. Rev. Lett. 85, 546 (2000)). Each time a new algorithm was presented, the authors claimed better performance by finding lower minima for previously studied clusters. However, it was not clear whether the better result was a consequence of a better algorithm or of more patience in searching the configuration space. We have carried out a statistical evaluation of all these algorithms and find that the distribution of the number of search steps required to locate the global minimum from a random starting point is an exponential for each method, with the average equal to the standard deviation. This behavior is a result of the random steps made by each method in searching the configuration space. Understanding the nature of the distribution allows the performance of the methods to be compared statistically and suggests possible improvements. We also show that in some cases (small clusters with Lennard-Jones interactions) a completely random search starting in a "small" box can be more efficient than any of the more complex algorithms.

  20. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when IRFPAs are used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are always fixed; the latter are caused by temperature drift and their positions are always changing. The traditional radiometric-calibration-based bad pixel detection and compensation algorithm is valid only for fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and replaced by the filtered value. However, missed corrections and false corrections often happen when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are first detected by the PCNN; image sequences are then used periodically to confirm the real bad pixels and exclude the false ones; finally, bad pixels are replaced by the filtered result. Experiments on real infrared images obtained from a camera show the effectiveness of the proposed algorithm.
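
    As a baseline for comparison, here is a sketch of the AMF-style correction that the PCNN approach improves on: pixels that deviate strongly from their local median are flagged and replaced. The 3x3 window and the threshold factor k are assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def correct_bad_pixels(frame, k=3.0):
        """Replace pixels deviating from the 3x3 median by more than k sigma."""
        img = frame.astype(float)
        med = median_filter(img, size=3)
        resid = img - med
        bad = np.abs(resid) > k * resid.std()       # crude global outlier test
        img[bad] = med[bad]                         # replace with the filtered value
        return img, bad
    ```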

  1. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; Wu, Kun -Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems efficiently on real networks, such as maximal clique finding. In many real-world applications, networks change over time; as a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. For a graph of 16 million vertices, we observe relative throughputs reaching a million times those of the non-incremental algorithms.
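
    For reference, the non-incremental baseline that the paper compares against can be written as a simple peeling loop, as in the sketch below; the incremental subgraph-location and update machinery is the paper's contribution and is not reproduced here.

    ```python
    def core_numbers(adj):
        """k-core decomposition by peeling; adj maps vertex -> set of neighbors."""
        deg = {v: len(ns) for v, ns in adj.items()}
        core, remaining, k = {}, set(adj), 0
        while remaining:
            peel = [v for v in remaining if deg[v] <= k]
            if not peel:
                k += 1                              # nothing peelable: raise k
                continue
            for v in peel:
                core[v] = k                         # v's core number is the current k
                remaining.discard(v)
                for u in adj[v]:
                    if u in remaining:
                        deg[u] -= 1
        return core
    ```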

  2. ICESat-2 / ATLAS Flight Science Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.

    2013-12-01

    This Simulator makes it possible to check all logic paths that could be encountered by the Algorithms on orbit. In addition, the NASA airborne instrument MABEL is collecting data with characteristics similar to what ATLAS will see, and MABEL data are being used to test the ATLAS Receiver Algorithms. Further verification will be performed during integration and testing of the ATLAS instrument and during environmental testing of the full ATLAS instrument. Results from testing to date show that the Receiver Algorithms can handle a wide range of signal and noise levels, with very good sensitivity at relatively low signal-to-noise ratios. In addition, preliminary tests have demonstrated, using the ICESat-2 Science Team's selected land ice and sea ice test cases, the capability of the Algorithms to successfully find and telemeter the surface echoes. In this presentation we describe the ATLAS Flight Science Receiver Algorithms and the Software Simulator, and present results of the testing to date. The onboard databases (DEM, DRM, and the Surface Reference Mask) are being developed at the University of Texas at Austin as part of the ATLAS Flight Science Receiver Algorithms. Verification of the onboard databases is being performed by ATLAS Receiver Algorithms team members Claudia Carabajal and Jack Saba.

  3. Compound algorithm for restoration of heavy turbulence-degraded image for space target

    NASA Astrophysics Data System (ADS)

    Wang, Liang-liang; Wang, Ru-jie; Li, Ming; Kang, Zi-qian; Xu, Xiao-qin; Gao, Xin

    2012-11-01

    Restoration of atmospheric-turbulence-degraded images is a pressing problem in the field of astronomical space technology. The point spread function of turbulence is unknown, changes with time, and is hard to describe with mathematical models; in addition, various kinds of noise (such as sensor noise) are introduced during the imaging process. The image of a space target is therefore edge-blurred and heavily noised, which makes it difficult for any single restoration algorithm to meet the requirements of restoration. Focusing on the fact that images of space targets acquired by ground-based optical telescopes are heavily noised and turbulence-degraded, this paper discusses the adjustment and reformation of various algorithm structures as well as the selection of various parameters, combining a nonlinear filter algorithm based on the spatial characteristics of noise, a regularization-based restoration algorithm for heavily turbulence-degraded images of space targets, and an EM restoration algorithm based on statistical theory. In order to test the validity of the compound algorithm, a series of restoration experiments was performed on heavily noised, turbulence-degraded images of space targets. The experimental results show that the new compound algorithm achieves noise restriction and detail preservation simultaneously, and is effective and practical. Moreover, the definition measures and relative definition measures show that the new compound algorithm is better than the traditional algorithms.

  4. Benchmarking of data fusion algorithms in support of earth observation based Antarctic wildlife monitoring

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.

    2016-03-01

    Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms (Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion) with resolution-enhancing a series of single-date QuickBird-2 and Worldview-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performance. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.

  5. Software for portable laser light show system

    NASA Astrophysics Data System (ADS)

    Buruchin, Dmitrey J.; Leonov, Alexander F.

    1995-04-01

    The portable laser light show system LS-3500-10M is connected to the parallel port of an IBM PC/AT-compatible computer. The computer outputs digital control data describing the images, and a specially designed control device converts the digital data coming from the parallel port into the analog signal driving the scanner. Even the capabilities of an inexpensive 286 computer are quite sufficient for laser graphics control. The scanning technology used in the LS-3500-10M laser graphics system differs essentially from the widespread systems based on galvanometers with a mobile core or a mobile magnet; such devices work on the same principle as an electrically driven servo-mechanism. As the scanner we use an elastic system with hydraulically damped oscillations and an open loop. For most laser graphics applications this system provides satisfactory precision and speed of scanning. The LS-3500-10M software gives the user the ability to create and play his own laser graphics demonstrations on a PC. It is possible to render recognizable text and pictures in different styles, as well as 3D and abstract animation; all types of demonstrations can be mixed in a slide-show, and time synchronization is supported. The software has the following features: (1) different types of text output, with a built-in text editor for typing and editing textual information; different fonts can be used to display text, and the user can create his own fonts using a specially developed font editor; (2) an editor of 3D animation with a library of predefined shapes; (3) abstract animation provided by software routines; (4) support of different graphics file formats (PCX or DXF), with an original algorithm for raster image tracing; (5) a built-in slide-show editor.

  6. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.
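
    A generic sketch of the evolutionary loop behind such systems follows, over bit strings with tournament selection, one-point crossover, and bit-flip mutation; the circuit-construction programming language, parallel evaluation, and domain-specific fitness of the actual system are not reproduced, and all parameter values here are illustrative.

    ```python
    import random

    def genetic_algorithm(fitness, length, pop_size=50, gens=100, pm=0.02):
        """Maximize fitness(bits) over bit strings of the given length."""
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(gens):
            nxt = sorted(pop, key=fitness, reverse=True)[:2]     # elitism
            while len(nxt) < pop_size:
                p1 = max(random.sample(pop, 3), key=fitness)     # tournament selection
                p2 = max(random.sample(pop, 3), key=fitness)
                cut = random.randrange(1, length)
                child = p1[:cut] + p2[cut:]                      # one-point crossover
                nxt.append([b ^ (random.random() < pm) for b in child])  # mutation
            pop = nxt
        return max(pop, key=fitness)
    ```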

  7. Fast Algorithm for Continuous Monitoring with Ambient Noise

    NASA Astrophysics Data System (ADS)

    Martin, E. R.; Lindsey, N.; Biondi, B. C.; Chang, J. P.; Ajo Franklin, J. B.; Dou, S.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Wagner, A. M.; Bjella, K.

    2015-12-01

    A common approach to analyzing ambient seismic noise involves O(n^2) pairwise cross-correlations of n sensors. Following the cross-correlations, the resulting coherent waveforms are synthesized into a velocity estimate, often in the form of a dispersion image. As we move toward larger surveys and arrays for continuous subsurface monitoring, this computation can become prohibitively expensive. We show that theoretically equivalent results can be achieved by a simple algorithm that skips the cross-correlations and scales as O(n). Additionally, this algorithm is embarrassingly parallel and significantly cheaper than the commonly used algorithms. We demonstrate the algorithm on two field data sets: (1) a continuously recording linear trenched distributed acoustic sensing (DAS) array designed as a pilot test for a permafrost thaw monitoring system, and (2) the Long Beach Array, an irregularly spaced 3D array. The results show superior performance in both speed and numerical accuracy. An open-source implementation of this algorithm is available.

  8. Greedy heuristic algorithm for solving series of eee components classification problems*

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with a greedy agglomerative heuristic that allows solving series of problems. Such an algorithm is useful for solving k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the result decreases insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  9. Distributed edge detection algorithm based on wavelet transform for wireless video sensor network

    NASA Astrophysics Data System (ADS)

    Li, Qiulin; Hao, Qun; Song, Yong; Wang, Dongsheng

    2010-12-01

    Edge detection algorithms are critical to image processing and computer vision. Traditional edge detection algorithms are not suitable for a wireless video sensor network (WVSN), in which the nodes have limited computational capability and resources. In this paper, a distributed edge detection algorithm based on the wavelet transform is proposed for WVSNs. The wavelet transform decomposes the image into several parts, which are then assigned to different nodes over the wireless network. Each node runs the edge detection algorithm on its sub-image, and all results are sent to the sink node, where fusion and synthesis, including image binarization and edge connection, are performed to output the final edge image. The lifting scheme and a parallel distributed algorithm are adopted to improve efficiency and, simultaneously, decrease computational complexity. Experimental results show that this method achieves higher efficiency and better results.

  10. Distributed edge detection algorithm based on wavelet transform for wireless video sensor network

    NASA Astrophysics Data System (ADS)

    Li, Qiulin; Hao, Qun; Song, Yong; Wang, Dongsheng

    2011-05-01

    Edge detection algorithms are critical to image processing and computer vision. Traditional edge detection algorithms are not suitable for a wireless video sensor network (WVSN), in which the nodes have limited computational capability and resources. In this paper, a distributed edge detection algorithm based on the wavelet transform is proposed for WVSNs. The wavelet transform decomposes the image into several parts, which are then assigned to different nodes over the wireless network. Each node runs the edge detection algorithm on its sub-image, and all results are sent to the sink node, where fusion and synthesis, including image binarization and edge connection, are performed to output the final edge image. The lifting scheme and a parallel distributed algorithm are adopted to improve efficiency and, simultaneously, decrease computational complexity. Experimental results show that this method achieves higher efficiency and better results.

  11. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading, and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced at some expense to the power requirements.

  12. Evolutionary algorithm for optimization of nonimaging Fresnel lens geometry.

    PubMed

    Yamada, N; Nishikawa, T

    2010-06-21

    In this study, an evolutionary algorithm (EA), which consists of genetic and immune algorithms, is introduced to design the optical geometry of a nonimaging Fresnel lens; this lens generates the uniform flux concentration required for a photovoltaic cell. Herein, a design procedure that incorporates a ray-tracing technique in the EA is described, and the validity of the design is demonstrated. The results show that the EA automatically generated a unique geometry of the Fresnel lens; the use of this geometry resulted in better uniform flux concentration with high optical efficiency.

  13. Determination of multifractal dimensions of complex networks by means of the sandbox algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Long; Yu, Zu-Guo; Anh, Vo

    2015-02-01

    Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A 159, 155-166 (1989)) for MFA of complex networks. First, we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning algorithm proposed by Furuya and Yakubo (Phys. Rev. E 84, 036118 (2011)) and the improved box-counting algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp. 2014, P02020 (2014)), by calculating the mass exponents τ(q) of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks. The comparison shows that the SB algorithm is the most effective and feasible algorithm for calculating the mass exponents τ(q) and exploring the multifractal behavior of complex networks. Then, we apply the SB algorithm to study the multifractal properties of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, is not obvious in small-world networks, and is almost absent in random networks.
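
    A sketch of the sandbox estimate for a connected graph (using networkx) follows, valid for q != 1: random centers are drawn, the sandbox mass M(r) is averaged over centers, and τ(q) is taken as the fitted slope of ln <(M(r)/N)^(q-1)> against ln(r/d), where d is the network diameter. The center count and radius grid are assumptions.

    ```python
    import random
    import numpy as np
    import networkx as nx

    def sandbox_tau(G, qs, radii, n_centers=100):
        """Estimate mass exponents tau(q) of a connected graph (sandbox method)."""
        N = G.number_of_nodes()
        d = nx.diameter(G)
        centers = random.sample(list(G.nodes), min(n_centers, N))
        dists = [nx.single_source_shortest_path_length(G, c) for c in centers]
        taus = []
        for q in qs:                                  # assumes q != 1
            ys = []
            for r in radii:
                masses = np.array([sum(1 for v in D.values() if v <= r) for D in dists])
                ys.append(np.log(np.mean((masses / N) ** (q - 1))))
            taus.append(np.polyfit(np.log(np.array(radii) / d), ys, 1)[0])
        return taus
    ```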

  14. Determination of multifractal dimensions of complex networks by means of the sandbox algorithm.

    PubMed

    Liu, Jin-Long; Yu, Zu-Guo; Anh, Vo

    2015-02-01

    Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A 159, 155-166 (1989)) for MFA of complex networks. First, we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning algorithm proposed by Furuya and Yakubo (Phys. Rev. E 84, 036118 (2011)) and the improved box-counting algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp. 2014, P02020 (2014)), by calculating the mass exponents τ(q) of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks. The comparison shows that the SB algorithm is the most effective and feasible algorithm for calculating the mass exponents τ(q) and exploring the multifractal behavior of complex networks. Then, we apply the SB algorithm to study the multifractal properties of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, is not obvious in small-world networks, and is almost absent in random networks.

  15. Lossless compression algorithm for multispectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth

    2008-08-01

    We will also show results of the algorithm on NOAA AVHRR data and on data from SEVIRI. The algorithm is designed to be adaptable to a wide range of multispectral imagers and should facilitate the global distribution of data. This compression research is managed by Roger Heymann, PE, of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, and Walter Wolf.

  16. New algorithms for binary wavefront optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolong; Kner, Peter

    2015-03-01

    Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments, due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference, a factor of π²/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments, and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or the output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments digital micromirror device. We report an enhancement of 152 with 1536 segments (9.9% × N) using the genetic algorithm and an enhancement of 136 with 1536 segments (8.9% × N) using the intensity-only transmission matrix algorithm.

  17. Evolutionary algorithms and multi-agent systems

    NASA Astrophysics Data System (ADS)

    Oh, Jae C.

    2006-05-01

    This paper discusses how evolutionary algorithms are related to multi-agent systems and the possibility of military applications using the two disciplines. In particular, we present a game theoretic model for multi-agent resource distribution and allocation where agents in the environment must help each other to survive. Each agent maintains a set of variables representing actual friendship and perceived friendship. The model directly addresses problems in reputation management schemes in multi-agent systems and Peer-to-Peer distributed systems. We present algorithms based on evolutionary game process for maintaining the friendship values as well as a utility equation used in each agent's decision making. For an application problem, we adapted our formal model to the military coalition support problem in peace-keeping missions. Simulation results show that efficient resource allocation and sharing with minimum communication cost is achieved without centralized control.

  18. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points is made more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the geometric characteristics of cracks, we extract the skeleton to measure the length of a single crack. In order to calculate the crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. Experiments show that the errors between the crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and performance of bridge maintenance and rehabilitation processes.
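
    For orientation, here is a plain (unoptimized) eight-direction Sobel response: the edge magnitude is taken as the maximum absolute response over the eight 45-degree rotations of the basic kernels. The paper's optimizations of the operator are not reproduced in this sketch.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def sobel8(image):
        """Maximum response over eight directional Sobel kernels (45-degree steps)."""
        k0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # vertical edges
        k45 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float)  # diagonal edges
        kernels = [np.rot90(k, r) for k in (k0, k45) for r in range(4)]    # 8 directions
        img = image.astype(float)
        return np.max([np.abs(convolve(img, k)) for k in kernels], axis=0)
    ```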

  19. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
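
    A generic sketch of frequent-itemset mining with a vertical data layout follows: each itemset carries its set of transaction ids (tidset), and candidate support is obtained by tidset intersection with Apriori-style pruning. The C-NCAP-specific adaptations of the paper are not reproduced, and the candidate-generation details here are assumptions.

    ```python
    from itertools import combinations

    def frequent_itemsets(transactions, min_support):
        """Vertical-layout mining: itemset -> tidset; support via intersection."""
        level = {}
        for tid, items in enumerate(transactions):
            for item in items:
                level.setdefault(frozenset([item]), set()).add(tid)
        level = {s: t for s, t in level.items() if len(t) >= min_support}
        result = dict(level)
        while level:
            nxt = {}
            for a, b in combinations(level, 2):
                cand = a | b
                if len(cand) == len(a) + 1 and cand not in nxt:   # join step
                    tids = level[a] & level[b]                    # tidset intersection
                    if len(tids) >= min_support:                  # support pruning
                        nxt[cand] = tids
            level = nxt
            result.update(level)
        return {tuple(sorted(s)): len(t) for s, t in result.items()}
    ```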

  20. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The minimax problem is represented as: minimize t subject to f_i(x) - t <= 0 for all i. An active set strategy is designed that partitions the functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program, and numerical results are provided.

  1. MLP iterative construction algorithm

    NASA Astrophysics Data System (ADS)

    Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.

    1997-04-01

    The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds hidden layer nodes one at a time, separating classes on a pair-wise basis, until the data are projected into a space that is linearly separable by class. MICA then trains the output layer nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden node pruning technique.

  2. Enhanced probability-selection artificial bee colony algorithm for economic load dispatch: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ghani Abro, Abdul; Mohamad-Saleh, Junita

    2014-10-01

    The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of the load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm that generates optimal solutions using three different mutation equations simultaneously. The results show improved performance of PS-ABC over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work replaces the mutation equations and improves the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, multiple-fuel effects, valve-point effects, and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.

  3. Experimental study on subaperture testing with iterative triangulation algorithm.

    PubMed

    Yan, Lisong; Wang, Xiaokun; Zheng, Ligong; Zeng, Xuefeng; Hu, Haixiang; Zhang, Xuejun

    2013-09-23

    Applying the iterative triangulation stitching algorithm, we provide an experimental demonstration by testing a Φ120 mm flat mirror, a Φ1450 mm off-axis parabolic mirror, and a convex hyperboloid mirror. Comparison of the stitching results with the self-examined subapertures shows that the reconstruction results are consistent with those of the subaperture testing. As all the experiments were conducted with a 5-dof adjustment platform with large adjustment errors, this proves that, using the above-mentioned algorithm, subaperture stitching can be easily performed without a precise positioning system. In addition, with the algorithm we accomplish coordinate unification between testing and processing, which makes it possible to guide the processing by the stitching result.

  4. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using a systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, a systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. Issues concerning composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift registers, and ripple registers are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  5. Algebraic iterative algorithm for deflection tomography and its application to density flow fields in a hypersonic wind tunnel.

    PubMed

    Song, Yang; Zhang, Bin; He, Anzhi

    2006-11-01

    A novel algebraic iterative algorithm based on deflection tomography is presented. This algorithm is derived from the essentials of deflection tomography with a linear expansion of the local basis functions. By use of this algorithm the tomographic problem is finally reduced to the solution of a set of linear equations. The algorithm is demonstrated by mapping a three-peak Gaussian simulated temperature field. Compared with reconstruction results obtained by other traditional deflection algorithms, its reconstruction results provide a significant improvement in accuracy, especially in cases with noisy data. In the density diagnosis of a hypersonic wind tunnel, this algorithm is adopted to reconstruct density distributions of an axially symmetric flow field. One cross section of the reconstruction results is selected for comparison with the inverse Abel transform algorithm. Results show that the novel algorithm can achieve an accuracy equivalent to that of the inverse Abel transform algorithm. However, the novel algorithm is more versatile, because it is applicable to arbitrary kinds of distributions.
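
    The "set of linear equations" produced by such a formulation is typically solved with a row-action (Kaczmarz/ART-type) iteration; a generic sketch follows, with the design matrix A and data vector b left abstract since they depend on the paper's basis-function expansion, which is not reproduced here.

    ```python
    import numpy as np

    def kaczmarz(A, b, iters=50, relax=1.0):
        """ART-style solver: cyclically project x onto each row's hyperplane."""
        m, n = A.shape
        x = np.zeros(n)
        norms = (A * A).sum(axis=1)            # squared norm of each row
        for _ in range(iters):
            for i in range(m):
                if norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / norms[i] * A[i]
        return x
    ```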

  6. Computational algorithms to predict Gene Ontology annotations

    PubMed Central

    2015-01-01

    Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation process for novel annotations is costly, in both economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures led to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides the best results. In particular, when coupled with a proper

  7. Experimental results for correlation-based wavefront sensing

    SciTech Connect

    Poyneer, L A; Palmer, D W; LaFortune, K N; Bauman, B

    2005-07-01

    Correlation wave-front sensing can improve Adaptive Optics (AO) system performance in two key areas. For point-source-based AO systems, Correlation is more accurate, more robust to changing conditions, and provides lower noise than a centroiding algorithm; experimental results from the Lick AO system and the SSHCL laser AO system confirm this. For remote imaging, Correlation enables the use of extended objects for wave-front sensing. Results from short horizontal-path experiments show the algorithm's properties and requirements.

  8. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced, with the system call sequence of a process used as the source data. By using information from the system call sequences of both normal and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show better performance of our algorithm compared to the classical algorithm in this field.

  9. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  10. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving a representation of the algorithm as recurrence equations; the methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization; (2) the implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover; (3) an induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  11. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  12. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922

  13. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine-ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum, and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent, and high complexity. Based on this map, a fast image encryption algorithm is proposed, in which the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext, and chosen-plaintext attacks.
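
    For orientation only, here is a toy diffusion stage in the same spirit: XOR the pixel stream with a keystream generated by a chaotic map. This sketch deliberately uses the classic logistic map rather than the paper's 2D-SIMM, and omits the CST permutation and the combined confusion-diffusion stage; decryption is the same function with the same key (x0, r).

    ```python
    import numpy as np

    def chaotic_xor(image, x0=0.3537, r=3.99):
        """XOR image bytes with a logistic-map keystream (toy diffusion stage)."""
        flat = image.astype(np.uint8).ravel()
        stream = np.empty(flat.size, dtype=np.uint8)
        x = x0
        for i in range(flat.size):
            x = r * x * (1.0 - x)               # logistic map iteration
            stream[i] = int(x * 256) % 256      # quantize chaotic state to a byte
        return (flat ^ stream).reshape(image.shape)
    ```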

  14. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
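
    A minimal sketch of the idea for k-means follows, with zero-mean Gaussian noise added to each centroid update and annealed away over iterations so the final estimates settle; the decay schedule and noise scale are assumptions, not the paper's sufficient conditions for a noise benefit.

    ```python
    import numpy as np

    def noisy_kmeans(X, k, iters=50, sigma0=0.5, seed=0):
        """k-means whose centroid updates are perturbed by decaying Gaussian noise."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        labels = np.zeros(len(X), dtype=int)
        for t in range(1, iters + 1):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                pts = X[labels == j]
                if len(pts):
                    noise = rng.normal(0.0, sigma0 / t, size=centers.shape[1])
                    centers[j] = pts.mean(axis=0) + noise   # noisy centroid update
        return centers, labels
    ```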

  15. Evaluation of algorithms used to order markers on genetic maps.

    PubMed

    Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F

    2009-12-01

    When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of repulsion linkage increases between them and, in this case, use of the algorithms TRY and SER associated to RIPPLE with criterion LHMC would provide better results.

  16. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; DoroslovačKi, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements, provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC, or RM), wherein the estimation of the directions of arrival (DOA) requires the computation of the roots of a (2N - 2)-order polynomial, where N represents the number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L represents the number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.
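
    For reference, here is a sketch of the classic Root MUSIC baseline that the MRP algorithm simplifies (the MRP itself computes only L roots and is not reproduced here). A uniform linear array with element spacing d in wavelengths is assumed.

    ```python
    import numpy as np

    def root_music_doa(X, n_sources, d=0.5):
        """Root MUSIC DOA estimates (degrees) from array snapshots X (N x T)."""
        N, T = X.shape
        R = X @ X.conj().T / T                        # sample covariance matrix
        _, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
        En = V[:, : N - n_sources]                    # noise subspace
        C = En @ En.conj().T
        # coefficients of the degree-(2N-2) polynomial: diagonal sums of C
        coeffs = [np.trace(C, offset=k) for k in range(N - 1, -N, -1)]
        roots = np.roots(coeffs)
        roots = roots[np.abs(roots) < 1.0]            # keep roots inside unit circle
        best = roots[np.argsort(1.0 - np.abs(roots))[:n_sources]]
        return np.degrees(np.arcsin(np.angle(best) / (2 * np.pi * d)))
    ```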

  17. A Geometric Clustering Algorithm with Applications to Structural Data

    PubMed Central

    Xu, Shutan; Zou, Shuxue

    2015-01-01

    An important feature of structural data, especially those from structural determination and protein-ligand docking programs, is that their distribution could be mostly uniform. Traditional clustering algorithms developed specifically for nonuniformly distributed data may not be adequate for their classification. Here we present a geometric partitional algorithm that could be applied to both uniformly and nonuniformly distributed data. The algorithm is a top-down approach that recursively selects the outliers as the seeds to form new clusters until all the structures within a cluster satisfy a classification criterion. The algorithm has been evaluated on a diverse set of real structural data and six sets of test data. The results show that it is superior to the previous algorithms for the clustering of structural data and is similar to or better than them for the classification of the test data. The algorithm should be especially useful for the identification of the best but minor clusters and for speeding up an iterative process widely used in NMR structure determination. PMID:25517067

  18. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    NASA Astrophysics Data System (ADS)

    Bin, Zheng; Qingfeng, Meng; Nan, Wang; Zhi, Li

    2011-07-01

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in their application. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumption during the data transmission process in an on-line WSN-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding, and Huffman coding. Among these, the 5/3 lifting wavelet is used to divide the data into different frequency bands and extract signal characteristics. Zerotree coding is applied to calculate dynamic thresholds that retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, a simulation was carried out using Matlab. The simulation results show that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSN-based bearing monitoring system, in which a TI DSP TMS320F2812 is used to realize the algorithm.
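
    A sketch of one stage of the transform, the reversible LeGall 5/3 lifting step (the JPEG 2000 integer wavelet), follows; simple boundary clamping stands in for symmetric extension, and the zerotree thresholding and Huffman coding stages of the pipeline are not reproduced.

    ```python
    def lift53_forward(x):
        """One level of the integer 5/3 lifting wavelet: returns (approx, detail)."""
        assert len(x) % 2 == 0 and len(x) >= 4
        even, odd = x[0::2], x[1::2]
        n = len(odd)
        # predict step: d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
        d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
        # update step: s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
        s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
        return s, d
    ```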

  19. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    NASA Astrophysics Data System (ADS)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner that requires maintaining overall and up-to-date information about the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA), operated in the network central server and within individual sub-networks respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between the GVNMA and LVNMA for all examined networks, while performing almost as well as the centralized solutions.

  20. Development of a Scoring Algorithm To Replace Expert Rating for Scoring a Complex Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S.

    1997-01-01

    Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…