Science.gov

Sample records for "algorithm results show"

  1. Wake Vortex Algorithm Scoring Results

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.

  2. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
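
    A Pareto genetic algorithm ranks candidate trajectories by non-domination across several mission objectives (e.g., flight time and propellant mass) instead of a single scalar fitness. The sketch below shows only the dominance test and front extraction at the core of such a ranking; it is not the authors' implementation, and the objective values are hypothetical.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Indices of the non-dominated points (the current Pareto front)."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Hypothetical trade-off: (flight time in years, propellant mass in kg)
objectives = np.array([[3.2, 450.0], [2.8, 520.0], [3.5, 430.0], [3.3, 460.0]])
print(pareto_front(objectives))  # -> [0, 1, 2]; (3.3, 460) is dominated by (3.2, 450)
```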

  3. Preliminary results from MERIS Land Algorithm

    NASA Astrophysics Data System (ADS)

    Gobron, N.; Pinty, B.; Taberner, M.; Melin, F.; Verstraete, M. M.; Widlowski, J.-L.

    2003-04-01

    This paper presents a first and preliminary evaluation of the performance of the algorithm implemented in the Medium Resolution Imaging Spectrometer (MERIS) ground segment for assessing the status of land surfaces. First, we propose an updated version of the MERIS algorithm itself, which improves the accuracy of the product. Second, we analyze the first results by inter-comparing the MERIS Global Vegetation Index (MGVI) to similar products derived from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) that are generated at the European Commission Joint Research Center (EC-JRC). The first evaluation between MERIS and SeaWiFS derived products is made using data acquired on the same day by both instruments. The results show acceptable agreement and the differences are well understood by radiation transfer model simulations.

  4. 13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  5. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    To solve large-scale linear systems, backward and Jacobi iteration algorithms are employed; convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important contribution is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
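
    A minimal illustration of the Jacobi side of this analysis: split A = D + R, iterate x_{k+1} = D^{-1}(b - R x_k), and check the spectral radius of the iteration matrix, which governs convergence. This is a textbook sketch, not the paper's unified backward iterative matrix.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), with A = D + R."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
M = np.diagflat(1.0 / np.diag(A)) @ (np.diagflat(np.diag(A)) - A)  # iteration matrix
print("spectral radius:", np.abs(np.linalg.eigvals(M)).max())      # < 1 => convergence
print("solution:", jacobi(A, b))
```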

  6. Preliminary results from the ASF/GPS ice classification algorithm

    NASA Technical Reports Server (NTRS)

    Cunningham, G.; Kwok, R.; Holt, B.

    1992-01-01

    The European Space Agency Remote Sensing Satellite (ERS-1) satellite carried a C-band synthetic aperture radar (SAR) to study the earth's polar regions. The radar returns from sea ice can be used to infer properties of ice, including ice type. An algorithm has been developed for the Alaska SAR facility (ASF)/Geophysical Processor System (GPS) to infer ice type from the SAR observations over sea ice and open water. The algorithm utilizes look-up tables containing expected backscatter values from various ice types. An analysis has been made of two overlapping strips with 14 SAR images. The backscatter values of specific ice regions were sampled to study the backscatter characteristics of the ice in time and space. Results show both stability of the backscatter values in time and a good separation of multiyear and first-year ice signals, verifying the approach used in the classification algorithm.
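
    At its simplest, the look-up-table step assigns each measured backscatter value to the ice type whose expected return is closest. The sketch below uses made-up C-band sigma-0 values purely for illustration; the actual ASF/GPS tables are season- and incidence-angle-dependent.

```python
# Illustrative (not actual) expected C-band backscatter values, in dB
EXPECTED_DB = {"multiyear ice": -8.0, "first-year ice": -14.0, "open water": -20.0}

def classify(sigma0_db):
    """Nearest-entry look-up classification of one backscatter measurement."""
    return min(EXPECTED_DB, key=lambda ice_type: abs(EXPECTED_DB[ice_type] - sigma0_db))

print(classify(-9.2))   # -> multiyear ice
print(classify(-15.8))  # -> first-year ice
```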

  7. The Aquarius Salinity Retrieval Algorithm: Early Results

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

    2012-01-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last process and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparing against in situ measurements, an extensive calibration and validation

  8. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  9. The EM/MPM algorithm for segmentation of textured images: analysis and further experimental results.

    PubMed

    Comer, M L; Delp, E J

    2000-01-01

    In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates the parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected number of misclassified pixels. We present new theoretical results showing that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.
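
    The heart of the method is an alternation: approximate the per-pixel marginal class probabilities under a Markov random field prior, then re-estimate the observation-model parameters from those marginals. The toy sketch below (Gaussian class models, Potts smoothness prior, marginals approximated by a few Gibbs sweeps) illustrates that alternation only; it is not the authors' estimator, and the hyperparameters are arbitrary.

```python
import numpy as np

def em_mpm_toy(y, K=2, beta=1.0, em_iters=10, sweeps=5, seed=0):
    """Toy EM/MPM-style segmentation of a 2D image y (arbitrary hyperparameters)."""
    rng = np.random.default_rng(seed)
    H, W = y.shape
    labels = rng.integers(0, K, size=(H, W))
    mu = np.linspace(y.min(), y.max(), K)
    var = np.full(K, y.var())
    for _ in range(em_iters):
        counts = np.zeros((H, W, K))
        for _ in range(sweeps):                    # Gibbs sweeps -> marginal estimates
            for i in range(H):
                for j in range(W):
                    nb = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                          if 0 <= a < H and 0 <= b < W]
                    agree = np.array([sum(n == k for n in nb) for k in range(K)])
                    logp = -0.5 * np.log(var) - (y[i, j] - mu) ** 2 / (2 * var) + beta * agree
                    p = np.exp(logp - logp.max())
                    p /= p.sum()
                    labels[i, j] = rng.choice(K, p=p)
                    counts[i, j] += p
        marg = counts / counts.sum(axis=2, keepdims=True)
        for k in range(K):                         # M-step: weighted Gaussian updates
            w = marg[:, :, k]
            mu[k] = (w * y).sum() / w.sum()
            var[k] = (w * (y - mu[k]) ** 2).sum() / w.sum() + 1e-6
    return marg.argmax(axis=2)                     # approximate MPM labeling

img = np.r_[np.zeros((8, 16)), np.ones((8, 16))]
img += 0.3 * np.random.default_rng(1).normal(size=img.shape)
print(em_mpm_toy(img))
```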

  10. The Effect of Pansharpening Algorithms on the Resulting Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.

    2016-06-01

    This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation is that for automatically generated Digital Surface Models, an image correlation step is employed to extract correspondences between the overlapping images; their accuracy and reliability are therefore strictly related to image quality, while pansharpening may lower image quality, which in turn may affect the DSM generation and the accuracy of the resulting orthoimage. To this end, an iterative methodology was applied to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor the resulting orthoimagery. Although some residuals in the orthoimages were observed, their magnitude cannot adversely affect the accuracy of the final orthoimagery.

  11. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
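
    The underlying idea can be caricatured in a few lines: grow the population while noise in the fitness estimates is large relative to the fitness spread, and shrink it once decisions are reliable. The toy rule below is only inspired by that variance-based reasoning; the paper's actual criterion is derived from schema fitness variance, and all thresholds here are arbitrary.

```python
import numpy as np

def adapt_pop_size(fitness, size, grow=1.25, shrink=0.9, ratio=0.03, bounds=(20, 400)):
    """Toy resizing rule: compare the standard error of mean fitness to the
    observed fitness spread; noisy estimates -> grow, reliable -> shrink."""
    sem = fitness.std(ddof=1) / np.sqrt(fitness.size)
    spread = fitness.max() - fitness.min() + 1e-12
    factor = grow if sem / spread > ratio else shrink
    return int(np.clip(round(size * factor), *bounds))

rng = np.random.default_rng(0)
print(adapt_pop_size(rng.normal(0.5, 0.30, 20), 20))    # small noisy sample -> grows
print(adapt_pop_size(rng.normal(0.5, 0.01, 400), 400))  # reliable estimate -> shrinks
```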

  12. Centroid-Based Document Classification Algorithms: Analysis & Experimental Results

    DTIC Science & Technology

    2000-03-06

    in terms of zero-one loss (misclassification rate). Linear Classifiers: Linear classifiers [31] are a family of text categorization learning... a training set and a test set. The error rates of algorithms A and B on the test set are recorded. Let $p_A^{(i)}$ be the error rate of algorithm A and $p_B^{(i)}$ the error rate of algorithm B during trial $i$, and let $\bar{p}$ be the mean of the per-trial differences $p^{(i)} = p_A^{(i)} - p_B^{(i)}$. Then Student's t test can be computed using the statistic

    $$t = \frac{\bar{p}\,\sqrt{n}}{\sqrt{\sum_{i=1}^{n}\left(p^{(i)} - \bar{p}\right)^{2} / (n-1)}}$$
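
    Under this paired-comparison reading, the statistic is straightforward to compute; the per-trial error rates below are hypothetical.

```python
import numpy as np

def paired_t(err_a, err_b):
    """Paired Student's t statistic over n trials:
    t = p_bar * sqrt(n) / sqrt(sum((p_i - p_bar)^2) / (n - 1)),
    where p_i = err_a[i] - err_b[i] and p_bar is the mean difference."""
    p = np.asarray(err_a) - np.asarray(err_b)
    n = p.size
    return p.mean() * np.sqrt(n) / np.sqrt(((p - p.mean()) ** 2).sum() / (n - 1))

# Hypothetical per-trial misclassification rates for algorithms A and B
print(paired_t([0.12, 0.10, 0.15, 0.11, 0.13],
               [0.10, 0.09, 0.14, 0.10, 0.12]))  # -> 6.0
```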

  13. Gun shows and gun violence: fatally flawed study yields misleading results.

    PubMed

    Wintemute, Garen J; Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A

    2010-10-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled "The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas" outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors' prior research. The study should not be used as evidence in formulating gun policy.

  14. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    NASA Astrophysics Data System (ADS)

    Lopresto, Michael C.

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered in less detail. Also evident in the results were topics for which improvement of instruction is needed. These factors and the ease with which the ADT can be administered constitute evidence of the usefulness of the ADT as an assessment instrument for introductory astronomy.

  15. The weirdest SDSS galaxies: results from an outlier detection algorithm

    NASA Astrophysics Data System (ADS)

    Baron, Dalya; Poznanski, Dovi

    2017-03-01

    How can we discover objects we did not know existed within the large data sets that now abound in astronomy? We present an outlier detection algorithm that we developed, based on an unsupervised Random Forest. We test the algorithm on more than two million galaxy spectra from the Sloan Digital Sky Survey and examine the 400 galaxies with the highest outlier score. We find objects which have extreme emission line ratios and abnormally strong absorption lines, objects with unusual continua, including extremely reddened galaxies. We find galaxy-galaxy gravitational lenses, double-peaked emission line galaxies and close galaxy pairs. We find galaxies with high ionization lines, galaxies that host supernovae and galaxies with unusual gas kinematics. Only a fraction of the outliers we find were reported by previous studies that used specific and tailored algorithms to find a single class of unusual objects. Our algorithm is general and detects all of these classes, and many more, regardless of what makes them peculiar. It can be executed on imaging, time series and other spectroscopic data, operates well with thousands of features, is not sensitive to missing values and is easily parallelizable.
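
    The unsupervised Random Forest trick is to train a forest to distinguish the real data from synthetic data whose features are shuffled independently (destroying correlations), and then score objects by how rarely they share leaves with the rest of the sample. The sketch below follows that general recipe on toy data; it is a simplification of the published pipeline, and all names and parameter values are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_outlier_scores(X, n_trees=200, seed=0):
    """Unsupervised Random Forest outlier scores (higher = weirder)."""
    rng = np.random.default_rng(seed)
    X_syn = np.column_stack([rng.permutation(col) for col in X.T])  # break correlations
    X_all = np.vstack([X, X_syn])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(X_syn))]
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X_all, y_all)
    leaves = forest.apply(X)                        # (n_samples, n_trees) leaf indices
    scores = np.zeros(len(X))
    for i in range(len(X)):
        same = (leaves == leaves[i]).mean(axis=1)   # proximity of object i to all others
        scores[i] = 1.0 - (same.sum() - 1.0) / (len(X) - 1)
    return scores

rng = np.random.default_rng(1)
base = rng.normal(size=(300, 1))
X = base + 0.1 * rng.normal(size=(300, 5))          # strongly correlated features
X[:3, 0] += 4.0                                     # three objects break the correlation
print(np.argsort(rf_outlier_scores(X))[-3:])        # likely the planted outliers
```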

  16. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors.
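
    Stripped to its signal-processing core, the measurement is a peak count on the detrended chest-distance signal. A minimal sketch under assumed conditions (30 Hz frame rate, synthetic 15 breaths-per-minute signal); the published algorithm's chest detection and movement discrimination are omitted.

```python
import numpy as np
from scipy.signal import detrend, find_peaks

def respiratory_rate(chest_distance_m, fs_hz):
    """Estimate breaths/minute from a chest-to-camera distance signal: remove
    the mean trend, then count peaks separated by at least 1.5 s."""
    motion = detrend(np.asarray(chest_distance_m, dtype=float))
    peaks, _ = find_peaks(motion, distance=int(1.5 * fs_hz))
    duration_min = len(motion) / fs_hz / 60.0
    return len(peaks) / duration_min

fs = 30.0                                              # assumed frame rate, Hz
t = np.arange(0, 60, 1 / fs)
signal = 1.2 + 0.005 * np.sin(2 * np.pi * 0.25 * t)    # 15 breaths/min, 5 mm excursion
signal += 0.0005 * np.random.default_rng(0).normal(size=t.size)
print(respiratory_rate(signal, fs))                    # ~15
```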

  17. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383

  18. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluations. PMID:24951685

  19. Control of Boolean networks: hardness results and algorithms for tree structured networks.

    PubMed

    Akutsu, Tatsuya; Hayashida, Morihiro; Ching, Wai-Ki; Ng, Michael K

    2007-02-21

    Finding control strategies of cells is a challenging and important problem in the post-genomic era. This paper considers theoretical aspects of the control problem using the Boolean network (BN), which is a simplified model of genetic networks. It is shown that finding a control strategy leading to the desired global state is computationally intractable (NP-hard) in general. Furthermore, this hardness result is extended for BNs with considerably restricted network structures. These results justify existing exponential time algorithms for finding control strategies for probabilistic Boolean networks (PBNs). On the other hand, this paper shows that the control problem can be solved in polynomial time if the network has a tree structure. Then, this algorithm is extended for the case where the network has a few loops and the number of time steps is small. Though this paper focuses on theoretical aspects, biological implications of the theoretical results are also discussed.
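
    The hardness result is easy to appreciate from the naive algorithm: deciding controllability by enumeration costs O(2^(m*t)) for m control nodes over t time steps. Below is a small brute-force sketch on a hypothetical 2-node network; it illustrates the exponential search, not the paper's polynomial-time dynamic program for tree-structured networks.

```python
from itertools import product

def find_control(x0, target, step, n_controls, horizon):
    """Brute-force search over control sequences; the O(2^(controls*horizon))
    cost mirrors the NP-hardness of the general control problem."""
    for seq in product(product((0, 1), repeat=n_controls), repeat=horizon):
        state = x0
        for u in seq:
            state = step(state, u)
        if state == target:
            return seq
    return None

# Hypothetical 2-node Boolean network with one control input u:
#   x1' = u AND x2,  x2' = x1 OR u
step = lambda s, u: (u[0] & s[1], s[0] | u[0])
print(find_control((0, 0), (1, 1), step, n_controls=1, horizon=3))
```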

  20. Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening

    Cancer.gov

Thirteen-year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination.

  1. Results From Mars Show Electrostatic Charging of the Mars Pathfinder Sojourner Rover

    NASA Technical Reports Server (NTRS)

    Kolecki, Joseph C.; Siebert, Mark W.

    1998-01-01

    Indirect evidence (dust accumulation) has been obtained indicating that the Mars Pathfinder rover, Sojourner, experienced electrostatic charging on Mars. Lander camera images of the Sojourner rover provide distinctive evidence of dust accumulation on rover wheels during traverses, turns, and crabbing maneuvers. The sol 22 (22nd Martian "day" after Pathfinder landed) end-of-day image clearly shows fine red dust concentrated around the wheel edges with additional accumulation in the wheel hubs. A sol 41 image of the rover near the rock "Wedge" (see the next image) shows a more uniform coating of dust on the wheel drive surfaces with accumulation in the hubs similar to that in the previous image. In the sol 41 image, note particularly the loss of black-white contrast on the Wheel Abrasion Experiment strips (center wheel). This loss of contrast was also seen when dust accumulated on test wheels in the laboratory. We believe that this accumulation occurred because the Martian surface dust consists of clay-sized particles, similar to those detected by Viking, which have become electrically charged. By adhering to the wheels, the charged dust carries a net nonzero charge to the rover, raising its electrical potential relative to its surroundings. Similar charging behavior was routinely observed in an experimental facility at the NASA Lewis Research Center, where a Sojourner wheel was driven in a simulated Martian surface environment. There, as the wheel moved and accumulated dust (see the following image), electrical potentials in excess of 100 V (relative to the chamber ground) were detected by a capacitively coupled electrostatic probe located 4 mm from the wheel surface. The measured wheel capacitance was approximately 80 picofarads (pF), and the calculated charge, 8 × 10^-9 coulombs (C). Voltage differences of 100 V and greater are believed sufficient to produce Paschen electrical discharge in the Martian atmosphere. With an accumulated net charge of 8 x 10(exp
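
    As a quick check, the quoted charge follows directly from Q = CV with the measured wheel capacitance and detected potential:

```python
capacitance_f = 80e-12      # measured wheel capacitance: 80 pF
potential_v = 100.0         # potential detected relative to chamber ground
charge_c = capacitance_f * potential_v
print(charge_c)             # 8e-09 C, matching the 8 x 10^-9 C quoted above
```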

  2. Stem cells show promising results for lymphoedema treatment--a literature review.

    PubMed

    Toyserkani, Navid Mohamadpour; Christensen, Marlene Louise; Sheikh, Søren Paludan; Sørensen, Jens Ahm

    2015-04-01

    Lymphoedema is a debilitating condition, manifesting in excess lymphatic fluid and swelling of subcutaneous tissues. Lymphoedema is as yet an incurable condition and current treatment modalities are not satisfactory. The capacity of mesenchymal stem cells to promote angiogenesis, secrete growth factors, regulate the inflammatory process, and differentiate into multiple cell types makes them a potentially ideal therapy for lymphoedema. Adipose tissue is the richest and most accessible source of mesenchymal stem cells and they can be harvested, isolated, and used for therapy in a single stage procedure as an autologous treatment. The aim of this paper was to review all studies using mesenchymal stem cells for lymphoedema treatment, with a special focus on the potential use of adipose-derived stem cells. A systematic search was performed and five preclinical and two clinical studies were found. Different stem cell sources and lymphoedema models were used in the described studies. Most studies showed a decrease in lymphoedema and an increase in lymphangiogenesis when treated with stem cells, and this treatment modality has so far shown great potential. The present studies are, however, subject to bias, and more preclinical studies and large-scale high quality clinical trials are needed to show if this emerging therapy can satisfy expectations.

3. AMS 14C analysis of teeth from archaeological sites showing anomalous ESR dating results

    NASA Astrophysics Data System (ADS)

    Grün, Rainer; Abeyratne, Mohan; Head, John; Tuniz, Claudio; Hedges, Robert E. M.

    We have carried out AMS radiocarbon analysis on two groups of samples: the first one gave reasonable ESR age estimates and the second one yielded serious age underestimations. All samples were supposedly older than 35 ka, the oldest being around 160 ka. Two pretreatment techniques were used for radiocarbon dating: acid evolution and thermal release. Heating to 600, 750 and 900°C combined with total de-gassing at these temperatures was chosen to obtain age estimates on the organic fraction, secondary carbonates and original carbonate present in the hydroxyapatite mineral phase, respectively. All radiocarbon results present serious age underestimations. The secondary carbonate fraction gives almost modern results, indicating an extremely rapid exchange of this component. Owing to this very rapid carbonate exchange, it is not likely that the ESR signals used for dating are associated with the secondary carbonates. One tooth from Tabun with independent age estimates of >150 ka was further investigated by the Oxford AMS laboratory, yielding an age estimate of 1930±100 BP on the residual collagen from dentine and 18,000±160 BP on the carbonate component of the enamel bioapatite. We did not, however, find an explanation of why some samples give serious ESR underestimations whilst many others provide reasonable results.

  4. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses are in turn analyzed as the spectral data become available and, in a novel step, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  5. Animation shows promise in initiating timely cardiopulmonary resuscitation: results of a pilot study.

    PubMed

    Attin, Mina; Winslow, Katheryn; Smith, Tyler

    2014-04-01

    Delayed responses during cardiac arrest are common. Timely interventions during cardiac arrest have a direct impact on patient survival. Integration of technology in nursing education is crucial to enhance teaching effectiveness. The goal of this study was to investigate the effect of animation on nursing students' response time to cardiac arrest, including initiation of timely chest compression. Nursing students were randomized into experimental and control groups prior to practicing in a high-fidelity simulation laboratory. The experimental group was educated, by discussion and animation, about the importance of starting cardiopulmonary resuscitation upon recognizing an unresponsive patient. Afterward, a discussion session allowed students in the experimental group to gain more in-depth knowledge about the most recent changes in the cardiac resuscitation guidelines from the American Heart Association. A linear mixed model was run to investigate differences in time of response between the experimental and control groups while controlling for differences in those with additional degrees, prior code experience, and basic life support certification. The experimental group had a faster response time compared with the control group and initiated timely cardiopulmonary resuscitation upon recognition of deteriorating conditions (P < .0001). The results demonstrated the efficacy of combined teaching modalities for timely cardiopulmonary resuscitation. Providing opportunities for repetitious practice when a patient's condition is deteriorating is crucial for teaching safe practice.

  6. Aortic emboli show surprising size dependent predilection for cerebral arteries: Results from computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Carr, Ian; Schwartz, Robert; Shadden, Shawn

    2012-11-01

    Cardiac emboli can have devastating consequences if they enter the cerebral circulation, and are the most common cause of embolic stroke. Little is known about the relationships of embolic origin/density/size to cerebral events, as these relationships are difficult to observe. To better understand stroke risk from cardiac and aortic emboli, we developed a computational model to track emboli from the heart to the brain. Patient-specific models of the human aorta and arteries to the brain were derived from CT angiography from 10 MHIF patients. Blood flow was modeled by the Navier-Stokes equations using pulsatile inflow at the aortic valve, and physiologic Windkessel models at the outlets. Particulate was injected at the aortic valve and tracked using modified Maxey-Riley equations with a wall collision model. Results demonstrate that aortic emboli that entered the cerebral circulation through the carotid or vertebral arteries were localized to specific locations of the proximal aorta. The percentage of released particles embolic to the brain markedly increased with particle size from 0 to ~1-1.5 mm in all patients. Larger particulate became less likely to traverse the cerebral vessels. These findings are consistent with sparse literature based on transesophageal echo measurements. This work was supported in part by the National Science Foundation, award number 1157041.

  7. Initiative To Reduce Avoidable Hospitalizations Among Nursing Facility Residents Shows Promising Results.

    PubMed

    Ingber, Melvin J; Feng, Zhanlian; Khatutsky, Galina; Wang, Joyce M; Bercaw, Lawren E; Zheng, Nan Tracy; Vadnais, Alison; Coomer, Nicole M; Segelman, Micah

    2017-03-01

    Nursing facility residents are frequently admitted to the hospital, and these hospital stays are often potentially avoidable. Such hospitalizations are detrimental to patients and costly to Medicare and Medicaid. In 2012 the Centers for Medicare and Medicaid Services launched the Initiative to Reduce Avoidable Hospitalizations among Nursing Facility Residents, using evidence-based clinical and educational interventions among long-stay residents in 143 facilities in seven states. In state-specific analyses, we estimated net reductions in 2015 of 2.2-9.3 percentage points in the probability of an all-cause hospitalization and 1.4-7.2 percentage points in the probability of a potentially avoidable hospitalization for participating facility residents, relative to comparison-group members. In that year, average per resident Medicare expenditures were reduced by $60-$2,248 for all-cause hospitalizations and by $98-$577 for potentially avoidable hospitalizations. The effects for over half of the outcomes in these analyses were significant. Variability in implementation and engagement across the nursing facilities and organizations that customized and implemented the initiative helps explain the variability in the estimated effects. Initiative models that included registered nurses or nurse practitioners who provided consistent clinical care for residents demonstrated higher staff engagement and more positive outcomes, compared to models providing only education or intermittent clinical care. These results provide promising evidence of an effective approach for reducing avoidable hospitalizations among nursing facility residents.

8. LMS learning algorithms: misconceptions and new results on convergence.

    PubMed

    Wang, Z Q; Manry, M T; Schiano, J L

    2000-01-01

    The Widrow-Hoff delta rule is one of the most popular rules used in training neural networks. It was originally proposed for the ADALINE, but has been successfully applied to a few nonlinear neural networks as well. Despite its popularity, there exist a few misconceptions on its convergence properties. In this paper we consider repetitive learning (i.e., a fixed set of samples are used for training) and provide an in-depth analysis in the least mean square (LMS) framework. Our main result is that contrary to common belief, the nonbatch Widrow-Hoff rule does not converge in general. It converges only to a limit cycle.
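
    For reference, the rule itself is a per-sample gradient step on the squared error. Below is a minimal sketch of repetitive (epoch-wise) training; the data and step size are arbitrary. As the paper argues, with too large a step size the sequential updates do not settle to a point but orbit a limit cycle.

```python
import numpy as np

def lms_epoch(w, X, d, mu):
    """One pass of the Widrow-Hoff (LMS) delta rule over a fixed sample set:
    w <- w + mu * (d_k - w.x_k) * x_k for each sample k in turn."""
    for x, dk in zip(X, d):
        w = w + mu * (dk - w @ x) * x
    return w

X = np.array([[1.0, 0.2], [0.3, 1.0], [0.8, 0.9]])
d = np.array([1.0, -1.0, 0.5])
w = np.zeros(2)
for epoch in range(50):                 # repetitive learning on the fixed set
    w = lms_epoch(w, X, d, mu=0.1)
print("weights after repetitive training:", w)
# With mu chosen too large, the per-sample updates overshoot each step and
# the weight sequence oscillates in a limit cycle instead of converging.
```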

  9. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement of global observational data of ET can neither be satisfied with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal

  10. A statistical algorithm showing coenzyme Q10 and citrate synthase as biomarkers for mitochondrial respiratory chain enzyme activities.

    PubMed

    Yubero, D; Adin, A; Montero, R; Jou, C; Jiménez-Mallebrera, C; García-Cazorla, A; Nascimento, A; O'Callaghan, M M; Montoya, J; Gort, L; Navas, P; Ribes, A; Ugarte, M D; Artuch, R

    2016-12-01

    Laboratory data interpretation for the assessment of complex biological systems remains a great challenge, as occurs in mitochondrial function research studies. The classical biochemical data interpretation of patients versus reference values may be insufficient, and in fact the current classifications of mitochondrial patients are still done on the basis of probability criteria. We have developed and applied a mathematical agglomerative algorithm to search for correlations among the different biochemical variables of the mitochondrial respiratory chain in order to identify populations displaying correlation coefficients >0.95. We demonstrated that coenzyme Q10 may be a better biomarker of mitochondrial respiratory chain enzyme activities than the citrate synthase activity. Furthermore, the application of this algorithm may be useful to re-classify mitochondrial patients or to explore associations among other biochemical variables from different biological systems.
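
    The agglomeration step can be caricatured as greedy grouping on the Pearson correlation matrix with the paper's 0.95 cutoff. The sketch below uses synthetic values and is not the authors' statistical model; variable names are ours.

```python
import numpy as np

def correlated_groups(data, names, threshold=0.95):
    """Greedy agglomeration: group variables (columns of `data`) whose
    pairwise Pearson correlation exceeds `threshold`."""
    r = np.corrcoef(data, rowvar=False)
    groups, used = [], set()
    for i in range(len(names)):
        if i in used:
            continue
        group = [i] + [j for j in range(i + 1, len(names))
                       if j not in used and r[i, j] > threshold]
        used.update(group)
        groups.append([names[k] for k in group])
    return groups

# Hypothetical patient measurements (columns: CoQ10, CS, complex II activity)
rng = np.random.default_rng(0)
cs = rng.normal(10.0, 2.0, 100)
data = np.column_stack([1.1 * cs + rng.normal(0, 0.1, 100), cs, rng.normal(5, 1, 100)])
print(correlated_groups(data, ["CoQ10", "CS", "complexII"]))
# -> [['CoQ10', 'CS'], ['complexII']]
```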

  11. A comparison of two position estimate algorithms that use ILS localizer and DME information. Simulation and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Scanlon, C.

    1984-01-01

    Simulation and flight tests were conducted to compare the accuracy of two algorithms designed to compute a position estimate with an airborne navigation computer. Both algorithms used ILS localizer and DME radio signals to compute a position difference vector to be used as an input to the navigation computer position estimate filter. The results of these tests show that the position estimate accuracy and response to artificially induced errors are improved when the position estimate is computed by an algorithm that geometrically combines DME and ILS localizer information to form a single component of error, rather than by an algorithm that produces two independent components of error, one from a DME input and the other from the ILS localizer input.

  12. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals, such as hard-over, null, or bias shift, were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.
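
    Both tests operate in parity space. The sketch below shows the generalized-likelihood-test idea for a redundant array with a hypothetical measurement geometry; realistic thresholds, noise weighting, and the multilevel structure are omitted.

```python
import numpy as np
from scipy.linalg import null_space

def glt(H, z, threshold):
    """Parity-space failure detection/isolation for redundant sensors z = Hx + f.

    Rows of V span the left null space of H (V @ H = 0), so the parity vector
    p = V @ z depends only on the fault f. Detection thresholds ||p||^2;
    isolation picks the sensor whose parity signature best explains p."""
    V = null_space(H.T).T
    p = V @ z
    if p @ p < threshold:
        return None                              # no failure declared
    signatures = V.T                             # parity signature of each sensor
    scores = (signatures @ p) ** 2 / np.einsum("ij,ij->i", signatures, signatures)
    return int(np.argmax(scores))

# Four single-axis sensors measuring a 2-vector (hypothetical geometry)
H = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.7, -0.7]])
z = H @ np.array([0.1, -0.2])
z[2] += 0.5                                      # inject a bias failure on sensor 2
print(glt(H, z, threshold=1e-3))                 # -> 2
```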

  13. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements and used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high quality DEMs and allows hazards like rocks and craters to be identified in accordance with ALHAT requirements.

14. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, J.

    2015-12-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. The Goddard DISC has generated AIRS/AMSU retrieval products, extending from September 2002 through real time, using the AIRS Science Team Version-6 retrieval algorithm. Level-3 gridded monthly mean values of these products, generated using AIRS Version-6, form a state of the art multi-year set of Climate Data Records (CDRs), which is expected to continue through 2022 and possibly beyond, as the AIRS instrument is extremely stable. The goal of this research is to develop and implement a CrIS/ATMS retrieval system to generate CDRs that are compatible with, and are of comparable quality to, those generated operationally using AIRS/AMSU data. The AIRS Science Team has made considerable improvements in AIRS Science Team retrieval methodology and is working on the development of an improved AIRS Science Team Version-7 retrieval methodology to be used to reprocess all AIRS data in the relatively near future. Research is underway by Dr. Susskind and co-workers at the NASA GSFC Sounder Research Team (SRT) towards the finalization of the AIRS Version-7 retrieval algorithm, the current version of which is called SRT AIRS Version-6.22. Dr. Susskind and co-workers have developed analogous retrieval methodology for analysis of CrIS/ATMS data, called SRT CrIS Version-6.22. Results will be presented that show that AIRS and CrIS products derived using a common further improved retrieval algorithm agree closely with each other and are both superior to AIRS Version 6. The goal of the AIRS Science Team is to continue to improve both AIRS and CrIS retrieval products and then use the improved retrieval methodology for the processing of past and

  15. Hydrodynamical simulation of detonations in superbursts. I. The hydrodynamical algorithm and some preliminary one-dimensional results

    NASA Astrophysics Data System (ADS)

    Noël, C.; Busegnies, Y.; Papalexandris, M. V.; Deledicque, V.; El Messoudi, A.

    2007-08-01

    Aims: This work presents a new hydrodynamical algorithm to study astrophysical detonations. A prime motivation of this development is the description of a carbon detonation in conditions relevant to superbursts, which are thought to result from the propagation of a detonation front around the surface of a neutron star in the carbon layer underlying the atmosphere. Methods: The algorithm we have developed is a finite-volume method inspired by the original MUSCL scheme of van Leer (1979). The algorithm is of second order in the smooth part of the flow and avoids dimensional splitting. It is applied to some test cases, and the time-dependent results are compared to the corresponding steady state solutions. Results: Our algorithm proves robust on the test cases and is considered reliably applicable to astrophysical detonations. The preliminary one-dimensional calculations we have performed demonstrate that the carbon detonation at the surface of a neutron star is a multiscale phenomenon. The length scale of energy liberation is 10^6 times smaller than the total reaction length. We show that a multi-resolution approach can be used to resolve all the reaction length scales. This result will be very useful in future multi-dimensional simulations. We also present thermodynamical and composition profiles after the passage of a detonation in a pure carbon or mixed carbon-iron layer, in thermodynamical conditions relevant to superbursts in pure helium accretor systems.
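
    A MUSCL-type finite-volume step in the spirit of van Leer (1979) pairs a slope-limited linear reconstruction with upwind fluxes. The sketch below applies this to plain linear advection, not the reactive flow of the paper, and uses a forward-Euler time step at a small CFL number for brevity; production schemes use Hancock or Runge-Kutta time integration.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller one-sided slope, zero at extrema."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, c, dt, dx):
    """One step of u_t + c u_x = 0 (c > 0, periodic): limited reconstruction
    to the right cell edge, then upwind interface fluxes."""
    up = np.pad(u, 1, mode="wrap")
    slope = minmod(np.diff(up)[1:], np.diff(up)[:-1])  # limited slope per cell
    flux = c * (u + 0.5 * slope)                       # upwind state at i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

N, c = 200, 1.0
dx = 1.0 / N
dt = 0.3 * dx / c
x = (np.arange(N) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)          # square pulse
for _ in range(int(0.25 / (c * dt))):                  # advect by ~0.25
    u = muscl_step(u, c, dt, dx)
print(u.min(), u.max())                                # remains essentially in [0, 1]
```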

  16. Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers

    SciTech Connect

    Seifert, Carolyn E.; He, Zhong

    2005-10-01

    For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.
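
    For two-interaction events, one widely used screen is kinematic: assume an ordering, compute the Compton scattering angle implied by the energy deposits, and reject orderings that give |cos θ| > 1. Below is a sketch of that generic test (energies in keV); it is not one of the specific algorithms reviewed in the paper.

```python
ME_C2 = 511.0  # electron rest energy, keV

def allowed_orderings(deposits):
    """Screen two-interaction orderings by Compton kinematics.

    For a photon of total energy E0 depositing E1 first, the scattering angle
    satisfies cos(theta) = 1 - me*c^2 * (1/(E0 - E1) - 1/E0), which is
    physical only if it lies in [-1, 1]."""
    E0 = sum(deposits)
    ok = []
    for first, second in [(0, 1), (1, 0)]:
        e1 = deposits[first]
        cos_theta = 1.0 - ME_C2 * (1.0 / (E0 - e1) - 1.0 / E0)
        if -1.0 <= cos_theta <= 1.0:
            ok.append(((first, second), cos_theta))
    return ok

# 662 keV photon (e.g., Cs-137) split into two deposits: only the ordering
# that deposits 150 keV first is kinematically allowed.
print(allowed_orderings([150.0, 512.0]))
```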

  17. The design and results of an algorithm for intelligent ground vehicles

    NASA Astrophysics Data System (ADS)

    Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.

    2010-01-01

    This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is comprised of undergraduate computer science, engineering technology, marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.

  18. Study of 201 Non-Small Cell Lung Cancer Patients Given Stereotactic Ablative Radiation Therapy Shows Local Control Dependence on Dose Calculation Algorithm

    SciTech Connect

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir

    2014-04-01

    Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB algorithm and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy. GITV = gross internal tumor volume. Conclusions: Local control in patients who were planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative
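
    The coverage metrics compared above are standard dose-volume-histogram quantities. A small sketch of how D99 (minimum dose covering 99% of the volume) and V90 (fraction of the volume receiving at least 90% of prescription) are read off a voxel dose array; the doses below are hypothetical.

```python
import numpy as np

def dvh_metrics(dose, prescription):
    """Return (D99 in dose units, V90 in percent) for a structure's voxel doses."""
    d = np.sort(np.ravel(dose))
    d99 = d[int(np.ceil(0.01 * d.size)) - 1] if d.size else 0.0  # 1st percentile dose
    v90 = np.mean(d >= 0.9 * prescription) * 100.0
    return d99, v90

rng = np.random.default_rng(0)
ptv_dose = rng.normal(50.0, 2.0, size=10_000)   # hypothetical PTV voxel doses, Gy
d99, v90 = dvh_metrics(ptv_dose, prescription=50.0)
print(f"D99 = {d99:.1f} Gy, V90 = {v90:.1f}%")
```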

  19. Simulation and experimental results for a phase retrieval-based algorithm for far-field beam steering and shaping

    NASA Astrophysics Data System (ADS)

    Roggemann, Michael C.; Welsh, Byron M.; Stone, Bradley R.; Su, Ting Ei

    2002-02-01

    Active laser-based electro-optical (EO) sensors on future aircraft and spacecraft will be used for a variety of missions and will be required to have a number of demanding technical characteristics. A key challenge to achieving these characteristics is the development of inexpensive, high degree of freedom optical wave front control devices, and the development of effective algorithms for controlling these devices. In this paper we present our research in the development of phase retrieval-based wave front control algorithms that can be implemented with segmented liquid crystal-based wave front control devices. We have developed a wave front control algorithm that allows dynamic small-angle beam steering and shaping in the presence of an aberrating output window. Our approach is based on a phase retrieval algorithm to determine the optimal figure of a segmented wave front control device. Simulation and experimental results presented here show that this approach allows shaped far-field patterns to be created and steered over small angles.
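
    A classic way to obtain such a device figure is Gerchberg-Saxton-style iteration between the pupil and the far field, with the far field modeled as an FFT of the pupil under Fraunhofer assumptions. The sketch below retrieves a pupil phase that pushes the far field toward a target magnitude pattern; it illustrates the family of algorithms, not the authors' specific one, and all sizes and targets are arbitrary.

```python
import numpy as np

def retrieve_phase(aperture, target_amp, n_iter=200, seed=0):
    """Gerchberg-Saxton-style loop: enforce the target magnitude in the far
    field and the fixed aperture amplitude in the pupil, keeping phases."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, aperture.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(aperture * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))   # impose far-field magnitude
        phase = np.angle(np.fft.ifft2(far))             # keep only the pupil phase
    return phase

n = 64
aperture = np.zeros((n, n)); aperture[:16, :16] = 1.0   # toy square pupil
target = np.zeros((n, n)); target[4, 7] = 1.0           # steer energy to one cell
phase = retrieve_phase(aperture, target)
far = np.abs(np.fft.fft2(aperture * np.exp(1j * phase))) ** 2
print(np.unravel_index(far.argmax(), far.shape))        # ~ (4, 7)
```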

  20. A treatment algorithm for patients with large skull bone defects and first results.

    PubMed

    Lethaus, Bernd; Ter Laak, Marielle Poort; Laeven, Paul; Beerens, Maikel; Koper, David; Poukens, Jules; Kessler, Peter

    2011-09-01

    Large skull bone defects resulting from craniotomies due to cerebral insults, trauma or tumours create functional and aesthetic disturbances for the patient. The reconstruction of large osseous defects is still challenging. A treatment algorithm is presented, based on the close interaction of radiologists, computer engineers and cranio-maxillofacial surgeons. From 2004 until today, twelve consecutive patients have been operated on successfully according to this treatment plan. Titanium and polyetheretherketone (PEEK) were used to manufacture the implants. The treatment algorithm proved to be reliable: no corrections had to be made either to the skull bone or to the implant. Short operation and hospitalization periods are essential prerequisites for treatment success and justify the high expenses.

  1. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.

  2. Conventional physical therapy and physical therapy based on reflex stimulation showed similar results in children with myelomeningocele.

    PubMed

    Aizawa, Carolina Y P; Morales, Mariana P; Lundberg, Carolina; Moura, Maria Clara D Soares de; Pinto, Fernando C G; Voos, Mariana C; Hasue, Renata H

    2017-03-01

    We aimed to investigate whether infants with myelomeningocele would improve their motor ability and functional independence after ten sessions of physical therapy, and to compare the outcomes of conventional physical therapy (CPT) with those of a physical therapy program based on reflex stimulation (RPT). Twelve children were allocated to CPT (n = 6, age 18.3 months) or RPT (n = 6, age 18.2 months). The RPT involved proprioceptive neuromuscular facilitation. Children were assessed with the Gross Motor Function Measure and the Pediatric Evaluation of Disability Inventory before and after treatment. Mann-Whitney tests compared the improvement on the two scales between CPT and RPT, and Wilcoxon tests compared scores before versus after treatment within each group. Possible correlations between the two scales were tested with Spearman correlation coefficients. Both groups showed improvement in the self-care and mobility domains of both scales. There were no differences between the groups before or after the intervention. CPT and RPT showed similar results after ten weeks of treatment.

  3. Model of stacked long Josephson junctions: Parallel algorithm and numerical results in case of weak coupling

    NASA Astrophysics Data System (ADS)

    Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.

    2016-10-01

    We consider a model of a system of long Josephson junctions (LJJ) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of a standard three-point finite-difference approximation in the spatial coordinate, with the Runge-Kutta method used to solve the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of the MPI (Message Passing Interface) technology. The effect of the coupling between the junctions on the properties of the LJJ system is demonstrated, and the numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.
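
    For orientation, the sketch below applies the same numerical scheme, a three-point finite difference in space with classical RK4 in time, to a single perturbed sine-Gordon junction. The coupling terms and the MPI domain decomposition of the actual model are omitted, and all parameter values are illustrative only.

```python
import numpy as np

L, N, dt = 10.0, 201, 0.01               # junction length, grid points, time step
alpha, gamma = 0.1, 0.5                  # damping and bias current (illustrative)
dx = L / (N - 1)

def rhs(state):
    phi, v = state                       # phase and its time derivative
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2  # three-point stencil
    lap[0] = 2.0 * (phi[1] - phi[0]) / dx**2                    # zero-flux boundaries
    lap[-1] = 2.0 * (phi[-2] - phi[-1]) / dx**2                 # via ghost points
    return np.array([v, lap - np.sin(phi) - alpha * v + gamma])

state = np.zeros((2, N))                 # phi = 0, dphi/dt = 0 initially
for _ in range(5000):                    # classical RK4 for the Cauchy problem
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

mean_voltage = state[1].mean()           # spatially averaged dphi/dt ~ junction voltage
```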

  4. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight; the SM effectors are used from that point through Main Engine Cutoff (MECO). Two distinct sets of Guidance and Control (G&C) algorithms are designed to maximize the performance of these abort effectors. This paper outlines the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success is measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance is reported. Finally, potential improvements to the G&C design are discussed.

  5. A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the MUSIC Algorithm

    DTIC Science & Technology

    1991-07-01

    A comparison of direction finding results from an FFT peak identification technique with those from the MUSIC algorithm, by L.E. Montbriand, CRC Report No. 1438, Ottawa, July 1991.

  6. Genomic and Enzymatic Results Show Bacillus cellulosilyticus Uses a Novel Set of LPXTA Carbohydrases to Hydrolyze Polysaccharides

    PubMed Central

    Mead, David; Drinkwater, Colleen; Brumm, Phillip J.

    2013-01-01

    Background Alkaliphilic Bacillus species are intrinsically interesting due to the bioenergetic problems posed by growth at high pH and high salt. Three alkaline cellulases have been cloned, sequenced and expressed from Bacillus cellulosilyticus N-4 (Bcell) making it an excellent target for genomic sequencing and mining of biomass-degrading enzymes. Methodology/Principal Findings The genome of Bcell is a single chromosome of 4.7 Mb with no plasmids present and three large phage insertions. The most unusual feature of the genome is the presence of 23 LPXTA membrane anchor proteins; 17 of these are annotated as involved in polysaccharide degradation. These two values are significantly higher than seen in any other Bacillus species. This high number of membrane anchor proteins is seen only in pathogenic Gram-positive organisms such as Listeria monocytogenes or Staphylococcus aureus. Bcell also possesses four sortase D subfamily 4 enzymes that incorporate LPXTA-bearing proteins into the cell wall; three of these are closely related to each other and unique to Bcell. Cell fractionation and enzymatic assay of Bcell cultures show that the majority of polysaccharide degradation is associated with the cell wall LPXTA-enzymes, an unusual feature in Gram-positive aerobes. Genomic analysis and growth studies both strongly argue against Bcell being a truly cellulolytic organism, in spite of its name. Preliminary results suggest that fungal mycelia may be the natural substrate for this organism. Conclusions/Significance Bacillus cellulosilyticus N-4, in spite of its name, does not possess any of the genes necessary for crystalline cellulose degradation, demonstrating the risk of classifying microorganisms without the benefit of genomic analysis. Bcell is the first Gram-positive aerobic organism shown to use predominantly cell-bound, non-cellulosomal enzymes for polysaccharide degradation. The LPXTA-sortase system utilized by Bcell may have applications both in anchoring

  7. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plan is to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  8. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plan is to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  9. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allow the aerodynamics to be decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstructions, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
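
    The following sketch illustrates the flush-air-data idea behind such a system: given pressures at known port locations and a modeled surface-pressure distribution, angle of attack and dynamic pressure are recovered by least squares. The modified Newtonian model, the port geometry, and all numbers here are our assumptions, not the actual MEADS processing.

```python
import numpy as np
from scipy.optimize import least_squares

# port locations, expressed as surface angles from the stagnation line (invented)
port_theta = np.deg2rad([0.0, 10.0, 20.0, 30.0, -10.0, -20.0, -30.0])

def model_pressures(params, p_static=700.0, cp_max=1.83):
    alpha, qbar = params                           # angle of attack (rad), dynamic pressure (Pa)
    cp = cp_max * np.cos(port_theta - alpha) ** 2  # modified Newtonian pressure coefficient
    return p_static + qbar * cp

# synthetic "measured" pressures for alpha = -2 deg, qbar = 5000 Pa, plus noise
truth = model_pressures([np.deg2rad(-2.0), 5000.0])
meas = truth + np.random.default_rng(1).normal(0.0, 5.0, truth.shape)

sol = least_squares(lambda p: model_pressures(p) - meas, x0=[0.0, 4000.0])
alpha_hat_deg, qbar_hat = np.rad2deg(sol.x[0]), sol.x[1]
print("alpha ~ %.2f deg, qbar ~ %.0f Pa" % (alpha_hat_deg, qbar_hat))
```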

  10. Passification based simple adaptive control of quadrotor attitude: Algorithms and testbed results

    NASA Astrophysics Data System (ADS)

    Tomashevich, Stanislav; Belyavskyi, Andrey; Andrievsky, Boris

    2017-01-01

    In this paper, the Passification Method with the Implicit Reference Model (IRM) approach is applied to designing a simple adaptive controller for quadrotor attitude. The IRM design technique makes it possible to relax the matching condition known for habitual MRAC systems, and leads to simple adaptive controllers ensuring fast tuning of the controller gains and high robustness with respect to nonlinearities in the control loop, external disturbances, and unmodeled plant dynamics. For experimental evaluation of the adaptive system's performance, a 2DOF laboratory setup has been created. The testbed allows new control algorithms to be tested safely in a small laboratory space and changes to be made promptly in case of failure. The testing results of simple adaptive control of quadrotor attitude are presented, demonstrating the efficacy of the applied method. The experiments demonstrate good performance quality and a high adaptation rate of the simple adaptive control system.
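
    A single-axis toy version of a passification-based simple adaptive law is sketched below. The plant model, the passifying output, and all gains are our assumptions for illustration, not the paper's design.

```python
import numpy as np

J, dt = 0.02, 0.001                    # inertia (kg m^2) and integration step (s)
gamma, tau = 50.0, 0.3                 # adaptation gain and passifying blend (assumed)
theta, omega, k = 0.5, 0.0, 0.0        # initial attitude error 0.5 rad, zero rate and gain

for _ in range(20000):                 # 20 s of simulated flight, Euler integration
    sigma = theta + tau * omega        # passifying output: PD-like blend of error and rate
    u = -k * sigma                     # simple adaptive control signal
    k += gamma * sigma ** 2 * dt       # adaptation law: gain grows until sigma decays
    omega += (u / J) * dt              # rigid-body plant: J * theta_ddot = u
    theta += omega * dt

print("final error %.4f rad, adapted gain %.2f" % (theta, k))
```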

  11. Evidence-based algorithm for diagnosis and assessment in psoriatic arthritis: results by Italian DElphi in psoriatic Arthritis (IDEA).

    PubMed

    Lapadula, G; Marchesoni, A; Salaffi, F; Ramonda, R; Salvarani, C; Punzi, L; Costa, L; Caso, F; Simone, D; Baiocchi, G; Scioscia, C; Di Carlo, M; Scarpa, R; Ferraccioli, G

    2016-12-16

    Psoriatic arthritis (PsA) is a chronic inflammatory disease involving skin, peripheral joints, entheses, and the axial skeleton. The disease is frequently associated with extra-articular manifestations (EAMs) and comorbidities. In order to create a protocol for PsA diagnosis and the global assessment of patients, with an algorithm based on anamnestic, clinical, laboratory and imaging procedures, we established a Delphi study on a national scale, named Italian DElphi in psoriatic Arthritis (IDEA). After a literature search, a Delphi poll involving 52 rheumatologists was performed. On the basis of the literature search, 202 potential items were identified. The steering committee planned at least two Delphi rounds. In the first Delphi round, the experts judged each of the 202 items using a score ranging from 1 to 9 based on its increasing clinical relevance. The question posed to the experts was: "How relevant is this procedure/observation/sign/symptom for the assessment of a psoriatic arthritis patient?" Proposals of additional items, not included in the questionnaire, were also encouraged. The results of the poll were discussed by the steering committee, which evaluated the necessity of removing selected procedures or adding additional ones, according to criteria of clinical appropriateness and sustainability. A total of 43 recommended diagnosis and assessment procedures, recognized as items, were derived by combination of the Delphi survey and two National Expert Meetings, and grouped in different areas. Favourable opinion was reached in 100% of cases for several aspects covering the following areas: medical (familial and personal) history, physical evaluation, imaging tools, second-level laboratory tests, disease activity measurement and extra-articular manifestations. After performing PsA diagnosis, identification of specific disease activity scores and clinimetric approaches were suggested for assessing the different clinical subsets. Further, results showed the need for

  12. A collaborative accountable care model in three practices showed promising early results on costs and quality of care.

    PubMed

    Salmon, Richard B; Sanderson, Mark I; Walters, Barbara A; Kennedy, Karen; Flores, Robert C; Muney, Alan M

    2012-11-01

    Cigna's Collaborative Accountable Care initiative provides financial incentives to physician groups and integrated delivery systems to improve the quality and efficiency of care for patients in commercial open-access benefit plans. Registered nurses who serve as care coordinators employed by participating practices are a central feature of the initiative. They use patient-specific reports and practice performance reports provided by Cigna to improve care coordination, identify and close care gaps, and address other opportunities for quality improvement. We report interim quality and cost results for three geographically and structurally diverse provider practices in Arizona, New Hampshire, and Texas. Although not statistically significant, these early results revealed favorable trends in total medical costs and quality of care, suggesting that a shared-savings accountable care model and collaborative support from the payer can enable practices to take meaningful steps toward full accountability for care quality and efficiency.

  13. Volar locking distal radius plates show better short-term results than other treatment options: A prospective randomised controlled trial

    PubMed Central

    Drobetz, Herwig; Koval, Lidia; Weninger, Patrick; Luscombe, Ruth; Jeffries, Paula; Ehrendorfer, Stefan; Heal, Clare

    2016-01-01

    AIM To compare the outcomes of displaced distal radius fractures treated with volar locking plates and immediate postoperative mobilisation with the outcomes of these fractures treated with modalities that necessitate 6 wk of wrist immobilisation. METHODS A prospective, randomised, controlled, single-centre trial was conducted in which 56 patients with a displaced distal radius fracture were randomised to treatment either with a volar locking plate (n = 29) or with another treatment modality (n = 27; cast immobilisation with or without wires or external fixator). Outcomes were measured at 12 wk. Functional outcome scores measured were the Patient-Rated Wrist Evaluation (PRWE) Score; Disabilities of the Arm, Shoulder and Hand; and activities of daily living (ADLs). Clinical outcomes were wrist range of motion and grip strength. Radiographic parameters were volar inclination and ulnar variance. RESULTS Patients in the volar locking plate group had significantly better PRWE scores, ADL scores, grip strength and range of extension at three months compared with the control group. All radiological parameters were significantly better in the volar locking plate group at 3 mo. CONCLUSION The present study suggests that volar locking plates produced significantly better functional and clinical outcomes at 3 mo compared with other treatment modalities. Anatomical reduction was significantly more likely to be preserved in the plating group. Level of evidence: II. PMID:27795951

  14. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle

    PubMed Central

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008
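
    The analysis pipeline can be pictured with the following sketch (ours, not the authors' code): standardize the EPDs, cluster with k-means (k = 3), and take the first linear discriminant (LD1) as a multivariate index for ranking animals. The data here are random placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
epd = rng.normal(size=(500, 8))            # 500 animals x 8 growth/carcass EPDs (synthetic)
z = StandardScaler().fit_transform(epd)    # standardize: zero mean, unit variance

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
lda = LinearDiscriminantAnalysis(n_components=1).fit(z, clusters)
ld1 = lda.transform(z).ravel()             # multivariate index (LD1) per animal
ranking = np.argsort(ld1)[::-1]            # candidate selection order
```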

  15. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle.

    PubMed

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection.

  16. Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results.

    DTIC Science & Technology

    1981-06-01

    The report presents theoretical and experimental results for a breadth-first (chart) parsing algorithm; synthetic sequences were also constructed which generate products of two Catalan numbers and the Fibonacci numbers. Key words: parsing, chart parsing, natural language processing, Earley's algorithm.

  17. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals, such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight control- and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.
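
    The standard parity-vector construction that underlies vector-based FDI of this kind can be sketched as follows (our illustration, not the flight software): with a skewed array m = Hw + fault, any matrix V spanning the left null space of H yields a parity vector p = Vm that is insensitive to the true body rate w and responds only to sensor failures. The geometry, thresholds, and noise levels below are invented.

```python
import numpy as np
from scipy.linalg import null_space

s = 1.0 / np.sqrt(3.0)
H = np.array([[1.0, 0.0, 0.0],           # measurement axes of a hypothetical
              [0.0, 1.0, 0.0],           # six-gyro skewed array (rows)
              [0.0, 0.0, 1.0],
              [s, s, s],
              [s, -s, s],
              [-s, s, s]])
V = null_space(H.T).T                    # parity matrix: V @ H == 0

rng = np.random.default_rng(2)
w = np.array([0.10, -0.05, 0.02])        # true body rate (rad/s)
fault = np.zeros(6)
fault[2] = 0.3                           # bias-shift failure injected on gyro 3
m = H @ w + fault + rng.normal(0.0, 1e-3, 6)

p = V @ m                                # parity vector: near zero if fault-free
detected = np.linalg.norm(p) > 5e-3      # detection threshold tuned to the noise level
scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
print(detected, int(np.argmax(scores)))  # isolate: column of V most parallel to p -> index 2
```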

  18. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
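
    The arithmetic at the core of such a descent algorithm can be illustrated as follows (all performance numbers are invented; the real algorithm used linear approximations of airplane performance plus wind and temperature corrections): find where to start an idle-thrust descent so as to cross the metering fix at the required altitude and time.

```python
cruise_alt, fix_alt = 35000.0, 10000.0   # ft: cruise and metering-fix crossing altitudes
rod = 2200.0                             # ft/min: idle-thrust descent rate for the schedule
gs_descent, gs_cruise = 380.0, 450.0     # kt: wind-corrected average ground speeds

t_descent = (cruise_alt - fix_alt) / rod            # minutes spent descending
d_descent = gs_descent * t_descent / 60.0           # nmi covered during the descent

dist_to_fix = 120.0                                 # nmi: current distance to the fix
t_cruise = (dist_to_fix - d_descent) / gs_cruise * 60.0
eta = t_cruise + t_descent                          # minutes until fix crossing
print("top of descent %.1f nmi before the fix; ETA %.1f min" % (d_descent, eta))
```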

  19. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  20. A Result on the Computational Complexity of Heuristic Estimates for the A* Algorithm.

    DTIC Science & Technology

    1983-01-01

    The report compares the algorithms according to the criterion "number of node expansions", which is discussed and generally accepted in the published literature. Cited references include Hart, Nilsson and Raphael (1968) and Kibler (1982), "Natural Generation of Admissible Heuristics", Technical Report TR-188.

  1. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
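
    The fourth algorithm mentioned, combinatorial optimization by simulated annealing, follows the generic skeleton sketched below. The pedigree likelihood is replaced here by a placeholder score over a binary phase vector, so this shows only the annealing mechanics, not a real haplotyping objective.

```python
import numpy as np

rng = np.random.default_rng(3)

def score(config):
    # placeholder "cost": count of adjacent disagreements (stand-in for a
    # pedigree likelihood penalizing implausible recombination patterns)
    return int(np.sum(config[:-1] != config[1:]))

config = rng.integers(0, 2, 50)          # a binary phase-assignment vector
best = config.copy()
temp = 2.0
for step in range(20000):
    cand = config.copy()
    cand[rng.integers(len(cand))] ^= 1   # propose: flip the phase at one locus
    delta = score(cand) - score(config)
    if delta <= 0 or rng.random() < np.exp(-delta / temp):
        config = cand                    # Metropolis acceptance rule
    if score(config) < score(best):
        best = config.copy()
    temp *= 0.9997                       # geometric cooling schedule
```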

  2. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

  3. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.

  4. Diagnosis algorithm for leptospirosis in dogs: disease and vaccination effects on the serological results.

    PubMed

    Andre-Fontaine, G

    2013-05-11

    Leptospirosis is a common disease in dogs, despite their current vaccination. Vet surgeons may use a serological test to verify their clinical observations. The gold standard is the Microscopic Agglutination Test (MAT). After infection, the dog produces agglutinating antibodies against the lipopolyosidic antigens shared by the infectious strain but also, after vaccination, against the lipopolyosidic antigens shared by the serovars used in the bacterins (Leptospira species serovars Icterohaemorrhagiae and Canicola in most countries). MATs were performed in a group of 102 healthy field dogs and a group of 6 Canicola-challenged dogs. A diagnosis algorithm was constructed based on age, previous vaccinations, kinetics of the agglutinating antibodies after infection or vaccination, and the delay after onset of the disease. This algorithm was applied to 169 well-documented sera (clinical and vaccine data) from 272 sick dogs with suspected leptospirosis. In total, 102 dogs had been vaccinated according to the usual vaccination scheme and 30 were not vaccinated. Leptospirosis was confirmed by MAT in 37/102 (36.2 per cent) vaccinated dogs and remained probable in 14 others (13.7 per cent), thus indicating the permanent exposure of dogs and the weakness of the protection offered by current vaccines against pathogenic Leptospira.

  5. First Results from the OMI Rotational Raman Scattering Cloud Pressure Algorithm

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Vasilkov, Alexander P.

    2006-01-01

    We have developed an algorithm to retrieve scattering cloud pressures and other cloud properties with the Aura Ozone Monitoring Instrument (OMI). The scattering cloud pressure is retrieved using the effects of rotational Raman scattering (RRS). It is defined as the pressure of a Lambertian surface that would produce the observed amount of RRS consistent with the derived reflectivity of that surface. The independent pixel approximation is used in conjunction with the Lambertian-equivalent reflectivity model to provide an effective radiative cloud fraction and scattering pressure in the presence of broken or thin cloud. The derived cloud pressures will enable accurate retrievals of trace gas mixing ratios, including ozone, in the troposphere within and above clouds. We describe details of the algorithm that will be used for the first release of these products. We compare our scattering cloud pressures with cloud-top pressures and other cloud properties from the Aqua Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument. OMI and MODIS are part of the so-called A-train satellites flying in formation within 30 min of each other. Differences between OMI and MODIS are expected because the MODIS observations in the thermal infrared are more sensitive to the cloud top whereas the backscattered photons in the ultraviolet can penetrate deeper into clouds. Radiative transfer calculations are consistent with the observed differences. The OMI cloud pressures are shown to be correlated with the cirrus reflectance. This relationship indicates that OMI can probe through thin or moderately thick cirrus to lower lying water clouds.

  6. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

    AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. Monthly mean August 2014 Version-6.22 AIRS and CrIS products agree reasonably well with OMPS, CERES, and with each other. JPL plans to process AIRS and CrIS for many months and compare interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. We are also working with JPL to develop a joint AIRS/CrIS level-1 to level-3 processing system using a still-to-be-finalized Version-7 retrieval algorithm. The NASA Goddard DISC will eventually use this system to reprocess all AIRS and recalibrated CrIS/ATMS data.

  7. Arctic Mixed-Phase Cloud Properties from AERI Lidar Observations: Algorithm and Results from SHEBA

    SciTech Connect

    Turner, David D.

    2005-04-01

    A new approach to retrieve microphysical properties from mixed-phase Arctic clouds is presented. This mixed-phase cloud property retrieval algorithm (MIXCRA) retrieves cloud optical depth, ice fraction, and the effective radius of the water and ice particles from ground-based, high-resolution infrared radiance and lidar cloud boundary observations. The theoretical basis for this technique is that the absorption coefficient of ice is greater than that of liquid water from 10 to 13 μm, whereas liquid water is more absorbing than ice from 16 to 25 μm. MIXCRA retrievals are only valid for optically thin (τ_visible < 6) single-layer clouds when the precipitable water vapor is less than 1 cm. MIXCRA was applied to the Atmospheric Emitted Radiance Interferometer (AERI) data that were collected during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment from November 1997 to May 1998, where 63% of all of the cloudy scenes above the SHEBA site met this specification. The retrieval determined that approximately 48% of these clouds were mixed phase and that a significant number of clouds (during all 7 months) contained liquid water, even for cloud temperatures as low as 240 K. The retrieved distributions of effective radii for water and ice particles in single-phase clouds are shown to be different than the effective radii in mixed-phase clouds.

  8. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

    NASA Technical Reports Server (NTRS)

    Swartz, W. H.; Bucesla, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P, K,; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

    2012-01-01

    Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, for examining the spatial and temporal patterns of NOx emissions, for constraining top-down NOx inventories, and for the estimation of NOx lifetimes.

  9. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

    PubMed

    Pancheliuga, V A; Pancheliuga, M S

    2013-01-01

    In the present work a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed on the basis of short segments of a time series of fluctuations and the fractal dimension of the segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows a more precise determination of the fractal dimension by using the "all possible combinations" method. The application of the method to noise-like time series analysis leads to results which could previously be obtained only by means of the histogram method based on expert comparisons of histogram shapes.
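
    The record does not specify the estimator, but a common way to assign a fractal dimension to a short time-series segment is Higuchi's method, sketched below as a point of reference (this is our illustration, not the authors' "all possible combinations" refinement).

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the fractal dimension of a 1-D series via Higuchi's method."""
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                       # all starting offsets for step k
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this offset/step, per Higuchi
            l = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k)
            lengths.append(l / k)
        lk.append(np.mean(lengths))
    # L(k) ~ k^(-D): slope of log L(k) vs log(1/k) estimates the dimension D
    coef = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return coef[0]

rng = np.random.default_rng(4)
print(higuchi_fd(np.cumsum(rng.normal(size=512))))   # Brownian-like noise: ~1.5
```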

  10. Usefulness of a metal artifact reduction algorithm for orthopedic implants in abdominal CT: phantom and clinical study results.

    PubMed

    Jeong, Seonji; Kim, Se Hyung; Hwang, Eui Jin; Shin, Cheong-Il; Han, Joon Koo; Choi, Byung Ihn

    2015-02-01

    OBJECTIVE. The purpose of this study was to evaluate the usefulness of a metal artifact reduction (MAR) algorithm for orthopedic prostheses in phantom and clinical CT. MATERIALS AND METHODS. An agar phantom with two sets of spinal screws was scanned at various tube voltage (80-140 kVp) and tube current-time (34-1032 mAs) settings. The orthopedic MAR algorithm was combined with filtered back projection (FBP) or iterative reconstruction. The mean SDs in three ROIs were compared among four datasets (FBP, iterative reconstruction, FBP with orthopedic MAR, and iterative reconstruction with orthopedic MAR). For the clinical study, the mean SDs of three ROIs and 4-point scaled image quality in 52 patients with metallic orthopedic prostheses were compared between CT images acquired with and without orthopedic MAR. The presence and type of image quality improvement with orthopedic MAR and the presence of orthopedic MAR-related new artifacts were also analyzed. RESULTS. In the phantom study, the mean SD with orthopedic MAR was significantly lower than that without orthopedic MAR regardless of dose settings and reconstruction algorithms (FBP versus iterative reconstruction). The mean SD near the metallic prosthesis in 52 patients was significantly lower on CT images with orthopedic MAR (28.04 HU) than those without it (49.21 HU). Image quality regarding metallic artifact was significantly improved with orthopedic MAR (rating of 2.60 versus 1.04). Notable reduction of metallic artifacts and better depiction of abdominal organs were observed in 45 patients. Diagnostic benefit was achieved in six patients, but orthopedic MAR-related new artifacts were seen in 30 patients. CONCLUSION. Use of the orthopedic MAR algorithm significantly reduces metal artifacts in CT of both phantoms and patients and has potential for improving diagnostic performance in patients with severe metallic artifacts.

  11. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the…

  12. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  13. Deriving Arctic Cloud Microphysics at Barrow, Alaska. Algorithms, Results, and Radiative Closure

    SciTech Connect

    Shupe, Matthew D.; Turner, David D.; Zwink, Alexander; Thieman, Mandana M.; Mlawer, Eli J.; Shippert, Timothy

    2015-07-01

    Cloud phase and microphysical properties control the radiative effects of clouds in the climate system and are therefore crucial to characterize in a variety of conditions and locations. An Arctic-specific, ground-based, multi-sensor cloud retrieval system is described here and applied to two years of observations from Barrow, Alaska. Over these two years, clouds occurred 75% of the time, with cloud ice and liquid each occurring nearly 60% of the time. Liquid water occurred at least 25% of the time even in the winter, and existed up to heights of 8 km. The vertically integrated mass of liquid was typically larger than that of ice. While it is generally difficult to evaluate the overall uncertainty of a comprehensive cloud retrieval system of this type, radiative flux closure analyses were performed where flux calculations using the derived microphysical properties were compared to measurements at the surface and top-of-atmosphere. Radiative closure biases were generally smaller for cloudy scenes relative to clear skies, while the variability of flux closure results was only moderately larger than under clear skies. The best closure at the surface was obtained for liquid-containing clouds. Radiative closure results were compared to those based on a similar, yet simpler, cloud retrieval system. These comparisons demonstrated the importance of accurate cloud phase classification, and specifically the identification of liquid water, for determining radiative fluxes. Enhanced retrievals of liquid water path for thin clouds were also shown to improve radiative flux calculations.

  14. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are discussed, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
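
    The basic concepts the report introduces, populations, selection, crossover, and mutation, fit in a few lines. The sketch below is a minimal generational GA maximizing a toy ones-counting fitness; all parameter choices are arbitrary.

```python
import random

def fitness(bits):
    return sum(bits)                     # toy objective: maximize the number of ones

def tournament(pop):
    a, b = random.sample(pop, 2)         # binary tournament selection
    return max(a, b, key=fitness)

pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(60)]
for gen in range(100):
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, len(p1))                      # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [b ^ (random.random() < 0.01) for b in child]   # bit-flip mutation
        nxt.append(child)
    pop = nxt

print(max(map(fitness, pop)))            # best fitness in the final generation
```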

  15. The equation of state for stellar envelopes. II - Algorithm and selected results

    NASA Technical Reports Server (NTRS)

    Mihalas, Dimitri; Dappen, Werner; Hummer, D. G.

    1988-01-01

    A free-energy-minimization method for computing the dissociation and ionization equilibrium of a multicomponent gas is discussed. The adopted free energy includes terms representing the translational free energy of atoms, ions, and molecules; the internal free energy of particles with excited states; the free energy of a partially degenerate electron gas; and the configurational free energy from shielded Coulomb interactions among charged particles. Internal partition functions are truncated using an occupation probability formalism that accounts for perturbations of bound states by both neutral and charged perturbers. The entire theory is analytical and differentiable to all orders, so it is possible to write explicit analytical formulas for all derivatives required in a Newton-Raphson iteration; these are presented to facilitate future work. Some representative results for both Saha and free-energy-minimization equilibria are presented for a hydrogen-helium plasma with N(He)/N(H) = 0.10. These illustrate nicely the phenomena of pressure dissociation and ionization, and also demonstrate vividly the importance of choosing a reliable cutoff procedure for internal partition functions.
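
    As a toy companion to the equilibria discussed above, the sketch below solves the single-species Saha ionization balance for pure hydrogen by bisection. The paper's actual method minimizes a much richer free energy with Newton-Raphson, so this illustrates only the simplest limiting case.

```python
import numpy as np

def saha_ion_fraction(T, n_total):
    """Ionization fraction x of pure hydrogen from x^2/(1-x) = S(T)/n_total (SI units)."""
    k_B, m_e, h = 1.380649e-23, 9.109e-31, 6.626e-34
    chi = 13.6 * 1.602e-19                       # hydrogen ionization energy (J)
    # Saha factor; the statistical-weight ratio 2*g+/g0 = 1 for hydrogen
    S = (2 * np.pi * m_e * k_B * T / h**2) ** 1.5 * np.exp(-chi / (k_B * T))
    f = lambda x: x * x / (1.0 - x) - S / n_total
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(80):                          # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(saha_ion_fraction(1.0e4, 1.0e20))          # partial-ionization regime
```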

  16. Automated analysis of Kokee-Wettzell Intensive VLBI sessions—algorithms, results, and recommendations

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2015-11-01

    The time-dependent variations in the rotation and orientation of the Earth are represented by a set of Earth Orientation Parameters (EOP). Currently, Very Long Baseline Interferometry (VLBI) is the only technique able to measure all EOP simultaneously and to provide direct observation of universal time, usually expressed as UT1-UTC. To produce estimates for UT1-UTC on a daily basis, 1-h VLBI experiments involving two or three stations are organised by the International VLBI Service for Geodesy and Astrometry (IVS), the IVS Intensive (INT) series. There is an ongoing effort to minimise the turn-around time for the INT sessions in order to achieve near real-time and high quality UT1-UTC estimates. As a step further towards true fully automated real-time analysis of UT1-UTC, we carry out an extensive investigation with INT sessions on the Kokee-Wettzell baseline. Our analysis starts with the first versions of the observational files in S- and X-band and includes an automatic group delay ambiguity resolution and ionospheric calibration. Several different analysis strategies are investigated. In particular, we focus on the impact of external information, such as meteorological and cable delay data provided in the station log-files, and a priori EOP information. The latter is studied by extensive Monte Carlo simulations. Our main findings are that it is easily possible to analyse the INT sessions in a fully automated mode to provide UT1-UTC with very low latency. The information found in the station log-files is important for the accuracy of the UT1-UTC results, provided that the data in the station log-files are reliable. Furthermore, to guarantee UT1-UTC with an accuracy of less than 20 μs, it is necessary to use predicted a priori polar motion data in the analysis that are not older than 12 h.

  17. Development of a Quasi-3D Multiscale Modeling Framework: Motivation, Basic Algorithm and Preliminary results

    NASA Astrophysics Data System (ADS)

    Jung, Joon-Hee; Arakawa, Akio

    2010-04-01

    A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D) multi-scale modeling framework (MMF), is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM). It combines a GCM with a Q3D CRM that has the horizontal domain consisting of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computing efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of cloud and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from background fields that are obtained by interpolations from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole system of the Q3D MMF can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of application, without changing the formulation of model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement from a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse-resolution 3D CRM

  18. Analysis of conservative tracer measurement results using the Frechet distribution at planted horizontal subsurface flow constructed wetlands filled with coarse gravel and showing the effect of clogging processes.

    PubMed

    Dittrich, Ernő; Klincsik, Mihály

    2015-11-01

    A mathematical process, developed in the Maple environment, has been successful in decreasing the error of measurement results and in the precise calculation of the moments of corrected tracer functions. It was proved that with this process the measured tracer results of horizontal subsurface flow constructed wetlands filled with coarse gravel (HSFCW-C) can be fitted more accurately than with the conventionally used distribution functions (Gaussian, lognormal, Fick (inverse Gaussian) and gamma). This statement is true only for planted HSFCW-Cs; the analysis of unplanted HSFCW-Cs needs more research. The result of the analysis shows that the conventional solutions (the completely stirred series tank reactor (CSTR) model and the convection-dispersion transport (CDT) model) cannot describe these types of transport processes with sufficient accuracy. These outcomes can help in developing better process descriptions of very difficult transport processes in HSFCW-Cs. Furthermore, a new mathematical process can be developed for the calculation of real hydraulic residence time (HRT) and dispersion coefficient values. The presented method can be generalized to other kinds of hydraulic environments.
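
    Fitting a Fréchet distribution to a residence-time curve is straightforward with SciPy, where the Fréchet appears as invweibull. The sketch below uses synthetic data, not the paper's Maple process, and compares a Fréchet fit against a lognormal alternative by log-likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# synthetic tracer residence times (h), drawn from a Frechet (inverse Weibull)
rtd = stats.invweibull.rvs(c=2.5, scale=20.0, size=400, random_state=rng)

frechet = stats.invweibull.fit(rtd, floc=0.0)    # fitted (shape c, loc, scale)
lognorm = stats.lognorm.fit(rtd, floc=0.0)
ll_f = stats.invweibull.logpdf(rtd, *frechet).sum()
ll_l = stats.lognorm.logpdf(rtd, *lognorm).sum()
print("Frechet log-lik %.1f vs lognormal %.1f" % (ll_f, ll_l))

# moments of the fitted distribution stand in for hydraulic residence time statistics
mean_hrt = stats.invweibull.mean(*frechet)
```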

  19. T-cell lines from 2 patients with adenosine deaminase (ADA) deficiency showed the restoration of ADA activity resulted from the reversion of an inherited mutation.

    PubMed

    Ariga, T; Oda, N; Yamaguchi, K; Kawamura, N; Kikuta, H; Taniuchi, S; Kobayashi, Y; Terada, K; Ikeda, H; Hershfield, M S; Kobayashi, K; Sakiyama, Y

    2001-05-01

    Inherited deficiency of adenosine deaminase (ADA) results in one of the autosomal recessive forms of severe combined immunodeficiency. This report discusses 2 patients with ADA deficiency from different families, in whom a possible reverse mutation had occurred. The novel mutations were identified in the ADA gene from the patients, and both their parents were revealed to be carriers. Unexpectedly, established patient T-cell lines, not B-cell lines, showed half-normal levels of ADA enzyme activity. Reevaluation of the mutations in these T-cell lines indicated that one of the inherited ADA gene mutations was reverted in both patients. At least one of the patients seemed to possess the revertant cells in vivo; however, the mutant cells might have overcome the revertant after receiving ADA enzyme replacement therapy. These findings may have significant implications regarding the prospects for stem cell gene therapy for ADA deficiency.

  20. How often do German children and adolescents show signs of common mental health problems? Results from different methodological approaches – a cross-sectional study

    PubMed Central

    2014-01-01

    Background Child and adolescent mental health problems are ubiquitous and burdensome. Their impact on functional disability, the high rates of accompanying medical illnesses and the potential to last until adulthood make them a major public health issue. While methodological factors cause variability of the results from epidemiological studies, there is a lack of prevalence rates of mental health problems in children and adolescents according to ICD-10 criteria from nationally representative samples. International findings suggest only a small proportion of children with function impairing mental health problems receive treatment, but information about the health care situation of children and adolescents is scarce. The aim of this epidemiological study was a) to classify symptoms of common mental health problems according to ICD-10 criteria in order to compare the statistical and clinical case definition strategies using a single set of data and b) to report ICD-10 codes from health insurance claims data. Methods a) Based on a clinical expert rating, questionnaire items were mapped on ICD-10 criteria; data from the Mental Health Module (BELLA study) were analyzed for relevant ICD-10 and cut-off criteria; b) Claims data were analyzed for relevant ICD-10 codes. Results According to parent report 7.5% (n = 208) met the ICD-10 criteria of a mild depressive episode and 11% (n = 305) showed symptoms of depression according to cut-off score; Anxiety is reported in 5.6% (n = 156) and 11.6% (n = 323), conduct disorder in 15.2% (n = 373) and 14.6% (n = 357). Self-reported symptoms in 11 to 17 year olds resulted in 15% (n = 279) reporting signs of a mild depression according to ICD-10 criteria (vs. 16.7% (n = 307) based on cut-off) and 10.9% (n = 201) reported symptoms of anxiety (vs. 15.4% (n = 283)). Results from routine data identify 0.9% (n = 1,196) with a depression diagnosis, 3.1% (n = 6,729) with anxiety and 1.4% (n

  1. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects.

    PubMed

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás; Koskela, Antti; Tuukkanen, Juha; Ohlsson, Claes; Rozell, Björn; Eriksson, Maria

    2015-08-01

    Hutchinson-Gilford progeria syndrome (HGPS) is a rare premature aging disorder that is most commonly caused by a de novo point mutation in exon 11 of the LMNA gene, c.1824C>T, which results in an increased production of a truncated form of lamin A known as progerin. In this study, we used a mouse model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant progerin splicing give hope to patients who are affected by HGPS.

  2. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel-efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B737 airplane to make an idle-thrust, clean-configuration descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm and the results of the flight tests are discussed.

  3. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
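
    A miniature of the idea, though not HEAD-DT itself, can be written as a GA over the design components of a decision-tree inducer, scoring each candidate design by cross-validation. The component lists and GA settings below are arbitrary, and scikit-learn's CART implementation stands in for the evolved inducer.

```python
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
CRITERIA = ["gini", "entropy"]           # candidate design components
DEPTHS = [2, 3, 5, 8, None]
MIN_SPLITS = [2, 5, 10, 20]

def evaluate(design):
    crit, depth, min_split = design
    clf = DecisionTreeClassifier(criterion=crit, max_depth=depth,
                                 min_samples_split=min_split, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()   # fitness = CV accuracy

pop = [(random.choice(CRITERIA), random.choice(DEPTHS), random.choice(MIN_SPLITS))
       for _ in range(12)]
for gen in range(15):
    pop.sort(key=evaluate, reverse=True)
    elite = pop[:4]                      # keep the best designs
    pop = elite + [(random.choice(CRITERIA),   # recombine/mutate components
                    random.choice([e[1] for e in elite] + DEPTHS),
                    random.choice([e[2] for e in elite] + MIN_SPLITS))
                   for _ in range(8)]

print("best design:", max(pop, key=evaluate))
```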

  4. Comparative Results of AIRS AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. The objective of this research is to prepare for generation of long-term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  5. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS CDRs using a retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  6. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the…

  7. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested, and proven to be functional. Both formulations converged on similar solutions and were therefore proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  8. ICPES analyses using full image spectra and astronomical data fitting algorithms to provide diagnostic and result information

    SciTech Connect

    Spencer, W.A.; Goode, S.R.

    1997-10-01

    ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.

  9. Mathematical modelling in Matlab of the experimental results showing the electrochemical potential difference - temperature of the WC coatings immersed in a NaCl solution

    NASA Astrophysics Data System (ADS)

    Benea, M. L.; Benea, O. D.

    2016-02-01

    The method used for assessing the corrosion behaviour of the WC coatings deposited by plasma spraying on a martensitic stainless steel substrate consists in measuring the electrochemical potential of the coating, and that of the substrate, immersed in a NaCl solution as corrosive agent. Mathematical processing of the experimental results in Matlab allowed us to correlate the electrochemical potential of the coating with the solution temperature; the correlation is very well described by curves whose equations were obtained by order-4 interpolation.
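
    As an illustration of the fitting step described above, the snippet below performs an order-4 polynomial fit with NumPy rather than Matlab; the temperature and potential values are invented for the example and are not the paper's data.

      import numpy as np

      temperature_C = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
      potential_mV = np.array([-310.0, -325.0, -333.0, -345.0, -362.0, -380.0])

      coeffs = np.polyfit(temperature_C, potential_mV, deg=4)  # order-4 fit
      fit = np.poly1d(coeffs)

      print(fit(45.0))            # interpolated potential at 45 degrees C
      print(np.round(coeffs, 6))  # coefficients, highest power first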

  10. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than without splitting.

  11. In axial spondyloarthritis, never smokers, ex-smokers and current smokers show a gradient of increasing disease severity - results from the Scotland Registry for Ankylosing Spondylitis (SIRAS).

    PubMed

    Jones, Gareth T; Ratz, Tiara; Dean, Linda E; Macfarlane, Gary J; Atzeni, Fabiola

    2016-11-29

    Objectives: To examine the relationship between smoking, smoking cessation, and disease characteristics/quality of life (QoL) in spondyloarthritis. Methods: The Scotland Registry for Ankylosing Spondylitis collects data from clinically diagnosed patients with spondyloarthritis. Clinical data, including Bath Ankylosing Spondylitis indices of disease activity (BASDAI) and function (BASFI), were obtained from medical records. Postal questionnaires provided information on smoking status and QoL (Ankylosing Spondylitis QoL questionnaire; ASQoL). Linear and logistic regression quantified the effect of smoking, and smoking cessation, on various disease-specific and QoL outcomes, adjusting for age, sex, deprivation, education and alcohol status. Results are presented as regression coefficients (β) or odds ratios (OR) with 95% confidence intervals. Results: 946 participants provided data (male 73.5%, mean age 52 years). Current smoking was reported by 22%, and 38% were ex-smokers. Ever smokers experienced poorer BASDAI (β = 0.5; 0.2 to 0.9) and BASFI (β = 0.8; 0.4 to 1.2), and reported worse QoL (ASQoL, β = 1.5; 0.7 to 2.3). Compared to current smokers, ex-smokers reported lower disease activity (BASDAI, β = -0.5; -1.0 to -0.04) and significantly better QoL (ASQoL, β = -1.2; -2.3 to -0.2). They were also more likely to have a history of uveitis (OR = 2.4; 1.5 to 3.8). Conclusions: Smokers with spondyloarthritis experience worse disease than never smokers. However, we provide new evidence that, among smokers, smoking cessation is associated with lower disease activity and better physical function and QoL. Clinicians should specifically promote smoking cessation as an adjunct to usual therapy in patients with spondyloarthritis.

  12. "First Things First" Shows Promising Results

    ERIC Educational Resources Information Center

    Hendrie, Caroline

    2005-01-01

    In this article, the author discusses a school improvement model, First Things First, developed by James P. Connell, a former tenured professor of psychology at the University of Rochester in New York. The model has three pillars for the high school level: (1) small, themed learning communities that each keep a group of students together…

  13. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.

  16. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  17. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.

    1983-03-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  18. The Operational MODIS Cloud Optical and Microphysical Property Product: Overview of the Collection 6 Algorithm and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas

    2012-01-01

    Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 µm bands (instead of differences relative to the standard 2.1 µm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially-complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources including the MODIS L1B uncertainty index for assessing band- and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low cloud non-unity emissivity temperature adjustments.

  19. Long Term Maturation of Congenital Diaphragmatic Hernia Treatment Results: Toward Development of a Severity-Specific Treatment Algorithm

    PubMed Central

    Kays, David W.; Islam, Saleem; Larson, Shawn D.; Perkins, Joy; Talbert, James L.

    2015-01-01

    Objective: To assess the impact of varying approaches to CDH repair timing on survival and need for ECMO when controlled for anatomic and physiologic disease severity in a large consecutive series of CDH patients. Summary Background Data: Our publication of 60 consecutive CDH patients in 1999 showed that survival is significantly improved by limiting lung inflation pressures and eliminating hyperventilation. Methods: We retrospectively reviewed 268 consecutive CDH patients, combining 208 new patients with the 60 previously reported. Management and ventilator strategy were highly consistent throughout. Varying approaches to surgical timing were applied as the series matured. Results: Patients with anatomically less-severe left liver-down CDH had significantly increased need for ECMO if repaired in the first 48 hours, while patients with more-severe left liver-up CDH survived at a higher rate when repair was performed before ECMO. Overall survival of 268 patients was 78%. For those without lethal associated anomalies, survival was 88%. Of these, 99% of left liver-down CDH survived, 91% of right CDH survived, and 76% of left liver-up CDH survived. Conclusions: This study shows that patients with anatomically less severe CDH benefit from delayed surgery while patients with anatomically more severe CDH may benefit from a more aggressive surgical approach. These findings show that patients respond differently across the CDH anatomic severity spectrum, and lay the foundation for the development of risk-specific treatment protocols for patients with CDH. PMID:23989050

  20. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either taken care of using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes and stayed below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved, at around 6 minutes.
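
    For reference, the ionosphere-free combination mentioned above eliminates the first-order ionospheric delay by mixing two frequencies. A minimal sketch for GPS L1/L2 pseudoranges follows; the synthetic values are illustrative, and real PPP-RTK processing adds carrier-phase observables, IGS-RTS corrections, and the Kalman filter.

      F1 = 1575.42e6  # GPS L1 frequency, Hz
      F2 = 1227.60e6  # GPS L2 frequency, Hz

      def ionosphere_free(p1, p2, f1=F1, f2=F2):
          """Combine two pseudoranges (meters) into an iono-free observable."""
          return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

      # Example with synthetic pseudoranges (meters):
      print(ionosphere_free(22000105.30, 22000108.90))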

  1. Dynamics of a Single Spin-1/2 Coupled to x- and y-Spin Baths: Algorithm and Results

    NASA Astrophysics Data System (ADS)

    Novotny, M. A.; Guerra, Marta L.; De Raedt, Hans; Michielsen, Kristel; Jin, Fengping

    The real-time dynamics of a single spin-1/2 particle, called the central spin, coupled to the x(y)-components of the spins of one or more baths is simulated. The bath Hamiltonians contain interactions of x(y)-components of the bath spins only but are general otherwise. An efficient algorithm is described which allows solving the time-dependent Schrödinger equation for the central spin, even if the x(y) baths contain hundreds of spins. The algorithm requires storage for 2 × 2 matrices only, no matter how many spins are in the baths. We calculate the expectation value of the central spin, as well as its von Neumann entropy S(t), the quantum purity P(t), and the off-diagonal elements of the quantum density matrix. In the case of coupling the central spin to both x- and y-baths the relaxation of S(t) and P(t) with time is a power law, compared to an exponential if the central spin is only coupled to an x-bath. The effect of different initial states for the central spin and bath is studied. Comparison with more general spin baths is also presented.

  2. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. Also, the study included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  3. Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

    PubMed Central

    2011-01-01

    Background: Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods: A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results: A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions: Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549

  4. Assessment of the improvements in accuracy of aerosol characterization resulted from additions of polarimetric measurements to intensity-only observations using GRASP algorithm (Invited)

    NASA Astrophysics Data System (ADS)

    Dubovik, O.; Litvinov, P.; Lapyonok, T.; Herman, M.; Fedorenko, A.; Lopatin, A.; Goloub, P.; Ducos, F.; Aspetsberger, M.; Planer, W.; Federspiel, C.

    2013-12-01

    During the last few years we have been developing the GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm, designed for the enhanced characterization of aerosol properties from spectral, multi-angular polarimetric remote sensing observations. The concept of GRASP essentially relies on the accumulated positive research heritage from previous remote sensing aerosol retrieval developments, in particular those from the AERONET and POLDER retrieval activities. The details of the algorithm are described by Dubovik et al. (Atmos. Meas. Tech., 4, 975-1018, 2011). GRASP retrieves properties of both aerosol and land surface reflectance in cloud-free environments. It is based on highly advanced statistically optimized fitting and deduces nearly 50 unknowns for each observed site. The algorithm derives a similar set of aerosol parameters as AERONET, including the detailed particle size distribution, the spectrally dependent complex index of refraction, and the fraction of non-spherical particles. The algorithm uses detailed aerosol and surface models and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are done on-line without using traditional look-up tables. In addition, the algorithm uses the new multi-pixel retrieval concept - a simultaneous fitting of a large group of pixels with additional constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. This principle is expected to result in higher consistency and accuracy of aerosol products compared to conventional approaches, especially over bright surfaces where the information content of satellite observations with respect to aerosol properties is limited. GRASP is a highly versatile algorithm that allows input from both satellite and ground-based measurements. It also has essential flexibility in measurement processing. For example, if the observation data set includes spectral

  5. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the LSB algorithm is easily detected, with high accuracy, by χ² and RS steganalysis. Starting from the selection of the embedding locations and a modification of the embedding method, and combining a sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
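
    For readers unfamiliar with the baseline, the sketch below shows plain LSB embedding and extraction, i.e. the scheme that χ² and RS steganalysis detect so easily; the paper's improvements (embedding-location selection, sub-affine transformation, matrix coding) are not reproduced here.

      import numpy as np

      def lsb_embed(pixels, bits):
          """Overwrite the least significant bit of the first len(bits) pixels."""
          stego = pixels.copy().ravel()
          for i, b in enumerate(bits):
              stego[i] = (stego[i] & 0xFE) | b  # clear the LSB, set payload bit
          return stego.reshape(pixels.shape)

      def lsb_extract(pixels, n):
          return [int(v) & 1 for v in pixels.ravel()[:n]]

      img = np.array([[154, 201], [77, 38]], dtype=np.uint8)
      payload = [1, 0, 1, 1]
      print(lsb_extract(lsb_embed(img, payload), 4))  # -> [1, 0, 1, 1]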

  6. Improved Differentiation of Streptococcus pneumoniae and Other S. mitis Group Streptococci by MALDI Biotyper Using an Improved MALDI Biotyper Database Content and a Novel Result Interpretation Algorithm.

    PubMed

    Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo

    2017-03-01

    Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard MALDI Biotyper database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper.

  7. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but also leads to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs that include not only earthquakes but also other sources of seismic waves more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the sources of seismic signals. Random forests is a supervised machine learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The random forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
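
    A minimal sketch of the supervised setup described above, using scikit-learn's random forest; the features and class labels are randomly generated stand-ins for the spectral and waveform attributes of real seismic events, so the printed score only demonstrates the pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 8))     # e.g. duration, spectral centroid, ...
      y = rng.integers(0, 3, size=300)  # e.g. earthquake / rockfall / noise

      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy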

  8. SVM-based multimodal classification of activities of daily living in Health Smart Homes: sensors, algorithms, and first experimental results.

    PubMed

    Fleury, Anthony; Vacher, Michel; Noury, Norbert

    2010-03-01

    By 2050, about one third of the French population will be over 65. Our laboratory's current research focuses on the monitoring of elderly people at home, to detect a loss of autonomy as early as possible. Our aim is to quantify criteria such as the international activities of daily living (ADL) or the French Autonomie Gerontologie Groupes Iso-Ressources (AGGIR) scales, by automatically classifying the different ADL performed by the subject during the day. A Health Smart Home is used for this. Our Health Smart Home includes, in a real flat, infrared presence sensors (location), door contacts (to control the use of some facilities), a temperature and hygrometry sensor in the bathroom, and microphones (sound classification and speech recognition). A wearable kinematic sensor also detects postural transitions (using pattern recognition) and walking periods (frequency analysis). The data collected from the various sensors are then used to classify each temporal frame into one of the ADL that was previously acquired (seven activities: hygiene, toilet use, eating, resting, sleeping, communication, and dressing/undressing). This is done using support vector machines. We performed a 1-h experiment with 13 young and healthy subjects to determine the models of the different activities, and then we tested the classification algorithm (cross validation) with real data.
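
    The classification stage lends itself to a short sketch with scikit-learn's SVM; the per-frame features (presence counts, sound class, posture, and so on) are mocked with random numbers here, so the printed accuracy is meaningless except as a demonstration of the pipeline.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 10))    # one row of features per temporal frame
      y = rng.integers(0, 7, size=200)  # the seven ADL classes

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      print(cross_val_score(model, X, y, cv=5).mean())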

  9. Effectiveness of Ventricular Intrinsic Preference (VIP™) and Ventricular AutoCapture (VAC) algorithms in pacemaker patients: Results of the validate study

    PubMed Central

    Yadav, Rakesh; Jaswal, Aparna; Chennapragada, Sridevi; Kamath, Prakash; Hiremath, Shirish M.S.; Kahali, Dhiman; Anand, Sumit; Sood, Naresh K.; Mishra, Anil; Makkar, Jitendra S.; Kaul, Upendra

    2015-01-01

    Background Several past clinical studies have demonstrated that frequent and unnecessary right ventricular pacing in patients with sick sinus syndrome and compromised atrio-ventricular conduction (AVC) produces long-term adverse effects. The safety and efficacy of two pacemaker algorithms, Ventricular Intrinsic Preference™ (VIP) and Ventricular AutoCapture (VAC), were evaluated in a multi-center study in pacemaker patients. Methods We evaluated 80 patients across 10 centers in India. Patients were enrolled within 15 days of dual chamber pacemaker (DDDR) implantation, and within 45 days thereafter were classified to either a compromised AVC (cAVC) arm or an intact AVC (iAVC) arm based on intrinsic paced/sensed (AV/PV) delays. In each arm, patients were then randomized (1:1) into the following groups: VIP OFF and VAC OFF (Control group; CG), or VIP ON and VAC ON (Treatment Group; TG). Subsequently, the AV/PV delays in the CG groups were mandatorily programmed at 180/150 ms, and to up to 350 ms in the TG groups. The percentage of right ventricular pacing (%RVp) evaluated at 12-month post-implantation follow-ups were compared between the two groups in each arm. Additionally, in-clinic time required for collecting device data was compared between patients programmed with the automated AutoCapture algorithm activated (VAC ON) vs. the manually programmed method (VAC OFF). Results Patients randomized to the TG with the VIP algorithm activated exhibited a significantly lower %RVp at 12 months than those in the CG in both the cAVC arm (39±41% vs. 97±3%; p=0.0004) and the iAVC arm (15±25% vs. 68±39%; p=0.0067). In-clinic time required to collect device data was less in patients with the VAC algorithm activated. No device-related adverse events were reported during the year-long study period. Conclusions In our study cohort, the use of the VIP algorithm significantly reduced the %RVp, while the VAC algorithm reduced in-clinic time needed to collect device data. PMID

  10. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  11. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
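
    For reference, the single-orbital Hubbard model benchmarked in these two records is conventionally written (in standard second-quantized notation, with nearest-neighbor hopping t and on-site repulsion U) as

      H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}

    where the first sum runs over nearest-neighbor pairs of the two-dimensional square lattice and spin σ, and n_{iσ} = c†_{iσ} c_{iσ} is the number operator.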

  12. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm, as well as with other CS reconstruction algorithms such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
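
    A minimal CPU sketch of the linear (linearized) Bregman iteration for min ||u||_1 subject to Au = b follows; the GPU version parallelizes exactly these matrix-vector products and the element-wise soft threshold. The parameter values are illustrative choices, not the paper's.

      import numpy as np

      def linear_bregman(A, b, mu=200.0, delta=0.1, iters=3000):
          v = np.zeros(A.shape[1])
          u = np.zeros(A.shape[1])
          for _ in range(iters):
              v += A.T @ (b - A @ u)  # gradient-like update on the residual
              u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
          return u

      rng = np.random.default_rng(0)
      A = rng.normal(size=(40, 120)) / np.sqrt(40)
      x_true = np.zeros(120); x_true[[5, 50, 90]] = [2.0, -1.5, 3.0]
      b = A @ x_true
      print(np.round(linear_bregman(A, b)[[5, 50, 90]], 2))  # should approach 2, -1.5, 3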

  13. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline:
    1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications
    2. Data products and latencies
    3. Algorithm highlights
    4. SMAP Algorithm Testbed
    5. SMAP Working Groups and community engagement

  14. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing effect, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead focuses the SAR echoes with consistent imaging parameters. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications.
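
    The quantity being protected is the interferometric coherence, the windowed complex correlation of the two focused scenes. A small estimator sketch follows, with simulated data standing in for the Envisat ASAR scenes.

      import numpy as np
      from scipy.signal import convolve2d

      def coherence(s1, s2, win=5):
          """Magnitude of the local complex correlation over a win x win box."""
          k = np.ones((win, win))
          box = lambda a: convolve2d(a, k, mode="same")
          num = np.abs(box(s1 * np.conj(s2)))
          den = np.sqrt(box(np.abs(s1) ** 2) * box(np.abs(s2) ** 2))
          return num / np.maximum(den, 1e-12)

      rng = np.random.default_rng(0)
      s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
      s2 = s1 + 0.3 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
      print(coherence(s1, s2).mean())  # close to 1 for a highly coherent pair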

  15. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
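
    One simple reading of the shingle technique applied to structure is a Jaccard similarity over k-grams of the document's tag sequence, sketched below; this is an illustration of the idea, not the paper's exact formulation.

      def shingles(tags, k=3):
          return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

      def structural_similarity(tags_a, tags_b, k=3):
          a, b = shingles(tags_a, k), shingles(tags_b, k)
          return len(a & b) / len(a | b) if a | b else 1.0

      doc1 = ["html", "body", "div", "p", "a", "p", "a"]
      doc2 = ["html", "body", "div", "p", "a", "p", "b"]
      print(structural_similarity(doc1, doc2))  # 4 shared of 6 shingles -> ~0.67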

  16. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
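
    The baseline idea of bit-coding bases can be shown in a few lines: packing A/C/G/T into two bits each, i.e. four bases per byte. DNABIT Compress goes further by assigning special bit codes to exact and reverse repeats, which is not reproduced in this sketch.

      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

      def pack(seq):
          out = bytearray()
          for i in range(0, len(seq), 4):       # four bases per byte
              byte = 0
              chunk = seq[i:i + 4]
              for ch in chunk:
                  byte = (byte << 2) | CODE[ch]
              byte <<= 2 * (4 - len(chunk))     # pad a short final byte
              out.append(byte)
          return bytes(out)

      print(pack("ACGTACGT").hex())  # 8 bases -> 2 bytes: '1b1b'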

  17. A Fast parallel tridiagonal algorithm for a class of CFD applications

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

    The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents a variant of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.
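
    For context, the sequential baseline that PDD-type solvers parallelize is the Thomas algorithm, sketched below for a diagonally dominant system (a, b, c are the sub-, main and super-diagonals); the PDD partitioning and communication steps themselves are not shown.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system; a[0] and c[-1] are unused (zero)."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                  # forward elimination
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):         # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      n = 6
      a = np.full(n, -1.0); a[0] = 0.0
      c = np.full(n, -1.0); c[-1] = 0.0
      b = np.full(n, 4.0)
      x_true = np.arange(1.0, n + 1)
      d = b * x_true + a * np.roll(x_true, 1) + c * np.roll(x_true, -1)
      print(np.allclose(thomas(a, b, c, d), x_true))  # True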

  18. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a stochastic search and optimization method based on natural selection and genetic mechanisms in living organisms. In recent years, because of its potential for solving complicated problems and its successful application in industrial projects, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection has been defined as a standard communication model in IP version 6. This paper proposes a service model for routing-selection communication, and designs and implements a new routing-selection algorithm based on the genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  19. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature-corrected Bootstrap algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived with the current Bootstrap algorithm but uses brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
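
    The mixing formulation referred to above amounts to a two-component linear mixing of ice and open-water emissivities, which can be inverted for concentration; the sketch below is that inversion with made-up emissivity values, not the full Bootstrap procedure.

      def ice_concentration(e_eff, e_water, e_ice):
          """Solve e_eff = C*e_ice + (1 - C)*e_water for the concentration C."""
          c = (e_eff - e_water) / (e_ice - e_water)
          return min(max(c, 0.0), 1.0)  # clamp to the physical range [0, 1]

      print(ice_concentration(e_eff=0.78, e_water=0.60, e_ice=0.92))  # ~0.56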

  20. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  1. WE-AB-BRA-08: Results of a Multi-Institutional Study for the Evaluation of Deformable Image Registration Algorithms for Structure Delineation Via Computational Phantoms

    SciTech Connect

    Loi, G; Fusella, M; Fiandra, C; Lanzi, E; Rosica, A; Strigari, L; Orlandini, L; Gino, E; Roggio, A; Marcocci, F; Iacovello, G; Miceli, R

    2015-06-15

    Purpose: To investigate the accuracy of various algorithms for deformable image registration (DIR), used to propagate regions of interest (ROIs) in computational phantoms based on patient images, with different commercial systems. This work is part of an Italian multi-institutional study to test, on common datasets, the accuracy, reproducibility and safety of DIR applications in adaptive radiotherapy. Methods: Eleven institutions with three available commercial solutions provided data to assess the agreement of DIR-propagated ROIs with automatically drawn ROIs considered as ground truth for the comparison. The DIR algorithms were tested on real patient data from three different anatomical districts: head and neck, thorax and pelvis. For every dataset, two specific Deformation Vector Fields (DVFs) provided by the ImSimQA software were applied to the reference data set. Three commercial software packages were used in this study: RayStation, Velocity and Mirada. The DIR-mapped ROIs were then compared with the reference ROIs using the Jaccard Conformity Index (JCI). Results: More than 600 DIR-mapped ROIs were analyzed. Pooling the JCI data of all institutions for the first DVF, the mean JCI was 0.87 ± 0.7 (1 SD), while for the second DVF the mean JCI was 0.8 ± 0.13 (1 SD). Several observations on individual structures are available from the collected data: the standard deviation among institutions for a given structure rises as the applied DVF becomes larger; the highest value is 10%, for the bladder. Conclusion: Although the complexity of deformation of the human body is very difficult to model, this work illustrates some clinical scenarios with well-known DVFs provided by specific software. The JCI parameter captures the inter-user variability and may highlight the need to improve the working protocol in order to reduce the inter-institution JCI variability.
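
    The agreement metric itself is compact: for binary masks of a DIR-propagated ROI and the ground-truth ROI, the JCI is the ratio of intersection to union, as in the sketch below.

      import numpy as np

      def jci(mask_a, mask_b):
          intersection = np.logical_and(mask_a, mask_b).sum()
          union = np.logical_or(mask_a, mask_b).sum()
          return intersection / union if union else 1.0

      a = np.zeros((10, 10), bool); a[2:7, 2:7] = True  # ground-truth ROI
      b = np.zeros((10, 10), bool); b[3:8, 3:8] = True  # DIR-mapped ROI
      print(jci(a, b))  # 16 voxels shared of 34 total -> ~0.47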

  2. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
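
    The nonlinear gain idea can be illustrated with an odd third-order polynomial that passes small inputs nearly unscaled and compresses large ones toward the motion limit; the coefficients below are invented for illustration, not the tuned values from the study.

      def nonlinear_gain(x, limit=1.0, k1=1.0, k3=-0.15):
          """Odd third-order polynomial scaling, clipped at the motion limit."""
          y = k1 * x + k3 * x ** 3
          return max(-limit, min(limit, y))

      for accel in (0.1, 0.5, 1.0, 1.4):
          print(accel, "->", round(nonlinear_gain(accel), 3))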

  3. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for the winner determination problem. Compared with the ant colony optimization algorithm, it performs well and has broad application prospects.

  4. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions are sometimes poor; after a bad initialization, the k-means algorithm easily converges to a poor local optimum. In this paper, we modified the global k-means algorithm to first eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
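
    A compact sketch of the incremental global k-means idea follows, with scikit-learn doing the k-means refinements; to keep it short, only a random subset of points is tried as the candidate new center, and the paper's singleton-cluster handling and MinMax weighting are omitted.

      import numpy as np
      from sklearn.cluster import KMeans

      def global_kmeans(X, k_max, n_candidates=20, seed=0):
          rng = np.random.default_rng(seed)
          centers = X.mean(axis=0, keepdims=True)  # exact solution for k = 1
          best = None
          for k in range(2, k_max + 1):
              best = None
              for idx in rng.choice(len(X), n_candidates, replace=False):
                  init = np.vstack([centers, X[idx]])  # old centers + candidate
                  km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
                  if best is None or km.inertia_ < best.inertia_:
                      best = km
              centers = best.cluster_centers_
          return best

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0, 6.0)])
      print(global_kmeans(X, 3).inertia_)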

  5. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of their different values, completing iterative convergence to achieve the optimal effect. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  6. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of their different values, completing iterative convergence to achieve the optimal effect. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method.
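
    The core HS loop is short enough to sketch; below is a generic continuous harmony search minimizing a test function. Parameter names follow common HS usage (HMCR, PAR, bandwidth), and the rough-set enhancement and fuzzy-clustering stage from the study are not included.

      import numpy as np

      def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                         bw=0.05, iters=2000, seed=0):
          rng = np.random.default_rng(seed)
          hm = rng.uniform(lo, hi, (hms, dim))       # harmony memory
          cost = np.apply_along_axis(f, 1, hm)
          for _ in range(iters):
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:            # memory consideration
                      new[j] = hm[rng.integers(hms), j]
                      if rng.random() < par:         # pitch adjustment
                          new[j] += bw * rng.uniform(-1, 1) * (hi - lo)
                  else:                              # random selection
                      new[j] = rng.uniform(lo, hi)
              new = np.clip(new, lo, hi)
              worst = int(np.argmax(cost))
              new_cost = f(new)
              if new_cost < cost[worst]:             # replace the worst harmony
                  hm[worst], cost[worst] = new, new_cost
          return hm[np.argmin(cost)]

      sphere = lambda x: float(np.sum(x ** 2))
      print(harmony_search(sphere, dim=4, lo=-5.0, hi=5.0))  # near the origin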

  7. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.

  8. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
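
    A minimal sketch of a conscan Kalman filter follows, under the common linearized model in which a small pointing offset (dx, dy) modulates the received power sinusoidally around the scan circle, P ≈ P0 + g(dx cos φ + dy sin φ); the gain g, noise levels, and data are all invented for the example, and the DSN implementation details differ.

      import numpy as np

      def conscan_kf(phis, powers, p0, g, q=1e-6, r=0.01):
          x = np.zeros(2)       # pointing-offset estimate (dx, dy)
          P = np.eye(2)         # estimate covariance
          for phi, z in zip(phis, powers):
              P += q * np.eye(2)                    # random-walk process noise
              H = g * np.array([np.cos(phi), np.sin(phi)])
              S = H @ P @ H + r                     # innovation variance
              K = P @ H / S                         # Kalman gain
              x += K * (z - p0 - H @ x)             # measurement update
              P -= np.outer(K, H @ P)
          return x

      rng = np.random.default_rng(0)
      phis = np.linspace(0.0, 8 * np.pi, 400)       # four scan revolutions
      true_offset = np.array([0.02, -0.01])
      powers = (1.0 + 5.0 * (np.cos(phis) * true_offset[0]
                             + np.sin(phis) * true_offset[1])
                + rng.normal(0.0, 0.01, phis.size))
      print(conscan_kf(phis, powers, p0=1.0, g=5.0))  # ~ [0.02, -0.01]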

  9. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre.

  10. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics, and designs a food chain algorithm for it. Some illustrative examples are selected to conduct simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  11. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  12. Based on Multi-sensor Information Fusion Algorithm of TPMS Research

    NASA Astrophysics Data System (ADS)

    Yulan, Zhou; Yanhong, Zang; Yahong, Lin

    This paper presents algorithms for TPMS (Tire Pressure Monitoring System) based on multi-sensor information fusion. A unified mathematical model of information fusion is constructed, and three algorithms are applied to it: an algorithm based on Bayesian inference, an algorithm based on relative distance (an improved algorithm from the Dempster-Shafer theory of evidence), and an algorithm based on multi-sensor weighted fusion. The calculated results show that the multi-sensor fusion algorithm based on D-S evidence theory performs better than the weighted fusion method or the Bayesian method.
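
    For the weighted-fusion variant mentioned above, a standard textbook formulation (assumed here for illustration, since the paper's exact weights are not given) is inverse-variance weighting:

```python
def weighted_fusion(readings, variances):
    """Fuse redundant sensor readings of one quantity (e.g., tire
    pressure) by inverse-variance weighting: less noisy sensors get
    more weight. A generic scheme, not the paper's exact weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# Example: three pressure sensors with different noise levels (kPa).
fused = weighted_fusion([220.1, 219.4, 221.0], [0.5, 0.2, 0.9])
```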

  13. Fourth Order Algorithms for Solving Diverse Many-Body Problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Forbert, Harald A.; Chen, Chia-Rong; Kidwell, Donald W.; Ciftja, Orion

    2001-03-01

    We show that the method of factorizing an evolution operator of the form e^ɛ(A+B) to fourth order with purely positive coefficients yields new classes of symplectic algorithms for solving classical dynamical problems, unitary algorithms for solving the time-dependent Schrödinger equation, norm-preserving algorithms for solving the Langevin equation, and large-time-step convergent diffusion Monte Carlo algorithms. Results for each class of problems will be presented and discussed.
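
    One widely cited factorization of this type, quoted here from the general literature on forward (positive-coefficient) fourth-order splittings rather than from the abstract itself, is Chin's scheme:

```latex
% Chin's forward fourth-order factorization (from the general
% literature on positive-coefficient splittings; an assumption here,
% not reproduced from this abstract):
\[
  e^{\epsilon(A+B)} =
  e^{\frac{\epsilon}{6}B}\, e^{\frac{\epsilon}{2}A}\,
  e^{\frac{2\epsilon}{3}\widetilde{B}}\,
  e^{\frac{\epsilon}{2}A}\, e^{\frac{\epsilon}{6}B} + O(\epsilon^{5}),
  \qquad
  \widetilde{B} = B + \frac{\epsilon^{2}}{48}\,\bigl[B,[A,B]\bigr].
\]
```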

  14. MU-MIMO Pairing Algorithm Using Received Power

    NASA Astrophysics Data System (ADS)

    Kim, Young-Joon; Lee, Jung-Seung; Baik, Doo-Kwon

    In this letter, a new received power pairing scheduling (PPS) algorithm is proposed for Multi User Multiple Input and Multiple Output (MU-MIMO) systems. In contrast to existing algorithms that manage complex orthogonal factors, the PPS algorithm simply utilizes CINR to determine a MU-MIMO pair. Simulation results show that the PPS algorithm achieves up to 77% of MU-MIMO gain of determinant pairing scheduling (DPS) with low complexity.
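
    A minimal sketch of power-based pairing follows. Pairing users whose received powers are most similar is one plausible reading, assumed here for illustration; the letter only states that CINR alone drives the pairing.

```python
def pair_by_power(user_powers):
    """Greedily pair users with the most similar received power (a
    CINR proxy): sort by power and pair adjacent users. The similarity
    criterion is an illustrative assumption."""
    order = sorted(user_powers, key=user_powers.get)   # user ids by power
    return [(order[i], order[i + 1]) for i in range(0, len(order) - 1, 2)]

# Example: received powers in dBm for six users.
pairs = pair_by_power({"u1": -71.2, "u2": -90.5, "u3": -70.8,
                       "u4": -88.9, "u5": -79.3, "u6": -80.1})
```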

  15. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  16. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three-degree-of-freedom nonlinear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani-type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.

  17. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  18. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    The ESO Very Large Telescope Interferometer, which allows astronomers to scrutinise objects with a precision equivalent to that of a 130-m telescope, is proving itself an unequalled success every day. One of the latest instruments installed, AMBER, has led to a flurry of scientific results, an anthology of which is being published this week as special features in the research journal Astronomy & Astrophysics. [ESO PR Photo 06a/07: The AMBER Instrument] "With its unique capabilities, the VLT Interferometer (VLTI) has created itself a niche in which it provides answers to many astronomical questions, from the shape of stars, to discs around stars, to the surroundings of the supermassive black holes in active galaxies," says Jorge Melnick (ESO), the VLT Project Scientist. The VLTI has led to 55 scientific papers already and is in fact producing more than half of the interferometric results worldwide. "With the capability of AMBER to combine up to three of the 8.2-m VLT Unit Telescopes, we can really achieve what nobody else can do," added Fabien Malbet, from the LAOG (France) and the AMBER Project Scientist. Eleven articles will appear this week in Astronomy & Astrophysics' special AMBER section. Three of them describe the unique instrument, while the other eight reveal completely new results about the early and late stages in the life of stars. [ESO PR Photo 06b/07: The Inner Winds of Eta Carinae] The first results presented in this issue cover various fields of stellar and circumstellar physics. Two papers deal with very young solar-like stars, offering new information about the geometry of the surrounding discs and associated outflowing winds. Other articles are devoted to the study of hot active stars of particular interest: Alpha Arae, Kappa Canis Majoris, and CPD -57°2874. They provide new, precise information about their rotating gas envelopes. An important new result concerns the enigmatic object Eta Carinae. Using AMBER with

  19. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  20. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.
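
    In the spirit of the framework described, an object-oriented skeleton might look like the sketch below; the class and field names are hypothetical, since the abstract does not give the actual interface.

```python
from abc import ABC, abstractmethod

class Cell:
    """A calorimeter cell: an energy deposit at a fixed position.
    Field names are hypothetical, for illustration only."""
    def __init__(self, eta, phi, energy):
        self.eta, self.phi, self.energy = eta, phi, energy

class Cluster:
    """A group of cells; the cluster energy is the sum of its members."""
    def __init__(self, cells):
        self.cells = list(cells)

    @property
    def energy(self):
        return sum(c.energy for c in self.cells)

class ClusteringAlgorithm(ABC):
    """Common interface so different algorithms can be swapped in when
    studying jet reconstruction efficiency and energy resolution."""
    @abstractmethod
    def cluster(self, cells: list) -> list:
        """Group Cells into Clusters."""
```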

  1. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    SciTech Connect

    Schmidtlein, CR; Beattie, B; Humm, J; Li, S; Wu, Z; Xu, Y; Zhang, J; Shen, L; Vogelsang, L; Feiglin, D; Krol, A

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st to 4th order gradients, to reduce artificial piece-wise constant regions ("staircase" artifacts typical for TV) seen in PAPA images penalized with only the 1st order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1 hour post-injection, for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  3. Damage evaluation on a multi-story framed structures: comparison of results retrieved from algorithms based on modal and non-modal parameters

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria

    2016-04-01

    Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructures and the performance of safety interventions over time. In case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage to a monitored structure, using appropriate algorithms based on the variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors, and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure whose variations could be correlated with damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method evaluates the variation over time of the curvature related to the fundamental mode of vibration, comparing the variations before, during and after the earthquake. The interpolation method is based on the detection of localized reductions of smoothness in the operational deformed shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC

  4. Three list scheduling temporal partitioning algorithm of time space characteristic analysis and compare for dynamic reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Chen, Naijin

    2013-03-01

    The Level Based Partitioning (LBP), Cluster Based Partitioning (CBP) and Enhanced Static List (ESL) temporal partitioning algorithms, based on adjacency matrices and adjacency tables, are designed and implemented in this paper, and the partitioning time and memory occupation of the three algorithms are compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; as far as memory occupation and partitioning time are concerned, the algorithms based on adjacency tables need less partitioning time and less memory.

  5. Stretched View Showing 'Victoria'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Stretched View Showing 'Victoria'

    This pair of images from the panoramic camera on NASA's Mars Exploration Rover Opportunity served as initial confirmation that the two-year-old rover is within sight of 'Victoria Crater,' which it has been approaching for more than a year. Engineers on the rover team were unsure whether Opportunity would make it as far as Victoria, but scientists hoped for the chance to study such a large crater with their roving geologist. Victoria Crater is 800 meters (nearly half a mile) in diameter, about six times wider than 'Endurance Crater,' where Opportunity spent several months in 2004 examining rock layers affected by ancient water.

    When scientists using orbital data calculated that they should be able to detect Victoria's rim in rover images, they scrutinized frames taken in the direction of the crater by the panoramic camera. To positively characterize the subtle horizon profile of the crater and some of the features leading up to it, researchers created a vertically-stretched image (top) from a mosaic of regular frames from the panoramic camera (bottom), taken on Opportunity's 804th Martian day (April 29, 2006).

    The stretched image makes mild nearby dunes look like more threatening peaks, but that is only a result of the exaggerated vertical dimension. This vertical stretch technique was first applied to Viking Lander 2 panoramas by Philip Stooke, of the University of Western Ontario, Canada, to help locate the lander with respect to orbiter images. Vertically stretching the image allows features to be more readily identified by the Mars Exploration Rover science team.

    The bright white dot near the horizon to the right of center (barely visible without labeling or zoom-in) is thought to be a light-toned outcrop on the far wall of the crater, suggesting that the rover can see over the low rim of Victoria. In figure 1, the northeast and southeast rims are labeled

  6. Adaptive phase aberration correction based on imperialist competitive algorithm.

    PubMed

    Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R

    2014-01-01

    We investigate numerically the feasibility of phase aberration correction in a wavefront-sensorless adaptive optical system based on the imperialist competitive algorithm (ICA). Considering a 61-element deformable mirror (DM) and the Strehl ratio as the cost function of ICA, the algorithm is employed to search for the optimum surface profile of the DM for correcting the phase aberrations in a solid-state laser system. The correction results show that ICA is a powerful correction algorithm for static or slowly changing phase aberrations in optical systems, such as solid-state lasers. The correction capability and the convergence speed of this algorithm are compared with those of the genetic algorithm (GA) and the stochastic parallel gradient descent (SPGD) algorithm. The results indicate that these algorithms have almost the same correction capability, that ICA and GA are almost the same in convergence speed, and that SPGD is the fastest of the three.

  7. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is significant for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, an FHR baseline correction algorithm is presented to make an existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline is found and corrected; a new baseline is then obtained after smoothing. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combines a baseline estimation algorithm with the correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm did well in both accuracy and efficiency, and also proved the effectiveness of the FHR baseline correction algorithm.

  8. Modified Landweber algorithm for robust particle sizing by using Fraunhofer diffraction.

    PubMed

    Xu, Lijun; Wei, Tianxiao; Zhou, Jiayi; Cao, Zhang

    2014-09-20

    In this paper, a robust modified Landweber algorithm was proposed to retrieve the particle size distributions from Fraunhofer diffraction. Three typical particle size distributions, i.e., Rosin-Rammler, lognormal, and bimodal normal distributions for particles ranging from 4.8 to 96 μm, were employed to verify the performance of the algorithm. To show its merits, the proposed algorithm was compared with the Tikhonov regularization algorithm and the ℓ1-norm-based algorithm. Simulation results showed that, for noise-free data, both the modified Landweber algorithm and the ℓ1-norm-based algorithm were better than the Tikhonov regularization algorithm in terms of accuracy. When the data was noise-contaminated, the modified Landweber algorithm was superior to the other two algorithms in both accuracy and speed. An experimental setup was also established and the results validated the feasibility and effectiveness of the proposed method.
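
    As background, the classical (unmodified) Landweber iteration such schemes build on solves Ax = b by repeated residual feedback, often with a nonnegativity projection for size distributions. The sketch below is the textbook baseline, not the paper's modification.

```python
import numpy as np

def landweber(A, b, relaxation=None, iterations=500):
    """Classical projected Landweber iteration for A x = b with x >= 0.
    This is the textbook method the paper modifies; the modification
    itself is not reproduced here."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if relaxation is None:
        # Convergence requires 0 < relaxation < 2 / sigma_max(A)^2.
        relaxation = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x += relaxation * A.T @ (b - A @ x)   # gradient step on ||Ax - b||^2
        np.maximum(x, 0.0, out=x)             # project onto x >= 0
    return x
```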

  9. Improved algorithm for hyperspectral data dimension determination

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Du, Lei; Li, Jing; Han, Yachao; Gao, Zihong

    2017-02-01

    The correlation between adjacent bands of hyperspectral image data is relatively strong. However, signal coexists with noise, and the HySime (hyperspectral signal identification by minimum error) algorithm, based on the principle of least squares, calculates estimates of the noise and of the signal correlation matrix. The algorithm is effective when the noise estimate is accurate, but ineffective when the noise estimate is obtained from spectral dimension reduction and de-correlation. This paper proposes an improved HySime algorithm based on a noise-whitening process: instead of removing noise pixel by pixel, it first whitens the noise in the original data, obtains an accurate estimate of the noise covariance matrix, and then uses the HySime algorithm to calculate the signal correlation matrix, improving the precision of the results. Experiments with both simulated and real data show that: firstly, the improved HySime algorithm is more accurate and stable than the original HySime algorithm; secondly, its results are more consistent across different conditions than those of the classic noise subspace projection (NSP) algorithm; finally, the noise-whitening process improves its adaptability to non-white image noise.

  10. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. The Rhode Island's high school graduating class became…

  11. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  12. What Do Maps Show?

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet, appropriate for grades 4-8, features a teaching poster which shows different types of maps (different views of Salt Lake City, Utah), as well as three reproducible maps and reproducible activity sheets which complement the maps. The poster provides teacher background, including step-by-step lesson plans for four geography…

  13. Show Me the Way

    ERIC Educational Resources Information Center

    Dicks, Matthew J.

    2005-01-01

    Because today's students have grown up steeped in video games and the Internet, most of them expect feedback, and usually gratification, very soon after they expend effort on a task. Teachers can get quick feedback to students by showing them videotapes of their learning performances. The author, a 3rd grade teacher describes how the seemingly…

  14. Chemistry Game Shows

    NASA Astrophysics Data System (ADS)

    Campbell, Susan; Muzyka, Jennifer

    2002-04-01

    We present a technological improvement to the use of game shows to help students review for tests. Our approach uses HTML files interpreted with a browser on a computer attached to an LCD projector. The HTML files can be easily modified for use of the game in a variety of courses.

  15. Honored Teacher Shows Commitment.

    ERIC Educational Resources Information Center

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  16. Talk Show Science.

    ERIC Educational Resources Information Center

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  17. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  18. Coronary CTA using scout-based automated tube potential and current selection algorithm, with breast displacement results in lower radiation exposure in females compared to males

    PubMed Central

    Vadvala, Harshna; Kim, Phillip; Mayrhofer, Thomas; Pianykh, Oleg; Kalra, Mannudeep; Hoffmann, Udo

    2014-01-01

    Purpose: To evaluate the effect of automatic tube potential selection and automatic exposure control combined with female breast displacement during coronary computed tomography angiography (CCTA) on radiation exposure in women versus men of the same body size. Materials and methods: Consecutive clinical exams between January 2012 and July 2013 at an academic medical center were retrospectively analyzed. All examinations were performed using ECG-gating and an automated tube potential and tube current selection algorithm (APS-AEC), with breast displacement in females. Cohorts were stratified by sex and standard World Health Organization body mass index (BMI) ranges. CT dose index volume (CTDIvol), dose length product (DLP), median effective dose (ED), and size-specific dose estimate (SSDE) were recorded. Univariable and multivariable regression analyses were performed to evaluate the effect of gender on radiation exposure per BMI. Results: A total of 726 exams were included; 343 (47%) were females; mean BMI was similar by gender (28.6±6.9 kg/m2 females vs. 29.2±6.3 kg/m2 males; P=0.168). Median ED was 2.3 mSv (1.4-5.2) for females and 3.6 (2.5-5.9) for males (P<0.001). Females were exposed to less radiation by a difference in median ED of –1.3 mSv, CTDIvol –4.1 mGy, and SSDE –6.8 mGy (all P<0.001). After adjusting for BMI, patient characteristics, and gating mode, females' exposure was lower by a median ED of –0.7 mSv, CTDIvol –2.3 mGy, and SSDE –3.15 mGy, respectively (all P<0.01). Conclusions: We observed a difference in radiation exposure to patients undergoing CCTA with the combined use of APS-AEC and breast displacement in female patients as compared to their BMI-matched male counterparts, with female patients receiving one third less exposure. PMID:25610804

  19. Listless zerotree image compression algorithm

    NASA Astrophysics Data System (ADS)

    Lian, Jing; Wang, Ke

    2006-09-01

    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image quality. Moreover, the lists in SPIHT are replaced by flag maps, and a lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  20. Not a "reality" show.

    PubMed

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show.

  1. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfections in each magnet piece cause field errors. These field errors can be reduced or compensated by sorting the magnet pieces in the proper order. We have shown that a genetic algorithm has good properties for this sorting scheme. In this paper, the optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The results show the genetic algorithm is superior to other algorithms.

  2. Rotational Invariant Dimensionality Reduction Algorithms.

    PubMed

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2016-06-30

    A common intrinsic limitation of traditional subspace learning methods is sensitivity to outliers and to image variations of the object, since they use the L₂ norm as the metric. In this paper, a series of methods based on the L₂,₁-norm are proposed for linear dimensionality reduction. Since the L₂,₁-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with previous L₂-norm-based subspace learning algorithms.
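
    For reference, one common convention for the L₂,₁ norm of a matrix X with rows x_i (a standard definition, not quoted from the paper) sums the Euclidean norms of the rows, so a single outlying sample contributes only linearly rather than quadratically:

```latex
\[
  \|X\|_{2,1} = \sum_{i=1}^{n} \|x_i\|_2
              = \sum_{i=1}^{n} \Bigl( \sum_{j=1}^{d} x_{ij}^{2} \Bigr)^{1/2}
\]
```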

  3. Study of sequential optimal control algorithm smart isolation structure based on Simulink-S function

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohuan; Liu, Yanhui

    2017-01-01

    This paper focuses on a smart isolation structure: a method for realizing structural vibration control by Simulink simulation is proposed, based on the proposed sequential optimal control algorithm. In the Simulink simulation environment, a smart isolation structure is used to compare the control effects of three algorithms, i.e., the classical optimal control algorithm, the linear quadratic Gaussian control algorithm and the sequential optimal control algorithm, under the condition of a sensor contaminated with noise. Simulation results show that this method can be applied to the simulation of the sequential optimal control algorithm, and that the proposed sequential optimal control algorithm has a good ability to resist the noise and better control efficiency.

  4. A region labeling algorithm based on block

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    2009-10-01

    The time performance of a region labeling algorithm is important for image processing. However, common region labeling algorithms cannot meet the requirements of real-time image processing. In this paper, a technique using blocks to record connected areas is proposed. With this technique, connected components and information related to the target can be computed during a single image scan. The algorithm records the coordinates of edge pixels, on both outer and inner edges, as well as the label, and can then calculate each connected area's centroid, area and gray level. Compared to others, this block-based region labeling algorithm is more efficient and can well meet the time requirements of real-time processing. Experimental results validate the correctness and efficiency of the algorithm and show that it can detect any connected areas in binary images containing various complex and unusual patterns. The block labeling algorithm is now used in a real-time image processing program.
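
    For context, a generic flood-fill labeling sketch on a binary image is shown below as a baseline; it is not the paper's block-based bookkeeping, which extracts labels and region statistics in a single raster scan.

```python
from collections import deque

def label_components(image):
    """Label 4-connected foreground regions of a binary image (list of
    lists of 0/1). Returns a label map; 0 stays background. A generic
    BFS flood fill, shown only as a baseline for block-based labeling."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                current += 1                      # start a new region
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels
```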

  5. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended.

  6. A connected component labeling algorithm for wheat root thinned image

    NASA Astrophysics Data System (ADS)

    Mu, ShaoMin; Zha, XuHeng; Du, HaiYang; Hao, QingBo; Chang, TengTeng

    Measuring wheat root length manually with a ruler wastes time and energy and has low precision. Aiming at this problem, this paper presents a connected component labeling algorithm for thinned wheat root images. The algorithm is realized on the basis of region growing, using a dynamic queue, and needs only one scan to finish the labeling process. For labeling thinned wheat root images, the algorithm was compared with three other algorithms; the experimental results show that its effect is good and that it is well suited to connected component labeling of thinned wheat root images.

  7. Developing evidence-based algorithms for negative pressure wound therapy in adults with acute and chronic wounds: literature and expert-based face validation results.

    PubMed

    Beitz, Janice M; van Rijswijk, Lia

    2012-04-01

    Negative pressure wound therapy (NPWT) is used extensively in the management of acute and chronic wounds, but concerns persist about its efficacy, effectiveness, and safety. Available guidelines and algorithms are wound type-specific, not evidence-based, and many lack clearly described relative and absolute contraindications and stop criteria. The purpose of this research was to: (1) develop evidence-based algorithms for the safe use of NPWT in adults with acute and chronic wounds by nonwound expert clinicians, and (2) obtain face validity for the algorithms. Using NPWT meta-analyses and systematic reviews (n = 10), NPWT guidelines of care (n = 12), general evidence-based guidelines of wound care (n = 11), and a framework for transitioning between moisture-retentive and NPWT care (n = 1), a set of three algorithms was developed. Literature-based validity for each of the 39 discrete algorithm steps/decision points was obtained by reviewing best available evidence from systematic literature reviews (n = 331 publications) and abstraction of all NPWT-relevant publications (n = 182) using the patient-oriented Strength of Recommendation (SORT) taxonomy. Of the 182 NPWT studies abstracted, 25 met criteria for level 1 and 2 evidence, but only one general assessment step had both level 1 evidence and an "A" strength of recommendation. Next, an Institutional Review Board-approved, cross-sectional mixed-methods survey design face validation pilot study was conducted to solicit comments on, and rate the validity of, the 51 discrete algorithm-related statements, including the 39 decisions/steps. Twelve (12) of the 15 invited interdisciplinary wound experts agreed to participate. The overall algorithm content validity index (CVI) was high (0.96 out of 1). Helpful design suggestions to ensure safe use were made, and participants suggested an examination of commonly used wound definitions in follow-up studies. Results of the literature-based face validation confirm that the

  8. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  9. Selection of views to materialize using simulated annealing algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize to answer a given set of queries. The goal is the minimization of the combined query evaluation and view maintenance costs. In this paper, we have addressed and designed algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve this problem: first, we explore simulated annealing algorithms to optimize the selection of materialized views; then we use experiments to demonstrate our approach. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
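
    A minimal simulated annealing loop for this kind of subset-selection problem might look as follows; the cost model, neighborhood move (flip one view in or out of the selection), and geometric cooling schedule are generic assumptions, not the paper's specifics.

```python
import math
import random

def anneal_view_selection(num_views, cost, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing over subsets of candidate views.
    `cost(selection)` should return query-evaluation plus maintenance
    cost for a frozenset of view indices (the cost model is assumed,
    not taken from the paper)."""
    current = frozenset(random.sample(range(num_views), num_views // 2))
    cur_cost = cost(current)
    best, best_cost, t = current, cur_cost, t0
    for _ in range(steps):
        v = random.randrange(num_views)                  # flip one view
        candidate = current ^ {v}
        delta = cost(candidate) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, cur_cost = candidate, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = current, cur_cost
        t *= cooling                                     # geometric cooling
    return best, best_cost
```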

  10. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
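
    As a sketch of the least-squares machinery such frame registration relies on, the Kabsch/Procrustes solution below finds the rigid transform best aligning matched 3D point sets. The paper's weighting scheme and rod geometry are not reproduced; this is only the textbook building block.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid registration (Kabsch): find rotation R and
    translation t minimizing ||R @ P + t - Q||. P, Q are 3xN arrays of
    matched points. A textbook method, not the paper's full frame model."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```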

  11. Comparison of a noise-weighted filtered backprojection algorithm with the Standard MLEM algorithm for poisson noise.

    PubMed

    Zeng, Gengsheng L

    2013-12-01

    Iterative maximum-likelihood expectation maximization and ordered-subset expectation maximization algorithms are excellent for image reconstruction and usually provide better images than filtered backprojection (FBP). Recently, an FBP algorithm able to incorporate noise weighting during reconstruction was developed. This paper compares the performance of the noise-weighted FBP algorithm and the iterative maximum-likelihood expectation maximization algorithm with Poisson noise-corrupted emission data generated by computer simulations and a SPECT experimental study. The results show comparable performance for these 2 algorithms.
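
    For reference, the standard MLEM update used as the comparison baseline in such studies is the textbook formula below, with a_ij the system matrix and y_i the measured counts (notation assumed; the paper's exact notation is not reproduced):

```latex
\[
  x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i a_{ij}}
  \sum_i a_{ij} \, \frac{y_i}{\sum_{j'} a_{ij'} \, x_{j'}^{(k)}}
\]
```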

  12. New-generation NASA Aura Ozone Monitoring Instrument (OMI) volcanic SO2 dataset: algorithm description, initial results, and continuation with the Suomi-NPP Ozone Mapping and Profiler Suite (OMPS)

    NASA Astrophysics Data System (ADS)

    Li, Can; Krotkov, Nickolay A.; Carn, Simon; Zhang, Yan; Spurr, Robert J. D.; Joiner, Joanna

    2017-02-01

    Since the fall of 2004, the Ozone Monitoring Instrument (OMI) has been providing global monitoring of volcanic SO2 emissions, helping to understand their climate impacts and to mitigate aviation hazards. Here we introduce a new-generation OMI volcanic SO2 dataset based on a principal component analysis (PCA) retrieval technique. To reduce retrieval noise and artifacts as seen in the current operational linear fit (LF) algorithm, the new algorithm, OMSO2VOLCANO, uses characteristic features extracted directly from OMI radiances in the spectral fitting, thereby helping to minimize interferences from various geophysical processes (e.g., O3 absorption) and measurement details (e.g., wavelength shift). To solve the problem of low bias for large SO2 total columns in the LF product, the OMSO2VOLCANO algorithm employs a table lookup approach to estimate SO2 Jacobians (i.e., the instrument sensitivity to a perturbation in the SO2 column amount) and iteratively adjusts the spectral fitting window to exclude shorter wavelengths where the SO2 absorption signals are saturated. To first order, the effects of clouds and aerosols are accounted for using a simple Lambertian equivalent reflectivity approach. As with the LF algorithm, OMSO2VOLCANO provides total column retrievals based on a set of predefined SO2 profiles from the lower troposphere to the lower stratosphere, including a new profile peaked at 13 km for plumes in the upper troposphere. Examples given in this study indicate that the new dataset shows significant improvement over the LF product, with at least 50 % reduction in retrieval noise over the remote Pacific. For large eruptions such as Kasatochi in 2008 (˜ 1700 kt total SO2) and Sierra Negra in 2005 (> 1100 DU maximum SO2), OMSO2VOLCANO generally agrees well with other algorithms that also utilize the full spectral content of satellite measurements, while the LF algorithm tends to underestimate SO2. We also demonstrate that, despite the coarser spatial and

  13. New-Generation NASA Aura Ozone Monitoring Instrument (OMI) Volcanic SO2 Dataset: Algorithm Description, Initial Results, and Continuation with the Suomi-NPP Ozone Mapping and Profiler Suite (OMPS)

    NASA Technical Reports Server (NTRS)

    Li, Can; Krotkov, Nickolay A.; Carn, Simon; Zhang, Yan; Spurr, Robert J. D.; Joiner, Joanna

    2017-01-01

    Since the fall of 2004, the Ozone Monitoring Instrument (OMI) has been providing global monitoring of volcanic SO2 emissions, helping to understand their climate impacts and to mitigate aviation hazards. Here we introduce a new-generation OMI volcanic SO2 dataset based on a principal component analysis (PCA) retrieval technique. To reduce retrieval noise and artifacts as seen in the current operational linear fit (LF) algorithm, the new algorithm, OMSO2VOLCANO, uses characteristic features extracted directly from OMI radiances in the spectral fitting, thereby helping to minimize interferences from various geophysical processes (e.g., O3 absorption) and measurement details (e.g., wavelength shift). To solve the problem of low bias for large SO2 total columns in the LF product, the OMSO2VOLCANO algorithm employs a table lookup approach to estimate SO2 Jacobians (i.e., the instrument sensitivity to a perturbation in the SO2 column amount) and iteratively adjusts the spectral fitting window to exclude shorter wavelengths where the SO2 absorption signals are saturated. To first order, the effects of clouds and aerosols are accounted for using a simple Lambertian equivalent reflectivity approach. As with the LF algorithm, OMSO2VOLCANO provides total column retrievals based on a set of predefined SO2 profiles from the lower troposphere to the lower stratosphere, including a new profile peaked at 13 km for plumes in the upper troposphere. Examples given in this study indicate that the new dataset shows significant improvement over the LF product, with at least 50% reduction in retrieval noise over the remote Pacific. For large eruptions such as Kasatochi in 2008 (approximately 1700 kt total SO2) and Sierra Negra in 2005 (greater than 1100 DU maximum SO2), OMSO2VOLCANO generally agrees well with other algorithms that also utilize the full spectral content of satellite measurements, while the LF algorithm tends to underestimate SO2. We also demonstrate that, despite the

  14. Adaptive-feedback control algorithm.

    PubMed

    Huang, Debin

    2006-06-01

    This paper gives detailed proofs of, and some interesting remarks on, the results the author obtained in a series of papers [Phys. Rev. Lett. 93, 214101 (2004); Phys. Rev. E 71, 037203 (2005); 69, 067201 (2004)], where an adaptive-feedback algorithm was proposed to effectively stabilize and synchronize chaotic systems. This note proves in detail the mathematical rigor of this algorithm and gives some interesting remarks on its potential applications to chaos control and synchronization. In addition, a significant comment on synchronization-based parameter estimation is given, which shows that some techniques proposed in the literature are less rigorous and can be ineffective in some cases.

  15. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  16. 2-Year follow-up to STeP trial shows sustainability of structured self-monitoring of blood glucose utilization: results from the STeP practice logistics and usability survey (STeP PLUS).

    PubMed

    Friedman, Kevin; Noyes, Jeannette; Parkin, Christopher G

    2013-04-01

    We report findings from a follow-up survey of clinicians from the STeP study that assessed their attitudes toward and current use of the Accu-Chek(®) 360° View tool (Roche Diagnostics, Indianapolis, IN) approximately 2 years after the study was completed. The Accu-Chek 360° View tool enables patients to record/plot a seven-point self-monitoring of blood glucose (SMBG) profile (fasting, preprandial/2-h postprandial at each of the three meals, and bedtime) on 3 consecutive days, document meal sizes and energy levels, and comment on their SMBG experiences. Our findings showed that the majority of these physicians continue to use the tool with their patients, citing enhanced patient understanding and engagement, better discussions with patients regarding the impact of lifestyle behaviors, improved clinical outcomes, and better practice efficiencies as significant benefits of the tool.

  17. In favour of the definition "adolescents with idiopathic scoliosis": juvenile and adolescent idiopathic scoliosis braced after ten years of age, do not show different end results. SOSORT award winner 2014

    PubMed Central

    2014-01-01

    Background: The most important factor discriminating juvenile idiopathic scoliosis (JIS) from adolescent idiopathic scoliosis (AIS) is the risk of deformity progression. Brace treatment can change the natural history, even when the risk of progression is high. The aim of this study was to compare the end-of-growth results of JIS subjects treated after 10 years of age with the final results of AIS subjects. Methods: Design: prospective observational controlled cohort study nested in a prospective database. Setting: outpatient tertiary referral clinic specialized in conservative treatment of spinal deformities. Inclusion criteria: idiopathic scoliosis; European Risser 0-2; 25 to 45 degrees Cobb; treatment start at age 10 years or more; never treated before. Exclusion criteria: secondary scoliosis, neurological etiology, prior treatment for scoliosis (brace or surgery). Groups: 27 patients met the inclusion criteria for the AJIS group (juvenile idiopathic scoliosis treated in adolescence), with the diagnosis demonstrated by an x-ray before 10 years of age and treatment started after 10 years of age; the AIS group included 45 adolescents with a diagnostic x-ray made after the threshold of age 10 years. Results at the end of growth were analysed, using a threshold of 5 Cobb degrees to define worsened, improved and stabilized curves. Statistics: Means and SDs were used for descriptive statistics of clinical and radiographic changes. The relative risk of failure (RR), chi-square, and t-tests of all data were calculated to find differences between the two groups, with 95% confidence intervals (CI) for the radiographic changes. Results: We did not find any significant Cobb angle differences between the groups at baseline or at the end of treatment. The only difference was in the number of patients who progressed above 45 degrees, found in the JIS group. The RR of progression of AJIS versus AIS was 1.35 (95% CI 0.57-3.17), which was not statistically significant (p = 0.5338). Conclusion

  18. [Research on and application of hybrid optimization algorithm in Brillouin scattering spectrum parameter extraction problem].

    PubMed

    Zhang, Yan-jun; Zhang, Shu-guo; Fu, Guang-wei; Li, Da; Liu, Yin; Bi, Wei-hong

    2012-04-01

    This paper presents a novel algorithm that blends the particle swarm optimization (PSO) algorithm and the Levenberg-Marquardt (LM) algorithm according to a probability. The novel algorithm can be used to fit Pseudo-Voigt Brillouin scattering spectra, improving the goodness of fit and the precision of frequency shift extraction. It uses the PSO algorithm as the main framework. First, the PSO algorithm performs a global search; after a certain number of optimization steps, a random probability rand(0, 1) is generated each time. If rand(0, 1) is less than or equal to a predetermined probability P, the optimal solution obtained by the PSO algorithm is used as the initial value of the LM algorithm, which then performs a local in-depth search; the LM solution replaces the previous PSO optimum, and the PSO algorithm resumes the global search. If rand(0, 1) is greater than P, the PSO algorithm keeps searching until the next optimization step generates a new random probability rand(0, 1) to judge. The two algorithms are used alternately to obtain an ideal global optimal solution. Simulation analysis and experimental results show that the new algorithm overcomes the shortcomings of a single algorithm and improves the goodness of fit and the precision of frequency shift extraction in Brillouin scattering spectra, fully proving that the new method is practical and feasible.
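
    A compact sketch of this probabilistic PSO-to-LM hand-off is given below, using SciPy's Levenberg-Marquardt least-squares solver for the local step. The swarm settings, switch probability, and residual model are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.optimize import least_squares

def pso_lm(residuals, bounds, n_particles=20, n_rounds=50, p_switch=0.3,
           rng=np.random.default_rng(0)):
    """Hybrid global/local search: PSO explores, and with probability
    p_switch the swarm's best point seeds an LM refinement whose result
    re-enters the swarm. `residuals(x)` returns the fit residual vector."""
    lo, hi = (np.asarray(b, float) for b in bounds)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([np.sum(residuals(p) ** 2) for p in x])
    for _ in range(n_rounds):
        g = pbest[pcost.argmin()]                        # global best
        # standard PSO velocity/position update
        v = (0.7 * v + 1.5 * rng.random(x.shape) * (pbest - x)
                     + 1.5 * rng.random(x.shape) * (g - x))
        x = np.clip(x + v, lo, hi)
        cost = np.array([np.sum(residuals(p) ** 2) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        if rng.random() <= p_switch:                     # hand off to LM
            sol = least_squares(residuals, pbest[pcost.argmin()], method="lm")
            if sol.cost * 2 < pcost.min():               # sol.cost = 0.5*SSE
                pbest[pcost.argmax()] = sol.x            # inject refined point
                pcost[pcost.argmax()] = sol.cost * 2
    return pbest[pcost.argmin()]
```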

  19. Clinical consequences of the Calypso trial showing superiority of PEG-liposomal doxorubicin and carboplatin over paclitaxel and carboplatin in recurrent ovarian cancer: results of an Austrian gynecologic oncologists' expert meeting.

    PubMed

    Petru, Edgar; Reinthaller, Alexander; Angleitner-Boubenizek, Lukas; Schauer, Christian; Zeimet, Alain; Dirschlmayer, Wolfgang; Medl, Michael; Stummvoll, Wolfgang; Sevelda, Paul; Marth, Christian

    2010-11-01

    The Calypso trial showed an improved progression-free survival with PEG-liposomal doxorubicin (PLD) and carboplatin (P) as compared with the standard regimen of paclitaxel (PCLTX) and P in the second- or third-line treatment of platinum-sensitive epithelial ovarian cancer [1]. A panel of Austrian gynecologic oncologists discussed the clinical consequences of the data from the Calypso study for routine practice. PLD + P had a significantly lower rate of alopecia and neuropathy than the taxane regimen, both toxicities that compromise quality of life. Because of possible significant thrombocytopenia, the blood counts of patients undergoing PLD + P therapy should be monitored weekly. Patients receiving PLD + P are at higher risk of nausea and vomiting. Palmoplantar erythrodysesthesia (hand-foot syndrome) is a significant toxicity of PLD + P, most prevalent after the third or fourth cycle. Prophylaxis consists of avoiding pressure on the feet, hands and other parts of the body. Similarly, prophylaxis of mucositis seems important and includes avoiding hot, spicy and salty foods and drinks. Mouth dryness should be avoided. Premedication with antiemetics and dexamethasone dissolved in 5% glucose is given to prevent hypersensitivity reactions to PLD. In conclusion, the therapeutic index is more favorable for PLD + P than for PCLTX + P.

  20. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from the dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm, analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm, with comparable accuracy. Further work might include a more comprehensive study on more varied test datasets as well as on real weather datasets; this is especially important considering that this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm is accurate for relatively low sample sizes; we would like to analyze how accurate it remains for even lower sample sizes, finding the lowest sample sizes, by manipulating width and confidence level, for which the algorithm is acceptably accurate. For our algorithm to be a success, it needed to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data become more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while running considerably faster. We therefore conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
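
    As a sketch of the sampling idea described above, the snippet below runs ordinary Lloyd-style k-means on a uniform random subsample and then assigns the full dataset to the resulting centroids. The sample fraction and the synthetic data are illustrative assumptions, not the authors' code.

        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                # assign each point to its nearest centroid
                labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
                new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centroids[j] for j in range(k)])
                if np.allclose(new, centroids):
                    break
                centroids = new
            return centroids

        def sampling_kmeans(X, k, frac=0.1, seed=0):
            # cluster a uniform subsample, then label the full dataset
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), max(k, int(frac * len(X))), replace=False)
            centroids = kmeans(X[idx], k, seed=seed)
            labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
            return centroids, labels

        X = np.vstack([np.random.randn(500, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
        centroids, labels = sampling_kmeans(X, k=3)
        print(centroids)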

  1. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm for solving parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to estimate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design problem for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm, making TLBO preferable where accuracy is more essential than convergence speed.
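
    For reference, a minimal sketch of the two TLBO phases (teacher and learner) on a toy objective follows; the objective, bounds, and population size are illustrative, not the filter-design setup of the paper.

        import numpy as np

        def tlbo(objective, dim, pop_size=20, iters=100, lb=-5.0, ub=5.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lb, ub, (pop_size, dim))
            f = np.apply_along_axis(objective, 1, X)
            for _ in range(iters):
                # teacher phase: move learners toward the best solution
                teacher = X[f.argmin()]
                TF = rng.integers(1, 3)          # teaching factor, 1 or 2
                Xnew = np.clip(X + rng.random((pop_size, dim))
                               * (teacher - TF * X.mean(0)), lb, ub)
                fnew = np.apply_along_axis(objective, 1, Xnew)
                better = fnew < f
                X[better], f[better] = Xnew[better], fnew[better]
                # learner phase: each learner interacts with a random partner
                for i in range(pop_size):
                    j = rng.integers(pop_size)
                    if j == i:
                        continue
                    step = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
                    cand = np.clip(X[i] + rng.random(dim) * step, lb, ub)
                    fc = objective(cand)
                    if fc < f[i]:
                        X[i], f[i] = cand, fc
            return X[f.argmin()], f.min()

        best, val = tlbo(lambda x: np.sum(x ** 2), dim=5)
        print(best, val)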

  2. Numerical and analytic results showing the suppression of secondary electron emission from velvet and foam, and a geometric view factor model to guide the development of a surface to suppress SEE

    NASA Astrophysics Data System (ADS)

    Swanson, Charles; Kaganovich, I. D.

    2016-09-01

    The technique of suppressing secondary electron emission (SEE) from a surface by texturing it has developed rapidly in recent years. We have specific and general results in support of this technique. We performed numerical and analytic calculations to determine the effective secondary electron yield (SEY) from velvet, which is an array of long cylinders on the micro-scale, and found velvet to be suitable for suppressing the SEY of a normally incident primary distribution. We also performed numerical and analytic calculations for metallic foams, which are an isotropic lattice of fibers on the micro-scale, and found foams to be suitable for suppressing the SEY of an isotropic primary distribution. More generally, we created a geometric weighted view factor model for determining the SEY suppression of a given surface geometry, with SEY optimization as a natural application. The optimal surface for suppressing SEY does not have finite area and has no smallest feature size, making it fractal in nature. The model gives simple criteria for a physical, non-fractal surface to suppress SEY, and we found families of optimal surfaces for suppressing SEY given a finite surface area. The research is supported by the Air Force Office of Scientific Research (AFOSR).

  3. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

    In distributed optical fiber sensing systems based on Brillouin scattering, strain and temperature are the main measured parameters, obtained by analyzing the Brillouin center frequency shift. A novel algorithm that combines the cuckoo search (CS) algorithm with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features under different linear weight ratios, linewidth combinations and SNRs. Moreover, a BOTDR temperature measuring system based on electro-optic frequency shift is set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show a good linear relationship between the Brillouin center frequency shift and temperature changes.

  4. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on analysis of pixel intensity transition gradients. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera locations used during our tests, and the resulting geometrical distortion and interference from trees, this result can be considered acceptable. Correlations between source data (such as license plate dimensions and texture, camera location, and others) and the algorithm parameters were also determined.

  5. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention. Researchers' motivations differ: some use grammar induction as the first stage in building large treebanks, others to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms that learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning; this constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines the grammar based on one nonterminal in each step; since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  6. A new algorithm for the detection of seismic quiescence: introduction of the RTM algorithm, a modified RTL algorithm

    NASA Astrophysics Data System (ADS)

    Nagao, Toshiyasu; Takeuchi, Akihiro; Nakamura, Kenji

    2011-03-01

    There are a number of reports on seismic quiescence phenomena before large earthquakes. The RTL algorithm is a weighted-coefficient statistical method that takes into account the magnitude, occurrence time, and location of earthquakes when investigating seismicity pattern changes before large earthquakes. However, we consider the original RTL algorithm to be overweighted on distance. In this paper, we introduce a modified RTL algorithm, called the RTM algorithm, and apply it to three large earthquakes in Japan as test cases: the Hyogo-ken Nanbu earthquake in 1995 (MJMA 7.3), the Noto Hanto earthquake in 2007 (MJMA 6.9), and the Iwate-Miyagi Nairiku earthquake in 2008 (MJMA 7.2). Because this algorithm uses several parameters to characterize the weighted coefficients, multi-parameter sets have to be prepared for the tests. The results show that the RTM algorithm is more sensitive than the RTL algorithm to seismic quiescence phenomena. This paper represents the first step in a series of future analyses of seismic quiescence phenomena using the RTM algorithm. At this stage, all surveyed parameters are empirically selected for use in the method. We still have to consider the physical meaning of the best-fit parameters, such as their relation to ΔCFS, among others, in future analyses.
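
    For context, the RTL statistic referred to above is commonly written as the product of three detrended weight functions; the rendering below is a generic form recalled from the literature (following Sobolev and Tyupkin), not taken from this abstract, so the exact normalizations and exponents should be treated as assumptions.

        R(x,t) = \Big[\sum_{i=1}^{n} e^{-r_i/r_0}\Big] - R_{\mathrm{bg}}, \qquad
        T(x,t) = \Big[\sum_{i=1}^{n} e^{-(t-t_i)/t_0}\Big] - T_{\mathrm{bg}},
        L(x,t) = \Big[\sum_{i=1}^{n} \frac{l_i}{r_i}\Big] - L_{\mathrm{bg}}, \qquad
        \mathrm{RTL}(x,t) = R \cdot T \cdot L,

    where r_i, t_i and l_i are the epicentral distance, occurrence time and rupture-length measure of the i-th prior event, r_0 and t_0 are characteristic distance and time scales, and the "bg" terms are long-term background trends; a pronounced negative RTL minimum is read as quiescence. The RTM variant discussed above modifies this distance weighting.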

  7. Online clustering algorithms for radar emitter classification.

    PubMed

    Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max

    2005-08-01

    Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.

  8. Efficient Algorithms for Langevin and DPD Dynamics.

    PubMed

    Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C

    2012-10-09

    In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
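
    A minimal sketch of the impulsive idea follows: instead of integrating friction and noise continuously, each velocity receives one discrete friction/noise kick per step, with the noise amplitude chosen so that the Maxwell-Boltzmann distribution is stationary. The integrator and constants are illustrative assumptions, not the paper's exact scheme.

        import numpy as np

        rng = np.random.default_rng(1)
        n, dt, gamma, kT, m = 1000, 0.002, 1.0, 1.0, 1.0
        v = rng.normal(0.0, np.sqrt(kT / m), (n, 3))
        x = np.zeros((n, 3))

        f = 1.0 - np.exp(-gamma * dt)   # fraction of velocity removed per step
        for step in range(5000):
            x += v * dt                 # free flight (no forces in this toy)
            # impulsive friction and noise, applied once per step; the noise
            # variance f*(2-f)*kT/m keeps kT stationary under v' = (1-f)v + noise
            v = (1.0 - f) * v + np.sqrt(f * (2.0 - f) * kT / m) * rng.normal(size=v.shape)

        print("kinetic temperature:", m * (v ** 2).mean())   # ~ kT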

  9. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
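
    As a sketch of the simplest of the three estimators, the snippet below fits a linear model mapping AGC reading and temperature to input power by ordinary least squares on synthetic calibration data; the data, units, and coefficients are invented for illustration and are not the SCAN Testbed calibration.

        import numpy as np

        rng = np.random.default_rng(2)
        # synthetic calibration table: AGC count and temperature (C) -> power (dBm)
        agc = rng.uniform(0, 1023, 200)
        temp = rng.uniform(10, 40, 200)
        power = -110.0 + 0.05 * agc + 0.2 * temp + rng.normal(0, 0.3, 200)

        # least-squares fit of power ~ a*agc + b*temp + c
        A = np.column_stack([agc, temp, np.ones_like(agc)])
        coef, *_ = np.linalg.lstsq(A, power, rcond=None)

        est = A @ coef
        print("fit coefficients:", coef)
        print("RMS error (dB):", np.sqrt(np.mean((est - power) ** 2)))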

  10. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes bit-pattern errors in the received data, if any occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototype Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits the ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms, both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.

  11. Improved ant colony algorithm and its simulation study

    NASA Astrophysics Data System (ADS)

    Wang, Zongjiang

    2013-03-01

    The ant colony algorithm is a relatively new heuristic algorithm developed by simulating ant foraging. To address its slow convergence rate and its tendency to fall into local optima, adjustments to the key parameters and an improved pheromone update rule are proposed. TSP experiments show that the improved algorithm has better overall search capability, demonstrating the feasibility and effectiveness of the method.
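
    For reference, a minimal sketch of the pheromone evaporation/deposit cycle that such improvements modify, applied to a small random TSP instance; all parameter values are illustrative textbook defaults, not the paper's tuned settings.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 12
        pts = rng.random((n, 2))
        dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1) + np.eye(n)
        tau = np.ones((n, n))                    # pheromone matrix
        alpha, beta, rho, Q = 1.0, 2.0, 0.5, 1.0

        def build_tour():
            tour = [0]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False
                w = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            return tour

        best_len = np.inf
        for it in range(200):
            tours = [build_tour() for _ in range(20)]
            lengths = [sum(dist[t[k], t[(k + 1) % n]] for k in range(n)) for t in tours]
            tau *= (1.0 - rho)                   # evaporation
            for t, L in zip(tours, lengths):     # deposit, stronger on short tours
                for k in range(n):
                    a, b = t[k], t[(k + 1) % n]
                    tau[a, b] += Q / L
                    tau[b, a] += Q / L
            best_len = min(best_len, min(lengths))

        print("best tour length:", best_len)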

  12. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise among prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise operationally, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm, because suppressed vertical depth reduces overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).

  13. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter this task directly while working with text processors or browsers, employing standard built-in tools, or it is solved invisibly to them by various computer programs. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string on a character mismatch, the faster the algorithm runs. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms rest on two different basic principles of pattern matching: Knuth-Morris-Pratt is based on forward pattern matching and Boyer-Moore on backward pattern matching. By uniting the two, the combined algorithm acquires the larger shift on a pattern/string character mismatch. The article provides an example illustrating the work of the Boyer-Moore and Knuth-Morris-Pratt algorithms and of the combined algorithm, and shows the advantage of the latter in solving the string searching problem.
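
    A minimal sketch of the combination idea follows: at each mismatch, compute both the KMP failure-function shift and the Boyer-Moore bad-character shift and advance by the larger of the two. This is an illustrative reconstruction, not the authors' code, and for brevity it uses only the bad-character rule from Boyer-Moore.

        def kmp_failure(pattern):
            # classic KMP failure function: longest proper border of each prefix
            fail = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k > 0 and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k
            return fail

        def combined_search(text, pattern):
            m, n = len(pattern), len(text)
            fail = kmp_failure(pattern)
            last = {c: i for i, c in enumerate(pattern)}   # bad-character table
            s = 0                                          # current alignment
            while s <= n - m:
                j = 0
                while j < m and text[s + j] == pattern[j]:
                    j += 1
                if j == m:
                    return s                               # match found
                # KMP shift: realign using the border of the matched prefix
                kmp_shift = max(1, j - (fail[j - 1] if j > 0 else 0))
                # Boyer-Moore bad-character shift for the mismatched character
                bm_shift = max(1, j - last.get(text[s + j], -1))
                s += max(kmp_shift, bm_shift)              # take the larger shift
            return -1

        print(combined_search("here is a simple example", "example"))   # -> 17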

  14. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  15. [The analysis and comparison of different edge detection algorithms in ultrasound B-scan images].

    PubMed

    Zhang, Luo-ping; Yang, Bo-yuan; Wang, Chun-hong

    2006-05-01

    In this paper, some common edge detection algorithms for ultrasound B-scan images are analyzed and studied. The results show that the Sobel, Prewitt and Laplacian operators are sensitive to noise, and that the Hough transform is suited to global detection, whereas the LoG operator has zero mean and does not alter the overall dynamic range. Accordingly, the LoG algorithm is preferable.
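
    For reference, a minimal sketch applying the operators named above with 2-D convolution on a synthetic image; the kernels are the standard textbook ones and are not specific to the paper's ultrasound data.

        import numpy as np
        from scipy.signal import convolve2d
        from scipy.ndimage import gaussian_laplace

        img = np.zeros((64, 64))
        img[16:48, 16:48] = 1.0                  # synthetic bright square

        sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
        prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
        laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

        edges = {
            "Sobel": np.abs(convolve2d(img, sobel_x, mode="same")),
            "Prewitt": np.abs(convolve2d(img, prewitt_x, mode="same")),
            "Laplacian": np.abs(convolve2d(img, laplacian, mode="same")),
            "LoG": np.abs(gaussian_laplace(img, sigma=2.0)),
        }
        for name, e in edges.items():
            print(name, "max response:", round(float(e.max()), 3))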

  16. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  17. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.

  18. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm.

  19. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742

  20. Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.

    PubMed

    Al-Mulhem, M; Al-Maghrabi, T

    1998-01-01

    This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of the E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results show that the proposed algorithm can find nearly optimal solutions for the E-TSP, outperforming many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.

  1. Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems

    NASA Astrophysics Data System (ADS)

    Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad

    2014-04-01

    This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'move or not' criterion of the original VNS algorithm regarding incumbent replacement is replaced by an SA acceptance probability, and the neighbourhood shifting of the original VNS (from near to far by k ← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. A geographical neighbourhood structure is introduced to construct the neighbourhood structures for the CVRP under the string model. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
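
    A minimal sketch of the acceptance change described above follows: instead of moving only when the shaken neighbour improves the incumbent (the deterministic VNS rule), the candidate is accepted with the usual simulated-annealing probability. The objective, shaking operator, and cooling schedule are toy placeholders, not a CVRP implementation.

        import math
        import random

        random.seed(4)

        def cost(x):
            return sum(xi ** 2 for xi in x)      # toy objective

        def shake(x, k):
            # neighbourhood shaking with strength proportional to k
            return [xi + random.uniform(-0.1 * k, 0.1 * k) for xi in x]

        x = [random.uniform(-5, 5) for _ in range(5)]
        T, cooling, k_max = 1.0, 0.995, 3
        for it in range(5000):
            k = it % k_max + 1                   # cycle through neighbourhoods
            cand = shake(x, k)
            delta = cost(cand) - cost(x)
            # SA criterion replaces the deterministic "move only if better"
            if delta < 0 or random.random() < math.exp(-delta / T):
                x = cand
            T *= cooling

        print("final cost:", cost(x))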

  2. The application of mixed recommendation algorithm with user clustering in the microblog advertisements promotion

    NASA Astrophysics Data System (ADS)

    Gong, Lina; Xu, Tao; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen

    2017-03-01

    Traditional microblog recommendation algorithms suffer from low efficiency and modest effectiveness in the era of big data. To address these issues, this paper proposes a mixed recommendation algorithm with user clustering. The paper first introduces the state of the microblog marketing industry, then elaborates the user interest modeling process and the detailed advertisement recommendation methods. Finally, the mixed recommendation algorithm is compared with a traditional classification algorithm and with a mixed recommendation algorithm without user clustering. The results show that the mixed recommendation algorithm with user clustering achieves good accuracy and recall in microblog advertisement promotion.

  3. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including JBIG2.

  4. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  5. Algorithms for Automated DNA Assembly

    DTIC Science & Technology

    2010-01-01

    ... When a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and ... to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with ...

  6. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  7. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  8. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented, based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.

  9. Algorithm for Increasing Traffic Capacity of Level-Crossing Using Scheduling Theory and Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Gorobetz, Mikhail; Levchenkov, Anatoly

    2011-01-01

    In this paper the authors present a heuristic algorithm for increasing level-crossing traffic capacity. A genetic algorithm is proposed to solve this task. Control of motion speed and operation of level-crossing barriers are proposed via a control centre and intelligent embedded devices installed on railway vehicles. The algorithm is tested in computer simulations. The experimental results show great promise for rail transport schedule fulfilment and for increasing level-crossing traffic capacity using the proposed algorithm.

  10. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The mutation-based artificial fish swarm (AFS) algorithm presented here includes mutation operators to prevent the algorithm from falling into local optima, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from the random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.

  11. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement that is targeted to run on the Intel Hypercube is presented. A tree broadcasting strategy that is used extensively in our algorithm for updating cell locations in the parallel environment is presented. Studies on the performance of our algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms.

  12. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
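
    For reference, a minimal sketch of classical (crisp) roulette wheel selection, the operator that the paper's fuzzy variant builds on; the fitness values are invented, and the fuzzification step is not reproduced here.

        import random

        random.seed(5)

        def roulette_select(population, fitnesses):
            # selection probability proportional to each individual's fitness share
            pick = random.uniform(0, sum(fitnesses))
            acc = 0.0
            for individual, fit in zip(population, fitnesses):
                acc += fit
                if acc >= pick:
                    return individual
            return population[-1]

        pop = ["A", "B", "C", "D"]
        fit = [10.0, 30.0, 40.0, 20.0]
        counts = {p: 0 for p in pop}
        for _ in range(10000):
            counts[roulette_select(pop, fit)] += 1
        print(counts)   # roughly 10% / 30% / 40% / 20%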

  13. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.

  14. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez; Sheri Raborn, CPA; Tom Baker

    2008-03-31

    National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's energy demands related to electrical power consumption, and improve quality of life within the affected grid area, as well as increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to serve its community.

  15. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  16. Target classification algorithm based on feature aided tracking

    NASA Astrophysics Data System (ADS)

    Zhan, Ronghui; Zhang, Jun

    2013-03-01

    An effective target classification algorithm based on feature aided tracking (FAT) is proposed, using the length of the target (target extent) as the classification information. To implement the algorithm, the Rao-Blackwellised unscented Kalman filter (RBUKF) is used to jointly estimate the kinematic state and target extent; meanwhile, the joint probabilistic data association (JPDA) algorithm is exploited to implement multi-target data association aided by target down-range extent. Simulation results under different conditions show that the presented algorithm is both accurate and robust, and that it is suitable for tracking and classifying closely spaced targets in dense clutter.

  17. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, a few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
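
    For context, a minimal numpy sketch of the standard affine projection update that such variants modify; the projection order, step size, and regularization are illustrative, and the paper's grouping and selection procedures are not reproduced.

        import numpy as np

        rng = np.random.default_rng(6)
        N, M, P, mu, delta = 2000, 8, 4, 0.5, 1e-3   # samples, taps, order, step, reg.
        w_true = rng.normal(size=M)                  # unknown system to identify
        x = rng.normal(size=N)
        d = np.convolve(x, w_true)[:N] + 0.01 * rng.normal(size=N)

        w = np.zeros(M)
        for n in range(M + P, N):
            # A holds the P most recent input vectors, one per column
            A = np.column_stack([x[n - p - M + 1:n - p + 1][::-1] for p in range(P)])
            e = d[n - P + 1:n + 1][::-1] - A.T @ w   # a-priori error vector
            w += mu * A @ np.linalg.solve(A.T @ A + delta * np.eye(P), e)

        print("misalignment:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))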

  18. Complex algorithm of optical flow determination by weighted full search

    NASA Astrophysics Data System (ADS)

    Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.

    2016-11-01

    An optical flow determination algorithm is proposed, developed and tested in this article. The algorithm aims to improve the accuracy of displacement determination at the boundaries of scene elements (objects). The results show that the proposed algorithm is rather promising for stereo vision applications. Varying the calculation parameters allowed their rational values to be determined and reduced the average end-point error (AEE) of displacement determination. A distinctive feature of the proposed algorithm is that calculations are performed within local regions, which makes it possible to carry them out simultaneously (i.e., in parallel).

  19. Heuristic algorithm for off-lattice protein folding problem

    PubMed Central

    Chen, Mao; Huang, Wen-qi

    2006-01-01

    Inspired by the laws of interaction among objects in the physical world, we propose a heuristic algorithm for solving the three-dimensional (3D) off-lattice protein folding problem. Based on a physical model, the problem is converted from a nonlinear constraint-satisfaction problem to an unconstrained optimization problem, which can be solved by the well-known gradient method. To improve the efficiency of our algorithm, a strategy is introduced to generate initial configurations. Computational results show that this algorithm can find states with lower energy than the previously proposed ground states obtained by the nPERM algorithm for all chains with lengths ranging from 13 to 55. PMID:16365919

  20. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it relies on gradient descent. Previous research demonstrated that in feed-forward networks the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function, with gain values changing adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multi-layer feed-forward neural networks are assessed. A physical interpretation of the relationship between the gain value and the learning rate and weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer dataset with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data sets and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
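
    A minimal sketch of the gain idea follows: the logistic activation takes a per-node gain c, sigma(z) = 1/(1 + exp(-c*z)), whose derivative c*y*(1-y) scales the backpropagated gradient, and each gain is itself updated by a gradient step. The plain gradient update for the gains is an assumption for illustration, not necessarily the authors' exact adaptation rule.

        import numpy as np

        def sigmoid(z, gain):
            # logistic activation with per-node gain c
            return 1.0 / (1.0 + np.exp(-gain * z))

        def sigmoid_deriv(y, gain):
            # derivative w.r.t. the net input: c * y * (1 - y)
            return gain * y * (1.0 - y)

        rng = np.random.default_rng(7)
        X = rng.normal(size=(200, 2))
        t = (X[:, 0] * X[:, 1] > 0).astype(float)    # toy nonlinear target

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
        g1, g2 = np.ones(8), np.ones(1)              # adaptive per-node gains
        lr = 0.5

        for epoch in range(3000):
            z1 = X @ W1 + b1; h = sigmoid(z1, g1)
            z2 = h @ W2 + b2; y = sigmoid(z2, g2).ravel()
            d2 = ((y - t) * sigmoid_deriv(y, g2))[:, None]   # output delta
            d1 = (d2 @ W2.T) * sigmoid_deriv(h, g1)          # hidden delta
            W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(0)
            W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(0)
            # gain updates use d(sigma)/d(c) = z * y * (1 - y)
            g2 -= lr * ((y - t) * y * (1.0 - y) * z2.ravel()).mean()
            g1 -= lr * ((d2 @ W2.T) * h * (1.0 - h) * z1).mean(0)

        print("training accuracy:", ((y > 0.5) == (t > 0.5)).mean())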

  1. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, that represent the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than the existing methods, and in many cases, when the redundancy ratio of the block is high, it can even be thousands of times faster. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.

  2. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded by the camera with reference to a static background pattern. This intensity variation is considered a small perturbation compared with the mean intensity. Based on a perturbative method, we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can easily be compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.

  3. A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie

    2017-02-01

    One of the key problems in social network analysis is influence maximization, which has great significance both in theory and in practical applications. Given a complex network and a positive integer k, influence maximization asks for the k nodes that trigger the largest expected number of activations among the remaining nodes. Most mature algorithms fall into two classes: propagation-based algorithms and topology-based algorithms. Propagation-based algorithms optimize directly over the influence spread process, so their influence spread significantly outperforms that of topology-based algorithms; however, they can still take days to complete on large networks. Contrary to propagation-based algorithms, topology-based algorithms are based on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds it, while its running time is millions of times shorter. Our experimental results show that our algorithm has good and stable performance in the IC and LT models.

  4. A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks.

    PubMed

    Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie

    2017-02-27

    One of the key problems in social network analysis is influence maximization, which has great significance both in theory and in practical applications. Given a complex network and a positive integer k, influence maximization asks for the k nodes that trigger the largest expected number of activations among the remaining nodes. Most mature algorithms fall into two classes: propagation-based algorithms and topology-based algorithms. Propagation-based algorithms optimize directly over the influence spread process, so their influence spread significantly outperforms that of topology-based algorithms; however, they can still take days to complete on large networks. Contrary to propagation-based algorithms, topology-based algorithms are based on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds it, while its running time is millions of times shorter. Our experimental results show that our algorithm has good and stable performance in the IC and LT models.

  5. A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks

    PubMed Central

    Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie

    2017-01-01

    One of the key problems in social network analysis is influence maximization, which has great significance both in theory and in practical applications. Given a complex network and a positive integer k, influence maximization asks for the k nodes that trigger the largest expected number of activations among the remaining nodes. Most mature algorithms fall into two classes: propagation-based algorithms and topology-based algorithms. Propagation-based algorithms optimize directly over the influence spread process, so their influence spread significantly outperforms that of topology-based algorithms; however, they can still take days to complete on large networks. Contrary to propagation-based algorithms, topology-based algorithms are based on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds it, while its running time is millions of times shorter. Our experimental results show that our algorithm has good and stable performance in the IC and LT models. PMID:28240238

  6. Dual format algorithm for monostatic SAR

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Rigling, Brian D.

    2010-04-01

    The polar format algorithm for monostatic synthetic aperture radar imaging is based on a linear approximation of the differential range to a scatterer, which leads to spatially-variant distortion and defocus in the resultant image. While approximate corrections may be applied to compensate for these effects, these corrections are ad-hoc in nature. Here, we introduce an alternative imaging algorithm called the Dual Format Algorithm (DFA) that provides better isolation of the defocus effects and reduces distortion. Quadratic phase errors are isolated along a single dimension by allowing image formation to an arbitrary grid instead of a Cartesian grid. This provides an opportunity for more efficient phase error corrections. We provide a description of the arbitrary image grid and we show the quadratic phase error correction derived from a second-order Taylor series approximation of the differential range. The algorithm is demonstrated with a point target simulation.

  7. Constructing backbone network by using tinker algorithm

    NASA Astrophysics Data System (ADS)

    He, Zhiwei; Zhan, Meng; Wang, Jianxiong; Yao, Chenggui

    2017-01-01

    Revealing how a biological network is organized to realize its function is one of the main topics in systems biology. The functional backbone network, defined as the primary structure of the biological network, is of great importance in maintaining the main function of the biological network. We propose a new algorithm, the tinker algorithm, to determine this core structure and apply it in the cell-cycle system. With this algorithm, the backbone network of the cell-cycle network can be determined accurately and efficiently in various models such as the Boolean model, stochastic model, and ordinary differential equation model. Results show that our algorithm is more efficient than that used in the previous research. We hope this method can be put into practical use in relevant future studies.

  8. Rigorous estimates for the relegation algorithm

    NASA Astrophysics Data System (ADS)

    Sansottera, Marco; Ceccaroni, Marta

    2017-01-01

    We revisit the relegation algorithm by Deprit et al. (Celest. Mech. Dyn. Astron. 79:157-182, 2001) in the light of rigorous Nekhoroshev-like theory. This relatively recent algorithm is nowadays widely used for implementing closed-form analytic perturbation theories, as it generalises the classical Birkhoff normalisation algorithm. The algorithm, here briefly explained by means of Lie transformations, has so far been introduced and used in a formal way, i.e. without providing any rigorous convergence or asymptotic estimates. The overall aim of this paper is to find such quantitative estimates and to show how results about stability over exponentially long times can be recovered in a simple and effective way, at least in the non-resonant case.

  9. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    PubMed

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as that given by the centralized clustering algorithms.
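
    For reference, a minimal centralized sketch of the k-means++ seeding that the paper runs in distributed form: each new centroid is drawn with probability proportional to the squared distance from the nearest centroid chosen so far. This is the textbook procedure, not the consensus-based distributed version.

        import numpy as np

        def kmeanspp_seeds(X, k, seed=0):
            rng = np.random.default_rng(seed)
            centroids = [X[rng.integers(len(X))]]        # first centroid: uniform
            for _ in range(k - 1):
                # squared distance to the nearest already-chosen centroid
                d2 = np.min([((X - c) ** 2).sum(1) for c in centroids], axis=0)
                centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
            return np.array(centroids)

        X = np.vstack([np.random.randn(300, 2) + c for c in ([0, 0], [6, 0], [3, 5])])
        print(kmeanspp_seeds(X, k=3))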

  10. Characteristic extraction and matching algorithms of ballistic missile in near-space by hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi

    2014-10-01

    The ballistic missile hyperspectral data of an imaging spectrometer on a near-space platform are generated by a numerical method. Characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The algorithm based on transverse counting has lower complexity and can be implemented more easily than the algorithm based on quantization coding. The transverse counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.

  11. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms suffer from a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through their own danger signals and then triggers immune responses of self-regulation, so that population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions that meet the required accuracies within the specified number of function evaluations. PMID:23483853

  12. Critical dynamics of cluster algorithms in the dilute Ising model

    NASA Astrophysics Data System (ADS)

    Hennecke, M.; Heyken, U.

    1993-08-01

    Autocorrelation times for thermodynamic quantities at T_C are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. Our results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. We conclude that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected.

  13. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and storage demands may eventually exceed capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms.

  14. Efficient Record Linkage Algorithms Using Complete Linkage Clustering

    PubMed Central

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records of the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. Many of the available algorithms for record linkage suffer from either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604

  16. Efficient Approximation Algorithms for Weighted $b$-Matching

    SciTech Connect

    Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.; Satish, Nadathur Rajagopalan; Sundaram, Narayanan; Manne, Fredrik; Halappanavar, Mahantesh; Dubey, Pradeep

    2016-01-01

    We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared-memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
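
    Since the paper proves that b-Suitor returns the same matching as the greedy algorithm, the greedy half-approximation is a useful mental model. Below is a minimal Python sketch of that baseline (the b-Suitor proposal mechanism and its parallelization are not shown; names are illustrative).

    ```python
    def greedy_b_matching(edges, b):
        """Greedy half-approximation for maximum-weight b-Matching:
        scan edges by decreasing weight, keeping an edge when both
        endpoints still have capacity. b maps vertex -> bound b(v).
        edges: list of (weight, u, v) tuples."""
        remaining = dict(b)                  # capacity left at each vertex
        matching = []
        for w, u, v in sorted(edges, reverse=True):
            if remaining[u] > 0 and remaining[v] > 0:
                matching.append((u, v, w))
                remaining[u] -= 1
                remaining[v] -= 1
        return matching

    edges = [(5, 'a', 'b'), (4, 'b', 'c'), (3, 'a', 'c'), (1, 'c', 'd')]
    print(greedy_b_matching(edges, {'a': 1, 'b': 2, 'c': 2, 'd': 1}))
    ```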

  17. An improved lossless group compression algorithm for seismic data in SEG-Y and MiniSEED file formats

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shen, Tong; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2017-03-01

    An improved lossless group compression algorithm is proposed for decreasing the size of SEG-Y files, to relieve the enormous burden associated with the transmission and storage of large amounts of seismic exploration data. Because each data point is represented by 4 bytes in SEG-Y files, the file is broken down into 4 subgroups, and the Gini coefficient is employed to analyze the distribution of the overall data and of each of the 4 data subgroups within the range [0,255]. The results show that each subgroup has a characteristic frequency distribution suited to a distinct compression algorithm, so the data of each subgroup are compressed with the best-suited algorithm. After comparing the compression ratios obtained for each data subgroup using different algorithms, the Lempel-Ziv-Markov chain algorithm (LZMA) was selected for the compression of the first two subgroups and the Deflate algorithm for the latter two subgroups. The compression ratios and decompression times obtained with the improved algorithm were compared with those obtained with commonly employed compression algorithms for SEG-Y files of different sizes. The experimental results show that the improved algorithm provides a compression ratio of 75-80%, which is more effective than compression algorithms presently applied to SEG-Y files. In addition, the proposed algorithm is applied to the miniSEED format used in natural earthquake monitoring, and the results are compared with those obtained using the Steim2 compression algorithm; the results again show that the proposed algorithm provides better data compression.
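
    The core idea, splitting each 4-byte sample into byte planes and compressing each plane with the codec suited to its distribution, can be sketched in Python as follows. The plane-to-codec assignment mirrors the paper's conclusion (LZMA for the first two subgroups, Deflate for the latter two); the Gini-coefficient analysis that motivates it is omitted, and the helper names are illustrative.

    ```python
    import lzma, zlib
    import numpy as np

    def compress_byte_planes(raw, width=4):
        """Split a stream of fixed-width samples into byte planes and
        compress each plane with its assigned codec."""
        planes = [raw[i::width] for i in range(width)]
        return [lzma.compress(p) if i < 2 else zlib.compress(p)
                for i, p in enumerate(planes)]

    # Toy usage with synthetic big-endian 4-byte integer samples.
    raw = np.arange(10000, dtype='>i4').tobytes()
    chunks = compress_byte_planes(raw)
    print(len(raw), '->', sum(len(c) for c in chunks), 'bytes')
    ```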

  18. Comparison of swarm intelligence algorithms in atmospheric compensation for free space optical communication

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui

    2015-03-01

    Conventional adaptive optical systems compensate for atmospheric turbulence in free space optical (FSO) communication systems, but under strong scintillation the wave-front measurements from a Shack-Hartmann (SH) sensor become unreliable. Since wavefront-sensorless adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO, and we discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration and substantially improve both the convergence rate of the algorithms and the coupling efficiency of the receiver.
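
    For reference, the SPGD baseline that the proposed swarm algorithms are compared against can be sketched in a few lines of Python: perturb all actuators at once with a random +/- pattern and step along the estimated gradient of a scalar quality metric. This is a generic illustration, not the paper's implementation; the toy metric stands in for a measured coupling efficiency.

    ```python
    import numpy as np

    def spgd(metric, n_act, gain=1.0, sigma=0.1, iters=500, seed=0):
        """Stochastic parallel gradient descent (ascent form): apply a
        random +/- perturbation to all actuators simultaneously and step
        along the estimated gradient of the measured metric."""
        rng = np.random.default_rng(seed)
        u = np.zeros(n_act)                        # actuator control vector
        for _ in range(iters):
            du = sigma * rng.choice([-1.0, 1.0], size=n_act)
            dJ = metric(u + du) - metric(u - du)   # two-sided difference
            u += gain * dJ * du                    # gradient-ascent step
        return u

    # Toy metric peaking at a hidden aberration vector (a stand-in for
    # measured fiber-coupling efficiency).
    target = np.linspace(-1.0, 1.0, 12)
    metric = lambda u: -float(np.sum((u - target) ** 2))
    print(np.round(spgd(metric, 12), 2))
    ```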

  19. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in the early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtering and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.

  20. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    PubMed

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2016-05-18

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.

  1. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  2. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.

  3. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. One of their most demanding applications is shortest path search. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm; its main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found by this approach is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path with the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
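
    For context, a minimal Python sketch of the classic Dijkstra's algorithm that the paper modifies is shown below; the reduced-graph construction itself is the paper's contribution and is not reproduced here.

    ```python
    import heapq

    def dijkstra(graph, source):
        """Dijkstra's algorithm with a binary heap.
        graph: {node: [(neighbor, edge_cost), ...]}"""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                          # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1), ('d', 4)], 'c': [('d', 1)]}
    print(dijkstra(g, 'a'))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
    ```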

  4. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm where the bandwidth (BW) is one of the deciding factors for the time complexity and the performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The idea is to use a large BW to search the full domain and to adjust the BW dynamically closer to the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.
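
    A minimal Python sketch of harmony search with a dynamically shrinking bandwidth is given below. The exponential decay schedule is an illustrative stand-in for the low-pass-filter-inspired adaptation in SIHS, and all parameter values are assumptions.

    ```python
    import numpy as np

    def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3,
                       iters=2000, seed=0):
        """Harmony search minimizing f; the pitch-adjustment bandwidth
        starts wide (exploration) and decays exponentially (exploitation)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        bw0 = 0.1 * (hi - lo)
        memory = rng.uniform(lo, hi, (hms, dim))
        scores = np.array([f(h) for h in memory])
        for t in range(iters):
            bw = bw0 * np.exp(-4.0 * t / iters)        # dynamic bandwidth
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                # memory consideration
                    new[j] = memory[rng.integers(hms), j]
                    if rng.random() < par:             # pitch adjustment
                        new[j] += bw * rng.uniform(-1, 1)
                else:                                  # random selection
                    new[j] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            worst = np.argmax(scores)
            if f(new) < scores[worst]:                 # replace worst harmony
                memory[worst], scores[worst] = new, f(new)
        return memory[np.argmin(scores)]

    sphere = lambda x: float(np.sum(x ** 2))
    print(harmony_search(sphere, dim=5, bounds=(-5.0, 5.0)))
    ```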

  5. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is a model-free reinforcement learning strategy in essence, is used for performing a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids locally optimal situations, thereby enabling success probabilities to approach one.
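
    As a worked example of the balance between iterations and success probability, the snippet below computes both for the standard π-phase Grover search, the special case that the adjustable-phase, Q-learning-based variant generalizes.

    ```python
    import math

    def grover_success(N, M):
        """Iteration count and success probability of standard (pi-phase)
        Grover search: the success amplitude after k iterations is
        sin((2k + 1) * theta) with theta = asin(sqrt(M / N))."""
        theta = math.asin(math.sqrt(M / N))
        k = math.floor(math.pi / (4 * theta))     # near-optimal iterations
        return k, math.sin((2 * k + 1) * theta) ** 2

    k, p = grover_success(1024, 1)                # 1024 items, 1 marked
    print(f"{k} iterations, success probability {p:.4f}")
    ```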

  6. A semantic characterization of an algorithm for estimating others' beliefs from observation

    SciTech Connect

    Isozaki, Hideki; Katsuno, Hirofumi

    1996-12-31

    Human beings often estimate others' beliefs and intentions when they interact with others. Estimation of others' beliefs will also be useful in controlling the behavior and utterances of artificial agents, especially when lines of communication are unstable or slow. However, devising such estimation algorithms and background theories for the algorithms is difficult because of the many factors affecting one's beliefs. We have proposed an algorithm that estimates others' beliefs from observation in a changing world. Experimental results show that this algorithm returns natural answers to various queries. However, the algorithm is only heuristic, and how the algorithm deals with beliefs and their changes is not entirely clear. We propose certain semantics based on a nonstandard structure for modal logic. By using these semantics, we shed light on the logical meaning of the belief estimation that the algorithm performs. We also discuss how the semantics and the algorithm can be generalized.

  7. 15. Detail showing lower chord pin-connected to vertical member, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN

  8. New convergence estimates for multigrid algorithms

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.

    1987-10-01

    In this paper, new convergence estimates are proved for both symmetric and nonsymmetric multigrid algorithms applied to symmetric positive definite problems. Our theory relates the convergence of multigrid algorithms to a "regularity and approximation" parameter α ∈ (0, 1) and the number of relaxations m. We show that for the symmetric and nonsymmetric ν-cycles, the multigrid iteration converges for any positive m at a rate which deteriorates no worse than 1 − cj^(−(1−α)/α), where j is the number of grid levels. We then define a generalized ν-cycle algorithm which involves exponentially increasing (for example, doubling) the number of smoothings on successively coarser grids. We show that the resulting symmetric and nonsymmetric multigrid iterations converge for any α with rates that are independent of the mesh size. The theory is presented in an abstract setting which can be applied to finite element multigrid and finite difference multigrid methods.

  9. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently, and depending on the requirements and constraints suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.

  10. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image, the computational cost of this method is low, and image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex to be used in real-time systems, yet target tracking often requires high real-time performance. Based on this consideration, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between these two algorithms is small, less than 0.01 pixel. In order to study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken as the camera was rotated through different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The result shows that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
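
    A minimal Python sketch of SAD matching with parabolic sub-pixel refinement is shown below. It uses two separable 1D parabola fits rather than the paper's paraboloidal (2D) fit, so it illustrates the general idea only.

    ```python
    import numpy as np

    def sad_match(image, template):
        """Exhaustive SAD template matching, refined to sub-pixel
        precision by fitting a parabola through the SAD values at the
        best pixel and its neighbors along each axis."""
        ih, iw = image.shape
        th, tw = template.shape
        sad = np.empty((ih - th + 1, iw - tw + 1))
        for y in range(sad.shape[0]):
            for x in range(sad.shape[1]):
                sad[y, x] = np.abs(image[y:y+th, x:x+tw] - template).sum()
        y0, x0 = np.unravel_index(np.argmin(sad), sad.shape)

        def vertex(f_m, f_0, f_p):
            # vertex of the parabola through (-1, f_m), (0, f_0), (1, f_p)
            denom = f_m - 2 * f_0 + f_p
            return 0.0 if denom == 0 else 0.5 * (f_m - f_p) / denom

        dy = vertex(sad[y0-1, x0], sad[y0, x0], sad[y0+1, x0]) \
            if 0 < y0 < sad.shape[0] - 1 else 0.0
        dx = vertex(sad[y0, x0-1], sad[y0, x0], sad[y0, x0+1]) \
            if 0 < x0 < sad.shape[1] - 1 else 0.0
        return y0 + dy, x0 + dx

    img = np.random.rand(64, 64)
    print(sad_match(img, img[20:28, 30:38]))   # approximately (20.0, 30.0)
    ```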

  11. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    PubMed Central

    Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi

    2014-01-01

    This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local-search-based post-tuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also on real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222

  12. Nonlinear physical segmentation algorithm for determining the layer boundary from lidar signal.

    PubMed

    Mao, Feiyue; Li, Jun; Li, Chen; Gong, Wei; Min, Qilong; Wang, Wei

    2015-11-30

    Layer boundary (base and top) detection is a basic problem in lidar data processing, the results of which are used as inputs for optical property retrieval. However, traditional algorithms not only require manual intervention but also rely heavily on the signal-to-noise ratio. Therefore, we propose a robust and automatic algorithm for layer detection based on a novel algorithm for lidar signal segmentation and representation. Our algorithm is based on the lidar equation and avoids most of the limitations of the traditional algorithms. Testing on simulated and real signals shows that the algorithm is able to position the base and top accurately even with a low signal-to-noise ratio. Furthermore, the results of the classification are accurate and satisfactory. The experimental results confirm that our algorithm can be used for automatic detection, retrieval, and analysis of lidar data sets.

  13. Learning factorizations in estimation of distribution algorithms using affinity propagation.

    PubMed

    Santana, Roberto; Larrañaga, Pedro; Lozano, José A

    2010-01-01

    Estimation of distribution algorithms (EDAs) that use marginal product model factorizations have been widely applied to a broad range of mainly binary optimization problems. In this paper, we introduce the affinity propagation EDA (AffEDA), which learns a marginal product model by clustering a matrix of mutual information learned from the data using a very efficient message-passing algorithm known as affinity propagation. The introduced algorithm is tested on a set of binary and nonbinary decomposable functions and on a hard combinatorial class of problems known as the HP protein model. The results show that the algorithm is a very efficient alternative to other EDAs that use marginal product model factorizations, such as the extended compact genetic algorithm (ECGA), and improves the quality of the results achieved by ECGA when the cardinality of the variables is increased.

  14. Comparative testing of DNA segmentation algorithms using benchmark simulations.

    PubMed

    Elhaik, Eran; Graur, Dan; Josic, Kresimir

    2010-05-01

    Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performance of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms from the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.

  15. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm

    PubMed Central

    Wu, Haizhou; Luo, Qifang

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for experiments, and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of converging speed. It is also proven that an FNN trained by SOS has better accuracy than those trained by most of the compared algorithms. PMID:28105044

  16. [The comparison of algorithms on the CT image retrieval of Xinjiang local liver hydatid disease].

    PubMed

    Yan, Chuanbo; Hamit, Murat; Li, Li; Chen, Jianjun; Hu, Yahting; Kong, Dewei; Zhou, Jingjing

    2013-10-01

    Xinjiang local liver hydatid disease is an infectious parasitic disease in Xinjiang pastoral areas. Selecting appropriate distance algorithms to retrieve images quickly and accurately based on image features can greatly assist doctors in the early detection, diagnosis, and treatment of liver hydatid disease, and several distance algorithms have been introduced in this area. This paper compares the performance of different distance algorithms for image retrieval using texture features of liver hydatid disease medical images. The results show that, for liver hydatid disease medical image retrieval based on gray level co-occurrence matrix (GLCM) texture features, the Mahalanobis distance algorithm is superior to the other distance algorithms.
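
    A minimal sketch of the retrieval step, ranking database images by Mahalanobis distance between texture feature vectors, might look as follows in Python; the GLCM feature extraction itself is omitted and the feature layout is an assumption.

    ```python
    import numpy as np

    def mahalanobis_rank(query, features):
        """Rank database entries by Mahalanobis distance to the query,
        with the covariance estimated from the database features."""
        cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
        diffs = features - query
        d2 = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)  # squared distances
        return np.argsort(d2)

    # Toy usage: 100 four-dimensional texture feature vectors (e.g. GLCM
    # contrast, energy, entropy, homogeneity).
    feats = np.random.rand(100, 4)
    print(mahalanobis_rank(feats[0], feats)[:5])   # query itself ranks first
    ```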

  17. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem.
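
    The NEH heuristic used here for the subscheduling problems can be sketched compactly: order the jobs by decreasing total processing time, then insert each job at the position that minimizes the partial makespan. This is a generic NEH sketch in Python, not the paper's DBA; the instance data are illustrative.

    ```python
    def makespan(seq, p):
        """Completion time of the last job on the last machine;
        p[j][m] is the processing time of job j on machine m."""
        c = [0] * len(p[0])
        for j in seq:
            c[0] += p[j][0]
            for k in range(1, len(c)):
                c[k] = max(c[k], c[k - 1]) + p[j][k]
        return c[-1]

    def neh(p):
        """NEH: sort jobs by decreasing total time, insert greedily."""
        order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
        seq = []
        for j in order:
            seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                      key=lambda s: makespan(s, p))
        return seq, makespan(seq, p)

    # 4 jobs x 3 machines toy instance.
    print(neh([[3, 4, 6], [5, 4, 2], [1, 2, 3], [6, 5, 4]]))
    ```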

  18. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which play a vital role in reducing proliferative diabetic retinopathy and in preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is best suited for fast processing applications.
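
    An illustrative pipeline in the spirit of this abstract, histogram equalization followed by an automatically selected threshold, can be sketched with scikit-image as follows; the paper's exact enhancement and threshold-selection steps may differ.

    ```python
    import numpy as np
    from skimage import exposure, filters

    def segment_vessels(green_channel):
        """Equalize the histogram for contrast enhancement, then binarize
        with an Otsu threshold; vessels are darker than the background in
        the green channel, so the mask keeps pixels below the threshold."""
        enhanced = exposure.equalize_hist(green_channel)   # contrast enhancement
        t = filters.threshold_otsu(enhanced)               # automatic threshold
        return enhanced < t

    # Toy usage on random data standing in for a fundus green channel.
    img = np.random.rand(64, 64)
    print(segment_vessels(img).mean())
    ```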

  19. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
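
    For reference, the serial Thomas algorithm at the heart of these pipelined variants is the standard forward-elimination/back-substitution tridiagonal solver. A minimal Python sketch follows; the pipelining and processor scheduling discussed above are not shown.

    ```python
    def thomas(a, b, c, d):
        """Thomas algorithm for a tridiagonal system.
        a: sub-diagonal (length n-1), b: diagonal (length n),
        c: super-diagonal (length n-1), d: right-hand side (length n)."""
        n = len(b)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                    # forward elimination
            denom = b[i] - a[i - 1] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):           # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Solve [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]; the answer is [1, 2, 3].
    print(thomas([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0]))
    ```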

  20. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, and of the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
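
    A compact Python sketch of the diamond-square algorithm described above: corner seeds control the large-scale shape, and the random perturbation amplitude shrinks at each subdivision level. Parameter names and values are illustrative.

    ```python
    import numpy as np

    def diamond_square(n, roughness=0.6, seed=0):
        """Fractal terrain on a (2^n + 1) x (2^n + 1) grid."""
        rng = np.random.default_rng(seed)
        size = 2 ** n + 1
        h = np.zeros((size, size))
        h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)
        step, amp = size - 1, 1.0
        while step > 1:
            half = step // 2
            # Diamond step: each square's center gets the corner average.
            for y in range(half, size, step):
                for x in range(half, size, step):
                    h[y, x] = (h[y-half, x-half] + h[y-half, x+half] +
                               h[y+half, x-half] + h[y+half, x+half]) / 4 \
                              + amp * rng.uniform(-1, 1)
            # Square step: edge midpoints get the average of their neighbors.
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    nbrs = [h[y+dy, x+dx]
                            for dy, dx in ((-half, 0), (half, 0),
                                           (0, -half), (0, half))
                            if 0 <= y+dy < size and 0 <= x+dx < size]
                    h[y, x] = sum(nbrs) / len(nbrs) + amp * rng.uniform(-1, 1)
            step, amp = half, amp * roughness
        return h

    terrain = diamond_square(5)            # 33 x 33 height map
    print(terrain.shape, float(terrain.min()), float(terrain.max()))
    ```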

  1. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  2. Faster Parameterized Algorithms for Minor Containment

    NASA Astrophysics Data System (ADS)

    Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.

    The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^{k^2} · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^{(2k+1) log k} · h^{2k} · 2^{2h^2} · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^{O(k)} · h^{2k} · 2^{O(h)} · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.

  3. Optimal Hops-Based Adaptive Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

    This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the cluster forms so that the nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load. The optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the life of the network, improves the utilization rate and transmits more data because of energy balance.

  4. Speckle imaging algorithms for planetary imaging

    SciTech Connect

    Johansson, E.

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  5. Statistical pattern recognition algorithms for autofluorescence imaging

    NASA Astrophysics Data System (ADS)

    Kulas, Zbigniew; Bereś-Pawlik, Elżbieta; Wierzbicki, Jarosław

    2009-02-01

    In cancer diagnostics the most important problems are the early identification and estimation of tumor growth and spread in order to determine the area to be operated on. The aim of the work was to design statistical algorithms that help doctors objectively estimate pathologically changed areas and assess the advancement of the disease. In the research, algorithms for classifying endoscopic autofluorescence images of the larynx and intestine were used. The results show that statistical pattern recognition offers new possibilities for endoscopic diagnostics and can be of tremendous help in assessing the area of pathological changes.

  6. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving the user in a quandary about which one should be used. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  7. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  8. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.

  9. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation.

    PubMed

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms; it uses quantum bits to encode the artificial fish and updates them, in the search for the optimal value, through the quantum rotation gate and the preying, following, and mutation behaviors of the quantum artificial fish. Then, we apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems, and the simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA.

  10. Calculation of Computational Complexity for Radix-2^p Fast Fourier Transform Algorithms for Medical Signals.

    PubMed

    Amirfattahi, Rassoul

    2013-10-01

    Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, in this paper we propose a method for exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2 and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly less computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2^3 algorithm is slightly more than in radix-2^2, the number of real multiplications for radix-2^3 is less than for radix-2^2. This is because twiddle factors of a special form, which require fewer real multiplications, are more frequent in the radix-2^3 algorithm.

  11. Evaluation metric of an image understanding result

    NASA Astrophysics Data System (ADS)

    Hemery, Baptiste; Laurent, Helene; Emile, Bruno; Rosenberger, Christophe

    2015-01-01

    Image processing algorithms include methods that process images from their acquisition to the extraction of useful information for a given application. Among interpretation algorithms, some are designed to detect, localize, and identify one or several objects in an image. The problem addressed is the evaluation of the interpretation results of an image or a video given an associated ground truth. Challenges are multiple, such as the comparison of algorithms, evaluation of an algorithm during its development, or the definition of its optimal settings. We propose a new metric for evaluating the interpretation result of an image. The advantage of the proposed metric is to evaluate a result by taking into account the quality of the localization, recognition, and detection of objects of interest in the image. Several parameters allow us to change the behavior of this metric for a given application. Its behavior has been tested on a large database and showed interesting results.

  12. Optimized multilevel codebook searching algorithm for vector quantization in image coding

    NASA Astrophysics Data System (ADS)

    Cao, Hugh Q.; Li, Weiping

    1996-02-01

    An optimized multi-level codebook searching algorithm (MCS) for vector quantization is presented in this paper. Although it belongs to the category of fast nearest neighbor searching (FNNS) algorithms for vector quantization, the MCS algorithm is not a variation of any existing FNNS algorithm (such as the k-d tree searching algorithm, the partial-distance searching algorithm, or the triangle inequality searching algorithm). A multi-level search theory is introduced, and the problem of implementing this theory is solved by a specially defined irregular tree structure that can be built from a training set. This irregular tree structure is different from any tree structure used in TSVQ, pruned-tree VQ, or quad-tree VQ. Strictly speaking, it cannot be called a tree structure, since it allows one node to have more than one set of parents; it is really a directed graph. This is the essential difference between the MCS algorithm and other TSVQ algorithms, and it ensures better performance. An efficient design procedure is given to find the optimized irregular tree for a practical source. The simulation results of applying the MCS algorithm to image VQ show that this algorithm can reduce the searching complexity to less than 3% of that of exhaustive search vector quantization (ESVQ) (4096 codevectors and dimension 16) while introducing negligible error (0.064 dB degradation from ESVQ). Simulation results also show that the searching complexity increases close to linearly with bitrate.

  13. Hey Teacher, Your Personality's Showing!

    ERIC Educational Resources Information Center

    Paulsen, James R.

    1977-01-01

    A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)

  14. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  15. Advancing x-ray scattering metrology using inverse genetic algorithms

    NASA Astrophysics Data System (ADS)

    Hannon, Adam F.; Sunday, Daniel F.; Windover, Donald; Joseph Kline, R.

    2016-07-01

    We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real-space structure in periodic gratings measured using critical dimension small-angle x-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real-space structure of our nanogratings. The study shows that for x-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.

  17. A simulation algorithm for ultrasound liver backscattered signals.

    PubMed

    Zatari, D; Botros, N; Dunn, F

    1995-11-01

    In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities.

  18. Label propagation algorithm based on local cycles for community detection

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-Kun; Fei, Song; Song, Chen; Tian, Xue; Ao, Yang-Yue

    2015-12-01

    Label propagation algorithm (LPA) has been proven to be an extremely fast method for community detection in large complex networks. However, an important issue of the algorithm has not yet been properly addressed: random update orders in the label propagation process hamper the robustness of the algorithm. We note that when there are multiple maximal labels among a node's neighbors' labels, choosing the label of a neighbor from which there is a local cycle back to the node, instead of a random neighbor's label, can prevent labels from propagating among communities at random. In this paper, an improved LPA based on local cycles is given. We have evaluated the proposed algorithm on computer-generated networks with planted partition and on some real-world networks whose community structure is already known. The results show that the performance of the proposed approach is significantly improved.
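
    For orientation, here is a minimal sketch of the standard LPA that the local-cycle variant improves upon (plain Python, hypothetical toy graph); the random tie-breaking visible in the code is exactly the source of instability the paper addresses.

```python
import random
from collections import Counter

def label_propagation(adjacency, max_iter=100, seed=0):
    """Standard LPA: every node repeatedly adopts the most frequent label
    among its neighbors; ties are broken at random."""
    rng = random.Random(seed)
    labels = {node: node for node in adjacency}   # unique initial labels
    for _ in range(max_iter):
        nodes = list(adjacency)
        rng.shuffle(nodes)                        # random update order
        changed = False
        for node in nodes:
            counts = Counter(labels[v] for v in adjacency[node])
            if not counts:
                continue
            best = max(counts.values())
            candidates = [l for l, c in counts.items() if c == best]
            new = rng.choice(candidates)          # random tie-breaking
            if new != labels[node]:
                labels[node], changed = new, True
        if not changed:
            break
    return labels

# Two triangles joined by one edge -> typically two communities
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(graph))
```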

  19. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a serious concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.

  20. Evaluating Applicability of Four Recursive Algorithms for Computation of the Fully Normalized Associated Legendre Functions

    NASA Astrophysics Data System (ADS)

    Lei, Weiwei; Li, Kai

    2016-12-01

    There are four recursive algorithms used in the computation of the fully normalized associated Legendre functions (FNALFs): the standard forward column algorithm, the standard forward row algorithm, the recursive algorithm between every other degree, and the Belikov algorithm. These algorithms were evaluated in terms of their first relative numerical accuracy, second relative numerical accuracy, and computation speed and efficiency. The results show that when the degree n reaches 3000, both the recursive algorithm between every other degree and the Belikov algorithm remain applicable for | cos θ | ∈ [0, 1], with the latter having better second relative numerical accuracy than the former, at a slower computation speed. Within degree n of 1900, the standard forward column algorithm, the recursive algorithm between every other degree, and the Belikov algorithm are all applicable for | cos θ | ∈ [0, 1], and the standard forward column algorithm has the highest computation speed. The standard forward column algorithm's range of applicability decreases as the degree increases beyond 1900; however, it remains applicable within a narrow range when | cos θ | is approximately equal to 1. The standard forward row algorithm has the smallest range of applicability: it is applicable for | cos θ | ∈ [0, 1] only within degree n of 100, and its range of applicability decreases rapidly beyond that degree. The results of this research are expected to be useful to researchers in choosing the best algorithms for the computation of the FNALFs.
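
    As a concrete reference point, the standard forward column recursion is compact enough to sketch. The version below assumes the usual geodetic (fully normalized) convention, with coefficients as given, e.g., by Holmes and Featherstone; it runs in double precision and therefore degrades at the high degrees discussed above.

```python
import numpy as np

def fnalf_forward_column(nmax, theta):
    """Standard forward column recursion for fully normalized associated
    Legendre functions P[n, m](cos theta)."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 1] = np.sqrt(3.0) * u
    for m in range(2, nmax + 1):                       # sectoral seed values
        P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
    for m in range(0, nmax):
        for n in range(m + 1, nmax + 1):               # forward in degree n
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            pnm2 = P[n - 2, m] if n >= m + 2 else 0.0  # b is 0 when n = m + 1
            P[n, m] = a * t * P[n - 1, m] - b * pnm2
    return P

P = fnalf_forward_column(10, np.radians(60.0))
print(P[2, 0])   # equals sqrt(5) * (3*cos(60)^2 - 1) / 2, about -0.2795
```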

  1. Component Labeling Algorithm For Video Rate Processing

    NASA Astrophysics Data System (ADS)

    Gotoh, Toshiyuki; Ohta, Yoshiyuki; Yoshida, Masumi; Shirai, Yoshio

    1987-10-01

    In this paper, we propose a raster scanning algorithm for component labeling which enables processing under a pipeline architecture. In the raster scanning algorithm, provisional labels are assigned to each pixel of the components and, at the same time, the connectivities of labels are detected during the first scan. Those labels are classified into groups based on the connectivities. Finally, the provisional labels are updated using the result of the classification, and a unique label is assigned to each pixel of each component. In the conventional algorithm, however, the classification process needs a vast number of operations, which prevents pipeline processing. We have developed a preprocessing method that reduces the number of provisional labels, which limits the number of label connectivities. We have also developed a new classification method whose operation count is proportional only to the number of label connectivities itself. We have verified this algorithm by computer simulation. The experimental results show that 512 x 512 x 8 bit images can be processed at video rate (1/30 sec per image) when this algorithm is implemented in hardware.
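
    The paper's pipeline-oriented preprocessing is not reproducible from the abstract, but the underlying two-scan scheme it optimizes is classic. A minimal Python sketch with a union-find over provisional labels (4-connectivity, hypothetical toy image):

```python
import numpy as np

def two_pass_label(binary):
    """Classic two-pass connected component labeling (4-connectivity).
    First scan assigns provisional labels and records label equivalences;
    union-find classifies them; second scan rewrites final labels."""
    parent = [0]                                   # union-find over labels
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up and left:
                labels[y, x] = min(up, left)
                union(up, left)                    # record connectivity
            elif up or left:
                labels[y, x] = up or left
            else:
                parent.append(next_label)          # new provisional label
                labels[y, x] = next_label
                next_label += 1
    for y in range(h):                             # second scan: final labels
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(two_pass_label(img))
```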

  2. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. The BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this study.

  3. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Under similar computational difficulty and complexity, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  4. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Under similar computational difficulty and complexity, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering.

  5. Human performance models and rear-end collision avoidance algorithms.

    PubMed

    Brown, T L; Lee, J D; McGehee, D V

    2001-01-01

    Collision warning systems offer a promising approach to mitigate rear-end collisions, but substantial uncertainty exists regarding the joint performance of the driver and the collision warning algorithms. A simple deterministic model of driver performance was used to examine kinematics-based and perceptual-based rear-end collision avoidance algorithms over a range of collision situations, algorithm parameters, and assumptions regarding driver performance. The results show that the assumptions concerning driver reaction times have important consequences for algorithm performance, with underestimates dramatically undermining the safety benefit of the warning. Additionally, under some circumstances, when drivers rely on the warning algorithms, larger headways can result in more severe collisions. This reflects the nonlinear interaction among the collision situation, the algorithm, and driver response that should not be attributed to the complexities of driver behavior but to the kinematics of the situation. Comparisons made with experimental data demonstrate that a simple human performance model can capture important elements of system performance and complement expensive human-in-the-loop experiments. Actual or potential applications of this research include selection of an appropriate algorithm, more accurate specification of algorithm parameters, and guidance for future experiments.

  6. Satellite Animation Shows California Storms

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite shows a series of moisture-laden storms affecting California from Jan. 6 through Jan. 9, 2017. TRT: 00:36 Credit: NASA...

  7. Satellite Movie Shows Erika Dissipate

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite from Aug. 27 to 29 shows Tropical Storm Erika move through the Eastern Caribbean Sea and dissipate near eastern Cuba. ...

  8. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks the long and unpredictable latency of remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, and latency. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results achieved in this study, we plan to study other architectures of interest, including the development of cost models and of code generators appropriate to these architectures.

  9. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.

  10. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ (ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  11. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  12. Comparison of genetic algorithm and imperialist competitive algorithms in predicting bed load transport in clean pipe.

    PubMed

    Ebtehaj, Isa; Bonakdari, Hossein

    2014-01-01

    The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of the observed and predicted Froude number). To study the impact of bed load transport parameters, six different models have been presented using four non-dimensional groups. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%), and the ICA returns better results than the GA for all six models. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations.

  13. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    PubMed

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    Cuckoo search algorithm is a promising metaheuristic population-based method that has been applied to solve many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines the cuckoo search algorithm with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations; the best solution obtained is then passed to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of the cuckoo search algorithm with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems and compare it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time.
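
    A minimal sketch of the two-stage structure, with plain random sampling standing in for the cuckoo-search exploration stage (the Lévy-flight machinery is omitted) and SciPy's Nelder-Mead as the intensification stage:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_global_then_nm(f, bounds, n_global=500, seed=0):
    """Sketch of the HCSNM structure: a population-based global stage
    (plain random sampling here, standing in for cuckoo search) followed
    by Nelder-Mead intensification from the best point found."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    samples = rng.uniform(lo, hi, size=(n_global, len(bounds)))
    best = min(samples, key=f)                        # exploration stage
    result = minimize(f, best, method="Nelder-Mead")  # exploitation stage
    return result.x, result.fun

rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(hybrid_global_then_nm(rosenbrock, [(-5, 5), (-5, 5)]))
```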

  14. Enhanced hybrid search algorithm for protein structure prediction using the 3D-HP lattice model.

    PubMed

    Zhou, Changjun; Hou, Caixia; Zhang, Qiang; Wei, Xiaopeng

    2013-09-01

    The problem of protein structure prediction in the hydrophobic-polar (HP) lattice model is the prediction of protein tertiary structure, usually referred to as the protein folding problem. This paper presents a method that applies an enhanced hybrid search algorithm to the protein folding prediction problem using the three-dimensional (3D) HP lattice model. The enhanced hybrid search algorithm is a combination of the particle swarm optimizer (PSO) and the tabu search (TS) algorithm. Since the PSO algorithm is easily trapped in local minima during later stages of the evolution, we combined PSO with the TS algorithm, which has good global optimization properties. Because crossover and mutation operators are applied many times within the PSO and TS algorithms, the enhanced hybrid search algorithm is called the MCMPSO-TS (multiple crossover and mutation PSO-TS) algorithm. Experimental results show that the MCMPSO-TS algorithm finds the best solutions so far for the listed benchmarks, which will aid comparison with future approaches. Moreover, real protein sequences and Fibonacci sequences are verified in the 3D HP lattice model for the first time. Compared with previous evolutionary algorithms, the new hybrid search algorithm is novel and can be used effectively to predict 3D protein folding structures. With the continuous development and changes in amino acid sequences, the new algorithm will also contribute to the study of new protein sequences.

  15. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  16. Chaotic substitution for highly autocorrelated data in encryption algorithm

    NASA Astrophysics Data System (ADS)

    Anees, Amir; Siddiqui, Adil Masood; Ahmed, Fawad

    2014-09-01

    This paper addresses the major drawback of the substitution box for highly autocorrelated data and proposes a novel chaotic substitution technique for encryption algorithms to solve the problem. Simulation results reveal that the overall strength of the proposed encryption technique is much greater than that of most existing encryption techniques. Furthermore, several statistical security analyses have been performed to show the strength of the proposed algorithm.

  17. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J.

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
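
    The algorithm described here is the basis of the widely used L-BFGS-B code, which SciPy exposes; a bound-constrained problem can be handed to it as below. The quadratic objective is an illustrative toy whose unconstrained minimum lies outside the bounds, so the solver must work on the bound surface.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a quadratic subject to simple bounds; the unconstrained minimum
# at (3, -2) is infeasible given the box [0, 2] x [0, 2].
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 2.0)])

res = minimize(f, x0=np.zeros(2), jac=grad, method="L-BFGS-B",
               bounds=[(0.0, 2.0), (0.0, 2.0)])
print(res.x)   # expect approximately [2.0, 0.0]
```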

  18. Chahine algorithm to invert light scattering spectroscopy of epithelial dysplasia

    NASA Astrophysics Data System (ADS)

    Wang, Qing-Hua; Li, Zhen-Hua; Lai, Jian-Cheng; He, An-Zhi

    2007-09-01

    To perceive epithelial dysplasia from light scattering spectroscopy (LSS) is an inverse problem, which can be transformed into the inversion of the size distribution of epithelial-cell nuclei. Based on the simulation of single polarized LSS for epithelial-cell nuclei, the Chahine algorithm is adopted to retrieve the size distribution. Numerical results show that the Chahine algorithm has high inversion precision for both single-peaked and bimodal models, which implies its potential to increase the diagnostic resolution of LSS.
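
    The Chahine relaxation itself is simple to sketch for a generic first-kind system g = K f: each measurement channel is paired with the size bin where its kernel peaks and that bin is corrected multiplicatively. The Python sketch below uses a synthetic Gaussian kernel; all quantities are hypothetical, and the single-scattering LSS kernel of the paper is not reproduced.

```python
import numpy as np

def chahine_invert(K, g_meas, f0, n_iter=200):
    """Generic Chahine-type relaxation for g = K f: pair each measurement
    channel i with the bin j where its kernel row peaks, then correct that
    bin multiplicatively by the ratio measured/computed."""
    f = f0.astype(float).copy()
    peak_bin = np.argmax(K, axis=1)            # channel -> dominant size bin
    for _ in range(n_iter):
        g_calc = K @ f
        for i, j in enumerate(peak_bin):
            f[j] *= g_meas[i] / g_calc[i]      # multiplicative relaxation
    return f

rng = np.random.default_rng(2)
grid = np.arange(8.0)
K = np.exp(-0.5 * np.subtract.outer(grid, grid) ** 2)  # well-conditioned kernel
f_true = rng.uniform(0.5, 2.0, 8)
f_est = chahine_invert(K, K @ f_true, np.ones(8))
print(np.round(f_est / f_true, 2))             # ratios near 1 indicate recovery
```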

  19. Economic load dispatch using improved gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Wang, Jia-rong; Guo, Feng

    2016-03-01

    This paper presents an improved gravitational search algorithm (IGSA) to solve the economic load dispatch (ELD) problem. In order to avoid the local optimum phenomenon, mutation processing is applied to the GSA. The IGSA is applied to solve economic load dispatch problems with valve point effects, using a test system with 13 generators and a load demand of 2520 MW. Calculation results show that the algorithm presented in this paper can deal with ELD problems with high stability.

  20. Combined simulated annealing algorithm for the discrete facility location problem.

    PubMed

    Qin, Jin; Ni, Ling-Lin; Shi, Feng

    2012-01-01

    The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm for the DFLP and offers a new reasonable alternative solution method.

  1. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    PubMed Central

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operation. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply uses the idea of DNA subsequence operations (such as the elongation, truncation, and deletion operations), combined with the logistic chaotic map, to scramble the locations and values of the pixel points of the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks. PMID:23093912
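
    The DNA subsequence operations are specific to the paper, but the logistic-map half of the scheme is standard and easy to sketch. Below, assuming NumPy, a chaotic sequence scrambles pixel locations by argsort and pixel values by XOR; the key constants are illustrative, not the paper's.

```python
import numpy as np

def logistic_keystream(length, x0, r=3.9999, burn_in=1000):
    """Iterate the logistic map x <- r*x*(1-x); x0 and r act as the key."""
    x = x0
    for _ in range(burn_in):                   # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(length)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def scramble(image, key=0.3141592):
    """Location scrambling by a chaotic permutation plus value scrambling by
    a chaotic XOR keystream; both steps are invertible given the key."""
    flat = image.ravel()
    perm = np.argsort(logistic_keystream(flat.size, key))
    keystream = (logistic_keystream(flat.size, key + 1e-6) * 256).astype(np.uint8)
    return (flat[perm] ^ keystream).reshape(image.shape), perm

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher, perm = scramble(img)
print(cipher)
```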

  2. Restart-Based Genetic Algorithm for the Quadratic Assignment Problem

    NASA Astrophysics Data System (ADS)

    Misevicius, Alfonsas

    The power of genetic algorithms (GAs) has been demonstrated for various domains of computer science, including combinatorial optimization. In this paper, we propose a new conceptual modification of the genetic algorithm entitled a "restart-based genetic algorithm" (RGA). An effective implementation of RGA for a well-known combinatorial optimization problem, the quadratic assignment problem (QAP), is discussed. The results obtained from computational experiments on QAP instances from the publicly available library QAPLIB show the excellent performance of RGA, especially for real-life-like QAP instances.

  3. An improved corner detection algorithm for image sequence

    NASA Astrophysics Data System (ADS)

    Yan, Minqi; Zhang, Bianlian; Guo, Min; Tian, Guangyuan; Liu, Feng; Huo, Zeng

    2014-11-01

    A SUSAN corner detection algorithm for image sequences is proposed in this paper. A correlation matching algorithm is used for coarse positioning of the detection area; after that, SUSAN corner detection is used to obtain the interesting points of the target. The SUSAN corner detector has been improved: because points in small regions are often incorrectly detected as corner points, a neighbor direction filter is applied to reduce the rate of such mistakes. Experimental results show that the algorithm enhances anti-noise performance and improves detection accuracy.

  4. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  5. A component-labeling algorithm based on contour tracing

    NASA Astrophysics Data System (ADS)

    Qiu, Liudong; Li, Zushu

    2007-12-01

    A new method for finding connected components in binary images is presented in this paper. The main step of this method is to use a contour tracing technique to detect component contours and to use the contour information to fill in interior areas. All component points are traced by this algorithm in a single pass and are assigned either a new label or the same label as the contour pixels. Comparative experimental results show that our algorithm is a fast method that not only labels components but also extracts component contours at the same time, which proves more useful than algorithms that only label components.

  6. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting the matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one-dimensional information and then matches and identifies through one-dimensional correlation. Moreover, because normalization is performed, correct matching is achieved even when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves the matching speed while maintaining the matching accuracy.
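
    The idea can be sketched directly: prefix sums give every window's row and column projections cheaply, so matching touches O(th + tw) values per position instead of th*tw pixels, and the 1-D normalized correlation supplies the brightness invariance mentioned above. A NumPy sketch (all names hypothetical):

```python
import numpy as np

def ncc(a, b):
    """Normalized 1-D correlation, invariant to brightness scaling."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b)) / len(a)

def projection_match(image, template):
    """Match via 1-D projections: prefix sums give each window's row and
    column projections in O(th + tw) instead of touching all th*tw pixels."""
    th, tw = template.shape
    H, W = image.shape
    Sv = np.vstack([np.zeros((1, W)), np.cumsum(image, axis=0)])   # (H+1, W)
    Sh = np.hstack([np.zeros((H, 1)), np.cumsum(image, axis=1)])   # (H, W+1)
    col_t, row_t = template.sum(axis=0), template.sum(axis=1)
    best = (-2.0, 0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            col_p = Sv[y + th, x:x + tw] - Sv[y, x:x + tw]
            row_p = Sh[y:y + th, x + tw] - Sh[y:y + th, x]
            score = ncc(col_p, col_t) + ncc(row_p, row_t)
            if score > best[0]:
                best = (score, y, x)
    return best

rng = np.random.default_rng(3)
img = rng.random((48, 48))
tpl = img[20:28, 30:38].copy()
print(projection_match(img, tpl))   # expect best offset (20, 30)
```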

  7. Optimization of deep learning algorithms for object classification

    NASA Astrophysics Data System (ADS)

    Horváth, András.

    2017-02-01

    Deep learning is currently the state-of-the-art algorithm for image classification. The complexity of these feedforward neural networks has passed a critical point, resulting in algorithmic breakthroughs in various fields. On the other hand, their complexity restricts them to tasks where high-throughput computing power is available. The optimization of these networks, considering computational complexity and applicability on embedded systems, has not yet been studied in detail. In this paper I show some examples of how these algorithms can be optimized and accelerated on embedded systems.

  8. Tissue elasticity measurement method using forward and inversion algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Ha; Won, Chang-Hee; Park, Hee-Jun; Ku, Jeonghun; Heo, Yun Seok; Kim, Yoon-Nyun

    2013-03-01

    Elasticity is an important indicator of tissue health, with increased stiffness pointing to an increased risk of cancer. We investigated a tissue elasticity measurement method using forward and inversion algorithms for application to early breast tumor identification. An optical elasticity measurement system was developed to capture images of embedded lesions using the total internal reflection principle. From the elasticity images, we developed a novel method to estimate the elasticity of the embedded lesion using a 3-D finite-element-model-based forward algorithm and a neural-network-based inversion algorithm. The experimental results showed that the proposed characterization method can differentiate benign and malignant breast lesions.

  9. Phyllodes tumor showing intraductal growth.

    PubMed

    Makidono, Akari; Tsunoda, Hiroko; Mori, Miki; Yagata, Hiroshi; Onoda, Yui; Kikuchi, Mari; Nozaki, Taiki; Saida, Yukihisa; Nakamura, Seigo; Suzuki, Koyu

    2013-07-01

    Phyllodes tumor of the breast is a rare fibroepithelial lesion and particularly uncommon in adolescent girls. It is thought to arise from the periductal rather than intralobular stroma. Usually, it is seen as a well-defined mass. Phyllodes tumor showing intraductal growth is extremely rare. Here we report a girl who has a phyllodes tumor with intraductal growth.

  10. A hybrid approach using chaotic dynamics and global search algorithms for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Igeta, Hideki; Hasegawa, Mikio

    Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most common chaotic optimization scheme is to drive heuristic solution search algorithms applicable to large-scale problems by chaotic neurodynamics, including the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms are used for combinatorial optimization by combining a neighboring solution search algorithm, such as tabu, gradient, or another search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. In these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines an effective chaotic search algorithm, which has better performance than the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm has better performance than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO has better performance than when combined with GA.

  11. Introduction and validation of a less painful algorithm to estimate the nociceptive flexion reflex threshold.

    PubMed

    Lichtner, Gregor; Golebiewski, Anna; Schneider, Martin H; von Dincklage, Falk

    2015-05-22

    The nociceptive flexion reflex (NFR) is a widely used tool to investigate spinal nociception for scientific and diagnostic purposes, but its clinical use is currently limited due to the painful measurement procedure, especially restricting its applicability for patients suffering from chronic pain disorders. Here we introduce a less painful algorithm to assess the NFR threshold. Application of this new algorithm leads to a reduction of subjective pain ratings by over 30% compared to the standard algorithm. We show that the reflex threshold estimates resulting from application of the new algorithm can be used interchangeably with those of the standard algorithm after adjusting for the constant difference between the algorithms. Furthermore, we show that the new algorithm can be applied at shorter interstimulus intervals than are commonly used with the standard algorithm, since reflex threshold values remain unchanged and no habituation effects occur when reducing the interstimulus interval for the new algorithm down to 3 s. Finally we demonstrate the utility of the new algorithm to investigate the modulation of nociception through different states of attention. Taken together, the new algorithm presented here could increase the utility of the NFR for investigation of nociception in subjects who were previously not able to endure the measurement procedure, such as chronic pain patients.

  12. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm-intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings, and promising optimization capability make it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of an elite solution pool and a block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions and then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed improved ABC algorithm outperforms the original ABC algorithm on most of the tested problems.
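
    For context, the slow-converging solution search equation of the standard ABC that the paper modifies is v[i, j] = x[i, j] + phi * (x[i, j] - x[k, j]). A minimal employed-bee step implementing it follows (the elite-pool and block-perturbation variants are not reproduced):

```python
import numpy as np

def abc_employed_step(foods, fitness, f, rng):
    """One employed-bee sweep with the standard search equation
    v[i, j] = x[i, j] + phi * (x[i, j] - x[k, j]), followed by greedy
    selection between parent and candidate."""
    n, dim = foods.shape
    for i in range(n):
        k = rng.choice([p for p in range(n) if p != i])   # random partner
        j = rng.integers(dim)                             # random dimension
        candidate = foods[i].copy()
        candidate[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
        fc = f(candidate)
        if fc < fitness[i]:                               # greedy selection
            foods[i], fitness[i] = candidate, fc
    return foods, fitness

sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(4)
foods = rng.uniform(-5.0, 5.0, size=(10, 3))
fitness = np.array([sphere(x) for x in foods])
for _ in range(200):
    foods, fitness = abc_employed_step(foods, fitness, sphere, rng)
print(fitness.min())
```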

  13. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed based on images of planar or three-dimensional targets. Using the algorithm, the internal parameters of the camera are calibrated from the existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the Tsai general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  14. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.

    PubMed

    Karthikeyan, M; Raja, T Sree Ranga

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods.

  15. AUV Underwater Positioning Algorithm Based on Interactive Assistance of SINS and LBL

    PubMed Central

    Zhang, Tao; Chen, Liping; Li, Yao

    2015-01-01

    This paper studies an underwater positioning algorithm based on the interactive assistance of a strapdown inertial navigation system (SINS) and LBL. The algorithm mainly includes an optimal correlation algorithm with aided tracking of an SINS/Doppler velocity log (DVL)/magnetic compass pilot (MCP), a three-dimensional TDOA positioning algorithm based on Taylor series expansion, and a multi-sensor information fusion algorithm. The final simulation results show that, compared to traditional underwater positioning algorithms, this scheme can not only directly correct the accumulative errors caused by dead reckoning, but also solve the problem of ambiguous correlation peaks caused by multipath transmission of underwater acoustic signals. The proposed method can calibrate the accumulative error of the AUV position more directly and effectively, which prolongs the underwater operating duration of the AUV. PMID:26729120

  16. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    NASA Astrophysics Data System (ADS)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.

  17. A Fast Multi-Object Extraction Algorithm Based on Cell-Based Connected Components Labeling

    NASA Astrophysics Data System (ADS)

    Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku

    We describe a cell-based connected component labeling algorithm that calculates the 0th- and 1st-order moment features as attributes of the labeled regions; these can be used to indicate region sizes and positions for multi-object extraction. Owing to the additivity of moment features, the cell-based labeling algorithm can label divided cells of a certain size by scanning the image only once, obtaining the moment features of the labeled regions with remarkably reduced computational complexity and memory consumption. Our algorithm is a simple one-time-scan cell-based labeling algorithm, which is suitable for hardware and parallel implementation. We also compared it with conventional labeling algorithms. The experimental results showed that our algorithm is faster than conventional raster-scan labeling algorithms.

  18. AUV Underwater Positioning Algorithm Based on Interactive Assistance of SINS and LBL.

    PubMed

    Zhang, Tao; Chen, Liping; Li, Yao

    2015-12-30

    This paper studies an underwater positioning algorithm based on the interactive assistance of a strapdown inertial navigation system (SINS) and LBL. The algorithm mainly includes an optimal correlation algorithm with aided tracking of an SINS/Doppler velocity log (DVL)/magnetic compass pilot (MCP), a three-dimensional TDOA positioning algorithm based on Taylor series expansion, and a multi-sensor information fusion algorithm. The final simulation results show that, compared to traditional underwater positioning algorithms, this scheme can not only directly correct the accumulative errors caused by dead reckoning, but also solve the problem of ambiguous correlation peaks caused by multipath transmission of underwater acoustic signals. The proposed method can calibrate the accumulative error of the AUV position more directly and effectively, which prolongs the underwater operating duration of the AUV.

  19. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710

  20. Higher-order force gradient symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.

  1. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong; et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
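
    The Boris update itself is short enough to sketch: a half electric kick, a rotation about the magnetic field, and another half kick. A NumPy version (charge, mass, and fields are illustrative):

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    The rotation is volume preserving, which underlies the long-term
    accuracy discussed above."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
    v_new = v_plus + qmdt2 * E               # second half electric kick
    return x + dt * v_new, v_new

# Gyration in a uniform magnetic field: speed should stay constant
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q=1.0, m=1.0, dt=0.1)
print(np.linalg.norm(v))   # ~1.0 after many steps
```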

  2. Feature matching algorithm based on spatial similarity

    NASA Astrophysics Data System (ADS)

    Tang, Wenjing; Hao, Yanling; Zhao, Yuxin; Li, Ning

    2008-10-01

    Disparities between features that represent the same real-world entities in disparate sources usually occur; thus the identification or matching of features is crucial to map conflation. Motivated by the idea of identifying the same entities by visually integrating known information, a feature matching algorithm based on spatial similarity is proposed in this paper. Total similarity is obtained by integrating positional similarity, shape similarity, and size similarity with a weighted average algorithm, and matching entities are then determined according to the maximum total similarity. The matching of areal features is analyzed in detail. Regarding an areal feature as a whole, the proposed algorithm identifies the same areal features by their shape-center points in order to calculate their positional similarity; shape similarity is given by a function describing the shape, which ensures that its precision is not affected by interference and avoids the loss of shape information; furthermore, the size of areal features is measured by their covered areas. Test results show the stability and reliability of the proposed algorithm, and its precision and recall are higher than those of other matching algorithms.
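
    A minimal sketch of the weighted-average matching score (component definitions, weights, and thresholds below are illustrative assumptions, not the paper's):

```python
import math

def total_similarity(a, b, w_pos=0.5, w_shape=0.3, w_size=0.2, d0=100.0):
    """Weighted-average matching score for two areal features; each feature
    is a dict with a centroid, an area, and a shape descriptor, all of which
    are illustrative stand-ins for the paper's measures."""
    d = math.dist(a["centroid"], b["centroid"])
    pos = max(0.0, 1.0 - d / d0)                        # positional similarity
    size = min(a["area"], b["area"]) / max(a["area"], b["area"])
    shape = min(a["compactness"], b["compactness"]) / \
            max(a["compactness"], b["compactness"])     # shape similarity
    return w_pos * pos + w_shape * shape + w_size * size

def best_match(feature, candidates, threshold=0.7):
    """Pick the candidate with maximum total similarity, if above threshold."""
    scored = [(total_similarity(feature, c), c) for c in candidates]
    score, match = max(scored, key=lambda s: s[0])
    return match if score >= threshold else None

f1 = {"centroid": (10.0, 12.0), "area": 240.0, "compactness": 0.82}
f2 = {"centroid": (11.5, 12.4), "area": 230.0, "compactness": 0.80}
print(total_similarity(f1, f2))
```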

  3. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and that this algorithm describes well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  4. Improved ant colony algorithm for global path planning

    NASA Astrophysics Data System (ADS)

    Li, Pengfei; Wang, Hongbo; Li, Xiaogang

    2017-03-01

    The ant colony algorithm has many advantages over other algorithms in path planning, but its shortcomings still cannot be ignored: for example, the convergence speed is very low at the initial stage, it easily falls into local optimal solutions, and the solution speed is slow. In order to solve these problems and reduce the search time, this paper first assigns the main parameters α, β, M, and ρ of the ant colony algorithm based on the analysis of a large amount of experimental data. Then an improved ant colony algorithm based on dynamic parameters and a new pheromone updating mechanism is proposed. Simulation results show that the improved ant colony algorithm not only greatly shortens the algorithm running time, but also has a greater probability of obtaining the global optimal solution, and its convergence rate is better than that of the traditional ant colony algorithm. It is very advantageous for solving large-scale optimization problems.

  5. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869

  6. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport, and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold, but such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function is based on the gray values of the image's pixels and their variance: pixel values above the threshold are mapped to intensities between 0 and 1, and other values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq, and evaluation is made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  7. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the peptide conformation with a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293

  8. Magic Carpet Shows Its Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.

  9. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.

  10. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method, and a complexity-reduction algorithm, for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for the input data matrix than both the conventional APA and the APA with the previous data-selective method.

  11. A novel algorithm for circular and noncircular signals without knowing the number of sources with MRLA

    NASA Astrophysics Data System (ADS)

    Zeng, Yaoping; Yang, Yixin; Lu, Guangyue

    2013-07-01

    This paper focuses on direction-of-arrival (DOA) estimation under the circumstance of mixed circular and noncircular sources with a Minimum-Redundancy Linear Array (MRLA). By exploiting the received signal data and its conjugate, the proposed algorithm can augment the maximum number of detectable sources. Using the weighted MUSIC algorithm over the whole space, the proposed scheme can obtain excellent performance for the MRLA without knowing the number of sources. Simulation results clearly show the effectiveness of the proposed algorithm.

  12. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.

  13. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of the static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, while maintaining the advantages of the three algorithms and overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm. The ease of erroneous judgment in static contact angle measurements is avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
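
    For illustration, a minimal circle-fitting sketch of the kind discussed above (a Kasa-style algebraic fit plus spherical-cap geometry for the angle); profile extraction and noise handling are omitted, and the baseline is assumed to lie at y = 0.

      import numpy as np

      def circle_fit_contact_angle(x, y):
          """Contact angle (degrees) from a drop profile by circle fitting."""
          # Least squares for x^2 + y^2 + D*x + E*y + F = 0
          A = np.column_stack([x, y, np.ones_like(x)])
          b = -(x**2 + y**2)
          D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
          xc, yc = -D / 2.0, -E / 2.0
          R = np.sqrt(xc**2 + yc**2 - F)
          # Tangent at the baseline: theta = arccos(-yc / R)
          # (centre on the baseline gives 90 deg; below it, theta < 90 deg)
          return np.degrees(np.arccos(np.clip(-yc / R, -1.0, 1.0)))

      # Synthetic 60-degree spherical cap of unit radius, centre at (0, -0.5)
      phi = np.linspace(np.radians(30), np.radians(150), 200)
      print(circle_fit_contact_angle(np.cos(phi), -0.5 + np.sin(phi)))  # ~60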

  14. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis

    NASA Astrophysics Data System (ADS)

    Xu, Z. N.

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of the static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, while maintaining the advantages of the three algorithms and overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm. The ease of erroneous judgment in static contact angle measurements is avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop

  15. Graphene Oxides Show Angiogenic Properties.

    PubMed

    Mukherjee, Sudip; Sriram, Pavithra; Barui, Ayan Kumar; Nethi, Susheel Kumar; Veeriah, Vimal; Chatterjee, Suvro; Suresh, Kattimuttathu Ittara; Patra, Chitta Ranjan

    2015-08-05

    Angiogenesis, a process resulting in the formation of new capillaries from the pre-existing vasculature, plays a vital role in the development of therapeutic approaches for cancer, atherosclerosis, wound healing, and cardiovascular diseases. In this report, the synthesis, characterization, and angiogenic properties of graphene oxide (GO) and reduced graphene oxide (rGO) are demonstrated, as observed through several in vitro and in vivo angiogenesis assays. The results demonstrate that the intracellular formation of reactive oxygen species and reactive nitrogen species, as well as activation of phospho-eNOS and phospho-Akt, might be the plausible mechanisms for GO- and rGO-induced angiogenesis. The results altogether suggest possibilities for the development of an alternative angiogenic therapeutic approach for the treatment of cardiovascular-related diseases where angiogenesis plays a significant role.

  16. An image super-resolution algorithm for different error levels per frame.

    PubMed

    He, Hu; Kondi, Lisimachos P

    2006-03-01

    In this paper, we propose an image super-resolution (resolution enhancement) algorithm that takes into account inaccurate estimates of the registration parameters and the point spread function. These inaccurate estimates, along with the additive Gaussian noise in the low-resolution (LR) image sequence, result in a different noise level for each frame. In the proposed algorithm, the LR frames are adaptively weighted according to their reliability, and the regularization parameter is simultaneously estimated. A translational motion model is assumed. The convergence property of the proposed algorithm is analyzed in detail. Our experimental results using both real and synthetic data show the effectiveness of the proposed algorithm.

  17. ARPANET Routing Algorithm Improvements

    DTIC Science & Technology

    1978-10-01

    McQuillan, J. M.; Rosen, E. C. ... this problem may persist for a very long time, causing extremely bad performance throughout the whole network (for instance, if w' reports that one of ...). The routing algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion. These examples show

  18. SIMAS ADM XBT Algorithm

    DTIC Science & Technology

    2016-06-07

    XBT’s sound speed values instead of temperature values. Studies show that the sound speed at the surface in a specific location varies less than...be entered at the terminal in metric or English temperatures or sound speeds. The algorithm automatically determines which form each data point was... sound speeds. Leroy’s equation is used to derive sound speed from temperature or temperature from sound speed. The previous, current, and next months

  19. "Medicine show." Alice in Doctorland.

    PubMed

    1987-01-01

    This is an excerpt from the script of a 1939 play provided to the Institute of Social Medicine and Community Health by the Library of Congress Federal Theater Project Collection at George Mason University Library, Fairfax, Virginia, pages 2-1-8 through 2-1-14. The Federal Theatre Project (FTP) was part of the New Deal program for the arts 1935-1939. Funded by the Works Progress Administration (WPA), its goal was to employ theater professionals from the relief rolls. A number of FTP plays deal with aspects of medicine and public health. Pageants, puppet shows and documentary plays celebrated progress in medical science while examining social controversies in medical services and the public health movement. "Medicine Show" sharply contrasts technological wonders with social backwardness. The play was rehearsed by the FTP but never opened because funding ended. A revised version ran on Broadway in 1940. The preceding comments are adapted from an excellent, well-illustrated review of five of these plays by Barbara Melosh: "The New Deal's Federal Theatre Project," Medical Heritage, Vol. 2, No. 1 (Jan/Feb 1986), pp. 36-47.

  20. A Message-Passing Algorithm for Wireless Network Scheduling.

    PubMed

    Paschalidis, Ioannis Ch; Huang, Fuzhuo; Lai, Wei

    2015-10-01

    We consider scheduling in wireless networks and formulate it as Maximum Weighted Independent Set (MWIS) problem on a "conflict" graph that captures interference among simultaneous transmissions. We propose a novel, low-complexity, and fully distributed algorithm that yields high-quality feasible solutions. Our proposed algorithm consists of two phases, each of which requires only local information and is based on message-passing. The first phase solves a relaxation of the MWIS problem using a gradient projection method. The relaxation we consider is tighter than the simple linear programming relaxation and incorporates constraints on all cliques in the graph. The second phase of the algorithm starts from the solution of the relaxation and constructs a feasible solution to the MWIS problem. We show that our algorithm always outputs an optimal solution to the MWIS problem for perfect graphs. Simulation results compare our policies against Carrier Sense Multiple Access (CSMA) and other alternatives and show excellent performance.
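
    As a rough illustration of independent-set construction from purely local information, the greedy baseline below lets every node that is a weight maximum among its remaining neighbors join the set; this is only a simple local heuristic, not the paper's two-phase relaxation algorithm.

      import networkx as nx

      def greedy_mwis(graph, weight="weight"):
          """Greedy local-maximum heuristic for maximum weighted independent set."""
          g = graph.copy()
          chosen = []
          while g.number_of_nodes() > 0:
              # nodes whose (weight, id) beats every remaining neighbour
              winners = [v for v in g.nodes
                         if all((g.nodes[v][weight], v) > (g.nodes[u][weight], u)
                                for u in g.neighbors(v))]
              for v in winners:          # winners are pairwise non-adjacent
                  chosen.append(v)
                  g.remove_nodes_from(list(g.neighbors(v)) + [v])
          return chosen

      g = nx.cycle_graph(5)
      nx.set_node_attributes(g, {i: 1 + i for i in g}, "weight")
      print(greedy_mwis(g))  # [4, 2], total weight 8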

  1. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  2. What Do Blood Tests Show?

    MedlinePlus

    ... Range Results*
    Red blood cell (varies with altitude): Male: 5 to 6 million cells/mcL; Female: 4 to 5 million cells/mcL
    ...: ... cells/mcL
    Platelets: 140,000 to 450,000 cells/mcL
    Hemoglobin (varies with altitude): Male: 14 to 17 gm/dL; Female: 12 to ...

  3. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing these sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
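
    A minimal sketch of estimating time-to-collision from the divergence of an optical-flow field follows; the flow field here is synthetic, whereas a real system would estimate it from successive camera frames.

      import numpy as np

      def time_to_collision(u, v):
          """Time-to-collision (in frames) from a pixel flow field (u, v).

          For approach towards a fronto-parallel surface the flow is
          u = x/tau, v = y/tau, so div = 2/tau and tau = 2/div.
          """
          div = np.mean(np.gradient(u, axis=1) + np.gradient(v, axis=0))
          return np.inf if div <= 0 else 2.0 / div

      # Synthetic diverging flow for a surface 50 frames away
      h, w = 64, 64
      ys, xs = np.mgrid[0:h, 0:w].astype(float)
      xs -= w / 2
      ys -= h / 2
      print(time_to_collision(xs / 50.0, ys / 50.0))  # ~50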

  4. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks

    PubMed Central

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-01-01

    Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes, however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in the practical systems. In this paper, a novel IA algorithm, called directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between the two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs the line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as that of AMIL algorithm with far less iterations and execution time. PMID:26230697
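
    For reference, a compact numpy sketch of the baseline AMIL iteration described above (decoders and precoders alternately set to the least-interfered eigenvectors); the DQO line-search acceleration is not included.

      import numpy as np

      def amil(H, d, iters=200, seed=0):
          """Alternating minimization of interference leakage (baseline)."""
          rng = np.random.default_rng(seed)
          K, M = len(H), H[0][0].shape[0]
          V = [np.linalg.qr(rng.standard_normal((M, d))
                            + 1j * rng.standard_normal((M, d)))[0]
               for _ in range(K)]
          for _ in range(iters):
              U = []
              for k in range(K):  # decoders: d least-interfered directions
                  Q = sum(H[k][l] @ V[l] @ V[l].conj().T @ H[k][l].conj().T
                          for l in range(K) if l != k)
                  U.append(np.linalg.eigh(Q)[1][:, :d])
              for k in range(K):  # precoders: same step in the reciprocal network
                  Q = sum(H[l][k].conj().T @ U[l] @ U[l].conj().T @ H[l][k]
                          for l in range(K) if l != k)
                  V[k] = np.linalg.eigh(Q)[1][:, :d]
          return V, U

      # 3-user, 2-antenna, 1-stream channel: leakage should approach zero
      rng = np.random.default_rng(1)
      K, M, d = 3, 2, 1
      H = [[rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
            for _ in range(K)] for _ in range(K)]
      V, U = amil(H, d)
      leak = sum(np.linalg.norm(U[k].conj().T @ H[k][l] @ V[l]) ** 2
                 for k in range(K) for l in range(K) if l != k)
      print(f"residual leakage: {leak:.2e}")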

  5. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
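
    For contrast with the scale-sensitive approach, a minimal scale-insensitive Hogbom CLEAN sketch is given below (delta-function components only, which is exactly the modelling choice Asp-Clean improves on).

      import numpy as np

      def hogbom_clean(dirty, psf, gain=0.1, thresh=1e-3, max_iter=10000):
          """Hogbom CLEAN: peak-find, subtract shifted PSF, repeat."""
          res = dirty.copy()
          model = np.zeros_like(dirty)
          pc = np.array(psf.shape) // 2                  # PSF centre
          for _ in range(max_iter):
              y, x = np.unravel_index(np.argmax(np.abs(res)), res.shape)
              peak = res[y, x]
              if abs(peak) < thresh:
                  break
              model[y, x] += gain * peak                 # delta component
              y0, x0 = y - pc[0], x - pc[1]
              ys = slice(max(0, y0), min(res.shape[0], y0 + psf.shape[0]))
              xs = slice(max(0, x0), min(res.shape[1], x0 + psf.shape[1]))
              res[ys, xs] -= gain * peak * psf[ys.start - y0:ys.stop - y0,
                                               xs.start - x0:xs.stop - x0]
          return model, res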

  6. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the nest locations. In order to select a reasonable number of repeat-cycle disturbances, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the algorithm of this paper is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.
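
    A plain cuckoo search sketch follows (Mantegna Levy flights and nest abandonment); the repeat-cycle disturbance operator of RC-SSCS would be layered on top of this baseline.

      from math import gamma, pi, sin
      import numpy as np

      def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=500,
                        lb=-5.0, ub=5.0, seed=0):
          """Baseline cuckoo search via Levy flights (illustrative)."""
          rng = np.random.default_rng(seed)
          beta = 1.5
          sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                   / (gamma((1 + beta) / 2) * beta
                      * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          nests = rng.uniform(lb, ub, (n_nests, dim))
          fit = np.apply_along_axis(f, 1, nests)
          best = nests[np.argmin(fit)].copy()
          for _ in range(iters):
              # Levy-flight step (Mantegna's algorithm)
              step = (rng.normal(0, sigma, (n_nests, dim))
                      / np.abs(rng.normal(0, 1, (n_nests, dim))) ** (1 / beta))
              new = np.clip(nests + 0.01 * step * (nests - best), lb, ub)
              new_fit = np.apply_along_axis(f, 1, new)
              better = new_fit < fit
              nests[better], fit[better] = new[better], new_fit[better]
              # a fraction pa of nests is discovered and rebuilt at random
              drop = rng.random(n_nests) < pa
              if drop.any():
                  nests[drop] = rng.uniform(lb, ub, (int(drop.sum()), dim))
                  fit[drop] = np.apply_along_axis(f, 1, nests[drop])
              best = nests[np.argmin(fit)].copy()
          return best, float(fit.min())

      print(cuckoo_search(lambda x: float((x ** 2).sum()), dim=5))  # sphere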

  7. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    NASA Astrophysics Data System (ADS)

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-08-01

    In this work, we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, E0, which controls the size of these clusters, such that E0=1 is the Metropolis algorithm and E0=0 regains the Wolff algorithm, for the Potts model. For -1≤E0<0, we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L˜, which depends on E0. For L≥L˜, the Niedermayer algorithm is in the same dynamic universality class as the Metropolis one, i.e., they have the same dynamic exponent. For E0>0, spins in different states may be added to the cluster, but the dynamic behavior is less efficient than for the Wolff algorithm (E0=0). Therefore, our results show that the Wolff algorithm is the best choice for Potts models, when compared to Niedermayer's generalization.

  8. "Show me" bioethics and politics.

    PubMed

    Christopher, Myra J

    2007-10-01

    Missouri, the "Show Me State," has become the epicenter of several important national public policy debates, including abortion rights, the right to choose and refuse medical treatment, and, most recently, early stem cell research. In this environment, the Center for Practical Bioethics (formerly, Midwest Bioethics Center) emerged and grew. The Center's role in these "cultural wars" is not to advocate for a particular position but to provide well researched and objective information, perspective, and advocacy for the ethical justification of policy positions; and to serve as a neutral convener and provider of a public forum for discussion. In this article, the Center's work on early stem cell research is a case study through which to argue that not only the Center, but also the field of bioethics has a critical role in the politics of public health policy.

  9. Phoenix Scoop Inverted Showing Rasp

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on Sol 49, or the 49th Martian day of the mission (July 14, 2008), shows the silver colored rasp protruding from NASA's Phoenix Mars Lander's Robotic Arm scoop. The scoop is inverted and the rasp is pointing up.

    Shown with its forks pointing toward the ground is the thermal and electrical conductivity probe, at the lower right. The Robotic Arm Camera is pointed toward the ground.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  11. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver a sensation of motion as close as possible to that of a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned for the worst-case scenario. This tuning is based on trial and error and is affected by the experience of the driver and programmer, making it the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues, and a resulting simulator that fails to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking the vestibular sensation error between the real and simulated cases into account, as well as the main dynamic limitations, tilt coordination and the correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  12. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    ... Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  13. Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.

    PubMed

    Zhao, Ying; Shi, Luoyi

    2017-01-01

    This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under some suitable conditions, the strong convergence of the algorithm is established in infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of the algorithm.

  14. A Cluster Algorithm for the 2-D SU(3) × SU(3) Chiral Model

    NASA Astrophysics Data System (ADS)

    Ji, Da-ren; Zhang, Jian-bo

    1996-07-01

    To extend the cluster algorithm to SU(N) × SU(N) chiral models, a variant of Wolff's cluster algorithm is proposed and tested for the two-dimensional SU(3) × SU(3) chiral model. The results show that the new method can reduce the critical slowing down in the SU(3) × SU(3) chiral model.

  15. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  16. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah

    In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm, but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
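
    A minimal mpi4py sketch of this row-partitioned scheme follows (each rank filters a horizontal strip with a one-row halo; run with e.g. mpiexec -n 4 python sobel_mpi.py). The broadcast of the full image is a simplification of the master/slave distribution used in the paper.

      import numpy as np
      from mpi4py import MPI

      def sobel(strip):
          """Sobel gradient magnitude for the interior pixels of a strip."""
          gx = (strip[:-2, 2:] + 2 * strip[1:-1, 2:] + strip[2:, 2:]
                - strip[:-2, :-2] - 2 * strip[1:-1, :-2] - strip[2:, :-2])
          gy = (strip[2:, :-2] + 2 * strip[2:, 1:-1] + strip[2:, 2:]
                - strip[:-2, :-2] - 2 * strip[:-2, 1:-1] - strip[:-2, 2:])
          return np.hypot(gx, gy)

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      img = np.random.rand(512, 512) if rank == 0 else None  # master "loads"
      img = comm.bcast(img, root=0)

      rows = img.shape[0] - 2                 # rows with full neighbourhoods
      lo, hi = rank * rows // size, (rank + 1) * rows // size
      part = sobel(img[lo:hi + 2, :])         # strip plus one-row halo

      edges = comm.gather(part, root=0)
      if rank == 0:
          print(np.vstack(edges).shape)       # (510, 510)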

  17. Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko

    2006-01-01

    While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy
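
    A toy sketch of the a priori database idea follows (Gaussian likelihood weighting over simulated brightness temperatures); the database here is synthetic and invented, standing in for the TRMM-radar-plus-radiative-transfer database of the paper.

      import numpy as np

      def retrieve_rain(tb_obs, db_tb, db_rain, sigma=2.0):
          """Posterior-mean rain rate from an a priori database.

          db_tb:   (N, C) simulated brightness temperatures;
          db_rain: (N,)   rain rates of the database profiles;
          tb_obs:  (C,)   observed brightness temperatures.
          """
          d2 = np.sum((db_tb - tb_obs) ** 2, axis=1)
          w = np.exp(-0.5 * d2 / sigma ** 2)   # Gaussian channel errors
          return float(np.sum(w * db_rain) / np.sum(w))

      # Invented two-channel database loosely controlled by rain rate
      rng = np.random.default_rng(0)
      rain = rng.gamma(2.0, 2.0, 5000)
      tb = (np.column_stack([260 - 3 * rain, 240 + 2 * rain])
            + rng.normal(0, 2, (5000, 2)))
      print(retrieve_rain(np.array([245.0, 250.0]), tb, rain))  # ~5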

  18. A Novel Modification of PSO Algorithm for SML Estimation of DOA

    PubMed Central

    Chen, Haihua; Li, Shibao; Liu, Jianhang; Liu, Fen; Suzuki, Masakiyo

    2016-01-01

    This paper addresses the issue of reducing the computational complexity of Stochastic Maximum Likelihood (SML) estimation of Direction-of-Arrival (DOA). The SML algorithm is well known for its high accuracy of DOA estimation in sensor array signal processing. However, its computational complexity is very high because the estimation of the SML criterion is a multi-dimensional non-linear optimization problem. As a result, it is hard to apply the SML algorithm to real systems. The Particle Swarm Optimization (PSO) algorithm is considered a rather efficient method for multi-dimensional non-linear optimization problems in DOA estimation. However, the conventional PSO algorithm suffers two defects, namely, too many particles and too many iterations. Therefore, the computational complexity of SML estimation using the conventional PSO algorithm is still a little high. To overcome these two defects and to reduce computational complexity further, this paper proposes a novel modification of the conventional PSO algorithm for SML estimation, which we call the Joint-PSO algorithm. The core idea of the modification lies in that it uses the solution of Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) and the stochastic Cramer-Rao bound (CRB) to determine a novel initialization space. Since this initialization space is already close to the solution of SML, fewer particles and fewer iterations are needed. As a result, the computational complexity can be greatly reduced. In simulation, we compare the proposed algorithm with the conventional PSO algorithm, the classic Alternating Minimization (AM) algorithm and the Genetic Algorithm (GA). Simulation results show that our proposed algorithm is one of the most efficient solving algorithms and shows great potential for the application of SML in real systems. PMID:27999377

  19. A Novel Modification of PSO Algorithm for SML Estimation of DOA.

    PubMed

    Chen, Haihua; Li, Shibao; Liu, Jianhang; Liu, Fen; Suzuki, Masakiyo

    2016-12-19

    This paper addresses the issue of reducing the computational complexity of Stochastic Maximum Likelihood (SML) estimation of Direction-of-Arrival (DOA). The SML algorithm is well known for its high accuracy of DOA estimation in sensor array signal processing. However, its computational complexity is very high because the estimation of the SML criterion is a multi-dimensional non-linear optimization problem. As a result, it is hard to apply the SML algorithm to real systems. The Particle Swarm Optimization (PSO) algorithm is considered a rather efficient method for multi-dimensional non-linear optimization problems in DOA estimation. However, the conventional PSO algorithm suffers two defects, namely, too many particles and too many iterations. Therefore, the computational complexity of SML estimation using the conventional PSO algorithm is still a little high. To overcome these two defects and to reduce computational complexity further, this paper proposes a novel modification of the conventional PSO algorithm for SML estimation, which we call the Joint-PSO algorithm. The core idea of the modification lies in that it uses the solution of Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) and the stochastic Cramer-Rao bound (CRB) to determine a novel initialization space. Since this initialization space is already close to the solution of SML, fewer particles and fewer iterations are needed. As a result, the computational complexity can be greatly reduced. In simulation, we compare the proposed algorithm with the conventional PSO algorithm, the classic Alternating Minimization (AM) algorithm and the Genetic Algorithm (GA). Simulation results show that our proposed algorithm is one of the most efficient solving algorithms and shows great potential for the application of SML in real systems.
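
    For reference, a plain global-best PSO sketch is shown below on a toy cost function; the Joint-PSO modification would additionally seed the swarm inside the ESPRIT/CRB-derived initialization space, which is omitted here.

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, lb=-10.0, ub=10.0,
              w=0.7, c1=1.5, c2=1.5, seed=0):
          """Plain global-best particle swarm optimization."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(lb, ub, (n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_f = np.apply_along_axis(f, 1, x)
          g = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lb, ub)
              fx = np.apply_along_axis(f, 1, x)
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, float(pbest_f.min())

      # Toy multimodal cost (Rastrigin), standing in for the SML criterion
      rast = lambda z: float(10 * z.size
                             + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))
      print(pso(rast, dim=2))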

  20. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
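
    As a rough illustration of how a linearised trim problem can be posed for an LP solver, the sketch below minimises a fuel-flow sensitivity subject to a stall-margin constraint; every coefficient is invented for illustration and none comes from the PSC models.

      import numpy as np
      from scipy.optimize import linprog

      c = np.array([0.8, 0.5])        # d(fuel flow)/d(trim) for two trims
      A_ub = np.array([[0.3, 0.6]])   # stall-margin loss per unit trim
      b_ub = np.array([1.0])          # allowable stall-margin erosion
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-2, 2), (-2, 2)])
      print(res.x, res.fun)           # optimal trims and objective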

  1. Stepping community detection algorithm based on label propagation and similarity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Huang, Ce; Wang, Miao; Chen, Xi

    2017-04-01

    Community or module structure is one of the most common features in complex networks. The label propagation algorithm (LPA) is a near-linear-time algorithm that is able to detect community structure effectively. Nevertheless, when labeling a node, the LPA adopts the label belonging to the majority of its neighbors, which means that it treats all neighbors equally in spite of their different effects on the node. Another disadvantage of LPA is that the results it generates are not unique. In this paper, we propose a modified LPA called Stepping LPA-S, in which labels are propagated by similarity. Furthermore, our algorithm divides networks using a stepping framework and uses an evaluation function proposed in this paper to select the final unique partition. We tested this algorithm on several artificial and real-world networks. The results show that Stepping LPA-S can obtain accurate and meaningful community structure without a priori information.
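
    For reference, a plain-LPA sketch is given below (majority vote over neighbors with random tie-breaking); Stepping LPA-S instead weights neighbors by similarity and adds the stepping and partition-selection stages.

      import random
      from collections import Counter
      import networkx as nx

      def label_propagation(g, max_sweeps=100, seed=0):
          """Classic label propagation for community detection."""
          rng = random.Random(seed)
          labels = {v: v for v in g}          # every node starts unique
          nodes = list(g)
          for _ in range(max_sweeps):
              rng.shuffle(nodes)
              changed = False
              for v in nodes:
                  if g.degree(v) == 0:
                      continue
                  counts = Counter(labels[u] for u in g.neighbors(v))
                  top = max(counts.values())
                  best = rng.choice([l for l, c in counts.items() if c == top])
                  if best != labels[v]:
                      labels[v], changed = best, True
              if not changed:                 # converged
                  break
          return labels

      g = nx.karate_club_graph()
      print(len(set(label_propagation(g).values())), "communities found")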

  2. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but it is difficult to define its parameters properly when developing PCNN applications; so far, determining the parameters of its model has required a lot of experiments. To deal with the above problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts the bacterial foraging optimization algorithm to search for the optimal parameters, and eliminates the trouble of manually setting the experiment parameters. Experimental results show that the proposed algorithm can effectively complete document segmentation, and its segmentation results are better than those of the contrast algorithms.

  3. A Color Image Edge Detection Algorithm Based on Color Difference

    NASA Astrophysics Data System (ADS)

    Zhuo, Li; Hu, Xiaochen; Jiang, Liying; Zhang, Jing

    2016-12-01

    Although image edge detection algorithms have been widely applied in image processing, the existing algorithms still face two important problems. On one hand, to restrain the interference of noise, smoothing filters are generally exploited in the existing algorithms, resulting in loss of significant edges. On the other hand, since the existing algorithms are sensitive to noise, many noisy edges are usually detected, which will disturb the subsequent processing. Therefore, a color image edge detection algorithm based on color difference is proposed in this paper. Firstly, a new operation called color separation is defined in this paper, which can reflect the information of color difference. Then, for the neighborhood of each pixel, color separations are calculated in four different directions to detect the edges. Experimental results on natural and synthetic images show that the proposed algorithm can remove a large number of noisy edges and be robust to the smoothing filters. Furthermore, the proposed edge detection algorithm is applied in road foreground segmentation and shadow removal, which achieves good performances.

  4. Dynamic topology multi force particle swarm optimization algorithm and its application

    NASA Astrophysics Data System (ADS)

    Chen, Dongning; Zhang, Ruixing; Yao, Chengyu; Zhao, Zheyu

    2016-01-01

    Particle swarm optimization (PSO) is an effective bio-inspired algorithm, but it suffers from premature convergence. Researchers have made improvements, especially in force rules and population topologies. However, current algorithms only consider a single kind of force rule and lack comprehensive improvement in both multiple force rules and population topologies. In this paper, a dynamic topology multi force particle swarm optimization (DTMFPSO) algorithm is proposed in order to get better search performance. First of all, the principle of the presented multi force particle swarm optimization (MFPSO) algorithm is that different force rules are used in different search stages, which can balance the ability of global and local search. Secondly, a fitness-driven edge-changing (FE) topology based on the probability selection mechanism of the roulette method is designed to cut and add edges between particles, and the DTMFPSO algorithm is proposed by combining the FE topology with the MFPSO algorithm through concurrent evolution of both algorithm and structure in order to further improve the search accuracy. Thirdly, benchmark functions are employed to evaluate the performance of the DTMFPSO algorithm, and test results show that the proposed algorithm is better than well-known PSO algorithms such as the µPSO, MPSO, and EPSO algorithms. Finally, the proposed algorithm is applied to optimize the process parameters for ultrasonic vibration cutting on a SiC wafer, and the surface quality of the SiC wafer is improved by 12.8% compared with the PSO algorithm in Ref. [25]. This research proposes a DTMFPSO algorithm with multiple force rules and dynamic population topologies evolved simultaneously, and it has better search performance.

  5. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms have proven effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the classification accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used, which include: colon, leukemia, and lung. In addition, another three multi-class microarray datasets are used, which are: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique: mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  6. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring

    PubMed Central

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-01-01

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. For most monitoring applications, the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. Firstly, data from IGS tracking stations were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts. PMID:27241172

  7. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.

    PubMed

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-05-31

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. For most monitoring applications, the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. Firstly, data from IGS tracking stations were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.

  8. Modelling of radiative transfer by the Monte Carlo method and solving the inverse problem based on a genetic algorithm according to experimental results of aerosol sensing on short paths using a femtosecond laser source

    SciTech Connect

    Matvienko, G G; Oshlakov, V K; Sukhanov, A Ya; Stepanov, A N

    2015-02-28

    We consider algorithms that implement broadband ('multiwave') radiative transfer with allowance for multiple (aerosol) scattering and absorption by the main atmospheric gases. In the spectral range of 0.6 – 1 μm, a closed numerical simulation of modifications of the supercontinuum component of a probing femtosecond pulse is performed. Within the framework of algorithms for solving inverse atmospheric-optics problems with the help of a genetic algorithm, we give an interpretation of the experimental backscattered spectrum of the supercontinuum. An adequate reconstruction of the distribution mode for particles of artificial aerosol with narrow-modal distributions in a size range of 0.5 – 2 μm, with a step of 0.5 μm, is obtained. (light scattering)

  9. Using DFX for Algorithm Evaluation

    SciTech Connect

    Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.

    1998-10-20

    Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a

  10. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  11. Test Results for Entry Guidance Methods for Space Vehicles

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Jones, Robert E.

    2004-01-01

    There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.

  12. Test Results for Entry Guidance Methods for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Jones, Robert E.

    2003-01-01

    There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies (ITAGCT) has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future reusable vehicle concepts.

  13. A novel tree structure based watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Qiwei; Feng, Gui

    2008-03-01

    In this paper, we propose a new blind watermarking algorithm for images which is based on a tree structure. The algorithm embeds the watermark in the wavelet transform domain, and the embedding positions are determined by the significant coefficient wavelet tree (SCWT) structure, which follows the same idea as the embedded zero-tree wavelet (EZW) compression technique. According to EZW concepts, we obtain coefficients that are related to each other by a tree structure. This relationship among the wavelet coefficients allows our technique to embed more watermark data. If the watermarked image is attacked such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronously. The algorithm also uses a visually adaptive scheme to insert the watermark so as to minimize watermark perceptibility. In addition to the watermark, a template is inserted into the watermarked image at the same time. The template contains synchronization information, allowing the detector to determine the type of geometric transformation applied to the watermarked image. Experimental results show that the proposed watermarking algorithm is robust against most signal processing attacks, such as JPEG compression, median filtering, sharpening and rotation. It is also an adaptive method that performs well at finding the best areas in which to insert a stronger watermark.
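
    As a rough illustration of additive embedding in significant wavelet coefficients, the following sketch (using NumPy and PyWavelets, not the paper's SCWT scheme) marks the largest-magnitude coefficients with a bipolar sequence and detects it by correlation; the host image, strength alpha, and haar/level-3 settings are assumptions.

      import numpy as np
      import pywt

      rng = np.random.default_rng(1)
      host = rng.uniform(0.0, 255.0, size=(256, 256))  # stand-in host image
      wm = rng.choice([-1.0, 1.0], size=1000)          # bipolar watermark
      alpha = 2.0                                      # embedding strength

      coeffs = pywt.wavedec2(host, "haar", level=3)
      arr, slices = pywt.coeffs_to_array(coeffs)
      flat = arr.reshape(-1)                           # view into arr
      idx = np.argsort(np.abs(flat))[::-1][:wm.size]   # significant coefficients
      flat[idx] += alpha * wm                          # additive embedding
      marked = pywt.waverec2(
          pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "haar")

      # Correlation detector on the re-extracted coefficients
      arr2, _ = pywt.coeffs_to_array(pywt.wavedec2(marked, "haar", level=3))
      response = arr2.reshape(-1)[idx] @ wm / wm.size
      print("detector response:", response)            # near alpha when present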

  14. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    PubMed

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases, but since MPCs can differ significantly, these tuning methods often become inapplicable and a trial-and-error tuning approach must be used instead. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantage of this approach is that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. In addition, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, and can use multiple inputs to determine the tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The tuning parameters resulting from each definition set are compared, showing that they vary in order to meet each definition of optimum control and thus that the generalized automated tuning approach for MPCs is feasible.
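
    A hedged sketch of the overall loop follows: a small real-coded genetic algorithm tunes the gains of a toy discrete PI loop, with the competing objectives aggregated by a pessimistic fuzzy min operator. The plant model, membership shapes, and GA settings are all illustrative and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def step_response(kp, ki, n=200):
          # discrete PI loop around the toy plant y[k+1] = 0.9*y[k] + 0.1*u[k]
          y, integ, out = 0.0, 0.0, []
          for _ in range(n):
              e = 1.0 - y
              integ += e
              u = kp * e + ki * integ
              y = 0.9 * y + 0.1 * u
              out.append(y)
          return np.array(out)

      def fitness(kp, ki):
          y = step_response(kp, ki)
          overshoot = max(0.0, y.max() - 1.0)
          iae = np.abs(1.0 - y).mean()
          # fuzzy decision making: per-goal satisfactions in [0, 1],
          # aggregated with the pessimistic min operator
          s_os = np.clip(1.0 - overshoot / 0.2, 0.0, 1.0)
          s_iae = np.clip(1.0 - iae / 0.5, 0.0, 1.0)
          return min(s_os, s_iae)

      pop = rng.uniform(0.01, 2.0, size=(30, 2))       # (kp, ki) individuals
      for gen in range(50):
          fit = np.array([fitness(kp, ki) for kp, ki in pop])
          parents = pop[np.argsort(fit)[-10:]]         # truncation selection
          pop = np.clip(parents[rng.integers(0, 10, 30)]
                        + rng.normal(0.0, 0.05, (30, 2)), 0.01, 2.0)  # mutate clones
      best = max(pop, key=lambda g: fitness(*g))
      print("tuned (kp, ki):", best)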

  15. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in

  16. Fuzzy evolutionary algorithm to solve chromosomes conflict and its application to lecture schedule problems

    NASA Astrophysics Data System (ADS)

    Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari

    2016-02-01

    A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of two genes in one chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected to form chromosomes, so that a conflicting schedule manifests as a chromosome conflict. Implemented in Delphi, the algorithm is shown to solve the conflicting lecture schedule problem.
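
    The conflict test itself is easy to express directly. The sketch below implements the definition quoted above on hypothetical (time, lecture, lecturer, room) genes; the encoding is an assumption, not the paper's exact representation.

      from itertools import combinations

      def conflicts(chrom_a, chrom_b):
          """Gene pairs that appear, with identical values, in both chromosomes."""
          pairs_a = {frozenset(p) for p in combinations(chrom_a, 2)}
          pairs_b = {frozenset(p) for p in combinations(chrom_b, 2)}
          return pairs_a & pairs_b

      # Toy chromosomes; each gene is a (time, lecture, lecturer, room) tuple
      c1 = [("Mon9", "Calculus", "Dr.A", "R101"), ("Mon10", "Algebra", "Dr.B", "R102")]
      c2 = [("Mon9", "Calculus", "Dr.A", "R101"), ("Mon10", "Algebra", "Dr.B", "R102")]
      print(len(conflicts(c1, c2)), "conflicting gene pair(s)")  # 1 -> conflict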

  17. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named the Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distribution algorithms (EDAs), which build a statistical model of promising regions of the design space based on sets of good points, and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.
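
    For readers unfamiliar with EDAs, the sketch below shows the basic model-sample-select loop (a univariate, UMDA-style EDA on a toy binary problem); DDOA's auxiliary lamination-parameter distribution is deliberately not reproduced here.

      import numpy as np

      rng = np.random.default_rng(3)
      n_var, pop_size, n_keep = 20, 60, 20
      target = rng.integers(0, 2, size=n_var)     # hypothetical optimum

      p = np.full(n_var, 0.5)                     # marginal Bernoulli model
      for it in range(40):
          pop = (rng.random((pop_size, n_var)) < p).astype(int)  # sample model
          fit = (pop == target).sum(axis=1)                      # toy fitness
          elite = pop[np.argsort(fit)[-n_keep:]]                 # good points
          p = 0.7 * p + 0.3 * elite.mean(axis=0)                 # update marginals
          p = np.clip(p, 0.02, 0.98)                             # keep exploration
      best = (rng.random(n_var) < p).astype(int)
      print("matches with optimum:", int((best == target).sum()), "/", n_var)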

  18. Algorithms on ensemble quantum computers.

    PubMed

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementation on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of the Toffoli and σ_z^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  19. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  20. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

    In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite, and Karp, Miller and Winograd to the present-day concern of systolic arrays.

  1. Fast Algorithm and Application of Wavelet Multiple-scale Edge Detection Filter

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Yang, Min; Tong, Qiang; Zhang, Yue

    This paper focuses on the algorithm theory of the two-dimensional wavelet transform used for image edge detection. To simplify the algorithm, the authors propose decomposing the two-dimensional dyadic wavelet into a product of one-dimensional dyadic wavelets, so that the filter can be used to achieve wavelet multiple-scale edge detection quickly. The process of applying the wavelet transform to multiple-scale edge detection is discussed in detail. Finally, the algorithm is applied to vehicle license plate image detection. Compared with the results of Sobel, Canny and other operators, this algorithm shows great feasibility and effectiveness.

  2. Image haze removal algorithm for transmission lines based on weighted Gaussian PDF

    NASA Astrophysics Data System (ADS)

    Wang, Wanguo; Zhang, Jingjing; Li, Li; Wang, Zhenli; Li, Jianxiang; Zhao, Jinlong

    2015-03-01

    Histogram specification is a useful algorithm in the field of image enhancement. This paper proposes an image haze removal algorithm using histogram specification based on a weighted Gaussian probability density function (Gaussian PDF). Firstly, we consider the characteristics of image histograms captured in sunny, foggy and hazy weather. Then, we address the weak intensity of image specification by changing the variance and the weights of the Gaussian PDF. The algorithm can effectively remove fog, and experimental results show the superiority of the proposed algorithm over plain histogram specification. It also has advantages in terms of low computational complexity, high efficiency and no manual intervention.

  3. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.

  4. A new image encryption algorithm based on logistic chaotic map with varying parameter.

    PubMed

    Liu, Lingfeng; Miao, Suoxia

    2016-01-01

    In this paper, we propose a new image encryption algorithm based on a parameter-varied logistic chaotic map and a dynamical algorithm. The parameter-varied logistic map cures the weaknesses of the logistic map and resists phase space reconstruction attacks. We use the parameter-varied logistic map to shuffle the plain image, and then use a dynamical algorithm to encrypt the image. We carry out several experiments, including histogram analysis, information entropy analysis, sensitivity analysis, key space analysis, correlation analysis and computational complexity analysis, to evaluate its performance. The experimental results show that this algorithm provides high security and is competitive for image encryption.
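
    A minimal sketch of the two-stage idea (chaotic permutation plus keystream diffusion) is given below; the parameter schedule of the varying logistic map, the key values, and the stand-in image are illustrative, not the paper's exact design.

      import numpy as np

      def logistic_orbit(x0, n, mu_lo=3.75, mu_hi=4.0):
          xs, x = np.empty(n), x0
          for i in range(n):
              mu = mu_lo + (mu_hi - mu_lo) * (i % 97) / 96.0  # varying parameter
              x = mu * x * (1.0 - x)
              xs[i] = x
          return xs

      img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)  # stand-in
      orbit = logistic_orbit(x0=0.3141, n=img.size)
      perm = np.argsort(orbit)                     # chaotic pixel permutation
      key = (orbit * 256).astype(np.uint8)         # chaotic keystream
      cipher = img.ravel()[perm] ^ key             # shuffle, then diffuse

      plain = np.empty(img.size, dtype=np.uint8)   # decryption reverses both stages
      plain[perm] = cipher ^ key
      assert (plain.reshape(img.shape) == img).all()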

  5. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps of preprocessing and actual processing. The sample neighborhood database is created via the data structure of locality sensitive hashing in the preprocessing stage. The CT image noise is removed by the non-local means algorithm based on the sample neighborhoods accessed quickly through locality sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time, as compared to NLM, and effectively preserved the image edges and details.

  6. Adaptive quasi-Newton algorithm for source extraction via CCA approach.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian; Feng, Da-Zheng

    2014-04-01

    This paper addresses the problem of adaptive source extraction via the canonical correlation analysis (CCA) approach. Based on Liu's analysis of the CCA approach, we propose a new criterion for source extraction, which is proved to be equivalent to the CCA criterion. Then, a fast and efficient online algorithm using quasi-Newton iteration is developed. The stability of the algorithm is also analyzed using Lyapunov's method, which shows that the proposed algorithm asymptotically converges to the global minimum of the criterion. Simulation results are presented that confirm our theoretical analysis and demonstrate the merits of the proposed algorithm in terms of convergence speed and success rate for source extraction.

  7. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
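
    For context, the sketch below shows the kind of single-level fixed-point iteration (here, power iteration pi <- pi P on a small random chain) that the multi-level and SOR-type schemes discussed above aim to accelerate; the chain itself is illustrative.

      import numpy as np

      rng = np.random.default_rng(10)
      P = rng.random((50, 50))
      P /= P.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

      pi = np.full(50, 1.0 / 50)             # initial guess
      for _ in range(500):                   # power iteration: pi <- pi P
          new = pi @ P
          if np.abs(new - pi).sum() < 1e-12:
              break
          pi = new
      print("stationarity residual:", np.abs(pi @ P - pi).max())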

  8. Utilizing knowledge-base semantics in graph-based algorithms

    SciTech Connect

    Darwiche, A.

    1996-12-31

    Graph-based algorithms convert a knowledge base with a graph structure into one with a tree structure (a join-tree) and then apply tree-inference on the result. Nodes in the join-tree are cliques of variables and tree-inference is exponential in w*, the size of the maximal clique in the join-tree. A central property of join-trees that validates tree-inference is the running-intersection property: the intersection of any two cliques must belong to every clique on the path between them. We present two key results in connection to graph-based algorithms. First, we show that the running-intersection property, although sufficient, is not necessary for validating tree-inference. We present a weaker property for this purpose, called running-interaction, that depends on non-structural (semantical) properties of a knowledge base. We also present a linear algorithm that may reduce w* of a join-tree, possibly destroying its running-intersection property, while maintaining its running-interaction property and, hence, its validity for tree-inference. Second, we develop a simple algorithm for generating trees satisfying the running-interaction property. The algorithm bypasses triangulation (the standard technique for constructing join-trees) and does not construct a join-tree first. We show that the proposed algorithm may in some cases generate trees that are more efficient than those generated by modifying a join-tree.

  9. General simulation algorithm for autocorrelated binary processes

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
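
    The general spectrum-based procedure is beyond a short sketch, but the Markov special case mentioned above (exponentially decaying autocorrelation) has a closed form, shown below: for a stationary binary chain with mean p and lag-1 autocorrelation rho, the two transition probabilities reproduce Corr(X_t, X_{t+k}) = rho^k.

      import numpy as np

      def binary_markov(n, p=0.3, rho=0.6, seed=4):
          rng = np.random.default_rng(seed)
          p11 = p + rho * (1.0 - p)       # P(X[t+1]=1 | X[t]=1)
          p01 = p * (1.0 - rho)           # P(X[t+1]=1 | X[t]=0)
          x = np.empty(n, dtype=int)
          x[0] = rng.random() < p
          for t in range(1, n):
              x[t] = rng.random() < (p11 if x[t - 1] else p01)
          return x

      x = binary_markov(200000)
      print("mean:", x.mean())                               # close to 0.3
      print("lag-1 acf:", np.corrcoef(x[:-1], x[1:])[0, 1])  # close to 0.6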

  10. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
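
    A compact stand-in for the classifier construction is shown below: AdaBoost over decision stumps (scikit-learn's default weak learner) on synthetic two-class data; the paper's mixed categorical/continuous handling and benchmark data are not reproduced.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      X = rng.normal(size=(2000, 10))                        # stand-in features
      y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # toy "attack" rule

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      clf = AdaBoostClassifier(n_estimators=100)  # default weak learner is a
      clf.fit(Xtr, ytr)                           # depth-1 tree (decision stump)
      print("test accuracy:", clf.score(Xte, yte))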

  11. An Improved Force Feedback Control Algorithm for Active Tendons

    PubMed Central

    Guo, Tieneng; Liu, Zhifeng; Cai, Ligang

    2012-01-01

    An active tendon, consisting of a displacement actuator and a co-located force sensor, has been adopted by many studies to suppress the vibration of large flexible space structures. The damping provided by the force feedback control algorithm in these studies is small, a limitation that becomes more pronounced for tendons with low axial stiffness. This study introduces an improved force feedback algorithm, which is based on the idea of velocity feedback. The algorithm provides a large damping ratio for space flexible structures and does not require a structure model. The effectiveness of the algorithm is demonstrated on a structure similar to JPL-MPI. The results show that large damping can be achieved for the vibration control of large space structures. PMID:23112660

  12. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    SciTech Connect

    Ha, Taeyoung, E-mail: tyha@math.snu.ac.kr; Shin, Changsoo, E-mail: css@model.snu.ac.kr

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  13. New algorithms to map asymmetries of 3D surfaces.

    PubMed

    Combès, Benoît; Prima, Sylvain

    2008-01-01

    In this paper, we propose a set of new generic automated processing tools to characterise the local asymmetries of anatomical structures (represented by surfaces) at an individual level, and within/between populations. The building bricks of this toolbox are: (1) a new algorithm for robust, accurate, and fast estimation of the symmetry plane of grossly symmetrical surfaces, and (2) a new algorithm for the fast, dense, nonlinear matching of surfaces. This last algorithm is used both to compute dense individual asymmetry maps on surfaces, and to register these maps to a common template for population studies. We show these two algorithms to be mathematically well-grounded, and provide some validation experiments. Then we propose a pipeline for the statistical evaluation of local asymmetries within and between populations. Finally we present some results on real data.

  14. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
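
    The core spectral step is short enough to sketch. Below, a banded matrix is scrambled by a random permutation and then reordered by sorting the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue); the bandwidth, size, and envelope measure are illustrative, and a connected graph is assumed.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 200
      band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 3
      perm0 = rng.permutation(n)
      A = band[np.ix_(perm0, perm0)].astype(float)  # scrambled banded matrix
      np.fill_diagonal(A, 0.0)

      L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
      vals, vecs = np.linalg.eigh(L)
      order = np.argsort(vecs[:, 1])                # sort by the Fiedler vector

      def envelope(M):
          total = 0
          for i in range(M.shape[0]):
              nz = np.nonzero(M[i, :i + 1])[0]      # lower-triangle nonzeros
              if nz.size:
                  total += i - nz.min()
          return total

      print("envelope before:", envelope(A))
      print("envelope after :", envelope(A[np.ix_(order, order)]))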

  15. A layer reduction based community detection algorithm on multiplex networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Liu, Jing

    2017-04-01

    Detecting hidden communities is important for the analysis of complex networks. However, many algorithms have been designed for single layer networks (SLNs), while just a few approaches have been designed for multiplex networks (MNs). In this paper, we propose an algorithm based on layer reduction for detecting communities on MNs, termed LRCD-MNs. First, we improve a layer reduction algorithm, termed neighaggre, to combine similar layers and keep others separated. Then, we use neighaggre to find the community structure hidden in MNs. Experiments on real-life networks show that neighaggre can obtain higher relative entropy than the competing algorithm. Moreover, we apply LRCD-MNs to some real-life and synthetic multiplex networks, and the results demonstrate that, although LRCD-MNs does not have the advantage in terms of modularity, it obtains higher values of surprise, a measure used to evaluate the quality of partitions of a network.

  16. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms, which clearly show the parallelism inherent in the algorithm.

  17. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    NASA Astrophysics Data System (ADS)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  18. Adaptive Flocking of Robot Swarms: Algorithms and Properties

    NASA Astrophysics Data System (ADS)

    Lee, Geunho; Chong, Nak Young

    This paper presents a distributed approach for the adaptive flocking of swarms of mobile robots that enables them to navigate autonomously in complex environments populated with obstacles. Based on the observation of the swimming behavior of a school of fish, we propose an integrated algorithm that allows a swarm of robots to navigate in a coordinated manner, split into multiple swarms, or merge with other swarms according to the environmental conditions. We prove the convergence of the proposed algorithm using Lyapunov stability theory. We also verify the effectiveness of the algorithm through extensive simulations, where a swarm of robots repeats the process of splitting and merging while passing around multiple stationary and moving obstacles. The simulation results show that the proposed algorithm is scalable and robust to variations in the sensing capability of individual robots.

  19. Naive Bayes-guided bat algorithm for feature selection.

    PubMed

    Taha, Ahmed Majid; Mustapha, Aida; Chen, Soong-Der

    2013-01-01

    With the amount of data and information said to double every 20 months or so, feature selection has become highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, and signal processing. In this work, a bio-inspired method, the Bat Algorithm, hybridized with a Naive Bayes classifier (BANB) is presented. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. The discussion focuses on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved more stable than the other methods and is capable of producing more general feature subsets.

  20. A Cultural Algorithm for the Urban Public Transportation

    NASA Astrophysics Data System (ADS)

    Reyes, Laura Cruz; Zezzatti, Carlos Alberto Ochoa Ortíz; Santillán, Claudia Gómez; Hernández, Paula Hernández; Fuerte, Mercedes Villa

    In recent years the population of Leon City, located in the state of Guanajuato in Mexico, has been increasing considerably, causing the inhabitants to waste much of their time on public transportation. As a consequence of the demographic growth and traffic bottlenecks, users deal with the daily problem of optimizing their travel so as to get to their destinations on time. To solve this problem of obtaining an optimized route between two points in a public transportation network, a method based on the cultural algorithm technique is proposed. Cultural algorithms, a relatively recent development, exploit the knowledge generated over a set of time periods for the same population by means of a belief space. The proposed method seeks a path that minimizes travel time and the number of transfers. The results of the experiment show that the cultural algorithm technique is applicable to these kinds of multi-objective problems.

  1. The research on algorithms for optoelectronic tracking servo control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Qi-Hai; Zhao, Chang-Ming; Zhu, Zheng; Li, Kun

    2016-10-01

    The photoelectric servo control system based on PC controllers is mainly used to control the speed and position of the load. This paper analyzes the mathematical modeling and system identification of the servo system. Regarding the control algorithm, the IP regulator, fuzzy PID, Active Disturbance Rejection Control (ADRC) and adaptive algorithms are compared and analyzed. A PI-P control algorithm is proposed, which retains the advantage of the PI regulator, namely that it saturates quickly, while overcoming the shortcomings of the IP regulator. The control system achieves good starting performance and load-disturbance rejection over a wide range. Experimental results show that the system performs well under the PI-P control algorithm.

  2. A region growing vessel segmentation algorithm based on spectrum information.

    PubMed

    Jiang, Huiyan; He, Baochun; Fang, Di; Ma, Zhiyuan; Yang, Benqiang; Zhang, Libo

    2013-01-01

    We propose a region growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures to obtain its spectrum information, from which the primary feature direction is extracted. Then, combining edge information with the primary feature direction, it computes the vascular structure's center points as the seed points for region growing segmentation. Finally, an improved region growing method with a branch-based growth strategy is used to segment the vessels. To demonstrate the effectiveness of our algorithm, we conduct experiments on retinal images and abdominal liver vascular CT images. The results show that the proposed vessel segmentation algorithm can not only extract high-quality target vessel regions, but also effectively reduce manual intervention.
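
    As a generic stand-in for the growth stage, the sketch below grows a region from seed points by intensity similarity on a synthetic image; the spectrum-based seed selection and the branch-based strategy of the paper are not reproduced.

      from collections import deque
      import numpy as np

      def region_grow(img, seeds, tol=10.0):
          grown = np.zeros(img.shape, dtype=bool)
          queue = deque(seeds)
          for s in seeds:
              grown[s] = True
          while queue:
              r, c = queue.popleft()
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  rr, cc = r + dr, c + dc
                  if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                          and not grown[rr, cc]
                          and abs(float(img[rr, cc]) - float(img[r, c])) <= tol):
                      grown[rr, cc] = True
                      queue.append((rr, cc))
          return grown

      img = np.full((64, 64), 40.0)
      img[20:44, 30:34] = 200.0                    # bright synthetic "vessel"
      mask = region_grow(img, seeds=[(32, 31)])
      print("segmented pixels:", int(mask.sum()))  # 24 rows * 4 cols = 96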

  3. Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.

    2001-01-01

    An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward-backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.

  4. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied to detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  5. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  6. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    PubMed

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  7. An Optimal Class Association Rule Algorithm

    NASA Astrophysics Data System (ADS)

    Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie

    Classification and association rule mining are two important aspects of data mining. Class association rule mining is a promising approach, since it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results on eight UCI data sets show that OCARA outperforms C4.5, CBA, and RMR.

  8. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    The estimation strategy has good performance also under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig: it includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs, the performance of the innovative localisation algorithm has been evaluated: the HIL test rig made it possible to test the proposed algorithm, avoiding expensive (in terms of time and cost) on-track tests, and obtaining encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performances compared to those obtained with SCMT algorithms, currently in use on the Italian railway network.

  9. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparing with other well known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. Artificial Neural Network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101

  10. Ligand Identification Scoring Algorithm (LISA).

    PubMed

    Zheng, Zheng; Merz, Kenneth M

    2011-06-27

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects, and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions, and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparing with other well-known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. Artificial Neural Network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms.
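
    The linear-model fitting step can be illustrated generically: below, hypothetical per-complex interaction terms are regressed against measured affinities with ordinary least squares; the descriptors, weights, and noise level are synthetic, and LISA's atom-typed terms are far more detailed.

      import numpy as np

      rng = np.random.default_rng(11)
      n_complexes, n_terms = 492, 4                 # VDW, H-bond, desolvation, metal
      X = rng.random((n_complexes, n_terms))        # per-complex interaction terms
      w_true = np.array([-1.5, -0.8, 2.0, -1.0])    # hypothetical contributions
      pK = X @ w_true + rng.normal(0.0, 0.3, n_complexes)  # "measured" affinities

      w, *_ = np.linalg.lstsq(X, pK, rcond=None)    # fit the linear model
      print("fitted weights:", np.round(w, 2))
      score = X[:5] @ w                             # scoring complexes the same way
      print("sample scores:", np.round(score, 2))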

  11. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.

  12. Novel permutation measures for image encryption algorithms

    NASA Astrophysics Data System (ADS)

    Abd-El-Hafiz, Salwa K.; AbdElHaleem, Sherif H.; Radwan, Ahmed G.

    2016-10-01

    This paper proposes two measures for the evaluation of permutation techniques used in image encryption. First, a general mathematical framework for describing the permutation phase used in image encryption is presented. Using this framework, six different permutation techniques, based on chaotic and non-chaotic generators, are described. The two new measures are, then, introduced to evaluate the effectiveness of permutation techniques. These measures are (1) Percentage of Adjacent Pixels Count (PAPC) and (2) Distance Between Adjacent Pixels (DBAP). The proposed measures are used to evaluate and compare the six permutation techniques in different scenarios. The permutation techniques are applied on several standard images and the resulting scrambled images are analyzed. Moreover, the new measures are used to compare the permutation algorithms on different matrix sizes irrespective of the actual parameters used in each algorithm. The analysis results show that the proposed measures are good indicators of the effectiveness of the permutation technique.

  13. A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Xiaotong, Du; Xiaoyin, Shao

    2007-07-01

    Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the soft-field effect in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image in the region of the optimized solution aggregate. Then, the SIRT algorithm is used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve its calculation speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm, and the time consumption of the new algorithm is less than 0.1 second, which satisfies the online requirements.
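
    A compact sketch of the two-stage hybrid follows: a Tikhonov-regularized solve provides the initial image and SIRT iterations refine it; the tiny random system, regularization weight, and iteration count stand in for the real ECT sensitivity matrix and settings.

      import numpy as np

      rng = np.random.default_rng(7)
      m, n = 40, 100                         # fewer measurements than pixels
      A = rng.random((m, n))                 # stand-in sensitivity matrix
      x_true = (rng.random(n) > 0.7).astype(float)
      b = A @ x_true                         # noiseless capacitance data

      lam = 1e-2                             # stage 1: Tikhonov initial image
      x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

      R = 1.0 / A.sum(axis=1)                # stage 2: SIRT refinement with
      C = 1.0 / A.sum(axis=0)                # row/column normalizations
      for _ in range(200):
          x += C * (A.T @ (R * (b - A @ x)))
          x = np.clip(x, 0.0, 1.0)           # physical permittivity bounds
      print("data residual:", np.linalg.norm(A @ x - b))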

  14. A quantum algorithm for approximating the influences of Boolean functions and its applications

    NASA Astrophysics Data System (ADS)

    Li, Hongwei; Yang, Li

    2015-06-01

    We investigate the influences of variables on a Boolean function based on the quantum Bernstein-Vazirani algorithm. A previous paper (Floess et al. in Math Struct Comput Sci 23:386, 2013) has proved that if an n-variable Boolean function f does not depend on an input variable x_i, using the Bernstein-Vazirani circuit for f will always output a string that has a 0 in the i-th position. We generalize this result and show that, after running this algorithm once, the probability of getting a 1 in the i-th position is equal to the dependence degree of f on the variable x_i, i.e., the influence of x_i on f. Based on this, we give an approximation algorithm to evaluate the influence of any variable on a Boolean function. Next, as an application, we use it to study Boolean functions with juntas and construct probabilistic quantum algorithms to learn certain Boolean functions. Compared with the deterministic algorithms given by Floess et al., our probabilistic algorithms are faster.

  15. Comparison of several stochastic parallel optimization algorithms for adaptive optics system without a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Li, Xinyang

    2011-04-01

    Optimizing the system performance metric directly is an important method for correcting wavefront aberrations in an adaptive optics (AO) system where wavefront sensing methods are unavailable or ineffective. An appropriate deformable mirror control algorithm is the key to successful wavefront correction. Based on several stochastic parallel optimization control algorithms, an adaptive optics system with a 61-element deformable mirror (DM) is simulated. Genetic Algorithm (GA), Stochastic Parallel Gradient Descent (SPGD), Simulated Annealing (SA) and the Algorithm Of Pattern Extraction (Alopex) are compared in convergence speed and correction capability. The results show that all these algorithms have the ability to correct for atmospheric turbulence. Compared with least squares fitting, they almost obtain the best correction achievable for the 61-element DM. SA is the fastest and GA is the slowest of these algorithms. The number of perturbations required by GA is almost 20 times larger than that of SA, 15 times larger than that of SPGD and 9 times larger than that of Alopex.
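
    SPGD itself is only a few lines, as the sketch below shows: bipolar random perturbations are applied to the actuator vector, the metric difference is measured two-sided, and the update steps along the perturbation. The quadratic metric and gain values are illustrative stand-ins for a real image-quality measurement.

      import numpy as np

      rng = np.random.default_rng(8)
      n_act = 61                                 # 61-element deformable mirror
      u_star = rng.normal(size=n_act)            # unknown ideal voltages

      def metric(u):
          return -np.sum((u - u_star) ** 2)      # stand-in sharpness metric

      u = np.zeros(n_act)
      gain, sigma = 0.3, 0.05
      for k in range(2000):
          du = sigma * rng.choice([-1.0, 1.0], size=n_act)  # bipolar perturbation
          dJ = metric(u + du) - metric(u - du)              # two-sided measurement
          u += gain * dJ * du                               # SPGD ascent step
      print("final metric:", metric(u))          # approaches 0 (the maximum)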

  16. A simple, pipelined algorithm for large, irregular all-gather problems.

    SciTech Connect

    Traff, J. L.; Ripke, A.; Siebert, C.; Balaji, P.; Thakur, R.; Gropp, W.; Mathematics and Computer Science; NEC Lab.; Univ. of Illinois

    2008-01-01

    We present and evaluate a new, simple, pipelined algorithm for large, irregular all-gather problems, useful for the implementation of the MPI-Allgatherv collective operation of MPI. The algorithm can be viewed as an adaptation of a linear ring algorithm for regular all-gather problems for single-ported, clustered multiprocessors to the irregular problem. Compared to the standard ring algorithm, whose performance is dominated by the largest data size broadcast by a process (times the number of processes), the performance of the new algorithm depends only on the total amount of data over all processes. The new algorithm has been implemented within different MPI libraries. Benchmark results on NEC SX-8, Linux clusters with InfiniBand and Gigabit Ethernet, Blue Gene/P, and SiCortex systems show huge performance gains in accordance with the expected behavior.
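
    The round structure of the classical ring algorithm, which the pipelined variant described above refines, can be simulated in a few lines: in round r, each process forwards the block it received in the previous round to its right neighbor. The process count is arbitrary.

      p = 6                                    # number of processes (arbitrary)
      have = [{i} for i in range(p)]           # block indices each process holds
      for r in range(p - 1):                   # p-1 communication rounds
          sending = [(i - r) % p for i in range(p)]  # block forwarded this round
          for i in range(p):
              have[(i + 1) % p].add(sending[i])      # send to right neighbor
      print(all(len(h) == p for h in have))    # True: all-gather complete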

  17. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the aim is to find the optimal weight set of a neural network during the training process. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, which combines Cuckoo Search with Levenberg-Marquardt (LM) back-propagation (BP) to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.

  18. A novel wavefront-based algorithm for numerical simulation of quasi-optical systems

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoling; Lou, Zheng; Hu, Jie; Zhou, Kangmin; Zuo, Yingxi; Shi, Shengcai

    2016-11-01

    A novel wavefront-based algorithm for the beam simulation of both reflective and refractive optics in a complicated quasi-optical system is proposed. The algorithm can be regarded as an extension of the conventional Physical Optics algorithm to handle dielectrics. Internal reflections are modeled in an accurate fashion, and coatings and lossy materials can be treated in a straightforward manner. A parallel implementation of the algorithm has been developed, and numerical examples show that the algorithm yields sufficient accuracy when compared with experimental results, while its computational complexity is much less than that of full-wave methods. The algorithm offers an alternative approach to the modeling of quasi-optical systems, in addition to Geometrical Optics modeling and full-wave methods.

  19. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    DOE PAGES

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; ...

    2017-01-13

    The aim of this study is to compare radioxenon beta–gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  20. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm.

    PubMed

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal to noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively: it is repeated until two stopping conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods.
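
    For reference, the two criteria are straightforward to compute; the sketch below (Python/NumPy, with a single-window simplification of the usual locally windowed SSIM) is an illustration rather than the authors' evaluation code.

      import numpy as np

      def psnr(ref, test, peak=255.0):
          # PSNR in dB: higher means the de-noised image is closer to the reference.
          mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
          return 10.0 * np.log10(peak ** 2 / mse)

      def global_ssim(x, y, peak=255.0):
          # Single-window SSIM; the standard metric averages this statistic
          # over local (e.g., 11x11 Gaussian) windows.
          x, y = x.astype(float), y.astype(float)
          c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return ((2 * mx * my + c1) * (2 * cov + c2)) / \
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))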

  1. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal to noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively: it is repeated until two stopping conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565

  2. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a hot research topic, and the filter-u least mean square (FULMS) algorithm is one of its key methods. However, owing to practical and technical limitations, extracting a vibration reference signal is a persistent difficulty for the FULMS algorithm. To solve this reference-signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed from the controller structure and the data available during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical and achieves good vibration suppression performance.
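
    The flavor of the adaptive update can be seen in the simplified filtered-x LMS loop below (Python/NumPy; all signals and path models are synthetic assumptions). FULMS extends this FIR controller with feedback (IIR) weights, and the paper's contribution is to construct the reference from the controller structure and the measured residual instead of a separate reference sensor.

      import numpy as np

      rng = np.random.default_rng(0)
      n, taps, mu = 5000, 16, 0.005
      s = np.array([0.0, 0.6, 0.3])                  # secondary-path model (assumed known)
      x = rng.normal(size=n)                         # reference signal
      d = np.convolve(x, [0.0, 0.4, -0.2, 0.1])[:n]  # primary disturbance at error sensor

      w = np.zeros(taps)                             # adaptive FIR controller weights
      y = np.zeros(n)                                # controller output
      e = np.zeros(n)                                # residual vibration
      xf = np.convolve(x, s)[:n]                     # reference filtered through s

      for i in range(taps, n):
          y[i] = w @ x[i - taps + 1:i + 1][::-1]
          e[i] = d[i] - s @ y[i - 2:i + 1][::-1]     # control acts via the secondary path
          w += mu * e[i] * xf[i - taps + 1:i + 1][::-1]

      print("vibration power before/after:",
            float(np.mean(d[-500:] ** 2)), float(np.mean(e[-500:] ** 2)))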

  3. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications, and other wireless communication settings. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm, and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
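
    One plausible form of such a statistic (an assumption for illustration; the paper's exact statistic and threshold may differ) sums the energy of the first few normalized autocorrelation lags, which is near zero for white noise but not for a correlated primary-user signal:

      import numpy as np

      def autocorr_energy(x, max_lag=4):
          x = x - x.mean()
          n = len(x)
          r0 = np.dot(x, x) / n
          r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(1, max_lag + 1)])
          return float(np.sum((r / r0) ** 2))

      rng = np.random.default_rng(1)
      noise = rng.normal(size=4000)
      sig = np.convolve(rng.normal(size=4000), np.ones(8) / 8, mode="same")
      print(autocorr_energy(noise))        # near 0 under noise only
      print(autocorr_energy(sig + noise))  # clearly larger when a signal is present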

  4. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    NASA Astrophysics Data System (ADS)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, ECT solutions are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address these problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.

  5. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-10-05

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since horizontal alignment is easy to complete, and since the horizontal component of gravity is zero by definition, we exploit this property to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality-reduction Gauss-Hermite filter are employed to establish a fine horizontal reference frame. Based on this, the projection of gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm and meets the accuracy requirements of a medium-accuracy marine SINS.

  6. Elastic-plastic model identification for rock surrounding an underground excavation based on immunized genetic algorithm.

    PubMed

    Gao, Wei; Chen, Dongliang; Wang, Xu

    2016-01-01

    To compute the stability of underground engineering works, a constitutive model of the surrounding rock must be identified. Many constitutive models for rock mass have been proposed. In this model identification study, a generalized constitutive law for an elastic-plastic constitutive model is applied. Using the generalized constitutive law, the problem of model identification is transformed into a problem of parameter identification, which is a typical but complicated optimization problem. To improve on the efficiency of traditional optimization methods, an immunized genetic algorithm previously proposed by the authors is applied in this study. In this new algorithm, the principle of the artificial immune algorithm is combined with the genetic algorithm, improving the overall computational efficiency of model identification. Using this new model identification method, a numerical example and an engineering example are used to verify the computational ability of the algorithm. The results show that this new model identification algorithm significantly improves both computational efficiency and the quality of the results.

  7. An improved POCS super-resolution infrared image reconstruction algorithm based on visual mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Jinsong; Dai, Shaosheng; Guo, Zhongyuan; Zhang, Dezhou

    2016-09-01

    The traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction algorithm yields reconstructed images with poor contrast, low signal-to-noise ratio, and blurred edges. To overcome these drawbacks, an improved POCS SR infrared image reconstruction algorithm based on a visual mechanism is proposed. It introduces a data consistency constraint with variable correction thresholds to highlight target edges and filter out background noise; further, it introduces a contrast constraint that accounts for the resolving ability of human eyes, adaptively enhancing the contrast of the reconstructed image. The experimental results show that the improved POCS algorithm can acquire high quality infrared images whose contrast, average gradient, and peak signal to noise ratio are improved many times over compared with the traditional algorithm.

  8. A novel blind digital watermark algorithm based on Lab color space

    NASA Astrophysics Data System (ADS)

    Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao

    2010-02-01

    A blind digital image watermarking algorithm must extract the watermark without any extra information beyond the watermarked image itself. However, most current blind watermarking algorithms share a disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color-image watermarking algorithm based on the Lab color space that does not have this disadvantage. The algorithm first marks the watermark region's size and position by embedding regular blocks, called anchor points, in the image spatial domain, and then embeds the watermark into the image. In this way, the watermark can easily be extracted even after the image has been cropped or rescaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. The algorithm has already been used in a copyright protection project and works very well.

  9. Design of asynchronous phase detection algorithms optimized for wide frequency response.

    PubMed

    Crespo, Daniel; Quiroga, Juan Antonio; Gomez-Pedrero, Jose Antonio

    2006-06-10

    In many fringe pattern processing applications the local phase has to be obtained from a sinusoidal irradiance signal with unknown local frequency. This process is called asynchronous phase demodulation. Existing algorithms for asynchronous phase detection, or asynchronous algorithms, have been designed to yield no algebraic error in the recovered value of the phase for any signal frequency. However, each asynchronous algorithm has a characteristic frequency response curve. Existing asynchronous algorithms present a range of frequencies with low response, reaching zero for particular values of the signal frequency. For real noisy signals, low response implies a low signal-to-noise ratio in the recovered phase and therefore unreliable results. We present a new Fourier-based methodology for designing asynchronous algorithms with any user-defined frequency response curve and known limit of algebraic error. We show how asynchronous algorithms designed with this method can have better properties for real conditions of noise and signal frequency variation.

  10. New figure-fracturing algorithm for high-quality variable-shaped e-beam exposure data generation

    NASA Astrophysics Data System (ADS)

    Nakao, Hiroomi; Moriizumi, Koichi; Kamiyama, Kinya; Terai, Masayuki; Miwa, Hisaharu

    1996-07-01

    We present a new figure-fracturing algorithm that partitions each polygon in layout design data into trapezoids for variable-shaped e-beam exposure data generation. To improve the dimensional accuracy of fabricated mask patterns created from the figure-fracturing result, our algorithm has two new effective functions: one suppresses narrow-figure generation, and the other suppresses the partitioning of critical parts. Furthermore, using a new graph-based approach, the algorithm efficiently chooses an appropriate set of partitioning lines from all possible lines, by which optimal figure fracturing is performed. Application results show that the algorithm produces high quality results in a reasonable processing time.

  11. A MEDLINE categorization algorithm

    PubMed Central

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so that the reader can quickly and easily grasp the main topics discussed in a resource. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. Categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered super-concepts; they were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand, and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file in decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site and can run on any MEDLINE file in batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration, and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm that classifies the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms

  12. A framework for evaluating mixture analysis algorithms

    NASA Astrophysics Data System (ADS)

    Dasaratha, Sridhar; Vignesh, T. S.; Shanmukh, Sarat; Yarra, Malathi; Botonjic-Sehic, Edita; Grassi, James; Boudries, Hacene; Freeman, Ivan; Lee, Young K.; Sutherland, Scott

    2010-04-01

    In recent years, several sensing devices capable of identifying unknown chemical and biological substances have been commercialized. The success of these devices in analyzing real world samples depends on the ability of the on-board identification algorithm to de-convolve spectra of substances that are mixtures. To develop effective de-convolution algorithms, it is critical to characterize the relationship between the spectral features of a substance and its probability of detection within a mixture, as these features may be similar to or overlap with those of other substances in the mixture and in the library. While it has been recognized that these aspects pose challenges to mixture analysis, a systematic effort to quantify spectral characteristics and their impact is generally lacking. In this paper, we propose metrics that can be used to quantify these spectral features. Some of these metrics, such as a modification of the variance inflation factor, are derived from classical statistical measures used in regression diagnostics. We demonstrate that these metrics correlate with the accuracy of a substance's identification in a mixture. We also develop a framework for characterizing mixture analysis algorithms using these metrics. Experimental results are then provided to show the application of this framework to the evaluation of various algorithms, including one that has been developed for a commercial device. The illustration is based on synthetic mixtures created from pure component Raman spectra measured on a portable device.
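
    For readers unfamiliar with the base measure, the classical variance inflation factor (which the authors modify) can be computed per library spectrum as below (Python/NumPy sketch; column j of X is one spectrum, and a large VIF flags heavy spectral overlap with the rest of the library):

      import numpy as np

      def vif(X):
          # VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
          # column j of X on all the other columns (with an intercept).
          X = np.asarray(X, dtype=float)
          out = []
          for j in range(X.shape[1]):
              y = X[:, j]
              A = np.delete(X, j, axis=1)
              A = np.column_stack([np.ones(len(A)), A])
              beta, *_ = np.linalg.lstsq(A, y, rcond=None)
              resid = y - A @ beta
              r2 = 1.0 - resid.var() / y.var()
              out.append(1.0 / (1.0 - r2))  # diverges as spectra become collinear
          return out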

  13. Evaluation of Algorithms for Compressing Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), which contributes JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.

  14. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.

  15. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging

    PubMed Central

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2010-01-01

    Phase retrieval is an important task in x-ray phase contrast imaging. The robustness of phase retrieval is especially important for potential medical imaging applications such as phase contrast mammography. Recently the authors developed an iterative phase retrieval algorithm, the attenuation-partition based algorithm, for phase retrieval in in-line phase-contrast imaging [1]. Applied to experimental images, the algorithm proved fast and robust. However, a quantitative analysis of the performance of this new algorithm is desirable. In this work, we systematically compared the performance of this algorithm with two other widely used phase retrieval algorithms, namely the Gerchberg-Saxton (GS) algorithm and the Transport of Intensity Equation (TIE) algorithm. The systematic comparison is conducted by analyzing phase retrieval performance on a digital breast specimen model. We show that the proposed algorithm converges faster than the GS algorithm in the Fresnel diffraction regime, and is more robust against image noise than the TIE algorithm. These results suggest the significance of the proposed algorithm for future medical applications with the x-ray phase contrast imaging technique. PMID:20720992

  16. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community to combine different modelling results and thereby reduce uncertainties. The same approach can be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as is sensible but remain essentially different. Datasets are compared with ground based measurements and against each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at the pixel level. In the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements shows that the ensemble retains a good correlation compared to the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensembles for aerosol optical depth will be presented and discussed, validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis yields better annual mean maps. The benefit of using pixel-level uncertainties is analysed.
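
    A sketch of one natural ensemble rule consistent with the abstract (an assumption, not necessarily the project's exact scheme) is an inverse-variance weighted mean over the three retrievals, with NaNs marking pixels a given algorithm did not retrieve:

      import numpy as np

      def ensemble_aod(aods, sigmas):
          aods = np.asarray(aods, dtype=float)     # shape (n_algorithms, ...)
          w = 1.0 / np.square(np.asarray(sigmas, dtype=float))
          w = np.where(np.isnan(aods), 0.0, w)     # missing retrievals get zero weight
          wsum = w.sum(axis=0)
          safe = np.where(wsum > 0, wsum, np.nan)  # empty gridboxes stay NaN
          mean = (w * np.nan_to_num(aods)).sum(axis=0) / safe
          sigma = 1.0 / np.sqrt(safe)              # reduced ensemble uncertainty
          return mean, sigma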

  17. Analysis of exclusive kT jet algorithms in electron-positron annihilation

    NASA Astrophysics Data System (ADS)

    Chay, Junegone; Kim, Chul; Kim, Inchol

    2015-10-01

    We study the factorization of the dijet cross section in e+e- annihilation using the generalized exclusive jet algorithm, which includes the cone-type, JADE, kT, anti-kT and Cambridge/Aachen jet algorithms as special cases. In order to probe the characteristics of the jet algorithms in a unified way, we consider the generalized kT jet algorithm with an arbitrary weight of the energies, in which the various kT-type algorithms are obtained for specific values of the parameter. We show that the jet algorithm respects the factorization property for the parameter α < 2. The factorized jet function and the soft function are well defined and infrared safe for all the jet algorithms except the kT algorithm. The kT algorithm (α = 2) breaks the factorization, since the jet and the soft functions are infrared divergent and are not defined for α = 2, though the dijet cross section is infrared finite. For the jet algorithms which enable factorization, we give a phenomenological analysis using the resummed and fixed-order results.

  18. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
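
    Of the SC algorithms compared, orthogonal matching pursuit is simple enough to sketch in a few lines (Python/NumPy; a generic OMP, not the authors' exact implementation): greedily pick the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares.

      import numpy as np

      def omp(D, y, k):
          # D: dictionary (columns are atoms), y: target, k: sparsity level.
          residual, support, coef = y.astype(float).copy(), [], None
          for _ in range(k):
              j = int(np.argmax(np.abs(D.T @ residual)))
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coef
          x = np.zeros(D.shape[1])
          x[support] = coef
          return x  # sparse coefficient vector with at most k nonzeros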

  19. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment of each TAM to produce a new solution for SA, allocate the width of each TAM using a greedy algorithm, and calculate the corresponding testing time; the core assignment is then accepted or rejected according to the simulated annealing criterion, finally attaining the optimum solution. We ran test scheduling experiments on the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA), and the genetic algorithm (GA). When the TAM width reaches 48, 56, and 64, the testing time of our algorithm is smaller than that of the classic methods, with optimization rates of 30.74%, 3.32%, and 16.13%, respectively. Moreover, the testing time of our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
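
    The SA core the method builds on fits in a few lines (Python sketch; in the paper the neighbor move perturbs the IP-core-to-TAM assignment and the energy is the testing time obtained from the greedy TAM-width allocation, neither of which is shown here):

      import math
      import random

      def simulated_annealing(init, energy, neighbor, t0=1.0, alpha=0.95, steps=2000):
          x, e = init, energy(init)
          best, best_e = x, e
          t = t0
          for _ in range(steps):
              cand = neighbor(x)
              ce = energy(cand)
              # Always accept improvements; accept worse moves with prob exp(-dE/T).
              if ce < e or random.random() < math.exp(-(ce - e) / t):
                  x, e = cand, ce
                  if e < best_e:
                      best, best_e = x, e
              t *= alpha                      # geometric cooling schedule
          return best, best_e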

  20. Programming the gradient projection algorithm

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1983-01-01

    The gradient projection method of numerical optimization, which applies to problems having linear constraints but nonlinear objective functions, is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. To verify the theoretical results, the algorithm is simulated on a digital computer.
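
    A minimal sketch of the idea for the special case of bound constraints, where the projection is a simple clip (general linear constraints require projecting onto the active constraint set; Python/NumPy, illustrative only):

      import numpy as np

      def projected_gradient(grad, x0, lo, hi, step=0.05, iters=500):
          # Take a gradient step, then project back onto the feasible box.
          x = np.clip(np.asarray(x0, dtype=float), lo, hi)
          for _ in range(iters):
              x = np.clip(x - step * grad(x), lo, hi)
          return x

      # Example: minimize ||x - c||^2 subject to 0 <= x <= 1.
      c = np.array([1.5, -0.3, 0.4])
      print(projected_gradient(lambda x: 2 * (x - c), np.zeros(3), 0.0, 1.0))
      # -> approximately [1.0, 0.0, 0.4]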

  1. Experimental results for correlation-based wavefront sensing

    SciTech Connect

    Poyneer, L A; Palmer, D W; LaFortune, K N; Bauman, B

    2005-07-01

    Correlation wave-front sensing can improve Adaptive Optics (AO) system performance in two key areas. For point-source-based AO systems, Correlation is more accurate, more robust to changing conditions, and provides lower noise than a centroiding algorithm; experimental results from the Lick AO system and the SSHCL laser AO system confirm this. For remote imaging, Correlation enables the use of extended objects for wave-front sensing. Results from short horizontal-path experiments will show algorithm properties and requirements.
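
    The core of correlation wave-front sensing is an FFT-based cross-correlation between a reference subimage and the live subimage; the peak location gives the local image shift (integer-pixel in this Python/NumPy sketch, whereas real systems refine the peak to sub-pixel precision):

      import numpy as np

      def correlation_shift(ref, img):
          # Cross-correlation via FFT; the peak index encodes the shift of
          # img relative to ref (wrapped to signed values).
          c = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
          peak = np.unravel_index(int(np.argmax(c)), c.shape)
          return tuple(p - s if p > s // 2 else p for p, s in zip(peak, c.shape))

      rng = np.random.default_rng(0)
      ref = rng.normal(size=(64, 64))
      img = np.roll(ref, (3, -2), axis=(0, 1))  # known shift of (3, -2)
      print(correlation_shift(ref, img))        # -> (3, -2)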

  2. The PCNN adaptive segmentation algorithm based on visual perception

    NASA Astrophysics Data System (ADS)

    Zhao, Yanming

    To solve the adaptive network-parameter determination problem of the pulse coupled neural network (PCNN) and to improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception is proposed. Based on visually perceived image information and a Gabor mathematical model of the optic nerve cells' receptive field, the algorithm adaptively determines the receptive field of each image pixel, and from the Gabor model it adaptively determines the PCNN network parameters W, M, and β, overcoming the parameter-determination problem of the traditional PCNN in image segmentation. Experimental results show that the proposed algorithm improves the region connectivity and edge regularity of the segmented image, demonstrating the advantage of using visual perception information for image segmentation.

  3. Analysis of multigrid algorithms for nonsymmetric and indefinite elliptic problems

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, J.

    1988-10-01

    We prove some new estimates for the convergence of multigrid algorithms applied to nonsymmetric and indefinite elliptic boundary value problems. We provide results for the so-called 'symmetric' multigrid schemes. We show that for the variable V-cycle and the W-cycle schemes, multigrid algorithms with any amount of smoothing on the finest grid converge at a rate that is independent of the number of levels or unknowns, provided that the initial grid is sufficiently fine. We show that the V-cycle algorithm also converges (under appropriate assumptions on the coarsest grid) but at a rate which may deteriorate as the number of levels increases. This deterioration for the V-cycle may occur even in the case of full elliptic regularity. Finally, the results of numerical experiments are given which illustrate the convergence behavior suggested by the theory.

  4. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, and military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images are degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required; but such an array may be very costly or unavailable, and thus many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy, and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by enforcing smoothness while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm; however, sub-pixel motion registration inevitably carries some level of registration error, so a reconstruction algorithm that is robust against registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm solves spatially adaptive regularized constrained least squares minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The

  5. A comparative analysis of biclustering algorithms for gene expression data.

    PubMed

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V

    2013-05-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters.

  6. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  7. Evaluation of Demons- and FEM-Based Registration Algorithms for Lung Cancer.

    PubMed

    Yang, Juan; Li, Dengwang; Yin, Yong; Zhao, Fen; Wang, Hongjun

    2016-04-01

    We evaluated and compared the accuracy of two deformable image registration algorithms on 4-dimensional computed tomography images of patients with lung cancer. Ten patients with non-small cell lung cancer or small cell lung cancer were enrolled in this institutional review board-approved study. The displacement vector fields relative to a specific reference image were calculated using the diffeomorphic demons (DD) algorithm and a finite element method (FEM)-based algorithm. Registration accuracy was evaluated using normalized mutual information (NMI), the sum of squared intensity differences (SSD), the modified Hausdorff distance (dH_M), and the ratio of gross tumor volume (rGTV) difference between the reference image and the deformed phase image. We also compared the registration speed of the two algorithms. For all patients, the FEM-based algorithm showed a stronger ability to align the two images than the DD algorithm. The means (±standard deviation) of NMI were 0.86 (±0.05) and 0.90 (±0.05) using the DD algorithm and the FEM-based algorithm, respectively. The means of SSD were 0.006 (±0.003) and 0.003 (±0.002), the means of dH_M were 0.04 (±0.02) and 0.03 (±0.03), and the means of rGTV were 3.9% (±1.01%) and 2.9% (±1.1%), for the DD algorithm and the FEM-based algorithm, respectively. However, the FEM-based algorithm takes longer than the DD algorithm, with an average running time of 31.4 minutes compared with 21.9 minutes across all patients. These preliminary results show that the FEM-based algorithm is more accurate than the DD algorithm, at the cost of registration speed.

  8. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association with numerical magnitude processing was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by also including relevant arithmetical subskills, i.e. arithmetic facts, needed for complex calculation that are also known to be dependent on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge.

  9. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved in the process of evolving adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to TSPs with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
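
    The standard pheromone update that the strategy modifies looks as follows (Python/NumPy sketch; the PMACO-specific step, boosting pheromone on Physarum-model critical paths, is noted but not implemented here):

      import numpy as np

      def update_pheromone(tau, tours, lengths, rho=0.5, q=1.0):
          tau *= (1.0 - rho)                    # evaporation on all edges
          for tour, length in zip(tours, lengths):
              # Deposit on each (closed) tour edge, proportional to tour quality.
              for i, j in zip(tour, tour[1:] + tour[:1]):
                  tau[i, j] += q / length
                  tau[j, i] += q / length
          # PMACO would additionally add pheromone to edges on the critical
          # paths identified by the Physarum-inspired model.
          return tau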

  10. An enhanced algorithm to estimate BDS satellite's differential code biases

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei

    2016-02-01

    This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on the three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and from the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator that accounts for the precision of the ionospheric observables, and a misclosure constraint for the different types of satellite DCBs is introduced. The algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm precisely estimates BDS satellite DCBs: the mean day-to-day scatter is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm from the Institute of Geodesy and Geophysics, China (IGGDCB), was used to process the same dataset. The DCB difference between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the corresponding difference for the IGGDCB algorithm. In addition, we find that the day-to-day scatter of the BDS IGSO satellites is clearly lower than that of the GEO and MEO satellites, and that a significant bias exists in the daily DCB values of the GEO satellites compared with the MGEX DCB product. The proposed algorithm thus also provides a new approach to estimating satellite DCBs for multiple GNSS systems.

  11. Comparison of six algorithms to determine the soil thermal diffusivity at a site in the Loess Plateau of China

    NASA Astrophysics Data System (ADS)

    Gao, Z.; Wang, L.; Horton, R.

    2009-03-01

    Soil thermal diffusivity is a crucial physical parameter that affects soil temperature. Six prevalent algorithms for calculating soil thermal diffusivity are inter-compared using soil temperature data collected at depths of 0.05 m and 0.10 m at a bare site on the China Loess Plateau from DOY 201 through DOY 207 in 2005. Five of the six algorithms (the Amplitude, Phase, Arctangent, Logarithm, and Harmonic or HM algorithms) are derived from the traditional one-dimensional heat conduction equation. The sixth is based on the one-dimensional heat conduction-convection equation, which considers the vertical heterogeneity of thermal diffusivity in soil and couples thermal conduction and convection processes (hereinafter the Conduction-convection algorithm). To assess these six algorithms, we (1) calculate the soil thermal diffusivities using each algorithm, (2) use the soil thermal diffusivities to predict soil temperature at the 0.10 m depth, and (3) compare the estimated soil temperature against direct measurements. Results show that (1) the HM algorithm gives the most reliable estimates among the traditional five algorithms, and (2) the Conduction-convection algorithm generally provides the second best estimates. Among all of the algorithms, the HM algorithm has the best description of the upper boundary temperature with time, but it only includes conductive heat transfer in the soil. Compared to the HM algorithm, the Conduction-convection algorithm has a less accurate description of the upper boundary temperature, but by accounting for the vertical gradient of soil diffusivity and the water flux density it includes more of the physics of the soil heat transfer process. The Conduction-convection algorithm has potential application within land surface models, but future effort should be made to combine the HM and Conduction-convection algorithms in order to exploit the advantages of each.
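
    For the Amplitude algorithm, the diurnal wave's decay with depth gives the diffusivity directly; below is a sketch under the usual homogeneous-soil assumptions (Python/NumPy; amplitudes estimated crudely from the daily temperature range):

      import numpy as np

      def amplitude_diffusivity(t_upper, t_lower, z1=0.05, z2=0.10, period=86400.0):
          # The diurnal wave decays as exp(-z/d) with damping depth
          # d = sqrt(2 D / omega), so D = omega (z2 - z1)^2 / (2 ln(A1/A2)^2).
          omega = 2.0 * np.pi / period
          a1 = (np.max(t_upper) - np.min(t_upper)) / 2.0
          a2 = (np.max(t_lower) - np.min(t_lower)) / 2.0
          return omega * (z2 - z1) ** 2 / (2.0 * np.log(a1 / a2) ** 2)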

  12. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.

  13. Incremental k-core decomposition: Algorithms and evaluation

    SciTech Connect

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; Wu, Kun -Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.

  14. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when IRFPAs are used in thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects, and their positions are fixed; the latter are caused by temperature drift, and their positions change over time. The traditional radiometric-calibration-based bad pixel detection and compensation algorithm is valid only for the fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and replaced by the filtered value. However, missed corrections and false corrections often happen when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally the bad pixels are replaced by the filtered result. Experiments on real infrared images obtained from a camera show the effectiveness of the proposed algorithm.
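
    The final replacement step is the simplest part and can be sketched as below (Python with SciPy; the PCNN detection and the multi-frame confirmation stages are not shown):

      import numpy as np
      from scipy.ndimage import median_filter

      def replace_bad_pixels(frame, bad_mask):
          # Replace confirmed bad pixels with the local median of their
          # 3x3 neighborhood; good pixels pass through unchanged.
          filtered = median_filter(frame, size=3)
          out = frame.copy()
          out[bad_mask] = filtered[bad_mask]
          return out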

  15. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index, and the results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources, our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]

  16. Efficient algorithms to explore conformation spaces of flexible protein loops.

    PubMed

    Yao, Peggy; Dhanik, Ankur; Marz, Nathan; Propper, Ryan; Kou, Charles; Liu, Guanfeng; van den Bedem, Henry; Latombe, Jean-Claude; Halperin-Landsberg, Inbal; Altman, Russ Biagio

    2008-01-01

    Several applications in biology - e.g., incorporation of protein flexibility in ligand docking algorithms, interpretation of fuzzy X-ray crystallographic data, and homology modeling - require computing the internal parameters of a flexible fragment (usually, a loop) of a protein in order to connect its termini to the rest of the protein without causing any steric clash. One must often sample many such conformations in order to explore and adequately represent the conformational range of the studied loop. While sampling must be fast, it is made difficult by the fact that two conflicting constraints - kinematic closure and clash avoidance - must be satisfied concurrently. This paper describes two efficient and complementary sampling algorithms to explore the space of closed clash-free conformations of a flexible protein loop. The "seed sampling" algorithm samples broadly from this space, while the "deformation sampling" algorithm uses seed conformations as starting points to explore the conformation space around them at a finer grain. Computational results are presented for various loops ranging from 5 to 25 residues. More specific results also show that the combination of the sampling algorithms with a functional site prediction software (FEATURE) makes it possible to compute and recognize calcium-binding loop conformations. The sampling algorithms are implemented in a toolkit (LoopTK), which is available at https://simtk.org/home/looptk.

  17. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help to solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem, which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), the genetic algorithm (GA), and the hierarchical multiplex structure (HIMS). Most of these methods are time consuming and run a high risk of ending in a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  18. CiSE: a circular spring embedder layout algorithm.

    PubMed

    Dogrusoz, Ugur; Belviranli, Mehmet E; Dilek, Alptug

    2013-06-01

    We present a new algorithm for automatic layout of clustered graphs using a circular style. The algorithm tries to determine optimal location and orientation of individual clusters intrinsically within a modified spring embedder. Heuristics such as reversal of the order of nodes in a cluster and swap of neighboring node pairs in the same cluster are employed intermittently to further relax the spring embedder system, resulting in reduced inter-cluster edge crossings. Unlike other algorithms generating circular drawings, our algorithm does not require the quotient graph to be acyclic, nor does it sacrifice the edge crossing number of individual clusters to improve respective positioning of the clusters. Moreover, it reduces the total area required by a cluster by using the space inside the associated circle. Experimental results show that the execution time and quality of the produced drawings with respect to commonly accepted layout criteria are quite satisfactory, surpassing previous algorithms. The algorithm has also been successfully implemented and made publicly available as part of a compound and clustered graph editing and layout tool named CHISIO.

  19. Benchmarking of data fusion algorithms in support of earth observation based Antarctic wildlife monitoring

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.

    2016-03-01

    Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely-used fusion algorithms: Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion, to resolution-enhance a series of single-date QuickBird-2 and Worldview-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation elected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.

  20. ICESat-2 / ATLAS Flight Science Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.

    2013-12-01

    This Simulator makes it possible to check all logic paths that could be encountered by the Algorithms on orbit. In addition, the NASA airborne instrument MABEL is collecting data with characteristics similar to what ATLAS will see; MABEL data are being used to test the ATLAS Receiver Algorithms. Further verification will be performed during integration and testing of the ATLAS instrument and during environmental testing of the full ATLAS instrument. Results from testing to date show that the Receiver Algorithms can handle a wide range of signal and noise levels with very good sensitivity at relatively low signal-to-noise ratios. In addition, preliminary tests using the ICESat-2 Science Team's selected land ice and sea ice test cases have demonstrated the capability of the Algorithms to successfully find and telemeter the surface echoes. In this presentation we describe the ATLAS Flight Science Receiver Algorithms and the Software Simulator, and present results of the testing to date. The onboard databases (DEM, DRM and the Surface Reference Mask) are being developed at the University of Texas at Austin as part of the ATLAS Flight Science Receiver Algorithms. Verification of the onboard databases is being performed by ATLAS Receiver Algorithms team members Claudia Carabajal and Jack Saba.

  1. Fast double-phase retrieval in Fresnel domain using modified Gerchberg-Saxton algorithm for lensless optical security systems.

    PubMed

    Hwang, Hone-Ene; Chang, Hsuan T; Lie, Wen-Nung

    2009-08-03

    A novel fast double-phase retrieval algorithm for lensless optical security systems based on the Fresnel domain is presented in this paper. Two phase-only masks are efficiently determined using a modified Gerchberg-Saxton algorithm, in which two cascaded Fresnel transforms are replaced by one Fourier transform with compensations to reduce the computational cost. Simulation results show that the proposed algorithm substantially speeds up the iterative process while keeping the reconstructed image highly correlated with the original one.
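
    The paper's transform-pair replacement is specific to its Fresnel setup, but the underlying Gerchberg-Saxton iteration is standard; a minimal single-Fourier-plane sketch (plane amplitudes and iteration count are illustrative) is:

      import numpy as np

      def gerchberg_saxton(source_amp, target_amp, n_iter=100):
          # Recover a phase mask such that |FFT(source_amp * exp(i*phase))|
          # approximates target_amp (single Fourier-transform pair).
          phase = 2 * np.pi * np.random.rand(*source_amp.shape)
          for _ in range(n_iter):
              field = source_amp * np.exp(1j * phase)
              spectrum = np.fft.fft2(field)
              # Enforce the target amplitude, keep the Fourier-plane phase.
              spectrum = target_amp * np.exp(1j * np.angle(spectrum))
              field = np.fft.ifft2(spectrum)
              # Enforce the source amplitude, keep the object-plane phase.
              phase = np.angle(field)
          return phase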

  2. Greedy heuristic algorithm for solving series of eee components classification problems*

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving a series of related problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the results decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  3. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains the robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, handling large amounts of noise, and dealing with data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method.
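
    The kernelized and penalized variants are not specified in the abstract; the baseline fuzzy c-means updates that such variants extend can be sketched as follows, assuming Euclidean distances and fuzzifier m:

      import numpy as np

      def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, eps=1e-9):
          # X: (n_samples, n_features). Returns centers and memberships.
          n = X.shape[0]
          u = np.random.dirichlet(np.ones(n_clusters), size=n)  # (n, c)
          for _ in range(n_iter):
              um = u ** m
              centers = um.T @ X / um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                 axis=2) + eps                   # (n, c)
              # Standard FCM membership update:
              # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
              u = 1.0 / np.sum((d[:, :, None] / d[:, None, :])
                               ** (2.0 / (m - 1.0)), axis=2)
          return centers, u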

  4. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  5. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  6. Determination of multifractal dimensions of complex networks by means of the sandbox algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Long; Yu, Zu-Guo; Anh, Vo

    2015-02-01

    Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A 159, 155-166 (1989)) for MFA of complex networks. First, we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning algorithm proposed by Furuya and Yakubo (Phys. Rev. E 84, 036118 (2011)) and the improved box-counting algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp. 2014, P02020 (2014)), by calculating the mass exponents τ(q) of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results for these model networks. The comparison shows that the SB algorithm is the most effective and feasible algorithm for calculating the mass exponents τ(q) and exploring the multifractal behavior of complex networks. Then, we apply the SB algorithm to study the multifractal properties of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, is weak in small-world networks, and is almost absent in random networks.
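
    A minimal version of the sandbox procedure for a graph, using shortest-path balls around random centers (the radii, number of centers, and q values are illustrative choices), is:

      import random
      import numpy as np
      import networkx as nx

      def sandbox_mass_exponents(G, q_values, radii, n_centers=50):
          # For each radius r, M(r) counts nodes within shortest-path
          # distance r of a randomly chosen center; tau(q) is the slope
          # of ln <(M(r)/N)^(q-1)> against ln r.
          nodes = list(G.nodes)
          centers = random.sample(nodes, n_centers)
          masses = np.zeros((n_centers, len(radii)))
          for i, c in enumerate(centers):
              lengths = nx.single_source_shortest_path_length(G, c)
              dist = np.fromiter(lengths.values(), dtype=float)
              for j, r in enumerate(radii):
                  masses[i, j] = np.sum(dist <= r)
          masses /= len(nodes)                   # normalized mass M(r)/N
          tau = []
          for q in q_values:
              y = np.log(np.mean(masses ** (q - 1.0), axis=0))
              tau.append(np.polyfit(np.log(radii), y, 1)[0])
          return np.array(tau)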

  7. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    SciTech Connect

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed on 21 lung cancer patients who underwent stereotactic ablative body radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the 'same dose' method) and (ii) the same monitor units in all algorithms (the 'same monitor units' method), were used to study the performance of seven dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms comprise the Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB and pencil beam (PB) algorithms. Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off indices. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean±SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, pencil beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters, while the Clarkson, FFT Convolution and pencil beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of

  8. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced, at some expense to the power requirements.

  9. Algorithm and implementation of GPS/VRS network RTK

    NASA Astrophysics Data System (ADS)

    Gao, Chengfa; Yuan, Benyin; Ke, Fuyang; Pan, Shuguo

    2009-06-01

    This paper presents a virtual reference station method and its application. Details of how to generate GPS virtual phase observations are discussed in depth. The developed algorithms are successfully applied to an independently developed network digital land investigation system. Experiments are carried out to investigate the system's performance; the results show that the algorithms have good availability and stability. The resulting accuracy of the VRS/RTK positioning was found to be within +/-3.3 cm in the horizontal component and +/-7.9 cm in the vertical component, which meets the requirements of precise digital land investigation.

  10. Evolutionary algorithm for optimization of nonimaging Fresnel lens geometry.

    PubMed

    Yamada, N; Nishikawa, T

    2010-06-21

    In this study, an evolutionary algorithm (EA), which consists of genetic and immune algorithms, is introduced to design the optical geometry of a nonimaging Fresnel lens; this lens generates the uniform flux concentration required for a photovoltaic cell. Herein, a design procedure that incorporates a ray-tracing technique in the EA is described, and the validity of the design is demonstrated. The results show that the EA automatically generated a unique geometry of the Fresnel lens; the use of this geometry resulted in better uniform flux concentration with high optical efficiency.

  11. Computer program for fast Karhunen Loeve transform algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A. K.

    1976-01-01

    The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images, and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal-to-noise ratio. The results obtained show a superior performance of the fast KL transform coding algorithm on the data set used with respect to the above-stated performance criteria. A summary of the results is given in Chapter I, and details of the comparisons and a discussion of the conclusions are given in Chapter IV.

  12. Evaluation of synthetic aperture radar image segmentation algorithms in the context of automatic target recognition

    NASA Astrophysics Data System (ADS)

    Xue, Kefu; Power, Gregory J.; Gregga, Jason B.

    2002-11-01

    Image segmentation is a process to extract and organize information in the image pixel space according to a prescribed feature set. It is often a key preprocessing step in automatic target recognition (ATR) algorithms, and in many cases the performance of image segmentation algorithms has a significant impact on the performance of ATR algorithms. Due to variations in feature set definitions and innovations in the segmentation processes, a large number of image segmentation algorithms exist in the ATR world. Recently, the authors have investigated a number of measures to evaluate the performance of segmentation algorithms, such as Percentage Pixels Same (pps), Partial Directed Hausdorff (pdh) and Complex Inner Product (cip). In that research, we found that the combination of the three measures is effective for evaluating segmentation algorithms against truth data (human master segmentation). However, we still do not know the impact of those measures on the performance of ATR algorithms, which is commonly measured by probability of detection (PDet), probability of false alarm (PFA), probability of identification (PID), etc. In all practical situations, ATR boxes are implemented without a human observer in the loop, so the performance of synthetic aperture radar (SAR) image segmentation should be evaluated in the context of ATR rather than human observers. This research establishes a segmentation algorithm evaluation suite involving segmentation algorithm performance measures as well as ATR algorithm performance measures. It provides a practical quantitative evaluation method to judge which SAR image segmentation algorithm is best for a particular ATR application. The results are tabulated based on some baseline ATR algorithms and a typical image segmentation algorithm used in ATR applications.
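
    The abstract names the measures but not their formulas; commonly used definitions of Percentage Pixels Same and the partial directed Hausdorff distance (the 0.9 quantile is an assumed parameter) can be sketched as:

      import numpy as np

      def percentage_pixels_same(seg, truth):
          # Fraction of pixels with identical labels in both segmentations.
          return np.mean(seg == truth)

      def partial_directed_hausdorff(A, B, quantile=0.9):
          # A, B: (n, 2) arrays of boundary-pixel coordinates.
          # Replaces the max in the directed Hausdorff distance with a
          # quantile, making the measure robust to outlier pixels.
          d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
          nearest = d.min(axis=1)      # distance from each point of A to B
          return np.quantile(nearest, quantile)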

  13. Enhanced probability-selection artificial bee colony algorithm for economic load dispatch: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ghani Abro, Abdul; Mohamad-Saleh, Junita

    2014-10-01

    The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of the load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm that generates optimal solutions using three different mutation equations simultaneously. Results show improved performance of PS-ABC over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work replaces the mutation equations and improves the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, multiple-fuel effects, valve-point effects and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.

  14. Backtracking algorithm for lepton reconstruction with HADES

    NASA Astrophysics Data System (ADS)

    Sellheim, P.; HADES Collaboration

    2015-04-01

    The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April-May 2012, HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u with the highest multiplicities measured so far. Track reconstruction and particle identification in this high track density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector; its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. In order to improve lepton identification, the backtracking algorithm described here was developed. In this contribution we show the results of the algorithm compared to the currently applied method for e+/- identification. The efficiency and purity of a reconstructed e+/- sample are discussed as well.

  15. SAR image registration based on Susan algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Chun-bo; Fu, Shao-hua; Wei, Zhong-yi

    2011-10-01

    Synthetic Aperture Radar (SAR) is an active remote sensing system which can be installed on aircraft, satellites and other carriers, with the advantages of day-and-night and all-weather operation. How to process SAR data and extract information reasonably and efficiently is an important problem; in particular, SAR image geometric correction is a bottleneck impeding the application of SAR. In this paper, we first introduce image registration and the SUSAN algorithm, then describe the process of SAR image registration based on the SUSAN algorithm, and finally present experimental results of SAR image registration. The experiments show that this method is effective and applicable in terms of both computation time and accuracy.

  16. CUDT: a CUDA based decision tree algorithm.

    PubMed

    Lo, Win-Tsung; Chang, Yue-Shan; Sheu, Ruey-Kai; Chiu, Chun-Chieh; Yuan, Shyan-Ming

    2014-01-01

    The decision tree is one of the most famous classification methods in data mining, and many studies have focused on improving its performance. However, those algorithms were developed for and run on traditional distributed systems, where the latency of processing the huge data generated by ubiquitous sensing nodes cannot be improved without new technology. In order to reduce data processing latency in huge data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the performance of CUDT and compare it with a traditional CPU version. The results show that CUDT is 5 to 55 times faster than Weka-j48 and achieves an 18-fold speedup over SPRINT for large data sets.

  17. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry characteristics, we extract the skeleton of a single crack to measure its length. In order to calculate the crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the errors between the proposed crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and performance of bridge maintenance and rehabilitation processes.
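
    The paper's specific optimization is not given in the abstract; a plain eight-direction Sobel edge map, from which such an optimization would start, can be sketched as follows (the kernel rotation scheme is one common convention):

      import numpy as np
      from scipy.ndimage import convolve

      # 3x3 Sobel kernels for 0 and 45 degrees; the remaining six
      # directions are generated by successive 45-degree rotations.
      K0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

      def rotate45(k):
          # One 45-degree step: shift the ring of 8 border weights
          # by one slot clockwise, leaving the center untouched.
          flat = k.flatten()
          ring = [0, 1, 2, 5, 8, 7, 6, 3]    # clockwise border indices
          out = flat.copy()
          for i in range(8):
              out[ring[(i + 1) % 8]] = flat[ring[i]]
          return out.reshape(3, 3)

      def eight_direction_sobel(img):
          kernels, k = [], K0
          for _ in range(8):
              kernels.append(k)
              k = rotate45(k)
          responses = [np.abs(convolve(img.astype(float), kk))
                       for kk in kernels]
          return np.max(responses, axis=0)   # strongest edge response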

  18. Evolutionary algorithms and multi-agent systems

    NASA Astrophysics Data System (ADS)

    Oh, Jae C.

    2006-05-01

    This paper discusses how evolutionary algorithms are related to multi-agent systems and the possibility of military applications using the two disciplines. In particular, we present a game-theoretic model for multi-agent resource distribution and allocation in which agents in the environment must help each other to survive. Each agent maintains a set of variables representing actual friendship and perceived friendship. The model directly addresses problems in reputation management schemes in multi-agent systems and peer-to-peer distributed systems. We present algorithms based on an evolutionary game process for maintaining the friendship values, as well as a utility equation used in each agent's decision making. As an application problem, we adapted our formal model to the military coalition support problem in peace-keeping missions. Simulation results show that efficient resource allocation and sharing with minimum communication cost is achieved without centralized control.

  19. EDGA: A Population Evolution Direction-Guided Genetic Algorithm for Protein-Ligand Docking.

    PubMed

    Guan, Boxin; Zhang, Changsheng; Ning, Jiaxu

    2016-07-01

    Protein-ligand docking can be formulated as a search algorithm associated with an accurate scoring function. However, most current search algorithms cannot show good performance in docking problems, especially for highly flexible docking. To overcome this drawback, this article presents a novel and robust optimization algorithm (EDGA) based on the Lamarckian genetic algorithm (LGA) for solving flexible protein-ligand docking problems. This method applies a population evolution direction-guided model of genetics, in which the search direction evolves toward the optimum solution, making the method more efficient at finding the lowest energy in protein-ligand docking. We consider four search methods (a traditional genetic algorithm, LGA, SODOCK, and EDGA) and compare their performance on six protein-ligand docking problems. The results show that EDGA is the most stable, reliable, and successful.

  20. RF tomography of metallic objects in free space: preliminary results

    NASA Astrophysics Data System (ADS)

    Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher

    2015-05-01

    RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets, and the echoes received by the distributed sensors are processed and combined for tomographic reconstruction. The traditional matched filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and scatter points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not available through the matched filter and truncated SVD algorithms.

  1. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence, and is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high-speed sampling is studied. Firstly, theoretical simulation models of the laser emission and the pulse laser ranging algorithm are built and analyzed, and an improved ranging algorithm combining the matched filter algorithm and the constant fraction discrimination (CFD) algorithm is developed. After the algorithm simulation, a laser ranging hardware system, comprising a laser diode, a laser detector and a high-sample-rate data logging circuit, is set up to implement the improved algorithm; the algorithm is implemented in an FPGA chip using the Verilog HDL language. Finally, a laser ranging experiment is carried out with the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm. The test results demonstrate that the hardware system realizes high-speed processing and high-speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, meeting the expected effect and consistent with the theoretical simulation.
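
    The fusion details are not given in the abstract; the constant fraction discrimination step on a sampled pulse (delay and fraction values are illustrative) can be sketched as:

      import numpy as np

      def cfd_timestamp(samples, fs, delay_samples=4, fraction=0.5):
          # Constant fraction discrimination: subtract an attenuated copy
          # of the pulse from a delayed copy and locate the zero crossing,
          # whose position is independent of the pulse amplitude.
          delayed = np.roll(samples, delay_samples)
          delayed[:delay_samples] = 0.0
          bipolar = delayed - fraction * samples
          # The crossing follows the most negative point of the bipolar
          # signal; search forward from there.
          start = np.argmin(bipolar)
          for i in range(start, len(bipolar) - 1):
              if bipolar[i] <= 0.0 < bipolar[i + 1]:
                  # Linear interpolation between the two samples.
                  t = i + (-bipolar[i]) / (bipolar[i + 1] - bipolar[i])
                  return t / fs
          return None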

  2. Computational algorithms to predict Gene Ontology annotations

    PubMed Central

    2015-01-01

    Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues: they are incomplete, since biological knowledge is far from definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation of novel annotations is a costly procedure, in both economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures led to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper
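
    As a rough illustration of the Latent Semantic Indexing step described above (the matrix layout and rank are assumptions for this sketch, not the paper's implementation), a truncated SVD of the gene-by-term annotation matrix scores candidate annotations:

      import numpy as np

      def lsi_scores(A, k=50):
          # A: binary gene-by-term annotation matrix. The rank-k
          # reconstruction gives real-valued scores; large values at
          # zero entries of A suggest likely novel annotations.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return (U[:, :k] * s[:k]) @ Vt[:k, :]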

  3. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies, and this demand is likely to increase further. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting, and requiring, multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
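
    A rough sketch of the parallelized assignment step follows, using process-based parallelism rather than the paper's Java transactional-memory design; all names and the chunking scheme are illustrative:

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def _assign_chunk(args):
          # Nearest-center assignment for one chunk of the data.
          X_chunk, centers = args
          d = np.linalg.norm(X_chunk[:, None, :] - centers[None, :, :],
                             axis=2)
          return np.argmin(d, axis=1)

      def parallel_kmeans_step(X, centers, n_workers=4):
          # One k-means iteration: the distance/assignment step is
          # fanned out across cores, then centers are recomputed.
          chunks = np.array_split(X, n_workers)
          with ProcessPoolExecutor(n_workers) as pool:
              labels = np.concatenate(list(pool.map(
                  _assign_chunk, [(c, centers) for c in chunks])))
          new_centers = centers.astype(float).copy()
          for j in range(len(centers)):
              if np.any(labels == j):    # keep empty clusters in place
                  new_centers[j] = X[labels == j].mean(axis=0)
          return new_centers, labels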

  4. Evolving evolutionary algorithms using linear genetic programming.

    PubMed

    Oltean, Mihai

    2005-01-01

    A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly and sometimes even better than standard approaches for several well-known benchmarking problems.

  5. A Traffic Motion Object Extraction Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Shaofei

    2015-12-01

    A motion object extraction algorithm based on the active contour model is proposed. Firstly, moving areas involving shadows are segmented with the classical background difference algorithm. Secondly, shadow detection and coarse removal are performed, and a grid method is used to extract initial contours. Finally, the active contour model approach is adopted to compute the contour of the real object by iteratively tuning the parameters of the model. Experiments show the algorithm can remove the shadow and keep the integrity of a moving object.

  6. Genetic Algorithms Viewed as Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Mocanu, Irina; Kalisz, Eugenia; Negreanu, Lorina

    2010-11-01

    This paper proposes a new version of the genetic algorithm: the anticipatory genetic algorithm (AGA). The performance evaluation included in the paper shows that AGA is superior to the traditional genetic algorithm in terms of both speed and accuracy. The paper also presents how this algorithm can be applied to solve a complex problem: image annotation, intended for use in content-based image retrieval systems.

  7. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes represent only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined on both the Cyber 205 and the Cray X-MP. In addition, multiprocessor implementations of the algorithms are examined on the Cray X-MP and on the MIT static dataflow machine proposed by Dennis.

  8. A hybrid algorithm with GA and DAEM

    NASA Astrophysics Data System (ADS)

    Wan, HongJie; Deng, HaoJiang; Wang, XueWei

    2013-03-01

    Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it has the problem of becoming trapped in local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed and achieved better performance than the EM algorithm, but it is still not very effective at avoiding local maxima. In this paper, a solution is proposed by integrating GA and DAEM into one procedure to further improve the solution quality. The population-based search of the genetic algorithm produces different solutions and thus increases the search space of DAEM; therefore, the proposed algorithm reaches better solutions than DAEM alone. The algorithm retains the annealing property of DAEM and obtains better solutions through genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm can achieve better performance.

  9. Optimal Multistage Algorithm for Adjoint Computation

    SciTech Connect

    Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves

    2016-01-01

    We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.

  10. UWB Tracking Algorithms: AOA and TDOA

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David; Arndt, D.; Ngo, P.; Gross, J.; Refford, Melinda

    2006-01-01

    Ultra-Wideband (UWB) tracking prototype systems are currently under development at NASA Johnson Space Center for various applications in space exploration. For long-range applications, a two-cluster Angle of Arrival (AOA) tracking method is employed; for close-in applications, a Time Difference of Arrival (TDOA) positioning methodology is exploited. Both AOA and TDOA are chosen to utilize the achievable fine time resolution of UWB signals. This talk presents a brief introduction to the AOA and TDOA methodologies. The theoretical analysis of these two algorithms reveals how the relevant parameters affect the tracking resolution. For the AOA algorithm, simulations show that a tracking resolution of less than 0.5% of the range can be achieved with the currently achievable time resolution of UWB signals. For the TDOA algorithm used in close-in applications, simulations show that a high (sub-inch) tracking resolution is achieved with the chosen tracking baseline configuration. The analytical and simulated results provide insightful guidance for the UWB tracking system design.
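
    The flight configuration is specific to the system, but the core TDOA step, solving for a position from measured arrival-time differences, can be sketched with a Gauss-Newton iteration (receiver layout and convergence settings are illustrative):

      import numpy as np

      def tdoa_solve(rx, dt, c=3e8, x0=None, n_iter=20):
          # rx: (n, 3) receiver positions; dt: (n-1,) arrival-time
          # differences relative to receiver 0. Gauss-Newton on the
          # range-difference residuals.
          x = np.mean(rx, axis=0) if x0 is None else x0.astype(float)
          rd = c * np.asarray(dt)             # measured range differences
          for _ in range(n_iter):
              r = np.linalg.norm(rx - x, axis=1)   # ranges to receivers
              res = (r[1:] - r[0]) - rd            # residuals
              # Jacobian of (r_i - r_0) with respect to x.
              J = (x - rx[1:]) / r[1:, None] - (x - rx[0]) / r[0]
              x = x - np.linalg.lstsq(J, res, rcond=None)[0]
          return x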

  11. The Application of Baum-Welch Algorithm in Multistep Attack

    PubMed Central

    Zhang, Yanxue; Zhao, Dongmei; Liu, Jinxing

    2014-01-01

    The biggest difficulty in applying hidden Markov models to multistep attacks is the determination of observations. Research on the determination of observations is still lacking and shows a certain degree of subjectivity. In this regard, we integrate attack intentions with the hidden Markov model (HMM) and propose a method for forecasting multistep attacks based on the HMM. Firstly, we train the existing hidden Markov model(s) with the Baum-Welch algorithm. Then we recognize the alerts belonging to attack scenarios with the forward algorithm. Finally, we forecast the next possible attack sequence with the Viterbi algorithm. The results of simulation experiments show that trained hidden Markov models perform better than untrained ones in recognition and prediction. PMID:24991642
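
    The alert-to-observation mapping is the paper's contribution; the forward recursion it relies on for scoring an observation sequence is standard and can be sketched as:

      import numpy as np

      def forward_log_likelihood(obs, pi, A, B):
          # pi: (S,) initial distribution; A: (S, S) transition matrix;
          # B: (S, O) emission matrix; obs: observation index sequence.
          alpha = pi * B[:, obs[0]]
          log_like = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]
              s = alpha.sum()              # rescale to avoid underflow
              log_like += np.log(s)
              alpha /= s
          return log_like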

  12. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine-ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum and complexity measures. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed, in which the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.

  13. Development of a Scoring Algorithm To Replace Expert Rating for Scoring a Complex Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S.

    1997-01-01

    Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…

  14. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
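
    The provable benefit is derived in the paper's EM framework; the mechanical idea, perturbing the data with annealed noise during k-means iterations, can be sketched as follows (the noise schedule is an illustrative choice):

      import numpy as np

      def noisy_kmeans(X, k, n_iter=50, noise0=0.5, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)].astype(float)
          for t in range(n_iter):
              # Annealed noise: variance decays as the iterations proceed.
              sigma = noise0 / (t + 1)
              Xn = X + rng.normal(0.0, sigma, X.shape)
              d = np.linalg.norm(Xn[:, None, :] - centers[None, :, :],
                                 axis=2)
              labels = d.argmin(axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = X[labels == j].mean(axis=0)
          return centers, labels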

  15. Influence of DBT reconstruction algorithm on power law spectrum coefficient

    NASA Astrophysics Data System (ADS)

    Vancamberg, Laurence; Carton, Ann-Katherine; Abderrahmane, Ilyes H.; Palma, Giovanni; Milioni de Carvalho, Pablo; Iordache, Rǎzvan; Muller, Serge

    2015-03-01

    In breast X-ray images, texture has been characterized by a noise power spectrum (NPS) that has an inverse power-law shape described by its slope β in the log-log domain. It has been suggested that the magnitude of the power-law spectrum coefficient β is related to mass lesion detection performance. We assessed β in reconstructed digital breast tomosynthesis (DBT) images to evaluate its sensitivity to different typical reconstruction algorithms including simple back projection (SBP), filtered back projection (FBP) and a simultaneous iterative reconstruction algorithm (SIRT 30 iterations). Results were further compared to the β coefficient estimated from 2D central DBT projections. The calculations were performed on 31 unilateral clinical DBT data sets and simulated DBT images from 31 anthropomorphic software breast phantoms. Our results show that β highly depends on the reconstruction algorithm; the highest β values were found for SBP, followed by reconstruction with FBP, while the lowest β values were found for SIRT. In contrast to previous studies, we found that β is not always lower in reconstructed DBT slices, compared to 2D projections and this depends on the reconstruction algorithm. All β values estimated in DBT slices reconstructed with SBP were larger than β values from 2D central projections. Our study also shows that the reconstruction algorithm affects the symmetry of the breast texture NPS; the NPS of clinical cases reconstructed with SBP exhibit the highest symmetry, while the NPS of cases reconstructed with SIRT exhibit the highest asymmetry.
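
    A common way to estimate β, assumed here to be the usual radially averaged NPS power-law fit rather than the authors' exact procedure, is:

      import numpy as np

      def power_law_beta(image, pixel_mm=0.1):
          # Estimate the slope beta of the noise power spectrum,
          # NPS(f) ~ 1/f^beta, from a detrended 2D image region.
          img = image - image.mean()
          nps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
          ny, nx = nps.shape
          y, x = np.indices(nps.shape)
          r = np.hypot(x - nx // 2, y - ny // 2)
          # Radial average over integer-radius bins (DC bin excluded).
          bins = r.astype(int).ravel()
          radial = np.bincount(bins, nps.ravel()) / np.bincount(bins)
          m = min(nx, ny) // 2
          f = np.arange(1, m) / (nx * pixel_mm)   # spatial frequency
          slope = np.polyfit(np.log(f), np.log(radial[1:m]), 1)[0]
          return -slope          # beta is the magnitude of the slope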

  16. A combined reconstruction algorithm for computerized ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Wen, D. B.; Ou, J. K.; Yuan, Y. B.

    Ionospheric electron density profiles inverted by tomographic reconstruction of GPS-derived total electron content (TEC) measurements have the potential to become a tool to quantify ionospheric variability and investigate ionospheric dynamics. The problem of reconstructing ionospheric electron density from GPS receiver-to-satellite TEC measurements is formulated as an ill-posed discrete linear inverse problem. A combined reconstruction algorithm for computerized ionospheric tomography (CIT) is proposed in this paper. In this algorithm, Tikhonov regularization theory (TRT) is exploited to solve the ill-posed problem, and its estimate from the GPS observation data is input as the initial guess of the simultaneous iterative reconstruction technique (SIRT). The combined algorithm offers a more reasonable way to choose the initial guess of SIRT, and the SIRT step improves the quality of the final reconstructed image. Numerical experiments on actual GPS observation data validate the reliability of the method; the reconstructed results show that the new algorithm works reasonably and effectively, and the overall reconstruction error is reduced significantly compared with SIRT alone or TRT alone.
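
    A compact sketch of the combination, a Tikhonov solution used to seed SIRT iterations (the regularization parameter and iteration count are illustrative), is:

      import numpy as np

      def combined_trt_sirt(A, b, lam=1e-2, n_iter=100):
          # A: (m, n) geometry matrix of ray intersection lengths;
          # b: (m,) slant TEC measurements.
          m, n = A.shape
          # Tikhonov-regularized least-squares initial guess.
          x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
          # SIRT refinement: row/column-sum normalized backprojection.
          row = A.sum(axis=1); row[row == 0] = 1.0
          col = A.sum(axis=0); col[col == 0] = 1.0
          for _ in range(n_iter):
              x = x + (A.T @ ((b - A @ x) / row)) / col
          return x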

  17. Evaluation of algorithms used to order markers on genetic maps.

    PubMed

    Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F

    2009-12-01

    When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers at a fixed distance of 3 cM. In all, 700 F(2) populations of 100 and 400 individuals were randomly simulated with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the investigated algorithms and criteria (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of repulsion linkage between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE and the LHMC criterion provide better results.

  18. Fast parallel algorithms for short-range molecular dynamics

    SciTech Connect

    Plimpton, S.

    1993-05-01

    Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a subset of atoms; the second assigns each a subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently -- those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 10,000,000 atoms on three parallel supercomputers, the nCUBE 2, Intel iPSC/860, and Intel Delta. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and the Intel Delta performs about 30 times faster than a single Y-MP processor and 12 times faster than a single C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

  19. Simulations of optical autofocus algorithms based on PGA in SAIL

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Liu, Liren; Xu, Qian; Zhou, Yu; Sun, Jianfeng

    2011-09-01

    The phase perturbations due to propagation effects can destroy the high-resolution imagery of Synthetic Aperture Imaging Ladar (SAIL). Several autofocus algorithms for Synthetic Aperture Radar (SAR) have been developed and implemented. The Phase Gradient Algorithm (PGA) is well known for its robustness and wide application, and the Phase Curvature Algorithm (PCA), a similar algorithm, extends the applicability to strip-map mode. In this paper, autofocus algorithms utilized in the optical frequency domain are proposed: optical PGA and optical PCA, implemented in spotlight and strip-map mode respectively. Firstly, the mathematical flows of optical PGA and PCA in SAIL are derived. A simulation model of the airborne SAIL is established, and compensation simulations of synthetic aperture laser images corrupted by random errors, linear phase errors and quadratic phase errors are executed. The compensation effect and the cycle index of the simulation are discussed. The simulation results show that both optical autofocus algorithms are effective, and that the optical PGA outperforms the optical PCA, which is consistent with theory.

  20. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; DoroslovačKi, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements, provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC, RM), wherein the estimation of the directions of arrival (DOA) requires the computation of the roots of a (2N-2)th-order polynomial, where N is the number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L is the number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.
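
    For reference, a standard Root MUSIC implementation, the baseline that the MRP algorithm simplifies (array geometry and inputs are illustrative), is:

      import numpy as np

      def root_music(X, L, d=0.5):
          # X: (N, snapshots) array data; L: number of sources;
          # d: element spacing in wavelengths (uniform linear array).
          N = X.shape[0]
          R = X @ X.conj().T / X.shape[1]          # sample covariance
          w, V = np.linalg.eigh(R)                 # ascending eigenvalues
          En = V[:, :N - L]                        # noise subspace
          C = En @ En.conj().T
          # Coefficients of the (2N-2)-order root polynomial: sums of
          # the diagonals of the noise-subspace projector.
          coeffs = np.array([np.trace(C, offset=k)
                             for k in range(N - 1, -N, -1)])
          roots = np.roots(coeffs)
          # Keep roots inside the unit circle, take the L closest to it.
          roots = roots[np.abs(roots) < 1.0]
          roots = roots[np.argsort(1.0 - np.abs(roots))[:L]]
          return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * d)))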