Casimir experiments showing saturation effects
Sernelius, Bo E.
2009-10-15
We address several Casimir experiments where theory and experiment disagree. The first two are classical Casimir force measurements between two metal half spaces: the torsion pendulum experiment by Lamoreaux and the measurement of the Casimir pressure between a gold sphere and a gold plate performed by Decca et al.; in both cases theory predicts a large negative thermal correction that is absent in the high-precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser-irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall, in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped near a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.
SAGE II inversion algorithm [Stratospheric Aerosol and Gas Experiment]
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Experiments showing dynamics of materials interfaces
Benjamin, R.F.
1997-02-01
The discipline of materials science and engineering often involves understanding and controlling properties of interfaces. The authors address the challenge of educating students about properties of interfaces, particularly dynamic properties and effects of unstable interfaces. A series of simple, inexpensive, hands-on activities about fluid interfaces provides students with a testbed to develop intuition about interface dynamics. The experiments highlight the essential role of initial interfacial perturbations in determining the dynamic response of the interface. The experiments produce dramatic, unexpected effects when initial perturbations are controlled and inhibited. These activities help students to develop insight about unstable interfaces that can be applied to analogous problems in materials science and engineering. The lessons examine "Rayleigh-Taylor instability," an interfacial instability that occurs when a higher-density fluid is above a lower-density fluid.
Worldwide experience shows horizontal well success
Karlsson, H.; Bitto, R.
1989-03-01
The convergence of technology and experience has made horizontal drilling an important tool in increasing production and solving a variety of completion problems. Since the early 1980s, horizontal drilling has been used to improve production on more than 700 oil and gas wells throughout the world. Approximately 200 horizontal wells were drilled in 1988 alone. Interest in horizontal drilling has been accelerating rapidly as service companies have developed and offered new technology for drilling and producing horizontal wells. Simultaneously, oil companies have developed better methods for evaluating reservoirs for potential horizontal applications, while their production departments have gained experience at completing and producing them. To date, most horizontal wells have been drilled in the United States. A major application is to complete naturally fractured formations, such as the Austin chalk in Texas, the Bakken shale in the Williston basin, the Spraberry in West Texas and the Devonian shale in the Eastern states. In addition, many horizontal wells have been drilled to produce the Niagaran reefs and the irregular Antrim shale reservoirs in Michigan.
Children's Art Show: An Educational Family Experience
ERIC Educational Resources Information Center
Bakerlis, Julienne
2007-01-01
In a time of seemingly rampant budget cuts in the arts in school systems throughout the country, a children's art show reaps many rewards. It can strengthen family-school relationships and community ties and stimulate questions and comments about the benefits of art and its significance in the development of young children. In this photo essay of…
Retrieval Algorithms for the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Thompson, Robert E.; Gordley, Larry L.
2009-01-01
The Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) provided high quality measurements of key middle atmosphere constituents, aerosol characteristics, and temperature for 14 years (1991-2005). This report is an outline of the Level 2 retrieval algorithms, and it also describes the great care that was taken in characterizing the instrument prior to launch and throughout its mission life. It represents an historical record of the techniques used to analyze the data and of the steps that must be considered for the development of a similar experiment for future satellite missions.
Algorithmic Animation in Education--Review of Academic Experience
ERIC Educational Resources Information Center
Esponda-Arguero, Margarita
2008-01-01
This article is a review of the pedagogical experience obtained with systems for algorithmic animation. Algorithms consist of a sequence of operations whose effect on data structures can be visualized using a computer. Students learn algorithms by stepping the animation through the different individual operations, possibly reversing their effect.…
Patient Experience Shows Little Relationship with Hospital Quality Management Strategies
Groene, Oliver; Arah, Onyebuchi A.; Klazinga, Niek S.; Wagner, Cordula; Bartels, Paul D.; Kristensen, Solvejg; Saillour, Florence; Thompson, Andrew; Thompson, Caroline A.; Pfaff, Holger; DerSarkissian, Maral; Sunol, Rosa
2015-01-01
Objectives Patient-reported experience measures are increasingly being used to routinely monitor the quality of care. With the increasing attention on such measures, hospital managers seek ways to systematically improve patient experience across hospital departments, in particular where outcomes are used for public reporting or reimbursement. However, it is currently unclear whether hospitals with more mature quality management systems or stronger focus on patient involvement and patient-centered care strategies perform better on patient-reported experience. We assessed the effect of such strategies on a range of patient-reported experience measures. Materials and Methods We employed a cross-sectional, multi-level study design randomly recruiting hospitals from the Czech Republic, France, Germany, Poland, Portugal, Spain, and Turkey between May 2011 and January 2012. Each hospital contributed patient level data for four conditions/pathways: acute myocardial infarction, stroke, hip fracture and deliveries. The outcome variables in this study were a set of patient-reported experience measures including a generic 6-item measure of patient experience (NORPEQ), a 3-item measure of patient-perceived discharge preparation (Health Care Transition Measure) and two single item measures of perceived involvement in care and hospital recommendation. Predictor variables included three hospital management strategies: maturity of the hospital quality management system, patient involvement in quality management functions and patient-centered care strategies. We used directed acyclic graphs to detail and guide the modeling of the complex relationships between predictor variables and outcome variables, and fitted multivariable linear mixed models with random intercept by hospital, and adjusted for fixed effects at the country level, hospital level and patient level. Results Overall, 74 hospitals and 276 hospital departments contributed data on 6,536 patients to this study (acute
Adaptive experiments with a multivariate Elo-type algorithm.
Doebler, Philipp; Alavash, Mohsen; Giessing, Carsten
2015-06-01
The present article introduces the multivariate Elo-type algorithm (META), which is inspired by the Elo rating system, a tool for the measurement of the performance of chess players. The META is intended for adaptive experiments with correlated traits. The relationship of the META to other existing procedures is explained, and useful variants and modifications are discussed. The META was investigated within three simulation studies. The gain in efficiency of the univariate Elo-type algorithm was compared to standard univariate procedures; the impact of using correlational information in the META was quantified; and the adaptability to learning and fatigue was investigated. Our results show that the META is a powerful tool to efficiently control task performance in a short time period and to assess correlated traits. The R code of the simulations, the implementation of the META in MATLAB, and an example of how to use the META in the context of neuroscience are provided in supplemental materials. PMID:24878597
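The core of an Elo-type adjustment can be sketched in a few lines; the following is a minimal univariate illustration (not the META itself), with an illustrative step size `k` and a logistic (Rasch-style) success expectancy:

```python
import math

def elo_update(theta, delta, outcome, k=0.1):
    """One Elo-type step: nudge the ability estimate `theta` and the
    item difficulty `delta` after a correct (1) or incorrect (0) response."""
    # Expected success probability from a logistic (Rasch-style) model
    expected = 1.0 / (1.0 + math.exp(-(theta - delta)))
    surprise = outcome - expected
    # Ability and difficulty move in opposite directions
    return theta + k * surprise, delta - k * surprise

theta, delta = 0.0, 0.0
theta, delta = elo_update(theta, delta, outcome=1)  # one correct response
print(theta, delta)  # ability rises, difficulty falls
```

In an adaptive experiment the next item is then chosen so that `expected` is close to a target success rate; the META extends this idea to several correlated traits at once.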
Experiments on Supervised Learning Algorithms for Text Categorization
NASA Technical Reports Server (NTRS)
Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.
2005-01-01
Modern information society is facing the challenge of handling a massive volume of online documents, news, intelligence reports, and so on. How to use this information accurately and in a timely manner becomes a major concern in many areas. While general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pairwise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
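As a rough illustration of the processing flow (tokenize, vectorize, classify), here is a toy nearest-centroid classifier; it stands in for SVM/PLS only to show why a direct multi-class scheme needs no pairwise voting, and all the data and labels below are invented:

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train_centroids(samples):
    """samples: (text, label) pairs; one summed vector per class, so
    multi-class assignment needs no pairwise voting."""
    centroids = {}
    for text, label in samples:
        centroids.setdefault(label, Counter()).update(vectorize(text))
    return centroids

def classify(text, centroids):
    v = vectorize(text)
    return max(centroids, key=lambda lab: cosine(v, centroids[lab]))

train = [("merger acquisition deal announced", "ACQ"),
         ("shares acquired in corporate takeover", "ACQ"),
         ("team wins championship game", "SPORT"),
         ("player scores in final match", "SPORT")]
model = train_centroids(train)
print(classify("company announced acquisition deal", model))  # ACQ
```

Each class is scored directly against the document, so adding a fourth or fifth category changes nothing structurally; in pairwise SVM voting the number of binary classifiers grows quadratically with the number of classes.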
MREIT conductivity imaging based on the local harmonic Bz algorithm: Animal experiments
NASA Astrophysics Data System (ADS)
Jeon, Kiwan; Lee, Chang-Ock; Woo, Eung Je; Kim, Hyung Joong; Seo, Jin Keun
2010-04-01
From numerous numerical and phantom experiments, MREIT conductivity imaging based on the harmonic Bz algorithm shows that it could be yet another useful medical imaging modality. However, in animal experiments the conventional harmonic Bz algorithm gives poor results near the boundaries of problematic regions such as bones, lungs, and the gas-filled stomach, and near the subject boundary where electrodes are not attached. Since the amount of injected current is kept low enough to ensure the safety of the in vivo animal, the measured Bz data are contaminated by severe noise. In order to handle such problems, we use the recently developed local harmonic Bz algorithm to obtain conductivity images in our ROI (region of interest) without involving the problematic regions. Furthermore, we adopt a denoising algorithm that preserves the ramp structure of the Bz data, which indicates the location and size of an anomaly. Incorporating these techniques, we provide conductivity images from post-mortem and in vivo animal experiments with high spatial resolution.
A new map-making algorithm for CMB polarization experiments
NASA Astrophysics Data System (ADS)
Wallis, Christopher G. R.; Bonaldi, A.; Brown, Michael L.; Battye, Richard A.
2015-10-01
With the temperature power spectrum of the cosmic microwave background (CMB) at least four orders of magnitude larger than the B-mode polarization power spectrum, any instrumental imperfections that couple temperature to polarization must be carefully controlled and/or removed. Here we present two new map-making algorithms that can create polarization maps that are clean of temperature-to-polarization leakage systematics due to differential gain and pointing between a detector pair. Where a half-wave plate is used, we show that the spin-2 systematic due to differential ellipticity can also be removed using our algorithms. The algorithms require no prior knowledge of the imperfections or temperature sky to remove the temperature leakage. Instead, they calculate the systematic and polarization maps in one step directly from the time-ordered data (TOD). The first algorithm is designed to work with scan strategies that have a good range of crossing angles for each map pixel and the second for scan strategies that have a limited range of crossing angles. The first algorithm can also be used to identify if systematic errors that have a particular spin are present in a TOD. We demonstrate the use of both algorithms and the ability to identify systematics with simulations of TOD with realistic scan strategies and instrumental noise.
Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer
NASA Technical Reports Server (NTRS)
Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw
2000-01-01
Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure with up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
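The division of labor described above — independent fitness evaluations farmed out concurrently, with Gaussian perturbation replacing bit exchange — can be sketched as follows; the objective, population size, and mutation scale are illustrative choices, not those of the paper:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(x):
    """Toy objective (to be maximized): negative sum of squares."""
    return -sum(v * v for v in x)

def evolve(pop_size=40, n_vars=5, sigma=0.3, generations=150, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n_vars)]
           for _ in range(pop_size)]
    with ThreadPoolExecutor() as pool:
        for _ in range(generations):
            # Analyses of individual designs are independent, so they
            # can run concurrently (threads here; separate processors
            # in the paper's setting)
            scores = list(pool.map(fitness, pop))
            # Selection is the non-distributable, synchronizing step
            ranked = [x for _, x in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]
            # Gaussian perturbation replaces bit-exchange crossover
            pop = parents + [[g + rng.gauss(0, sigma) for g in p]
                             for p in parents]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum value 0
```

The `sorted`/selection step is the serial bottleneck the paper discusses: every worker must finish before ranking can proceed, which caps the useful number of processors.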
Development of clustering algorithms for Compressed Baryonic Matter experiment
NASA Astrophysics Data System (ADS)
Kozlov, G. E.; Ivanov, V. V.; Lebedev, A. A.; Vassiliev, Yu. O.
2015-05-01
A clustering problem for the coordinate detectors in the Compressed Baryonic Matter (CBM) experiment is discussed. Because of the high interaction rate and huge datasets to be dealt with, clustering algorithms are required to be fast and efficient and capable of processing events with high track multiplicity. At present there are two different approaches to the problem. In the first one each fired pad bears information about its charge, while in the second one a pad can or cannot be fired, thus rendering the separation of overlapping clusters a difficult task. To deal with the latter, two different clustering algorithms were developed, integrated into the CBMROOT software environment, and tested with various types of simulated events. Both of them are found to be highly efficient and accurate.
Experience with CANDID: Comparison algorithm for navigating digital image databases
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
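A toy version of the signature-and-match idea might look like this; a normalized gray-level histogram stands in for CANDID's texture/shape/color signatures, and histogram overlap stands in for its similarity measure (both are assumptions for illustration):

```python
def signature(pixels, bins=8):
    """Crude global signature: a normalized gray-level histogram, a
    stand-in for CANDID's texture/shape/color density signatures."""
    hist = [0] * bins
    for p in pixels:                      # pixel values in 0..255
        hist[min(p * bins // 256, bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def similarity(sig_a, sig_b):
    """Overlap between two normalized histograms, in [0, 1]."""
    return sum(min(a, b) for a, b in zip(sig_a, sig_b))

# Query-by-example: rank stored signatures against the example's
query = signature([10, 12, 200, 210, 220, 230])
db = {"dark": signature([5, 8, 11, 14, 20, 30]),
      "bright": signature([190, 200, 210, 220, 230, 240])}
best = max(db, key=lambda name: similarity(query, db[name]))
print(best)  # bright
```

Because only the small signatures are compared at query time, the full images never need to be touched during retrieval, which is what makes the approach practical for large databases.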
Experience with imaging algorithms on multiple core CPUs
NASA Astrophysics Data System (ADS)
Moore, Richard
2011-01-01
With the release of an eight core Xeon processor by Intel and a twelve core Opteron processor by AMD in the spring of 2010, the increase of multiple cores per chip package continues. Multiple core processors are common place in most workstations sold today and are an attractive option for increasing imaging performance. Visual attention models are very compute intensive, requiring many imaging algorithms to be run on images such as large difference of Gaussian filters, segmentation, and region finding. In this paper we present our experience in optimizing the performance of a visual attention model on standard multi-core Windows workstations.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
Pile-Up Discrimination Algorithms for the HOLMES Experiment
NASA Astrophysics Data System (ADS)
Ferri, E.; Alpert, B.; Bennett, D.; Faverzani, M.; Fowler, J.; Giachero, A.; Hays-Wehle, J.; Maino, M.; Nucciotti, A.; Puiu, A.; Ullom, J.
2016-07-01
The HOLMES experiment is a new large-scale experiment for the electron neutrino mass determination by means of the electron capture decay of ^{163}Ho. In such an experiment, random coincidence events are one of the main sources of background which impair the ability to identify the effect of a non-vanishing neutrino mass. In order to resolve these spurious events, detectors characterized by a fast response are needed as well as pile-up recognition algorithms. For that reason, we have developed a code for testing the discrimination efficiency of various algorithms in recognizing pile-up events as a function of the time separation between two pulses. The tests are performed on simulated realistic TES signals and noise. Indeed, the pulse profile is obtained by solving the two coupled differential equations which describe the response of the TES according to the Irwin-Hilton model. To these pulses, a noise waveform which takes into account all the noise sources regularly present in a real TES is added. The amplitude of the generated pulses is distributed as the ^{163}Ho calorimetric spectrum. Furthermore, the rise time of these pulses has been chosen taking into account the constraints given by both the bandwidth of the microwave multiplexing readout with flux ramp demodulation and the bandwidth of the ADC boards currently available for ROACH2. Among the different rejection techniques evaluated, the Wiener Filter technique, a digital filter to gain time resolution, has shown an excellent pile-up rejection efficiency. The obtained time resolution closely matches the baseline specifications of the HOLMES experiment. We report here a description of our simulation code and a comparison of the different rejection techniques.
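A toy version of the pile-up problem can be simulated with idealized two-exponential pulses and a naive edge-counting discriminator; the time constants and threshold below are invented, and this is far simpler than the Wiener-filter approach the authors evaluate:

```python
import math

def pulse(t, t0, tau_r=5.0, tau_d=50.0):
    """Idealized detector pulse: fast exponential rise, slow decay
    (time constants are invented, not the HOLMES TES values)."""
    if t < t0:
        return 0.0
    dt = t - t0
    return math.exp(-dt / tau_d) - math.exp(-dt / tau_r)

def count_pulses(trace, threshold=0.02):
    """Count distinct rising edges in the discrete derivative; two
    edges in one record flag an unresolved coincidence (pile-up)."""
    edges, above = 0, False
    for i in range(1, len(trace)):
        rising = (trace[i] - trace[i - 1]) > threshold
        if rising and not above:
            edges += 1
        above = rising
    return edges

ts = range(400)
single = [pulse(t, 50) for t in ts]               # one event
piled = [pulse(t, 50) + pulse(t, 200) for t in ts]  # two-event pile-up
print(count_pulses(single), count_pulses(piled))  # 1 2
```

On noiseless traces this derivative test works down to separations of a few rise times; with realistic TES noise the edges blur, which is why filtering techniques that trade bandwidth for time resolution are needed in practice.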
Experiences and evolutions of the ALICE DAQ Detector Algorithms framework
NASA Astrophysics Data System (ADS)
Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy
2012-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are regularly calibrated in order to achieve the most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration results can be directly used for detector electronics configuration before physics data taking, at run time for online event monitoring, and offline for data analysis. A framework was designed to collect statistics and compute calibration parameters, and has been used in production since 2008. This paper focuses on the recent features developed to benefit from the multi-core architecture of CPUs, and to optimize the processing power available for the calibration tasks. It involves some C++ base classes to effectively implement detector-specific code, with independent processing of events in parallel threads and aggregation of partial results. The Detector Algorithm (DA) framework provides utility interfaces for handling of input and output (configuration, monitored physics data, results, logging), and self-documentation of the produced executable. New algorithms are created quickly by inheritance of base functionality and implementation of a few ad hoc virtual members, while the framework features are kept expandable thanks to the isolation of the detector calibration code. The DA control system also handles unexpected process behaviour, logs execution status, and collects performance statistics.
STS-42 closeup view shows SE 81-09 Convection in Zero Gravity experiment
NASA Technical Reports Server (NTRS)
1992-01-01
STS-42 closeup view shows Student Experiment 81-09 (SE 81-09), the Convection in Zero Gravity experiment, with a radial pattern caused by convection induced by heating an oil and aluminum powder mixture in the weightlessness of space. While the STS-42 crewmembers activated the Shuttle Student Involvement Program (SSIP) experiment on the middeck of Discovery, Orbiter Vehicle (OV) 103, Scott Thomas, the student who designed the experiment, was able to observe the procedures via downlinked television (TV) in JSC's Mission Control Center (MCC). Thomas, now a physics doctoral student at the University of Texas, came up with the experiment while he participated in the SSIP as a student at Richland High School in Johnstown, Pennsylvania.
Experiments with conjugate gradient algorithms for homotopy curve tracking
NASA Technical Reports Server (NTRS)
Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.
1991-01-01
There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
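For reference, the unpreconditioned conjugate gradient kernel that such curve trackers build upon can be sketched as follows (this is the textbook method for a symmetric positive-definite system, not HOMPACK's preconditioned variant):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain CG for a symmetric positive-definite system A x = b."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A*x with x = 0
    p = r[:]                      # first search direction
    rs_old = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:          # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # classic small SPD test matrix
b = [1.0, 2.0]
x = conjugate_gradient(A, b)      # exact solution is (1/11, 7/11)
```

In the homotopy setting the matrix-vector product `A p` would involve the (sparse) homotopy Jacobian, and a preconditioner would replace the raw residual updates; the skeleton of the iteration is the same.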
Eigensystem realization algorithm modal identification experiences with mini-mast
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Schenk, Axel; Noll, Christopher
1992-01-01
This paper summarizes work performed under a collaborative research effort between the National Aeronautics and Space Administration (NASA) and the German Aerospace Research Establishment (DLR, Deutsche Forschungsanstalt fur Luft- und Raumfahrt). The objective is to develop and demonstrate system identification technology for future large space structures. Recent experiences using the Eigensystem Realization Algorithm (ERA) for modal identification of Mini-Mast are reported. Mini-Mast is a 20 m long deployable space truss used for structural dynamics and active vibration-control research at the Langley Research Center. A comprehensive analysis of 306 frequency response functions (3 excitation forces and 102 displacement responses) was performed. Emphasis is placed on two topics of current research: (1) gaining an improved understanding of ERA performance characteristics (theory vs. practice); and (2) developing reliable techniques to improve identification results for complex experimental data. Because of nonlinearities and numerous local modes, modal identification of Mini-Mast proved to be surprisingly difficult. Techniques were available, however, for obtaining detailed, high-confidence results with ERA.
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Brown, Jonathan L.
2008-01-01
Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…
A field experiment shows that subtle linguistic cues might not affect voter behavior.
Gerber, Alan S; Huber, Gregory A; Biggers, Daniel R; Hendry, David J
2016-06-28
One of the most important recent developments in social psychology is the discovery of minor interventions that have large and enduring effects on behavior. A leading example of this class of results is in the work by Bryan et al. [Bryan CJ, Walton GM, Rogers T, Dweck CS (2011) Proc Natl Acad Sci USA 108(31):12653-12656], which shows that administering a set of survey items worded so that subjects think of themselves as voters (noun treatment) rather than as voting (verb treatment) substantially increases political participation (voter turnout) among subjects. We revisit these experiments by replicating and extending their research design in a large-scale field experiment. In contrast to the 11 to 14% point greater turnout among those exposed to the noun rather than the verb treatment reported in the work by Bryan et al., we find no statistically significant difference in turnout between the noun and verb treatments (the point estimate of the difference is approximately zero). Furthermore, when we benchmark these treatments against a standard get out the vote message, we estimate that both are less effective at increasing turnout than a much shorter basic mobilization message. In our conclusion, we detail how our study differs from the work by Bryan et al. and discuss how our results might be interpreted. PMID:27298362
Experiences with the PGAPack Parallel Genetic Algorithm library
Levine, D.; Hallstrom, P.; Noelle, D.; Walenz, B.
1997-07-01
PGAPack is the first widely distributed parallel genetic algorithm library. Since its release, several thousand copies have been distributed worldwide to interested users. In this paper we discuss the key components of the PGAPack design philosophy and present a number of application examples that use PGAPack.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi Chaotic Map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
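A sketch of the idea: iterate the Lozi map and rescale its output to [0, 1] to replace the uniform random draws in the PSO velocity update. The rescaling constants and PSO parameters below are illustrative assumptions, not the tuned values from the paper:

```python
def lozi_stream(a=1.7, b=0.5, x=0.1, y=0.1):
    """Iterate the Lozi map x' = 1 - a*|x| + b*y, y' = x, and rescale
    the output to [0, 1] (the rescaling is an illustrative choice)."""
    while True:
        x, y = 1.0 - a * abs(x) + b * y, x
        yield min(max((x + 1.5) / 3.0, 0.0), 1.0)

def pso_step(positions, velocities, pbest, gbest, rand,
             w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update, drawing r1, r2 from `rand`
    instead of a uniform pseudo-random generator."""
    for i, (pos, vel) in enumerate(zip(positions, velocities)):
        for d in range(len(pos)):
            r1, r2 = next(rand), next(rand)
            vel[d] = (w * vel[d]
                      + c1 * r1 * (pbest[i][d] - pos[d])
                      + c2 * r2 * (gbest[d] - pos[d]))
            pos[d] += vel[d]

# One particle at (1, 1) pulled toward the best-known point (0, 0)
positions, velocities = [[1.0, 1.0]], [[0.0, 0.0]]
pso_step(positions, velocities, pbest=[[0.0, 0.0]], gbest=[0.0, 0.0],
         rand=lozi_stream())
```

Tuning `a` and `b` changes the statistics of the chaotic stream (its correlations and coverage of [0, 1]), which is precisely the effect on PSO performance the paper measures.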
Nørskov, Natalja P; Hedemann, Mette S; Theil, Peter K; Fomsgaard, Inge S; Laursen, Bente B; Knudsen, Knud Erik Bach
2013-09-18
The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study the absorption of phenolic acids. The difference between the artery and the vein for all phenolic acids was small, indicating that the release of phenolic acids in the large intestine was not sufficient to create a porto-arterial concentration difference. Although the porto-arterial difference was small, their concentrations in the plasma and the absorption profiles differed between cinnamic and benzoic acid derivatives. Cinnamic acid derivatives such as ferulic acid and caffeic acid had maximum plasma concentrations of 82 ± 20 and 200 ± 7 nM, respectively, and their absorption profiles differed depending on the diet consumed. Benzoic acid derivatives showed low concentrations in the plasma (<30 nM) and in the diets. The exception was p-hydroxybenzoic acid, with a plasma concentration (4 ± 0.4 μM) much higher than the other plant phenolic acids, likely because it is an intermediate in the phenolic acid metabolism. It was concluded that plant phenolic acids undergo extensive interconversion in the colon and that their absorption profiles reflected their low bioavailability in the plant matrix. PMID:23971623
NASA Astrophysics Data System (ADS)
Diaz, Alejandro; Álvarez, Isaac; De la Torre, Ángel; García, Luz; Benítez, Ma Carmen; Cortés, Guillermo
2014-05-01
The detection of the arrival time of seismic waves, or picking, is of great importance in many seismology applications. Traditionally, picking has been carried out by human operators. This process is not systematic and relies completely on the expertise and judgment of the analysts. The limitations of manual picking and the increasing amount of data stored daily in seismic networks distributed worldwide and in active seismic experiments have led to the development of automatic picking algorithms. Current conventional algorithms work with single signals; examples are the "short-term average over long-term average" (STA/LTA) algorithm, autoregressive methods, and the recently developed "Adaptive Multiband Picking Algorithm" (AMPA). This work proposes a correlation-based picking algorithm whose main advantage is that it uses the information of a set of signals, improving the signal-to-noise ratio and therefore the picking accuracy. A further advantage of this approach is that the algorithm does not require sophisticated parameter setup, in contrast to other automatic algorithms. The accuracy of the conventional STA/LTA algorithm, the recently developed AMPA algorithm, an autoregressive method, and a preliminary version of the cross-correlation-based picking algorithm were assessed using a large data set composed of active seismic signals from experiments on Tenerife Island (January 2007, Spain). The experiment consisted of the deployment of a dense seismic network on Tenerife Island (125 seismometers in total) and the shooting of air-guns around the island with the Spanish oceanographic vessel Hespérides (6459 air shots in total). Only 110937 signals (13.74% of the total) had a signal-to-noise ratio sufficient for manual picking. Results showed that the use of the cross-correlation-based picking algorithm significantly increases the number of signals that can be considered in the tomography. A new active seismic experiment will cover Sicily and the Aeolian Islands (TOMO
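The conventional STA/LTA trigger mentioned above can be sketched in a few lines: compute the ratio of a short-term to a long-term moving average of the signal energy and declare a pick where the ratio crosses a threshold. Window lengths and the threshold below are illustrative choices, not the paper's settings.

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """STA/LTA characteristic function: ratio of short-term to long-term
    moving averages of signal energy (x**2), both windows ending at i."""
    energy = x.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    ratio = np.zeros_like(energy)          # undefined before one full LTA window
    ratio[n_lta - 1:] = sta[n_lta - n_sta:] / (lta + 1e-12)
    return ratio

# Synthetic trace: unit-variance noise, then a stronger arrival at sample 500.
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 1000)
trace[500:] += rng.normal(0.0, 5.0, 500)

cf = sta_lta(trace, n_sta=20, n_lta=200)
pick = int(np.argmax(cf > 4.0))   # first sample where the ratio crosses the trigger
```

The correlation-based approach of the paper would replace the single-trace characteristic function with one built from a stack of aligned traces.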
Video tracking algorithm of long-term experiment using stand-alone recording system
NASA Astrophysics Data System (ADS)
Chen, Yu-Jen; Li, Yan-Chay; Huang, Ke-Nung; Jen, Sun-Lon; Young, Ming-Shing
2008-08-01
Many medical and behavioral applications require the ability to monitor and quantify the behavior of small animals. In general these animals are confined in small cages, and often very large numbers of cages are involved. Modern research facilities commonly monitor thousands of animals simultaneously over long periods of time. However, conventional systems require one personal computer per monitoring platform, which is too complex and expensive and increases power consumption for large laboratory applications. This paper presents a simplified video tracking algorithm for long-term recording using a stand-alone system. The presented tracking algorithm computes quickly, has small data storage requirements, and needs minimal hardware. The stand-alone system automatically performs tracking and saves the acquired data to a secure digital card. The proposed system is designed for video collected at 640×480 pixel resolution with 16-bit color, and the tracking result is updated at 30 frames/s. Only the locomotion data are stored, so the data storage requirements are minimized. In addition, the designed algorithm detects the Cb and Cr values of a colored marker affixed to the target to define the tracked position, which allows multi-object tracking against complex backgrounds. A preliminary experiment showed that the tracking information stored by the portable, stand-alone system can provide comprehensive information on an animal's activity.
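Cb/Cr marker detection of this kind can be sketched as follows: convert RGB to chroma, threshold a chromaticity window around the marker color, and take the centroid of the surviving pixels. The BT.601 conversion is standard; the window below is an illustrative choice for a red marker, not the paper's values.

```python
import numpy as np

def rgb_to_cbcr(img):
    """ITU-R BT.601 RGB -> (Cb, Cr); inputs in 0..255."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def track_marker(img, cb_range=(77, 127), cr_range=(160, 240)):
    """Locate a colored marker as the centroid of pixels whose Cb/Cr fall
    inside the marker's chromaticity window (illustrative red window)."""
    cb, cr = rgb_to_cbcr(img)
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Synthetic 640x480 frame: gray background, red marker centered at (100, 200).
frame = np.full((480, 640, 3), 90, dtype=np.uint8)
frame[195:206, 95:106] = (200, 30, 30)   # red patch
pos = track_marker(frame)
```

Using chroma only makes the threshold insensitive to overall brightness, which is what lets the marker survive complex backgrounds.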
F-18 SRA closeup of nose cap showing Advanced L-Probe Air Data Integration experiment
NASA Technical Reports Server (NTRS)
1997-01-01
This L-shaped probe mounted on the forward fuselage of a modified F-18 Systems Research Aircraft was the focus of an air data collection experiment flown at NASA's Dryden Flight Research Center, Edwards, California. The Advanced L-Probe Air Data Integration (ALADIN) experiment focused on providing pilots with angle-of-attack and angle-of-sideslip information as well as traditional airspeed and altitude data from a single system. For the experiment, the probes--one mounted on either side of the F-18's forward fuselage--were hooked to a series of four transducers, which relayed pressure measurements to an on-board research computer.
Kry, Stephen F.; Alvarez, Paola; Molineu, Andrea; Amador, Carrie; Galvin, James; Followill, David S.
2012-01-01
Purpose To determine the impact of treatment planning algorithm on the accuracy of heterogeneous dose calculations in the Radiological Physics Center (RPC) thorax phantom. Methods and Materials We retrospectively analyzed the results of 304 irradiations of the RPC thorax phantom at 221 different institutions as part of credentialing for RTOG clinical trials; the irradiations were all done using 6-MV beams. Treatment plans included those for intensity-modulated radiation therapy (IMRT) as well as 3D conformal therapy (3D CRT). Heterogeneous plans were developed using Monte Carlo (MC), convolution/superposition (CS) and the anisotropic analytic algorithm (AAA), as well as pencil beam (PB) algorithms. For each plan and delivery, the absolute dose measured in the center of a lung target was compared to the calculated dose, as was the planar dose in 3 orthogonal planes. The difference between measured and calculated dose was examined as a function of planning algorithm as well as use of IMRT. Results PB algorithms overestimated the dose delivered to the center of the target by 4.9% on average. Surprisingly, CS algorithms and AAA also showed a systematic overestimation of the dose to the center of the target, by 3.7% on average. In contrast, the MC algorithm dose calculations agreed with measurement within 0.6% on average. There was no difference observed between IMRT and 3D CRT calculation accuracy. Conclusion Unexpectedly, advanced treatment planning systems (those using CS and AAA algorithms) overestimated the dose that was delivered to the lung target. This issue requires attention in terms of heterogeneity calculations and potentially in terms of clinical practice. PMID:23237006
Lullaby Light Shows: Everyday Musical Experience among Under-Two-Year-Olds
ERIC Educational Resources Information Center
Young, Susan
2008-01-01
This article reports on information gathered from a set of interviews carried out with 88 mothers of under-two-year-olds. The interviews enquired about the everyday musical experiences of their babies and very young children in the home. From the process of analysis, the responses to the interviews were grouped into three main areas: musical…
Real Science: MIT Reality Show Tracks Experiences, Frustrations of Chemistry Lab Students
ERIC Educational Resources Information Center
Cooper, Kenneth J.
2012-01-01
A reality show about a college course--a chemistry class no less? That's what "ChemLab Boot Camp" is. The 14-part series of short videos is being released one episode at a time on the online learning site of the Massachusetts Institute of Technology. The novel show follows a diverse group of 14 freshmen as they struggle to master the laboratory…
Multiagent pursuit-evasion games: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Kim, Hyounjin
Deployment of intelligent agents has been made possible through advances in control software, microprocessors, sensor/actuator technology, communication technology, and artificial intelligence. Intelligent agents now play important roles in many applications where human operation is too dangerous or inefficient. There is little doubt that the world of the future will be filled with intelligent robotic agents employed to autonomously perform tasks, or embedded in systems all around us, extending our capabilities to perceive, reason and act, and replacing human efforts. There are numerous real-world applications in which a single autonomous agent is not suitable and multiple agents are required. However, after years of active research in multi-agent systems, current technology is still far from achieving many of these real-world applications. Here, we consider the problem of deploying a team of unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV) to pursue a second team of UGV evaders while concurrently building a map in an unknown environment. This pursuit-evasion game encompasses many of the challenging issues that arise in operations using intelligent multi-agent systems. We cast the problem in a probabilistic game theoretic framework and consider two computationally feasible pursuit policies: greedy and global-max. We also formulate this probabilistic pursuit-evasion game as a partially observable Markov decision process and employ a policy search algorithm to obtain a good pursuit policy from a restricted class of policies. The estimated value of this policy is guaranteed to be uniformly close to the optimal value in the given policy class under mild conditions. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent yet allows for coordinated team efforts. We then describe our implementation on a fleet of UGVs and UAVs, detailing components such
Sinaiko, Anna D.; Ross-Degnan, Dennis; Soumerai, Stephen B.; Lieu, Tracy; Galbraith, Alison
2014-01-01
In 2022 twenty-five million people are expected to purchase health insurance through exchanges to be established under the Affordable Care Act. Understanding how people seek information and make decisions about the insurance plans that are available to them may improve their ability to select a plan and their satisfaction with it. We conducted a survey in 2010 of enrollees in one plan offered through Massachusetts’s unsubsidized health insurance exchange to analyze how a sample of consumers selected their plans. More than 40 percent found plan information difficult to understand. Approximately one-third of respondents had help selecting plans—most commonly from friends or family members. However, one-fifth of respondents wished they had had help narrowing plan choices; these enrollees were more likely to report negative experiences related to plan understanding, satisfaction with affordability and coverage, and unexpected costs. Some may have been eligible for subsidized plans. Exchanges may need to provide more resources and decision-support tools to improve consumers’ experiences in selecting a health plan. PMID:23297274
Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR
NASA Astrophysics Data System (ADS)
Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz,
2015-12-01
P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged-particle tracks, optimized for online data processing applications, using general-purpose graphics processing units (GPUs). Two track-finding algorithms, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using message queues is presented.
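The abstract gives no implementation details, but a standard trick in the spirit of a Circle Hough, when tracks are circles through the interaction point, is to apply the conformal map u = x/r², v = y/r², which sends such circles to straight lines, and then run an ordinary line Hough. The following is a sketch under that assumption, not the collaboration's actual code.

```python
import numpy as np

def conformal(x, y):
    """u = x/r^2, v = y/r^2: circles through the origin map to straight lines."""
    r2 = x ** 2 + y ** 2
    return x / r2, y / r2

def line_hough(u, v, n_theta=180, n_rho=100):
    """Minimal (theta, rho) accumulator for the line u*cos(t) + v*sin(t) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.outer(u, np.cos(thetas)) + np.outer(v, np.sin(thetas))
    rho_max = np.abs(rhos).max()
    bins = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for t in range(n_theta):
        np.add.at(acc[t], np.clip(bins[:, t], 0, n_rho - 1), 1)
    return acc, thetas

# 30 hits on a circle of radius 10 through the origin (center (6, 8)),
# plus 40 random noise hits; the track shows up as an accumulator peak.
rng = np.random.default_rng(7)
ang = rng.uniform(0.2, 2.8, 30)
hx, hy = 6.0 + 10.0 * np.cos(ang), 8.0 + 10.0 * np.sin(ang)
nx, ny = rng.uniform(1, 20, 40), rng.uniform(1, 20, 40)
u, v = conformal(np.concatenate([hx, nx]), np.concatenate([hy, ny]))
acc, thetas = line_hough(u, v)
```

On a GPU, each (hit, theta) pair becomes an independent thread and the accumulator update an atomic add, which is what makes Hough-style methods attractive for online tracking.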
Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent
WALL, DIANA H; BRADFORD, MARK A; ST JOHN, MARK G; TROFYMOW, JOHN A; BEHAN-PELLETIER, VALERIE; BIGNELL, DAVID E; DANGERFIELD, J MARK; PARTON, WILLIAM J; RUSEK, JOSEF; VOIGT, WINFRIED; WOLTERS, VOLKMAR; GARDEL, HOLLEY ZADEH; AYUKE, FRED O; BASHFORD, RICHARD; BELJAKOVA, OLGA I; BOHLEN, PATRICK J; BRAUMAN, ALAIN; FLEMMING, STEPHEN; HENSCHEL, JOH R; JOHNSON, DAN L; JONES, T HEFIN; KOVAROVA, MARCELA; KRANABETTER, J MARTY; KUTNY, LES; LIN, KUO-CHUAN; MARYATI, MOHAMED; MASSE, DOMINIQUE; POKARZHEVSKII, ANDREI; RAHMAN, HOMATHEVI; SABARÁ, MILLOR G; SALAMON, JOERG-ALFRED; SWIFT, MICHAEL J; VARELA, AMANDA; VASCONCELOS, HERALDO L; WHITE, DON; ZOU, XIAOMING
2008-01-01
Climate and litter quality are primary drivers of terrestrial decomposition and, based on evidence from multisite experiments at regional and global scales, are universally factored into global decomposition models. In contrast, soil animals are considered key regulators of decomposition at local scales but their role at larger scales is unresolved. Soil animals are consequently excluded from global models of organic mineralization processes. Incomplete assessment of the roles of soil animals stems from the difficulties of manipulating invertebrate animals experimentally across large geographic gradients. This is compounded by deficient or inconsistent taxonomy. We report a global decomposition experiment to assess the importance of soil animals in C mineralization, in which a common grass litter substrate was exposed to natural decomposition in either control or reduced animal treatments across 30 sites distributed from 43°S to 68°N on six continents. Animals in the mesofaunal size range were recovered from the litter by Tullgren extraction and identified to common specifications, mostly at the ordinal level. The design of the trials enabled faunal contribution to be evaluated against abiotic parameters between sites. Soil animals increase decomposition rates in temperate and wet tropical climates, but have neutral effects where temperature or moisture constrain biological activity. Our findings highlight that faunal influences on decomposition are dependent on prevailing climatic conditions. We conclude that (1) inclusion of soil animals will improve the predictive capabilities of region- or biome-scale decomposition models, (2) soil animal influences on decomposition are important at the regional scale when attempting to predict global change scenarios, and (3) the statistical relationship between decomposition rates and climate, at the global scale, is robust against changes in soil faunal abundance and diversity.
A new FOD recognition algorithm based on multi-source information fusion and experiment analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Xiao, Gang
2011-08-01
Foreign Object Debris (FOD) is any substance, debris, or article alien to an aircraft or system that can potentially cause serious damage when it appears on an airport runway. Given the complex airport environment, quick and precise detection of FOD targets on the runway is an important protection for aircraft safety. This paper introduces a multi-sensor system combining millimeter-wave radar and infrared (IR) image sensors and proposes a new FOD detection and recognition algorithm based on inherent features of FOD. First, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar; guided by these coordinates, the IR camera then takes target images and background images. Second, the runway's edges, which are straight lines, are extracted from the IR image using the Hough transform, and the potential target region, that is, the runway region, is segmented from the whole image. Third, background subtraction is used to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used in target classification. The experimental results show that this algorithm effectively reduces computational complexity, satisfies the real-time requirement, and achieves high detection and recognition probability.
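The background-subtraction step of such a pipeline can be sketched as follows; the threshold and the synthetic images are illustrative, not values from the paper.

```python
import numpy as np

def detect_fod(background, current, thresh=25):
    """Localize a foreign object as the bounding box of pixels that differ
    from the reference background image by more than `thresh` gray levels."""
    diff = np.abs(current.astype(int) - background.astype(int))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

# Synthetic IR-like runway image with a small bright object added.
rng = np.random.default_rng(3)
bg = rng.integers(60, 80, size=(120, 160)).astype(np.uint8)
cur = bg.copy()
cur[40:46, 70:78] = 180          # hypothetical FOD target
box = detect_fod(bg, cur)        # (x_min, y_min, x_max, y_max)
```

In the full system this step would run only inside the runway region segmented by the Hough line extraction, which is what keeps the computation cheap.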
Pest control experiments show benefits of complexity at landscape and local scales.
Chaplin-Kramer, Rebecca; Kremen, Claire
2012-10-01
Farms benefit from pest control services provided by nature, but management of these services requires an understanding of how habitat complexity within and around the farm impacts the relationship between agricultural pests and their enemies. Using cage experiments, this study measures the effect of habitat complexity across scales on pest suppression of the cabbage aphid Brevicoryne brassicae in broccoli. Our results reveal that proportional reduction of pest density increases with complexity both at the landscape scale (measured by natural habitat cover in the 1 km around the farm) and at the local scale (plant diversity). While high local complexity can compensate for low complexity at landscape scales and vice versa, a delay in natural enemy arrival to locally complex sites in simple landscapes may compromise the enemies' ability to provide adequate control. Local complexity in simplified landscapes may only provide adequate top-down pest control in cooler microclimates with relatively low aphid colonization rates. Even so, strong natural enemy function can be overwhelmed by high rates of pest reproduction or colonization from nearby source habitat. PMID:23210310
Specific yield - laboratory experiments showing the effect of time on column drainage
Prill, Robert C.; Johnson, A.I.; Morris, Donald Arthur
1965-01-01
The increasing use of ground water from many major aquifers in the United States has required a more thorough understanding of gravity drainage, or specific yield. This report describes one phase of specific yield research by the U.S. Geological Survey's Hydrologic Laboratory in cooperation with the California Department of Water Resources. An earlier phase of the research concentrated on the final distribution of moisture retained after drainage of saturated columns of porous media. This report presents the phase that concentrated on the distribution of moisture retained in similar columns after drainage for various periods of time. Five columns, about 4 cm in diameter by 170 cm long, were packed with homogeneous sand of very fine, medium, and coarse sizes, and one column was packed with alternating layers of coarse and medium sand. The very fine materials were more uniform in size range than were the medium materials. As the saturated columns drained, tensiometers installed throughout the length recorded changes in moisture tension. The relation of tension to moisture content, determined for each of the materials, was then used to convert the tension readings to moisture content. Data were then available on the distribution of retained moisture for different periods of drainage from 1 to 148 hours. Data also are presented on the final distribution of moisture content by weight and volume and on the degree of saturation. The final zone of capillary saturation was approximately 12 cm for coarse sand, 13 cm for medium sand, and 52 cm for very fine sand. The data showed these zones were 92 to 100 percent saturated. Most of the outflow from the columns occurred in the earlier hours of drainage--90 percent in 1 hour for the coarse materials, 50 percent for the medium, and 60 percent for the very fine. Although the largest percentage of the specific yield was reached during the early hours of drainage, this study amply demonstrates that a very long time would be
Santamaría, A; Merino, A; Viñas, O; Arrizabalaga, P
2009-02-01
Have invisible barriers for women been broken in 2007, or do we still have to break through medicine's glass ceiling? Data from two of the most prestigious university hospitals in Barcelona with 700-800 beds, Hospital Clínic (HC) and Hospital de la Santa Creu i Sant Pau (HSCSP) address this issue. In the HSCSP, 87% of the department chairs are men and 85% of the department unit chiefs are also men. With respect to women, only 5 (13%) are in the top position (department chair) and 4 (15%) are department unit chiefs. Similar statistics are also found at the HC: 87% of the department chairs and 89% of the department unit chiefs are men. Currently, only 6 women (13%) are in the top position and 6 (11%) are department unit chiefs. Analysis of the 2002 data of internal promotions in HC showed that for the first level (senior specialist) sex distribution was similar. Nevertheless, for the second level (consultant) only 25% were women, and for the top level (senior consultant) only 8% were women. These proportions have not changed in 2007 in spite of a 10% increase in leadership positions during this period. Similar proportions were found in HSCSP where 68% of the top promotions were held by men. The data obtained from these two different medical institutions in Barcelona are probably representative of other hospitals in Spain. It would be ethically desirable to have males and females in leadership positions in the medical profession. PMID:19181883
Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid
2016-01-01
Objective(s): This study addresses feature selection for breast cancer diagnosis. The process uses a wrapper approach with GA-based feature selection and a PS-classifier. The experimental results show that the proposed model is comparable to the other models on the Wisconsin breast cancer datasets. Materials and Methods: To evaluate the effectiveness of the proposed feature selection method, we employed three different classifiers, an artificial neural network (ANN), a PS-classifier, and a genetic algorithm based classifier (GA-classifier), on three Wisconsin breast cancer datasets: the Wisconsin breast cancer dataset (WBC), the Wisconsin diagnosis breast cancer dataset (WDBC), and the Wisconsin prognosis breast cancer dataset (WPBC). Results: For the WBC dataset, feature selection improved the accuracy of all classifiers except the ANN, and the best accuracy with feature selection was achieved by the PS-classifier. For WDBC and WPBC, feature selection improved the accuracy of all three classifiers, and the best accuracy with feature selection was achieved by the ANN. Specificity and sensitivity also improved after feature selection. Conclusion: The results show that feature selection can improve the accuracy, specificity, and sensitivity of classifiers. The results of this study are comparable with other studies on the Wisconsin breast cancer datasets. PMID:27403253
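GA-based wrapper feature selection of this kind can be sketched as follows, with a nearest-centroid classifier standing in for the paper's PS-classifier and a synthetic dataset in place of the Wisconsin data; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 2 informative features, 6 pure-noise features.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, 8))
X[:, 0] += 3 * y          # informative
X[:, 1] -= 3 * y          # informative

def fitness(mask, X, y):
    """Wrapper fitness: split-half accuracy of a nearest-centroid
    classifier restricted to the selected features."""
    if not mask.any():
        return 0.0
    Xtr, ytr, Xte, yte = X[:100, mask], y[:100], X[100:, mask], y[100:]
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1))
    return (pred == yte).mean()

def ga_select(X, y, pop_size=20, gens=30, p_mut=0.1):
    d = X.shape[1]
    pop = rng.random((pop_size, d)) < 0.5    # population of feature masks
    pop[0] = True                            # seed the all-features mask
    for _ in range(gens):
        fit = np.array([fitness(m, X, y) for m in pop])
        order = np.argsort(fit)[::-1]
        elite = pop[order[:2]]               # elitism keeps the best masks
        parents = pop[order[:pop_size // 2]]
        kids = []
        while len(kids) < pop_size - 2:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, d)         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(d) < p_mut   # bit-flip mutation
            kids.append(child)
        pop = np.vstack([elite] + kids)
    fit = np.array([fitness(m, X, y) for m in pop])
    return pop[fit.argmax()], fit.max()

best_mask, best_acc = ga_select(X, y)
```

Because the all-features mask is in the initial population and elitism never discards the best individual, the selected subset can only match or beat using every feature under this fitness.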
NASA Technical Reports Server (NTRS)
Adams, J. H., Jr.; Andreev, Valeri; Christl, M. J.; Cline, David B.; Crawford, Hank; Judd, E. G.; Pennypacker, Carl; Watts, J. W.
2007-01-01
The JEM-EUSO collaboration intends to study high energy cosmic ray showers using a large downward looking telescope mounted on the Japanese Experiment Module of the International Space Station. The telescope focal plane is instrumented with approx. 300k pixels operating as a digital camera, taking snapshots at an approx. 1 MHz rate. We report an investigation of the trigger and reconstruction efficiency of various algorithms based on time and spatial analysis of the pixel images. Our goal is to develop trigger and reconstruction algorithms that will allow the instrument to detect energies low enough to connect smoothly to ground-based observations.
Data Association and Bullet Tracking Algorithms for the Fight Sight Experiment
Breitfeller, E; Roberts, R
2005-10-07
Previous LLNL investigators developed a bullet and projectile tracking system over a decade ago. Renewed interest in the technology has spawned research that culminated in a live-fire experiment, called Fight Sight, in September 2005. The experiment was more complex than previous LLNL bullet tracking experiments in that it included multiple shooters with simultaneous fire, new sensor-shooter geometries, large amounts of optical clutter, and greatly increased sensor-shooter distances. This presentation describes the data association and tracking algorithms for the Fight Sight experiment. Image processing applied to the imagery yields a sequence of bullet features which are input to a data association routine. The data association routine matches features with existing tracks, or initializes new tracks as needed. A Kalman filter is used to smooth and extrapolate existing tracks. The Kalman filter is also used to back-track bullets to their point of origin, thereby revealing the location of the shooter. It also provides an error ellipse for each shooter, quantifying the uncertainty of shooter location. In addition to describing the data association and tracking algorithms, several examples from the Fight Sight experiment are also presented.
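A constant-velocity Kalman filter with back-extrapolation to the point of origin, as described above, can be sketched as follows; the frame rate, noise levels, and trajectory are hypothetical, not Fight Sight values.

```python
import numpy as np

dt = 0.001                      # frame interval (hypothetical 1 kHz imager)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 1e-4                        # process noise
R = np.eye(2) * 4.0                         # measurement noise (pixels^2)

def kalman_track(zs):
    """Run predict/update over the detections, starting from a diffuse prior."""
    x, P = np.zeros(4), np.eye(4) * 1e6
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z - H @ x)                        # update
        P = (np.eye(4) - K @ H) @ P
    return x, P

# Synthetic bullet: origin (0, 0), velocity (800, 100) px/s, noisy detections.
rng = np.random.default_rng(5)
v_true = np.array([800.0, 100.0])
zs = [v_true * dt * k + rng.normal(0, 2.0, 2) for k in range(1, 61)]
xk, Pk = kalman_track(zs)

# Back-track to the shooter: run the dynamics in reverse for 60 frames.
origin = np.linalg.matrix_power(np.linalg.inv(F), 60) @ xk
```

The error ellipse for the shooter location mentioned in the abstract would come from propagating the covariance Pk through the same reverse dynamics.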
F-18 SRA closeup of nose cap showing L-Probe experiment and standard air data sensors
NASA Technical Reports Server (NTRS)
1997-01-01
This under-the-nose view of a modified F-18 Systems Research Aircraft at NASA's Dryden Flight Research Center, Edwards, California, shows three critical components of the aircraft's air data systems which are mounted on both sides of the forward fuselage. Furthest forward are two L-probes that were the focus of the recent Advanced L-probe Air Data Integration (ALADIN) experiment. Behind the L-probes are angle-of-attack vanes, while below them are the aircraft's standard pitot-static air data probes. The ALADIN experiment focused on providing pilots with angle-of-attack and angle-of-sideslip air data as well as traditional airspeed and altitude information, all from a single system. Once fully developed, the new L-probes have the potential to give pilots more accurate air data information with less hardware.
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and using of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allow the algorithm to avoid the local optimums. Also, the agents can move faster in search space to obtain better exploration during the first stage of the searching process and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to adaptive beamforming problem as a practical issue to improve the weight vectors computed by minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verified the proposed method in both reaching to optimal solutions and robustness. PMID:27399904
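For orientation, here is a minimal plain gravitational search algorithm showing the role of the damping coefficient α in G(t) = G0·exp(−αt/T), the knob that ECGSA makes dynamic; the fitness-memory modification of ECGSA is omitted, and G0 = 100, α = 20 are the commonly used defaults, not the paper's tuned values.

```python
import numpy as np

def gsa(fobj, dim=5, n_agents=30, iters=150,
        g0=100.0, alpha=20.0, lb=-5.0, ub=5.0):
    """Minimal gravitational search algorithm (minimization). Agents attract
    one another with gravity scaled by fitness-derived masses, and the
    gravitational constant decays as G(t) = G0 * exp(-alpha * t / T)."""
    rng = np.random.default_rng(2)
    X = rng.uniform(lb, ub, (n_agents, dim))
    V = np.zeros((n_agents, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.array([fobj(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()
        m = (f - f.max()) / (f.min() - f.max() + 1e-12)   # best agent -> mass 1
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)               # damped gravity
        acc = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    d = np.linalg.norm(X[i] - X[j]) + 1e-9
                    acc[i] += rng.random() * G * M[j] * (X[j] - X[i]) / d
        V = rng.random((n_agents, dim)) * V + acc
        X = np.clip(X + V, lb, ub)
    return best_x, best_f

best_x, best_f = gsa(lambda x: (x ** 2).sum())   # sphere test function
```

ECGSA's two modifications would slot in here: restart the search from the best recorded evaluations, and let α vary over the run instead of staying fixed.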
Model-independent nonlinear control algorithm with application to a liquid bridge experiment
Petrov, V.; Haaning, A.; Muehlner, K.A.; Van Hook, S.J.; Swinney, H.L.
1998-07-01
We present a control method for high-dimensional nonlinear dynamical systems that can target remote unstable states without a priori knowledge of the underlying dynamical equations. The algorithm constructs a high-dimensional look-up table based on the system's responses to a sequence of random perturbations. The method is demonstrated by stabilizing unstable flow of a liquid bridge surface-tension-driven convection experiment that models the float zone refining process. Control of the dynamics is achieved by heating or cooling two thermoelectric Peltier devices placed in the vicinity of the liquid bridge surface. The algorithm routines along with several example programs written in the MATLAB language can be found at ftp://ftp.mathworks.com/pub/contrib/v5/control/nlcontrol. © 1998 The American Physical Society
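The look-up-table idea can be illustrated on a toy plant, with a chaotic logistic map standing in for the liquid bridge; the controller sees only recorded (state, input, response) triples, never the model. All numbers here are illustrative.

```python
import numpy as np

# Stand-in plant: a chaotic logistic map x' = (r + u) x (1 - x),
# where u is the control perturbation (the Peltier analog).
r_nominal = 3.9
def plant(x, u):
    return (r_nominal + u) * x * (1 - x)

# Build the look-up table from responses to random perturbations.
rng = np.random.default_rng(4)
xs = rng.uniform(0.05, 0.95, 5000)       # probe states
us = rng.uniform(-0.2, 0.2, 5000)        # random perturbations
xn = plant(xs, us)                        # recorded responses

def control_step(x, target, k=50):
    """Among the k probes nearest the current state, apply the perturbation
    whose recorded response was closest to the target."""
    near = np.argsort(np.abs(xs - x))[:k]
    best = near[np.argmin(np.abs(xn[near] - target))]
    return us[best]

target = 1 - 1 / r_nominal               # unstable fixed point of the map
x = 0.27                                  # a state from which the target is reachable
u = control_step(x, target)
x_next = plant(x, u)                      # lands near the unstable state
```

Repeating the step keeps the orbit pinned near the unstable fixed point, which is the one-dimensional analogue of steering the liquid bridge to an unstable flow state.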
Analysis of soil moisture extraction algorithm using data from aircraft experiments
NASA Technical Reports Server (NTRS)
Burke, H. H. K.; Ho, J. H.
1981-01-01
A soil moisture extraction algorithm is developed using a statistical parameter inversion method. Data sets from two aircraft experiments are used for the test; they contain multifrequency microwave radiometric data, surface temperature, and soil moisture information. The surface and near-surface (≤5 cm) soil moisture content can be extracted with an accuracy of approximately 5% to 6% for bare fields and fields with grass cover by using L, C, and X band radiometer data. This technique can be used to handle large amounts of remote sensing data from space.
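A statistical inversion of this kind can be sketched as a regression from multichannel brightness temperatures (plus surface temperature) to soil moisture, trained on one data set and applied to another. The toy forward model and all coefficients below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
sm = rng.uniform(5, 40, n)                      # soil moisture, percent
Ts = rng.uniform(280, 310, n)                   # surface temperature, K

def forward(sm, Ts):
    """Toy forward model: emissivity (hence brightness temperature TB)
    falls as moisture rises, with different slopes per band plus noise."""
    tb_l = Ts * (0.95 - 0.006 * sm) + rng.normal(0, 2, sm.shape)
    tb_c = Ts * (0.96 - 0.004 * sm) + rng.normal(0, 2, sm.shape)
    tb_x = Ts * (0.97 - 0.003 * sm) + rng.normal(0, 2, sm.shape)
    return np.column_stack([tb_l, tb_c, tb_x])

# Fit the statistical inverse: sm ~ [TB_L, TB_C, TB_X, Ts, 1].
A = np.column_stack([forward(sm, Ts), Ts, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, sm, rcond=None)

# Invert an independent set of observations and check the accuracy.
sm_true = rng.uniform(5, 40, 100)
Ts2 = rng.uniform(280, 310, 100)
A2 = np.column_stack([forward(sm_true, Ts2), Ts2, np.ones(100)])
sm_est = A2 @ coef
rmse = np.sqrt(((sm_est - sm_true) ** 2).mean())
```

The operational version would fit the regression on aircraft training data rather than a toy forward model, but the inversion step is the same matrix-vector product, which is what makes it cheap for large satellite data volumes.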
Fast parallel tracking algorithm for the muon detector of the CBM experiment at FAIR
NASA Astrophysics Data System (ADS)
Lebedev, A.; Höhne, C.; Kisel, I.; Ososkov, G.
2010-07-01
Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions, so the speed of the tracking software is extremely important for data analysis. In this contribution, a fast parallel track reconstruction algorithm which uses available features of modern processors is presented. These features comprise a SIMD instruction set (SSE) and multithreading. The first allows one to pack several data items into one register and to operate on all of them in parallel, thus achieving more operations per cycle. The second enables the routines to exploit all available CPU cores and hardware threads. This parallel version of the tracking algorithm has been compared to the initial serial scalar version, which uses a similar approach for tracking. A speed-up factor of 487 was achieved (from 730 ms to 1.5 ms per event) on a computer with 2 × Intel Core i7 processors at 2.66 GHz.
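The data-packing idea can be illustrated, as an analogy only, in NumPy rather than SSE intrinsics: one vectorized call processes a packed array of track candidates where the scalar version loops one item at a time, which is conceptually what SSE does inside a register.

```python
import numpy as np

rng = np.random.default_rng(8)
hits_x = rng.uniform(-50, 50, 10000)     # candidate hit coordinates
hits_y = rng.uniform(-50, 50, 10000)

def dist2_scalar(px, py):
    """Serial scalar version: one candidate per loop iteration."""
    out = np.empty(hits_x.size)
    for i in range(hits_x.size):
        out[i] = (hits_x[i] - px) ** 2 + (hits_y[i] - py) ** 2
    return out

def dist2_packed(px, py):
    """Packed version: all candidates in one vectorized expression,
    analogous to operating on several values per SIMD register."""
    return (hits_x - px) ** 2 + (hits_y - py) ** 2

a = dist2_scalar(1.0, 2.0)
b = dist2_packed(1.0, 2.0)               # identical results, far fewer dispatches
```

In the actual CBM code the same effect is obtained with SSE vector types in C++, combined with one worker thread per core.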
Lee, Wonbae; von Hippel, Peter H.; Marcus, Andrew H.
2014-01-01
DNA constructs labeled with cyanine fluorescent dyes are important substrates for single-molecule (sm) studies of the functional activity of protein–DNA complexes. We previously studied the local DNA backbone fluctuations of replication fork and primer–template DNA constructs labeled with Cy3/Cy5 donor–acceptor Förster resonance energy transfer (FRET) chromophore pairs and showed that, contrary to dyes linked ‘externally’ to the bases with flexible tethers, direct ‘internal’ (and rigid) insertion of the chromophores into the sugar-phosphate backbones resulted in DNA constructs that could be used to study intrinsic and protein-induced DNA backbone fluctuations by both smFRET and sm Fluorescent Linear Dichroism (smFLD). Here we show that these rigidly inserted Cy3/Cy5 chromophores also exhibit two additional useful properties, showing both high photo-stability and minimal effects on the local thermodynamic stability of the DNA constructs. The increased photo-stability of the internal labels significantly reduces the proportion of false positive smFRET conversion ‘background’ signals, thereby simplifying interpretations of both smFRET and smFLD experiments, while the decreased effects of the internal probes on local thermodynamic stability also make fluctuations sensed by these probes more representative of the unperturbed DNA structure. We suggest that internal probe labeling may be useful in studies of many DNA–protein interaction systems. PMID:24627223
SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)
Pokharel, S; Rana, S
2014-06-01
Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037), with different combinations of heterogeneous slabs, was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study are a diode, TLDs and Gafchromic EBT2 film. The diode and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system. Several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6–22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity. The point dose agreement was within 3.5%, and the gamma passing rate at 3%/3mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of an inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more, depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.
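The 3%/3mm gamma passing rate quoted in the results can be sketched in one dimension (a minimal global-gamma implementation on a toy dose profile, not the software used in the study):

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    # 1-D global gamma analysis: for each reference point, gamma is the
    # minimum combined dose/distance deviation over all measured points;
    # a point passes when gamma <= 1.
    norm = ref.max()
    gammas = []
    for x_r, d_r in zip(positions, ref):
        dd = (meas - d_r) / (dose_tol * norm)   # dose axis (3%)
        dx = (positions - x_r) / dist_tol       # distance axis (3 mm)
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return (np.array(gammas) <= 1.0).mean()

x = np.arange(0.0, 100.0, 1.0)            # 1 mm calculation grid
ref = np.exp(-((x - 50.0) / 20.0) ** 2)   # toy reference dose profile
print(gamma_pass_rate(ref, ref * 1.02, x))  # 2% offset: within 3%/3mm
print(gamma_pass_rate(ref, ref * 1.10, x))  # 10% offset: points fail
```

A uniform 2% dose offset stays inside the 3% tolerance everywhere, while a 10% offset fails in the flat peak region, where the distance criterion cannot compensate.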
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of the representation of large particles of a continuous medium in motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, and direct and inverse transformations between these systems. Particular attention is paid to the use of the hardware and software of modern computer systems.
Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir
2014-04-01
Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy, where GITV = gross internal tumor volume. Conclusions: Local control rates in patients who were planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible alternative
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
Zimmerman, K; Levitis, D; Addicott, E; Pringle, A
2016-02-01
We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets. PMID:26419337
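A minimal sketch of the selection criterion, assuming an exhaustive search over small crossing-sets and random toy distance matrices (the published algorithm and the Neurospora dataset are not reproduced here):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def diversity(crossing_set, gen_d, geo_d):
    # mean nearest-neighbour distance of all pairwise crosses plotted
    # in the (genetic distance, geographic distance) trait space
    pts = np.array([(gen_d[i, j], geo_d[i, j])
                    for i, j in combinations(crossing_set, 2)])
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    return d.min(axis=1).mean()

n = 8                                    # pool of potential parents
gen_d = rng.random((n, n)); gen_d = (gen_d + gen_d.T) / 2.0  # toy data
geo_d = rng.random((n, n)); geo_d = (geo_d + geo_d.T) / 2.0

# exhaustive search over every crossing-set of 4 parents
best = max(combinations(range(n), 4),
           key=lambda s: diversity(s, gen_d, geo_d))
print(best)
```

For realistic pool sizes an exhaustive search is infeasible, and a heuristic search over candidate crossing-sets would replace the `max` over all combinations.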
NASA Astrophysics Data System (ADS)
Li, Liang; Chen, Zhiqiang; Zhao, Ziran; Wu, Dufan
2013-01-01
At present, there are mainly three x-ray imaging modalities for dental clinical diagnosis: radiography, panorama and computed tomography (CT). We develop a new x-ray digital intra-oral tomosynthesis (IDT) system for quasi-three-dimensional dental imaging, which can be seen as an intermediate modality between traditional radiography and CT. In addition to the normal x-ray tube and digital sensor used in intra-oral radiography, IDT has a specially designed mechanical device to complete the tomosynthesis data acquisition. During the scanning, the measurement geometry is such that the sensor is stationary inside the patient's mouth and the x-ray tube moves along an arc trajectory with respect to the intra-oral sensor. Therefore, the projection geometry can be obtained without any other reference objects, which makes it easily accepted in clinical applications. We also present a compressed sensing-based iterative reconstruction algorithm for this kind of intra-oral tomosynthesis. Finally, simulations and experiments were both carried out to evaluate this intra-oral imaging modality and algorithm. The results show that IDT has the potential to become a new tool for dental clinical diagnosis.
Level 3 trigger algorithm and Hardware Platform for the HADES experiment
NASA Astrophysics Data System (ADS)
Kirschner, Daniel Georg; Agakishiev, Geydar; Liu, Ming; Perez, Tiago; Kühn, Wolfgang; Pechenov, Vladimir; Spataro, Stefano
2009-01-01
A next generation real time trigger method to improve the enrichment of lepton events in the High Acceptance DiElectron Spectrometer (HADES) trigger system has been developed. In addition, a flexible Hardware Platform (Gigabit Ethernet-Multi-Node, GE-MN) was developed to implement and test the trigger method. The trigger method correlates the ring information of the HADES Ring Imaging Cherenkov (RICH) detector with the fired wires (drift cells) of the HADES Mini Drift Chamber (MDC) detector. It is demonstrated that this Level 3 trigger method can enhance the number of events which contain leptons by a factor of up to 50 at efficiencies above 80%. The performance of the correlation method in terms of the events analyzed per second has been studied with the GE-MN prototype in a lab test setup by streaming previously recorded experiment data to the module. This paper is a compilation from Kirschner [Level 3 trigger algorithm and Hardware Platform for the HADES experiment, Ph.D. Thesis, II. Physikalisches Institut der Justus-Liebig-Universität Gießen, urn:nbn:de:hebis:26-opus-50784, October 2007 [1]].
F-15B on ramp showing closeup of the Supersonic Natural Laminar Flow (SS-NLF) experiment attached ve
NASA Technical Reports Server (NTRS)
1999-01-01
A close-up of the Supersonic Natural Laminar Flow (SS-NLF) experiment on the F-15B. The wing shape - designed by the Reno Aeronautical Corp. - had only minimal sweep and a short span. The low sweep angle gave this airfoil better takeoff and landing characteristics, as well as better subsonic cruise efficiency, than wings with a greater sweep angle. Engineers had reason to believe that improvements in aerodynamic efficiency from supersonic natural laminar flow might actually render a supersonic aircraft more economical to operate than slower, subsonic designs. To gather substantiating data, the SS-NLF experiment used an advanced, non-intrusive collection technique. Rather than instrumentation built into the wing, a high-resolution infrared camera mounted on the F-15B fuselage recorded the data, a system with possible applications for future research aircraft.
F-15B in flight showing Supersonic Natural Laminar Flow (SS-NLF) experiment attached vertically to t
NASA Technical Reports Server (NTRS)
1999-01-01
In-flight photo of the F-15B equipped with the Supersonic Natural Laminar Flow (SS-NLF) experiment. During four research flights, laminar flow was achieved over 80 percent of the test wing at speeds approaching Mach 2. This was accomplished as the sole result of the shape of the wing, without the use of suction gloves, such as on the F-16XL. Laminar flow is a condition in which air passes over a wing in smooth layers, rather than being turbulent. The greater the area of laminar flow, the lower the amount of friction drag on the wing, thus increasing an aircraft's range and fuel economy. Increasing the area of laminar flow on a wing has been the subject of research by engineers since the late 1940s, but substantial success has proven elusive. The SS-NLF experiment was intended to provide engineers with the data by which to design natural laminar flow wings.
Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector
NASA Astrophysics Data System (ADS)
Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.
2014-12-01
Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double-differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.
Thermal weapon sights with integrated fire control computers: algorithms and experiences
NASA Astrophysics Data System (ADS)
Rothe, Hendrik; Graswald, Markus; Breiter, Rainer
2008-04-01
The HuntIR long-range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops, and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40 mm GMG requires highly accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to faint effects like levelling or firing up/downhill. This weapon was therefore selected to validate the quality of the FCU hard- and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically, and the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typically in the range of 0.1-0.5 seconds. RangIR provided outstanding hit accuracy, including ABM fuze timing, in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
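The iterative solution of the two-point boundary value problem can be sketched with a much-simplified point-mass model (quadratic drag with a placeholder coefficient, bisection on elevation; not the STANAG 4355 modified point mass model or AIM's FCU code):

```python
import numpy as np

G = 9.81

def trajectory_range(elev, v0=240.0, k=0.001, dt=0.005):
    # simple point-mass trajectory with quadratic drag; the drag
    # coefficient k is an illustrative placeholder, not STANAG data
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(elev), v0 * np.sin(elev)
    while y >= 0.0:
        v = np.hypot(vx, vy)
        vx += -k * v * vx * dt
        vy += (-G - k * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

def solve_elevation(target_range, lo=0.01, hi=np.radians(45.0)):
    # two-point boundary value problem solved iteratively (bisection):
    # find the elevation whose impact range matches the target
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if trajectory_range(mid) < target_range:
            lo = mid
        else:
            hi = mid
    return mid

elev = solve_elevation(600.0)
print(np.degrees(elev))
```

The real FCU additionally accounts for meteorological data, cant, and slope; bisection stands in here for whatever root-finding iteration the fielded solver uses.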
Tracking at CDF: algorithms and experience from Run I and Run II
Snider, F.D.; /Fermilab
2005-10-01
The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.
Biology, the way it should have been, experiments with a Lamarckian algorithm
Brown, F.M.; Snider, J.
1996-12-31
This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.
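A minimal sketch of the Lamarckian idea, assuming a toy quadratic fitness landscape and a finite-difference gradient as the information "extracted directly from the fitness function" (not the paper's algorithm or test problems):

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):
    return -np.sum((x - 3.0) ** 2)    # toy landscape, optimum at x = 3

def local_improve(x, step=0.1, iters=10):
    # Lamarckian step: information extracted from the fitness function
    # (here a finite-difference gradient) improves the individual, and
    # the acquired improvement is written back into its genome
    for _ in range(iters):
        eps = 1e-6
        grad = np.array([(fitness(x + eps * e) - fitness(x)) / eps
                         for e in np.eye(len(x))])
        x = x + step * grad
    return x

pop = rng.uniform(-10.0, 10.0, size=(20, 3))
for gen in range(60):
    ranked = np.argsort([fitness(p) for p in pop])[::-1]
    parents = pop[ranked[:10]]                                 # select
    children = parents + rng.normal(0.0, 0.5, parents.shape)   # mutate
    children = np.array([local_improve(c) for c in children])  # Lamarck
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(best)
```

A pure Neo-Darwinian baseline is obtained by deleting the `local_improve` call, which is how comparisons of the two principles are typically set up.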
An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1999-01-01
The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.
NASA Astrophysics Data System (ADS)
Matthews, James; Wright, Matthew; Bacak, Asan; Silva, Hugo; Priestley, Michael; Martin, Damien; Percival, Carl; Shallcross, Dudley
2016-04-01
Cyclic perfluorocarbons (PFCs) have been used to measure the passage of air in urban and rural settings as they are chemically inert, non-toxic and have low background concentrations. The use of pre-concentrators and chemical ionisation gas chromatography enables concentrations of a few parts per quadrillion (ppq) to be measured in bag samples. Three PFC tracers were used in Manchester, UK in the summer of 2015 to map airflow in the city and its ingress into buildings: perfluoromethylcyclohexane (PMCH), perfluoro-2-4-dimethylcyclohexane (mPDMCH) and perfluoro-2-methyl-3-ethylpentene (PMEP). A known quantity of each PFC was released for 15 minutes from steel canisters using pre-prepared PFC mixtures. Release points were chosen to be upwind of the central sampling location (Simon Building, University of Manchester) and varied in distance up to 2.2 km. Six releases using one or three tracers in different configurations and under different conditions were undertaken in the summer. Three further experiments were conducted in the autumn, to more closely investigate the rate of ingress and decay of tracer indoors. In each experiment, 10-litre samples were collected over 30 minutes into Tedlar bags, starting at the same time as the PFC release. Samples were taken in 11 locations chosen from 15 identified areas, including three in public parks, three outside within the University of Manchester area, seven inside and five outside of the Simon building, and two outside a building nearby. For building measurements, receptors were placed inside the buildings on different floors; outside measurements were achieved through a sample line out of the window. Three of the sample positions inside the Simon building were paired with samplers outside to allow indoor-outdoor comparisons. PFC concentrations varied depending on location and height. The highest measured concentrations occurred when the tracer was released at sunrise; up to 330 ppq above background (11 ppq) of PMCH was measured at the 6
NASA Astrophysics Data System (ADS)
Domínguez-Rué, Emma; Mrotzek, Maximilian
2014-01-01
Previous research has shown that using tools from systems science for teaching and learning in the Humanities offers innovative insights that can prove helpful for both students and lecturers. Our contention here is that a method used in systems science, namely the influence matrix, can be a suitable tool to facilitate the understanding of elementary notions in Aesthetics by means of systematizing this process. As we will demonstrate in the upcoming sections, the influence matrix can help us to understand the nature and function of the basic elements that take part in the aesthetic experience and their evolving relevance in the history of Aesthetics. The implementation of these elements to an influence matrix will contribute to a more detailed understanding of (i) the nature of each element, (ii) the interrelation between them and (iii) the influence each element has on all the others.
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Hopman, Jeroen C. W.; Liem, K. Djien; de Roode, Rowland; Verdaasdonk, Rudolf M.; Thijssen, Johan M.
2008-02-01
Continuous-wave near-infrared spectroscopy is a well-known non-invasive technique for measuring changes in tissue oxygenation. Absorption changes (ΔO2Hb and ΔHHb) are calculated from the light attenuations using the modified Lambert-Beer equation. Generally, the concentration changes are calculated relative to the concentration at a starting point in time (delta time method). It is also possible, under certain assumptions, to calculate the concentrations by subtracting the equations at different wavelengths (delta wavelength method). We derived a new algorithm and will show its possibilities and limitations. In the delta wavelength method, the assumption is that the oxygen-independent attenuation term is eliminated from the formula even if its value changes in time; we verified the results against the classical delta time method using extinction coefficients from different literature sources for the wavelengths 767 nm, 850 nm and 905 nm. The different methods of calculating concentration changes were applied to data collected from animal experiments. The animals (lambs) were in a stable normoxic condition; stepwise they were made hypoxic and thereafter they returned to the normoxic condition. The two algorithms were also applied to measuring two-dimensional blood oxygen saturation changes in human skin tissue. The different oxygen saturation levels were induced by alterations in the respiration and by temporary arm clamping. The new delta wavelength method yielded, in a steady-state measurement, the same changes in oxy- and deoxyhemoglobin as the classical delta time method. The advantage of the new method is its independence from variations in time of the oxygen-independent attenuation.
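Both calculation routes can be sketched for a three-wavelength system; the extinction coefficients and path length below are placeholders, not the literature values used in the paper:

```python
import numpy as np

# Illustrative extinction coefficients [O2Hb, HHb] at 767, 850 and
# 905 nm; values and units are placeholders, not literature data.
E = np.array([[0.60, 1.10],
              [1.05, 0.78],
              [1.20, 0.70]])
path = 1.0   # effective optical path length (distance x DPF), assumed

def delta_time(dA):
    # classical method: least-squares solve of the modified
    # Lambert-Beer system  dA(lambda) = E @ [dO2Hb, dHHb] * path
    sol, *_ = np.linalg.lstsq(E * path, dA, rcond=None)
    return sol

def delta_wavelength(dA):
    # new method: subtract pairs of wavelength equations so that a
    # wavelength-independent attenuation term cancels, then solve 2x2
    D = np.array([dA[0] - dA[1], dA[1] - dA[2]])
    M = np.array([E[0] - E[1], E[1] - E[2]]) * path
    return np.linalg.solve(M, D)

true = np.array([-2.0, 1.5])       # simulated hypoxic step
dA = (E @ true) * path
print(delta_time(dA))              # both routes recover the step,
print(delta_wavelength(dA + 0.3))  # even with a flat drift term added
```

The flat +0.3 drift models a wavelength-independent attenuation change in time: it biases the delta time route unless referenced out, but cancels identically in the wavelength differences.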
NASA Astrophysics Data System (ADS)
Cetinić, I.; Perry, M. J.; D'Asaro, E.; Briggs, N.; Poulton, N.; Sieracki, M. E.; Lee, C. M.
2015-04-01
The ratio of two in situ optical measurements - chlorophyll fluorescence (Chl F) and optical particulate backscattering (bbp) - varied with changes in phytoplankton community composition during the North Atlantic Bloom Experiment in the Iceland Basin in 2008. Using ship-based measurements of Chl F, bbp, chlorophyll a (Chl), high-performance liquid chromatography (HPLC) pigments, phytoplankton composition and carbon biomass, we found that oscillations in the ratio varied with changes in plankton community composition; hence we refer to Chl F/bbp as an "optical community index". The index varied by more than a factor of 2, with low values associated with pico- and nanophytoplankton and high values associated with diatom-dominated phytoplankton communities. Observed changes in the optical index were driven by taxa-specific chlorophyll-to-autotrophic carbon ratios and by physiological changes in Chl F associated with the silica limitation. A Lagrangian mixed-layer float and four Seagliders, operating continuously for 2 months, made similar measurements of the optical community index and followed the evolution and later demise of the diatom spring bloom. Temporal changes in optical community index and, by implication, the transition in community composition from diatom to post-diatom bloom communities were not simultaneous over the spatial domain surveyed by the ship, float and gliders. The ratio of simple optical properties measured from autonomous platforms, when carefully validated, provides a unique tool for studying phytoplankton patchiness on extended temporal scales and ecologically relevant spatial scales and should offer new insights into the processes regulating patchiness.
Finnerty, P.; Aguayo, Estanislao; Amman, M.; Avignone, Frank T.; Barabash, Alexander S.; Barton, P. J.; Beene, Jim; Bertrand, F.; Boswell, M.; Brudanin, V.; Busch, Matthew; Chan, Yuen-Dat; Christofferson, Cabot-Ann; Collar, J. I.; Combs, Dustin C.; Cooper, R. J.; Detwiler, Jason A.; Doe, P. J.; Efremenko, Yuri; Egorov, Viatcheslav; Ejiri, H.; Elliott, S. R.; Esterline, James H.; Fast, James E.; Fields, N.; Fraenkle, Florian; Galindo-Uribarri, A.; Gehman, Victor M.; Giovanetti, G. K.; Green, M.; Guiseppe, Vincente; Gusey, K.; Hallin, A. L.; Hazama, R.; Henning, Reyco; Hoppe, Eric W.; Horton, Mark; Howard, Stanley; Howe, M. A.; Johnson, R. A.; Keeter, K.; Kidd, M. F.; Knecht, A.; Kochetov, Oleg; Konovalov, S.; Kouzes, Richard T.; LaFerriere, Brian D.; Leon, Jonathan D.; Leviner, L.; Loach, J. C.; Looker, Q.; Luke, P.; MacMullin, S.; Marino, Michael G.; Martin, R. D.; Merriman, Jason H.; Miller, M. L.; Mizouni, Leila; Nomachi, Masaharu; Orrell, John L.; Overman, Nicole R.; Perumpilly, Gopakumar; Phillips, David; Poon, Alan; Radford, D. C.; Rielage, Keith; Robertson, R. G. H.; Ronquest, M. C.; Schubert, Alexis G.; Shima, T.; Shirchenko, M.; Snavely, Kyle J.; Steele, David; Strain, J.; Timkin, V.; Tornow, Werner; Varner, R. L.; Vetter, Kai; Vorren, Kris R.; Wilkerson, J. F.; Yakushev, E.; Yaver, Harold; Young, A.; Yu, Chang-Hong; Yumatov, Vladimir
2014-03-24
The Majorana Demonstrator will search for the neutrinoless double-beta decay (0νββ) of the 76Ge isotope with a mixed array of enriched and natural germanium detectors. The observation of this rare decay would indicate the neutrino is its own anti-particle, demonstrate that lepton number is not conserved, and provide information on the absolute mass scale of the neutrino. The Demonstrator is being assembled at the 4850-foot level of the Sanford Underground Research Facility in Lead, South Dakota. The array will be contained in a low-background environment and surrounded by passive and active shielding. The goals for the Demonstrator are: demonstrating a background rate less than 3 counts tonne⁻¹ year⁻¹ in the 4 keV region of interest (ROI) surrounding the 2039 keV 76Ge endpoint energy; establishing the technology required to build a tonne-scale germanium-based double-beta decay experiment; testing the recent claim of observation of 0νββ; and performing a direct search for light WIMPs (3-10 GeV/c²).
NASA Technical Reports Server (NTRS)
Hancock, G. D.; Waite, W. P.
1984-01-01
Two experiments were performed employing swept-frequency microwaves for the purpose of investigating the reflectivity from soil volumes containing both discontinuous and continuous changes in subsurface soil moisture content. Discontinuous moisture profiles were artificially created in the laboratory, while continuous moisture profiles were induced into the soil of test plots by the environment of an agricultural field. The reflectivity for both the laboratory and field experiments was measured using bi-static reflectometers operated over the frequency ranges of 1.0 to 2.0 GHz and 4.0 to 8.0 GHz. Reflectivity models that consider the discontinuous and continuous moisture profiles within the soil volume were developed and compared with the results of the experiments. This comparison shows good agreement between the smooth-surface models and the measurements. In particular, the comparison of the smooth-surface multi-layer model for continuous moisture profiles with the field experiment measurements points out the sensitivity of the specular component of the scattered electromagnetic energy to the movement of moisture in the soil.
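A smooth-surface multi-layer reflectivity model of the kind compared against the measurements can be sketched at normal incidence (recursive Fresnel/Airy formula; the refractive indices below are illustrative, not soil dielectric data):

```python
import numpy as np

def reflectivity(n0, layers, ns, wavelength):
    # normal-incidence power reflectivity of a smooth layered halfspace;
    # layers: list of (refractive index, thickness) between the incident
    # medium n0 and the substrate ns, thickness in wavelength units
    def r_amp(n_top, rest):
        if not rest:
            return (n_top - ns) / (n_top + ns)      # bare interface
        (n1, d1), tail = rest[0], rest[1:]
        r01 = (n_top - n1) / (n_top + n1)
        r1 = r_amp(n1, tail)                        # recurse downward
        phase = np.exp(4j * np.pi * n1 * d1 / wavelength)
        return (r01 + r1 * phase) / (1 + r01 * r1 * phase)
    return abs(r_amp(n0, layers)) ** 2

# bare halfspace vs. a quarter-wave surface layer: a thin layer of
# intermediate index can suppress the specular return entirely
print(reflectivity(1.0, [], 3.0, 1.0))
print(reflectivity(1.0, [(np.sqrt(3.0), 1.0 / (4 * np.sqrt(3.0)))], 3.0, 1.0))
```

In a soil context, moisture movement changes the layer indices and thicknesses, which is why the specular component tracks the moisture profile so sensitively.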
NASA Astrophysics Data System (ADS)
Meijer, Y. J.; Swart, D. P. J.; Baier, F.; Bhartia, P. K.; Bodeker, G. E.; Casadio, S.; Chance, K.; Del Frate, F.; Erbertseder, T.; Felder, M. D.; Flynn, L. E.; Godin-Beekmann, S.; Hansen, G.; Hasekamp, O. P.; Kaifel, A.; Kelder, H. M.; Kerridge, B. J.; Lambert, J.-C.; Landgraf, J.; Latter, B.; Liu, X.; McDermid, I. S.; Pachepsky, Y.; Rozanov, V.; Siddans, R.; Tellmann, S.; van der A, R. J.; van Oss, R. F.; Weber, M.; Zehner, C.
2006-11-01
An evaluation is made of ozone profiles retrieved from measurements of the nadir-viewing Global Ozone Monitoring Experiment (GOME) instrument. Currently, four different approaches are used to retrieve ozone profile information from GOME measurements, which differ in the use of external information and a priori constraints. In total nine different algorithms will be evaluated exploiting the optimal estimation (Royal Netherlands Meteorological Institute, Rutherford Appleton Laboratory, University of Bremen, National Oceanic and Atmospheric Administration, Smithsonian Astrophysical Observatory), Phillips-Tikhonov regularization (Space Research Organization Netherlands), neural network (Center for Solar Energy and Hydrogen Research, Tor Vergata University), and data assimilation (German Aerospace Center) approaches. Analysis tools are used to interpret data sets that provide averaging kernels. In the interpretation of these data, the focus is on the vertical resolution, the indicative altitude of the retrieved value, and the fraction of a priori information. The evaluation is completed with a comparison of the results to lidar data from the Network for Detection of Stratospheric Change stations in Andoya (Norway), Observatoire Haute Provence (France), Mauna Loa (Hawaii), Lauder (New Zealand), and Dumont d'Urville (Antarctic) for the years 1997-1999. In total, the comparison involves nearly 1000 ozone profiles and allows the analysis of GOME data measured in different global regions and hence observational circumstances. The main conclusion of this paper is that unambiguous information on the ozone profile can at best be retrieved in the altitude range 15-48 km with a vertical resolution of 10 to 15 km, precision of 5-10%, and a bias up to 5% or 20% depending on the success of recalibration of the input spectra. The sensitivity of retrievals to ozone at lower altitudes varies from scheme to scheme and includes significant influence from a priori assumptions.
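The role of averaging kernels and the "fraction of a priori information" can be sketched for a toy optimal-estimation retrieval (random Jacobian and assumed covariances, not any of the nine GOME schemes):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy linearized retrieval: m radiance channels, n ozone layers
# (sizes, Jacobian and covariances are assumptions for illustration)
m, n = 12, 8
K = rng.normal(size=(m, n))            # forward-model Jacobian
Se = 0.1 * np.eye(m)                   # measurement noise covariance
Sa = np.eye(n)                         # a priori covariance

# optimal-estimation gain matrix and averaging kernel A = G K
G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                    K.T @ np.linalg.inv(Se))
A = G @ K

dofs = np.trace(A)            # degrees of freedom for signal
frac_prior = 1.0 - dofs / n   # crude "fraction of a priori information"
print(dofs, frac_prior)
```

The rows of A show the vertical resolution and the indicative altitude of each retrieved level, which is exactly the diagnostic information the intercomparison relies on.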
NASA Astrophysics Data System (ADS)
Didkovsky, L.; Judge, D.; Wieman, S.; Woods, T.; Jones, A.
2012-01-01
The Extreme ultraviolet SpectroPhotometer (ESP) is one of five channels of the Extreme ultraviolet Variability Experiment (EVE) onboard the NASA Solar Dynamics Observatory (SDO). The ESP channel design is based on a highly stable diffraction transmission grating and is an advanced version of the Solar Extreme ultraviolet Monitor (SEM), which has been successfully observing solar irradiance onboard the Solar and Heliospheric Observatory (SOHO) since December 1995. ESP is designed to measure solar Extreme UltraViolet (EUV) irradiance in four first-order bands of the diffraction grating centered around 19 nm, 25 nm, 30 nm, and 36 nm, and in a soft X-ray band from 0.1 to 7.0 nm in the zeroth-order of the grating. Each band’s detector system converts the photo-current into a count rate (frequency). The count rates are integrated over 0.25-second increments and transmitted to the EVE Science and Operations Center for data processing. An algorithm for converting the measured count rates into solar irradiance and the ESP calibration parameters are described. The ESP pre-flight calibration was performed at the Synchrotron Ultraviolet Radiation Facility of the National Institute of Standards and Technology. Calibration parameters were used to calculate absolute solar irradiance from the sounding-rocket flight measurements on 14 April 2008. These irradiances for the ESP bands closely match the irradiance determined for two other EUV channels flown simultaneously: EVE’s Multiple EUV Grating Spectrograph (MEGS) and SOHO’s Charge, Element and Isotope Analysis System/ Solar EUV Monitor (CELIAS/SEM).
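The count-rate-to-irradiance conversion has the generic photometric form below; the function and all numbers are illustrative assumptions, not ESP's actual calibration algorithm or parameters:

```python
def counts_to_irradiance(count_rate, dark_rate, responsivity, degradation=1.0):
    """Convert a detector count rate (counts/s) to band irradiance (W m^-2).
    The form (rate - dark) / (responsivity * degradation) is a generic
    photometric-calibration sketch; ESP's actual algorithm and parameter
    values are given in the paper, not here."""
    return (count_rate - dark_rate) / (responsivity * degradation)

# Hypothetical numbers: 4000 counts/s with a 50 counts/s dark rate and a
# responsivity of 7.9e6 counts/s per (W m^-2).
e_band = counts_to_irradiance(4000.0, 50.0, 7.9e6)
```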
Good-Enough Brain Model: Challenges, Algorithms, and Discoveries in Multisubject Experiments.
Papalexakis, Evangelos E; Fyshe, Alona; Sidiropoulos, Nicholas D; Talukdar, Partha Pratim; Mitchell, Tom M; Faloutsos, Christos
2014-12-01
Given a simple noun such as apple, and a question such as "Is it edible?," what processes take place in the human brain? More specifically, given the stimulus, what are the interactions between (groups of) neurons (also known as functional connectivity) and how can we automatically infer those interactions, given measurements of the brain activity? Furthermore, how does this connectivity differ across different human subjects? In this work, we show that this problem, even though originating from the field of neuroscience, can benefit from big data techniques; we present a simple, novel good-enough brain model (GeBM for short) and a novel algorithm, Sparse-SysId, which are able to effectively model the dynamics of the neuron interactions and infer the functional connectivity. Moreover, GeBM is able to simulate basic psychological phenomena such as habituation and priming (whose definitions we provide in the main text). We evaluate GeBM by using real brain data. GeBM produces brain activity patterns that are strikingly similar to the real ones, where the inferred functional connectivity is able to provide neuroscientific insights toward a better understanding of the way that neurons interact with each other, as well as detect regularities and outliers in multisubject brain activity measurements. PMID:27442756
Mutual Algorithm-Architecture Analysis for Real-Time Parallel Systems in Particle Physics Experiments.
NASA Astrophysics Data System (ADS)
Ni, Ping
Data acquisition from particle colliders requires real-time detection of tracks and energy clusters from collision events occurring at intervals of tens of microseconds. Beginning with the specification of a benchmark track-finding algorithm, parallel implementations have been developed. A revision of the routing scheme for performing reductions such as a tree sum, called the reduced routing distance scheme, has been developed and analyzed. The scheme reduces inter-PE communication time for narrow communication channel systems. A new parallel algorithm, called the interleaved tree sum, for parallel reduction problems has been developed that increases the efficiency of processor use. Detailed analysis of this algorithm with different routing schemes is presented. Comparable parallel algorithms are analyzed, also taking into account the architectural parameters that play an important role in this parallel algorithm analysis. Computation and communication times are analyzed to guide the design of a custom system based on a massively parallel processing component. Developing an optimal system requires mutual analysis of algorithm and architecture parameters. It is shown that matching a processor array size to the parallelism of the problem does not always produce the best system design. Based on promising benchmark simulation results, an application-specific hardware prototype board, called Dasher, has been built using two Blitzen chips. The processing array is a mesh-connected SIMD system with 256 PEs. Its design is discussed, with details on the software environment.
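A tree sum of the kind referred to above combines values pairwise, halving the number of active processing elements at each step, so n values reduce in about log2(n) communication steps. A serial sketch of the combining pattern (the actual scheme also optimizes inter-PE routing, which is not modeled here):

```python
def tree_sum(values):
    """Pairwise (tree) reduction: the combining pattern a SIMD array would
    perform across PEs in about log2(n) communication steps (serial sketch)."""
    vals = list(values)
    steps = 0
    while len(vals) > 1:
        if len(vals) % 2:           # odd count: pad with the identity element
            vals.append(0)
        # Each pass halves the number of active "PEs".
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
        steps += 1
    return vals[0], steps

total, steps = tree_sum(range(256))   # 256 PEs, as in the Dasher prototype
```

For 256 elements the reduction finishes in 8 combining steps, which is why reducing per-step communication cost (as the reduced routing distance scheme does) matters so much on narrow-channel machines.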
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.
NASA Astrophysics Data System (ADS)
Khatri, Rishi
2015-08-01
We present an efficient algorithm for least-squares parameter fitting, optimized for component separation in multifrequency cosmic microwave background (CMB) experiments. We sidestep some of the problems associated with non-linear optimization by taking advantage of the quasi-linear nature of the foreground model. We demonstrate our algorithm, linearized iterative least-squares (LIL), on the publicly available Planck sky model FFP6 simulations and compare our results with those of other algorithms. We work at full Planck resolution and show that degrading the resolution of all channels to that of the lowest frequency channel is not necessary. Finally, we present results for publicly available Planck data. Our algorithm is extremely fast, fitting six parameters to the seven lowest Planck channels at full resolution (50 million pixels) in less than 160 CPU minutes (or a few minutes running in parallel on a few tens of cores). LIL is therefore easily scalable to future experiments, which may have even higher resolution and more frequency channels. We also, naturally, propagate the uncertainties in different parameters due to noise in the maps, as well as the degeneracies between the parameters, to the final errors in the parameters using the Fisher matrix. One indirect application of LIL could be a front-end for Bayesian parameter fitting to find the maximum likelihood to be used as the starting point for Gibbs sampling. We show that for rare components, such as carbon monoxide emission, present in a small fraction of sky, the optimal approach should combine parameter fitting with model selection. LIL may also be useful in other astrophysical applications that satisfy quasi-linearity criteria.
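The quasi-linear idea behind a linearized iterative least-squares fit can be sketched as a Gauss-Newton loop: linearize the model around the current parameters, solve the linear least-squares subproblem, and update. The toy power-law "foreground" model and all names below are illustrative assumptions, not LIL's actual foreground model:

```python
import numpy as np

def lil_style_fit(x, y, p0, n_iter=30):
    """Linearized iterative least-squares (Gauss-Newton sketch) for a toy
    quasi-linear model y = a * x**b (model and names hypothetical)."""
    a, b = p0
    for _ in range(n_iter):
        model = a * x**b
        # Jacobian of the model w.r.t. (a, b) at the current parameters
        J = np.column_stack([x**b, a * x**b * np.log(x)])
        r = y - model
        da, db = np.linalg.lstsq(J, r, rcond=None)[0]
        a, b = a + da, b + db
    return a, b

x = np.linspace(1.0, 3.0, 50)
y = 2.0 * x**-3.0                     # synthetic "dust-like" spectrum
a, b = lil_style_fit(x, y, p0=(1.0, -2.0))
```

Because the model is quasi-linear in its parameters, each pass solves a cheap linear problem, which is the property that makes this approach fast enough to scale to tens of millions of pixels.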
Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.
2008-04-01
We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The directions-of-arrival determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology, and covariance matrix analysis has been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
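The time-delay-of-arrival measurement that the AML algorithm exploits can be demonstrated with two synthetic microphone signals: the cross-correlation peak recovers the inter-sensor delay imposed by the geometry. All numbers below (sample rate, spacing, pulse) are hypothetical:

```python
import numpy as np

# Time-delay-of-arrival sketch: a wideband pulse reaches two microphones
# with a relative delay set by the source bearing (toy end-fire geometry,
# parameter values hypothetical).
fs = 10_000                      # sample rate, Hz
c = 343.0                        # speed of sound, m/s
mic_spacing = 1.0                # metres between the two sensors
true_delay = mic_spacing / c     # source on the array axis

rng = np.random.default_rng(1)
pulse = rng.standard_normal(200)             # wideband source signature
lag = int(round(true_delay * fs))            # geometric delay in samples
sig_a = np.concatenate([pulse, np.zeros(lag)])
sig_b = np.concatenate([np.zeros(lag), pulse])

# The cross-correlation peak gives the sample delay between the sensors.
xcorr = np.correlate(sig_b, sig_a, mode="full")
est_lag = int(np.argmax(xcorr)) - (len(sig_a) - 1)
```

With delays estimated for several sensor pairs, the source position follows from the least-squares style combination the abstract describes.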
k_T (transverse) jet algorithms in hadron colliders: The D0 experience
V. Daniel Elvira
2002-12-05
D0 has implemented and studied a k_T jet algorithm for the first time in a hadron collider. The authors have submitted two physics results for publication: the subjet multiplicity in quark and gluon jets and the central inclusive jet cross section measurements. A third result, a measurement of thrust distributions in jet events, is underway. A combination of measurements using several types of algorithms and samples taken at different center-of-mass energies is desirable to understand and distinguish with higher accuracy between instrumentation and physics effects.
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
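A minimal annealing loop of the kind described (accept any improving move, accept worsening moves with probability exp(-delta/T), and cool T gradually) might look as follows. The toy two-basin cost function is only a stand-in for a routing cost with overlap penalties:

```python
import math
import random

def anneal(cost, start, neighbor, t0=10.0, cooling=0.95, iters=2000):
    """Minimal simulated-annealing loop: worse moves are accepted with
    probability exp(-delta/T), letting the search climb out of local minima."""
    random.seed(42)
    x, t = start, t0
    best, best_cost = x, cost(x)
    for _ in range(iters):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t = max(t * cooling, 1e-6)      # geometric cooling schedule
    return best, best_cost

# Toy "routing" cost with two basins; the global minimum is at x = 7.
cost = lambda x: min((x - 2) ** 2 + 1, (x - 7) ** 2)
best, best_cost = anneal(cost, start=0, neighbor=lambda x: x + random.choice([-1, 1]))
```

The early high-temperature phase is what lets the search cross the barrier between the two basins, which is exactly the "back up out of local minima" behavior the abstract describes.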
ERIC Educational Resources Information Center
Beddard, Godfrey S.
2011-01-01
Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
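For the harmonic-oscillator case mentioned, a Metropolis walk sampling x with weight exp(-U/kT) reproduces the equipartition result <U> = T/2 (taking k_B = 1). A sketch, with the step size and run length chosen purely for illustration:

```python
import math
import random

def metropolis_average_energy(temperature, steps=200_000, step_size=1.0):
    """Metropolis estimate of the mean potential energy <U> of a 1-D
    harmonic oscillator U(x) = x^2 / 2 at the given temperature (k_B = 1).
    Analytically, equipartition gives <U> = T / 2."""
    random.seed(7)
    x, u = 0.0, 0.0
    total = 0.0
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        u_new = 0.5 * x_new * x_new
        # Accept with probability min(1, exp(-(U_new - U) / T))
        if u_new <= u or random.random() < math.exp(-(u_new - u) / temperature):
            x, u = x_new, u_new
        total += u
    return total / steps

avg_u = metropolis_average_energy(temperature=2.0)   # expect about T/2 = 1.0
```

The same loop works unchanged when the Boltzmann weights cannot be summed analytically, which is the pedagogical point of the article.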
ERIC Educational Resources Information Center
Gehring, John
2004-01-01
For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Wood, S. A.; Morris, M.
1990-01-01
Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva
2016-05-01
Genetic algorithms have been used for matching temperature values generated using thermal mathematical models against actual temperatures measured in thermal testing of spacecraft and space instruments. Up to now, results for small models have been very encouraging. This work will examine the correlation of a small-medium size model, whose thermal test results were available, by means of genetic algorithms. The thermal mathematical model reviewed herein corresponds to Tribolab, a materials experiment deployed on board the International Space Station and subjected to preflight thermal testing. This paper will also discuss in great detail the influence of both the number of reference temperatures available and the number of thermal parameters included in the correlation, taking into account the presence of heat sources and the maximum range of temperature mismatch. Conclusions and recommendations for the thermal test design will be provided, as well as some indications for future improvements.
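The correlation idea (evolve thermal parameters until model temperatures match test temperatures) can be sketched with a minimal genetic algorithm. The one-parameter "thermal model" below is a stand-in, not the Tribolab model:

```python
import random

def model_temps(g):
    """Toy thermal model: node temperatures as a function of a single
    conductance-like parameter g (a stand-in for the real model's many
    conductances and capacitances)."""
    return [100.0 / (1.0 + g), 80.0 / (1.0 + 2.0 * g)]

TEST_TEMPS = model_temps(0.35)   # pretend these came from the thermal test

def fitness(g):
    """Negative squared mismatch between model and test temperatures."""
    return -sum((m - t) ** 2 for m, t in zip(model_temps(g), TEST_TEMPS))

def correlate(pop_size=30, generations=60):
    """Minimal genetic algorithm: elitist selection plus Gaussian mutation."""
    random.seed(3)
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]
        pop = elite + [max(0.0, random.choice(elite) + random.gauss(0, 0.05))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

g_best = correlate()   # should recover a value near the "test" g of 0.35
```

With many parameters and few reference temperatures the problem becomes degenerate, which is exactly the trade-off the paper examines.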
Pre-Mrna Introns as a Model for Cryptographic Algorithm:. Theory and Experiments
NASA Astrophysics Data System (ADS)
Regoli, Massimo
2010-01-01
The RNA-Crypto System (RCS for short) is a symmetric-key algorithm for enciphering data. The idea for this new algorithm comes from the observation of nature, in particular of RNA behavior and some of its properties: RNA sequences contain sections called introns. Introns, a term derived from "intragenic regions", are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in pre-mRNA are not yet clear and remain under intensive study by biologists; in our case, we use the presence of introns in the RNA-Crypto System output to add chaotic non-coding information and to obscure access to the secret key used to encipher messages. In the RNA-Crypto System algorithm the introns are sections of the ciphered message carrying non-coding information, just as in the precursor mRNA.
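A toy illustration of the intron idea (this is an illustrative sketch only; the actual RCS cipher is not specified in the abstract): encipher with a key stream, then splice non-coding bytes in at key-determined positions, so only a key holder knows which bytes are coding:

```python
import random

def encipher(plaintext, key):
    """Toy intron-style stream cipher (hypothetical sketch, not RCS): XOR
    each byte with a key stream, then splice non-coding 'intron' bytes in
    at key-determined positions."""
    ks, junk = random.Random(key), random.Random(key ^ 0x5EED)
    out = bytearray()
    for b in plaintext.encode():
        out.append(b ^ ks.randrange(256))
        if ks.random() < 0.3:                         # intron site
            out.extend(junk.randrange(256) for _ in range(ks.randrange(1, 4)))
    return bytes(out)

def decipher(ciphertext, key):
    """Replay the key stream; intron positions are key-determined, so the
    decoder knows exactly which bytes to splice out."""
    ks = random.Random(key)
    out, i = bytearray(), 0
    while i < len(ciphertext):
        out.append(ciphertext[i] ^ ks.randrange(256))
        i += 1
        if ks.random() < 0.3:
            i += ks.randrange(1, 4)                   # skip the spliced intron
    return out.decode()

msg = decipher(encipher("ATTACK AT DAWN", 1234), 1234)
```

Without the key, an attacker cannot even tell which ciphertext bytes carry information, which is the obfuscation role the abstract assigns to introns.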
NASA Astrophysics Data System (ADS)
Hanlon, C. J.; Small, A.; Bose, S.; Young, G. S.; Verlinde, J.
2013-12-01
In airborne field campaigns, investigators confront complex decision challenges concerning when and where to deploy aircraft to meet scientific objectives within constraints of time and budgeted flight hours. An automated flight decision recommendation system was developed to assist investigators leading the Deep Convective Clouds and Chemistry (DC3) campaign in spring-summer 2012. In making flight decisions, DC3 investigators needed to integrate two distinct, potentially competing objectives: to maximize the total harvest of data collected, and also to maintain an approximate balance of data collected from each of three geographic study regions. Choices needed to satisfy several constraint conditions including, most prominently, a limit on the total number of flight hours, and a bound on the number of calendar days in the field. An automated recommendation system was developed by translating these objectives and bounds into a formal problem of constrained optimization. In this formalization, a key step involved the mathematical representation of investigators' scientific preferences over the set of possible data collection outcomes. Competing objectives were integrated into a single metric by means of a utility function, which served to quantify the value of alternative data portfolios. Flight recommendations were generated to maximize the expected utility of each daily decision, conditioned on that day's forecast. A calibrated forecast probability of flight success in each study region was generated according to a forecasting system trained on numerical weather prediction model output, as well as expected climatological probability of flight success on future days. System performance was evaluated by comparing the data yielded by the actual DC3 campaign with the yield that would have been realized had the algorithmic recommendations been followed. It was found that the algorithmic system would have achieved 19-59% greater utility than the decisions actually made.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions when the algorithm is convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
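The paper's observation, that trust-region methods tolerate large relative gradient errors, can be reproduced on a toy one-dimensional quadratic. The sketch below perturbs the gradient by up to 50% relative error and still converges, thanks to the accept/shrink logic (all constants are illustrative):

```python
import random

def noisy_grad(x, rel_err=0.5):
    """Gradient of f(x) = x^2 / 2 corrupted by up to 50% relative error,
    the regime the paper shows trust-region methods can tolerate."""
    return x * (1.0 + random.uniform(-rel_err, rel_err))

def trust_region_1d(x0=10.0, radius=1.0, iters=100):
    """Minimal 1-D trust-region loop: take the model-minimizing step clipped
    to the radius; expand the radius on good steps, shrink it on bad ones."""
    random.seed(0)
    f = lambda x: 0.5 * x * x
    x = x0
    for _ in range(iters):
        g = noisy_grad(x)
        step = max(-radius, min(radius, -g))     # clipped Newton/Cauchy step
        pred = -(g * step + 0.5 * step * step)   # predicted decrease (model)
        actual = f(x) - f(x + step)              # actual decrease
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.1:                            # accept the step
            x += step
            if rho > 0.75:                       # model is trustworthy: expand
                radius = min(radius * 2.0, 100.0)
        else:
            radius *= 0.5                        # reject and shrink
    return x

x_final = trust_region_1d()
```

The ratio test compares actual to predicted decrease using true function values, so even badly corrupted gradients cannot drive the iterate uphill for long; this is the mechanism behind the robustness result.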
NASA Astrophysics Data System (ADS)
Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes
2014-10-01
Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension to each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that a strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10 and 57%, depending on the metrics used to score success, while improving portfolio balance.
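A geometric-mean utility is one simple way to encode the two competing objectives (total yield versus regional balance); the actual DC3 utility function was campaign-specific, so everything below, including region names, hours, and probabilities, is an illustrative stand-in:

```python
import math

def utility(portfolio):
    """Toy utility: geometric mean of per-region data hours. It grows with
    the total but collapses when any region is starved, encoding the two
    competing objectives (the real DC3 utility was campaign-specific)."""
    vals = list(portfolio.values())
    return math.prod(vals) ** (1.0 / len(vals))

def best_flight(portfolio, p_success, hours=6.0):
    """Pick the study region maximizing the expected utility of today's flight."""
    def expected(region):
        win = dict(portfolio)
        win[region] += hours
        p = p_success[region]
        return p * utility(win) + (1.0 - p) * utility(portfolio)
    return max(p_success, key=expected)

portfolio = {"CO": 12.0, "AL": 2.0, "OK-TX": 10.0}   # hours collected so far
p_success = {"CO": 0.6, "AL": 0.5, "OK-TX": 0.7}     # forecast probabilities
choice = best_flight(portfolio, p_success)
```

Here the starved Alabama region wins despite its lower forecast probability, which is the qualitative behavior a balance-rewarding utility should produce.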
Diagnostic ANCA algorithms in daily clinical practice: evidence, experience, and effectiveness.
Avery, T Y; Bons, J; van Paassen, P; Damoiseaux, J
2016-07-01
Detection of antineutrophil cytoplasmic antibodies (ANCA) for ANCA-associated vasculitides (AAV) is based on indirect immunofluorescence (IIF) on ethanol-fixed neutrophils and reactivity toward myeloperoxidase (MPO) and proteinase 3 (PR3). According to the international consensus for ANCA testing, presence of ANCA should at least be screened for by IIF and, if positive, followed by antigen-specific immunoassays. Optimally, all samples are analyzed by both IIF and quantitative antigen-specific immunoassays. Since the establishment of this consensus many new technologies have become available and this has challenged the positioning of IIF in the testing algorithm for AAV. In the current paper, we summarize the novelties in ANCA diagnostics and discuss the possible implications of these developments for the different ANCA algorithms that are currently applied in routine diagnostic laboratories. Possible consequences of replacing ANCA assays by novel methods are illustrated by our data obtained in daily clinical practice. Eventually, it is questioned if there is a need to change the consensus, and if so, whether IIF can be discarded completely, or be used as a confirmation assay instead of a screening assay. Both alternative options require that ANCA requests for AAV can be separated from ANCA requests for gastrointestinal autoimmune diseases. PMID:27252270
A clinical algorithm for the management of abnormal mammograms. A community hospital's experience.
Gist, D; Llorente, J; Mayer, J
1997-01-01
Mammography is an important tool in the early detection of breast cancer, but its use has been criticized for stimulating the performance of unnecessary breast biopsies. We retrospectively reviewed the results of breast biopsies preceded by abnormal mammograms at a community hospital for three 5-month periods--baseline, postintervention, and follow-up--to determine the effectiveness of algorithm-based care for patients with an abnormal mammogram. Cases in which there was a definite or implied recommendation for biopsy by a radiologist revealed a baseline positive predictive value of 4% (2/45), a postintervention positive predictive value of 21% (9/42), and a follow-up phase positive predictive value of 18% (5/28). A Fisher's exact test of the preintervention and postintervention positive predictive values after an abnormal mammogram with a "recommendation for biopsy" was significant (n = 87, P = .023). A Kruskal-Wallis analysis of variance to determine if there had been an increase in the mean lesion size of breast cancers detected over the 3 study periods was not significant. The results of this study suggest that developing a clinical algorithm under the leadership of an opinion leader combined with continuing medical education efforts may be efficacious in reducing the incidence of unnecessary surgical procedures. PMID:9074335
Boggs, P.; Tolle, J.; Kearsley, A.
1994-12-31
We have developed a large scale sequential quadratic programming (SQP) code based on an interior-point method for solving general (convex or nonconvex) quadratic programs (QP). We often halt this QP solver prematurely by employing a trust-region strategy. This procedure typically reduces the overall cost of the code. In this talk we briefly review the algorithm and some of its theoretical justification and then discuss recent enhancements including automatic procedures for both increasing and decreasing the parameter in the merit function, a regularization procedure for dealing with linearly dependent active constraint gradients, and a method for modifying the linearized equality constraints. Some numerical results on a significant set of "real-world" problems will be presented.
NASA Astrophysics Data System (ADS)
Peeling, S. M.
1985-12-01
It is demonstrated that the ZIP algorithm is capable of perfect alignment of annotated and unannotated speech files in the majority of cases, even when the files are from different speakers. It can therefore form the basis of an automatic annotation system. It is unlikely that ZIP can completely remove the need for human inspection. However, in cases where misalignment occurs it frequently only affects a small portion of the two files so that a minimal amount of human correction is required. Experiments suggest that a reduced representation of two channels is adequate. For the particular 2 channel representation used, a beamwidth of 200 and deletion penalties of 15 are suitable parameters. By calculating the sum of the differences between the minimum scores in consecutive windows, and averaging this sum over the whole file it is possible to automatically determine the quality of the alignment.
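Alignment of the kind ZIP performs can be sketched as dynamic programming over two feature sequences, with a fixed penalty for skipped frames. The deletion penalty of 15 echoes the parameter quoted above; the rest is a generic sketch, not the ZIP code itself:

```python
def align_cost(a, b, del_penalty=15):
    """Dynamic-programming alignment of two feature sequences: matching a
    pair of frames costs their absolute difference, skipping a frame in
    either sequence pays a fixed deletion penalty (generic sketch)."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:   # match frame i of a with frame j of b
                d[i][j] = min(d[i][j], d[i - 1][j - 1] + abs(a[i - 1] - b[j - 1]))
            if i:         # delete a frame from a
                d[i][j] = min(d[i][j], d[i - 1][j] + del_penalty)
            if j:         # delete a frame from b
                d[i][j] = min(d[i][j], d[i][j - 1] + del_penalty)
    return d[n][m]

cost_same = align_cost([1, 2, 3, 4], [1, 2, 3, 4])
cost_skip = align_cost([1, 2, 3, 4], [1, 2, 4])      # one frame deleted
```

A beam search over this table (keeping only the best few hundred cells per row) is what the quoted beamwidth parameter would control.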
Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P
2015-01-01
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591
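The repulsion idea can be sketched in one dimension: minimize erf(|f(x)|), and once a root is found, add a repulsion bump around it so the next local search converges elsewhere. The crude pattern search below stands in for Nelder-Mead, and the constants rho and beta are hypothetical:

```python
import math

def f(x):
    """Toy system with two roots: x^2 - 1 = 0 has roots at -1 and +1."""
    return x * x - 1.0

def merit(x, found_roots, rho=10.0, beta=5.0):
    """Penalty-type merit: erf of the residual norm, plus a repulsion bump
    around each previously found root (constants rho, beta hypothetical)."""
    base = math.erf(abs(f(x)))
    repulsion = sum(rho * math.exp(-beta * abs(x - r)) for r in found_roots)
    return base + repulsion

def local_min(g, x0, step=0.5, iters=200):
    """Crude 1-D pattern search standing in for the Nelder-Mead simplex."""
    x = x0
    for _ in range(iters):
        x = min([x - step, x, x + step], key=g)
        step *= 0.9
    return x

r1 = local_min(lambda x: merit(x, []), x0=0.3)      # first root found
r2 = local_min(lambda x: merit(x, [r1]), x0=0.3)    # repulsion forces the other
```

Starting from the same initial point, the second search is pushed away from the first root by the repulsion term, which is the mechanism that lets the algorithm enumerate multiple roots.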
NASA Astrophysics Data System (ADS)
Schipper, Peter; Stuyt, Lodewijk; van der Straat, Andre; van der Schans, Martin
2014-05-01
processes in the soil have been modelled with the simulation model SWAP. The experiment started in 2010 and is ongoing. Data collected so far show that the plots with controlled drainage (all compared with plots equipped with conventional drainage) conserve more rain water (higher groundwater tables in early spring), produce lower discharges under average weather conditions and storm events, reduce N-loads and saline seepage to surface waters, enhance denitrification, show a different 'first flush' effect, and show similar crop yields. The results of the experiments will contribute to a better understanding of the impact of controlled drainage on complex hydrological and geochemical processes in agricultural clay soils, the interaction between ground- and surface water, and its effects on drain water quantity, quality, and crop yield.
On the Juno radio science experiment: models, algorithms and sensitivity analysis
NASA Astrophysics Data System (ADS)
Tommei, G.; Dimare, L.; Serra, D.; Milani, A.
2015-01-01
Juno is a NASA mission launched in 2011 with the goal of studying Jupiter. The probe will arrive at the planet in 2016 and will be placed for one year in a highly eccentric polar orbit to study the composition of the planet, its gravity, and its magnetic field. The Italian Space Agency (ASI) provided the radio science instrument KaT (Ka-Band Translator) used for the gravity experiment, which has the goal of studying Jupiter's deep structure by mapping the planet's gravity: the instrument takes advantage of synergies with a similar tool in development for BepiColombo, the ESA cornerstone mission to Mercury. The Celestial Mechanics Group of the University of Pisa, being part of the Juno Italian team, is developing orbit determination and parameter estimation software for processing the real data independently of the NASA software ODP. This paper has a twofold goal: first, to describe the development of this software, highlighting the models used; second, to perform a sensitivity analysis on the parameters of interest to the mission.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntax and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Algorithmic Self-Assembly of DNA: Theoretical Motivations and 2D Assembly Experiments.
Winfree, E
2000-01-01
Biology makes things far smaller and more complex than anything produced by human engineering. The biotechnology revolution has for the first time given us the tools necessary to consider engineering on the molecular level. Research in DNA computation, launched by Len Adleman, has opened the door for experimental study of programmable biochemical reactions. Here we focus on a single biochemical mechanism, the self-assembly of DNA structures, that is theoretically sufficient for Turing-universal computation. The theory combines Hao Wang's purely mathematical Tiling Problem with the branched DNA constructions of Ned Seeman. In the context of mathematical logic, Wang showed how jigsaw-shaped tiles can be designed to simulate the operation of any Turing Machine. For a biochemical implementation, we will need molecular Wang tiles. DNA molecular structures and intermolecular interactions are particularly amenable to design and are sufficient for the creation of complex molecular objects. The structure of individual molecules can be designed by maximizing desired and minimizing undesired Watson-Crick complementarity. Intermolecular interactions are programmed by the design of sticky ends that determine which molecules associate, and how. The theory has been demonstrated experimentally using a system of synthetic DNA double-crossover molecules that self-assemble into two-dimensional crystals that have been visualized by atomic force microscopy. This experimental system provides an excellent platform for exploring the relationship between computation and molecular self-assembly, and thus represents a first step toward the ability to program molecular reactions and molecular structures. PMID:22607433
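The computational content of such tile systems can be illustrated with the XOR rule: each new tile's type is determined by the two "sticky ends" it binds to, and successive rows of the growing crystal trace out Pascal's triangle mod 2, the Sierpinski pattern. A sketch of the growth rule (an abstraction, not a model of the DNA chemistry):

```python
def grow_rows(n_rows):
    """Sketch of algorithmic self-assembly: each tile binds by matching two
    'sticky ends' (its upper-left and upper-right neighbours) and its type
    is their XOR, the rule behind Sierpinski-patterned tile assemblies."""
    rows = [[1]]                     # seed tile
    for _ in range(n_rows - 1):
        padded = [0] + rows[-1] + [0]        # empty sites count as type 0
        rows.append([padded[i] ^ padded[i + 1] for i in range(len(padded) - 1)])
    return rows

rows = grow_rows(4)   # rows of Pascal's triangle mod 2
```

That a one-line local binding rule generates a globally structured, aperiodic pattern is the sense in which self-assembly "computes".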
Comparative study of damage identification algorithms applied to a bridge: I. Experiment
NASA Astrophysics Data System (ADS)
Farrar, Charles R.; Jauregui, David A.
1998-10-01
Over the past 30 years detecting damage in a structure from changes in global dynamic parameters has received considerable attention from the civil, aerospace and mechanical engineering communities. The basis for this approach to damage detection is that changes in the structure's physical properties (i.e., boundary conditions, stiffness, mass and/or damping) will, in turn, alter the dynamic characteristics (i.e., resonant frequencies, modal damping and mode shapes) of the structure. Changes in properties such as the flexibility or stiffness matrices derived from measured modal properties and changes in mode shape curvature have shown promise for locating structural damage. However, to date there has not been a study reported in the technical literature that directly compares these various methods. The experimental results reported in this paper and the results of a numerical study reported in an accompanying paper attempt to fill this void in the study of damage detection methods. Five methods for damage assessment that have been reported in the technical literature are summarized and compared using experimental modal data from an undamaged and damaged bridge. For the most severe damage case investigated, all methods can accurately locate the damage. The methods show varying levels of success when applied to less severe damage cases. This paper concludes by summarizing some areas of the damage identification process that require further study.
Parallelized Dilate Algorithm for Remote Sensing Image
Zhang, Suli; Hu, Haoran; Pan, Xin
2014-01-01
As an important algorithm, the dilate operation can give a more connected view of a remote sensing image containing broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data volumes have become very large. This can slow the algorithm down, or prevent it from producing a result within limited memory or time. To solve this problem, we propose a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
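For reference, binary dilation itself can be sketched in a few lines of NumPy (a serial toy, not the authors' MPI/MP implementation; a parallel version would typically partition the image into row blocks with a one-pixel halo):

```python
import numpy as np

def dilate(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary dilation with a k x k square structuring element:
    an output pixel is 1 if any input pixel under the element is 1,
    which is what reconnects broken lines in a thresholded image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="constant")
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

# A broken diagonal line: the two endpoints become connected.
img = np.array([[1, 0, 0],
                [0, 0, 0],
                [0, 0, 1]], dtype=np.uint8)
print(dilate(img))
```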
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a coherence-optimized SAR imaging algorithm built on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing effect, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead focuses the SAR echoes with consistent imaging parameters. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446
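The coherence being optimized can be illustrated with the standard sample-coherence estimator between two co-registered complex images (a NumPy sketch under generic assumptions, not the paper's processing chain):

```python
import numpy as np

def coherence(s1, s2):
    """Magnitude of the sample coherence between two co-registered
    complex SAR images: 1 = perfectly coherent, 0 = decorrelated."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
s = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print(coherence(s, s))                 # identical images: fully coherent
print(coherence(s, np.exp(0.3j) * s))  # a global phase offset costs nothing
```

In practice the estimator is applied in a small moving window; decoherence from inconsistent focusing parameters pushes this value down and degrades the interferogram.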
Yoo, Changkyoo; Kim, Min Han
2009-06-01
This paper presents industrial experience of process identification, monitoring, and control in a full-scale wastewater treatment plant. The objectives of this study were (1) to apply and compare different process-identification methods of proportional-integral-derivative (PID) autotuning for stable dissolved oxygen (DO) control, (2) to implement a process monitoring method that estimates the respiration rate simultaneously during the process-identification step, and (3) to propose a simple set-point decision algorithm for determining the appropriate set point of the DO controller for optimal operation of the aeration basin. The proposed method was evaluated in the industrial wastewater treatment facility of an iron- and steel-making plant. Among the process-identification methods, using the controller's set-point change as the excitation signal was best for identifying low-frequency information and enhancing robustness to low-frequency disturbances. The combined automatic control and set-point decision method reduced total electricity consumption by 5% and electricity cost by 15% compared to the fixed-gain PID controller, when considering only the surface aerators. Moreover, as a result of the improved control performance, the fluctuation of effluent quality decreased and overall effluent water quality improved. PMID:19428173
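A minimal discrete PID loop on a toy first-order aeration model illustrates the kind of DO set-point control discussed (the gains, model coefficients, and set point below are invented for illustration, not the plant's autotuned values):

```python
class PID:
    """Positional-form discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order aeration model: DO rises with airflow u, decays by uptake.
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=1.0)
do, setpoint = 0.0, 2.0                      # mg/L
for _ in range(200):
    u = max(0.0, pid.step(setpoint, do))     # airflow cannot be negative
    do += 0.1 * u - 0.05 * do                # plant update over one step
print(round(do, 2))
```

The integral term drives the steady-state error to zero; the set-point decision algorithm of the paper would sit one level above this loop, choosing `setpoint` itself.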
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
NASA Astrophysics Data System (ADS)
Arpinar, V. E.; Hamamura, M. J.; Degirmenci, E.; Muftuler, L. T.
2012-07-01
Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current and tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization and the harmonic Bz algorithm proposed by Oh et al (2003 Magn. Reson. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that a 200 µA total injected current into a cylindrical phantom generates only 14.7 µA of current in the imaging slice. Similarly, a 5 mA total injected current results in 367 µA in the imaging slice. Total acquisition time
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
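The simulated-annealing end of the algorithm range can be sketched generically (a toy Hamming-distance objective stands in for the pedigree likelihood; nothing below is the authors' algorithm):

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=2000, seed=1):
    """Generic simulated-annealing skeleton: accept any improvement,
    accept a worsening move with probability exp(-delta/T), cool T."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
        t *= cooling
    return x, c

# Toy stand-in objective: Hamming distance to a hidden binary vector.
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda v: sum(a != b for a, b in zip(v, target))

def flip_one(v, rng):
    i = rng.randrange(len(v))
    w = list(v)
    w[i] ^= 1
    return w

best, c = anneal(cost, flip_one, [0] * 8)
print(best, c)
```

For haplotyping, the state would be a vector of phase choices per individual and the cost the negative log-likelihood of the implied gene flow.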
NASA Astrophysics Data System (ADS)
Garcia-Yeguas, A.; Granados, M.; Garcia, L.; Benitez, C.; De la Torre, A.; Alvarez, I.; Diaz, A.; Ibañez, J.
2013-12-01
The detection of the arrival time of seismic waves, or picking, is of great importance in many seismology applications. Traditionally, picking has been carried out by human operators. This process is not systematic and relies completely on the expertise and judgment of the analysts. The limitations of manual picking and the increasing amount of data stored daily by seismic networks distributed worldwide led to the development of automatic picking algorithms. The accuracy of the conventional 'short-term average over long-term average' (STA/LTA) algorithm, the recently developed 'Adaptive Multiband Picking Algorithm' (AMPA), and the proposed cross-correlation-based picking algorithm have been assessed using a huge data set composed of active seismic signals from experiments on Tenerife Island (Canary Islands, Spain). The experiment consisted of the deployment of a dense seismic network on Tenerife Island (125 seismometers in total) and the firing of air guns around the island from the Spanish oceanographic vessel Hespérides (6459 air shots in total). Thus, more than 800,000 signals were recorded and subsequently manually picked. Since the source and receiver locations are known, and considering that the ship travelled only a small distance between two consecutive shots, a picking algorithm based on cross-correlation has been proposed. The main advantage of this approach is that the algorithm does not require the setup of sophisticated parameters, in contrast to other automatic algorithms. This work was supported in part by the CEI BioTic Granada project (COD55), the Spanish MINECO project APASVO (TEC2012-31551), the Spanish MICINN project EPHESTOS (CGL2011-29499-C02-01) and the EU project MED-SUV.
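The conventional STA/LTA characteristic function mentioned above is simple to sketch (the window lengths and threshold below are illustrative, not those of the study):

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """STA/LTA characteristic function: ratio of short-term to
    long-term moving averages of signal energy, with both windows
    ending at the same sample."""
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    n = min(len(sta), len(lta))
    return sta[-n:] / (lta[-n:] + 1e-12)

rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(1000)
trace[600:] += np.sin(0.3 * np.arange(400))   # synthetic arrival at sample 600
ratio = sta_lta(trace, nsta=20, nlta=200)
pick = int(np.argmax(ratio > 5.0)) + 200 - 1  # window-end index in absolute samples
print(pick)
```

The short window reacts quickly to the arrival while the long window tracks the background noise level, so the ratio spikes just after the onset; a cross-correlation picker instead exploits waveform similarity between consecutive shots.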
ERIC Educational Resources Information Center
Petalas, Michael A.; Hastings, Richard P.; Nash, Susie; Dowey, Alan; Reilly, Deirdre
2009-01-01
Semi-structured interviews were used to explore the perceptions and experiences of eight typically developing siblings in middle childhood who had a brother with autism spectrum disorder (ASD). The interviews were analysed using interpretative phenomenological analysis (IPA). The analysis yielded five main themes: (i) siblings' perceptions of the…
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hotspot in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it on the basis of AFSA theory. Experimental results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it shows good performance and broad application prospects.
Honored Teacher Shows Commitment.
ERIC Educational Resources Information Center
Ratte, Kathy
1987-01-01
Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)
NASA Astrophysics Data System (ADS)
Bobashev, S. V.; Mende, N. P.; Popov, P. A.; Sakharov, V. A.; Berdnikov, V. A.; Viktorov, V. A.; Oseeva, S. I.; Sadchikov, G. D.
2009-04-01
In part 1 of this paper, an algorithm for numerically solving the inverse problem of the motion of a solid through the atmosphere is described; it constitutes the basis for identifying the aerodynamic characteristics of an object from trajectory data, and the corresponding identification procedure is presented. In part 2, methods for evaluating the significance of the desired parameters and the adequacy of a mathematical model of motion, approaches to the metrological certification of the experimental equipment, and results of testing the algorithm are discussed.
NASA Astrophysics Data System (ADS)
Gabrovšek, F.; Grašič, B.; Božnar, M. Z.; Mlakar, P.; Udén, M.; Davies, E.
2013-10-01
The paper presents an experiment demonstrating a novel and successful application of Delay- and Disruption-Tolerant Networking (DTN) technology for automatic data transfer in a karst cave Early Warning and Measuring System. The experiment took place inside the Postojna Cave in Slovenia, which is open to tourists. Several automatic meteorological measuring stations are set up inside the cave, as an adjunct to the surveillance infrastructure; the regular data transfer provided by the DTN technology allows the surveillance system to take on the role of an Early Warning System (EWS). One of the stations is set up alongside the railway tracks, which allow tourists to travel inside the cave by train. The experiment was carried out by placing a DTN "data mule" (a DTN-enabled computer with WiFi connection) on the train and by upgrading the meteorological station with a DTN-enabled WiFi transmission system. When the data mule is in the wireless drive-by mode, it collects measurement data from the station over a period of several seconds as the train passes the stationary equipment, and delivers the data at the final train station by the cave entrance. This paper describes an overview of the experimental equipment and organisation allowing the use of a DTN system for data collection and an EWS inside karst caves where there is regular traffic of tourists and researchers.
NASA Astrophysics Data System (ADS)
Gabrovšek, F.; Grašič, B.; Božnar, M. Z.; Mlakar, P.; Udén, M.; Davies, E.
2014-02-01
The paper presents an experiment demonstrating a novel and successful application of delay- and disruption-tolerant networking (DTN) technology for automatic data transfer in a karst cave early warning and measuring system. The experiment took place inside the Postojna Cave in Slovenia, which is open to tourists. Several automatic meteorological measuring stations are set up inside the cave, as an adjunct to the surveillance infrastructure; the regular data transfer provided by the DTN technology allows the surveillance system to take on the role of an early warning system (EWS). One of the stations is set up alongside the railway tracks, which allow tourists to travel inside the cave by train. The experiment was carried out by placing a DTN "data mule" (a DTN-enabled computer with WiFi connection) on the train and by upgrading the meteorological station with a DTN-enabled WiFi transmission system. When the data mule is in the wireless drive-by mode, it collects measurement data from the station over a period of several seconds as the train passes the stationary equipment without stopping, and delivers the data at the final train station by the cave entrance. This paper describes an overview of the experimental equipment and organization allowing the use of a DTN system for data collection and an EWS inside karst caves where there is regular traffic of tourists and researchers.
Nagata, Kosei; Yamamoto, Shinichi; Miyoshi, Kota; Sato, Masaki; Arino, Yusuke; Mikami, Yoji
2016-08-01
Eosinophilic granulomatosis with polyangiitis (EGPA, Churg-Strauss syndrome) is a rare systemic vasculitis and is difficult to diagnose. EGPA has a number of symptoms including peripheral dysesthesia caused by mononeuropathy multiplex, which is similar to radiculopathy due to lumbar disc hernia or lumbar spinal stenosis. Therefore, EGPA patients with mononeuropathy multiplex often visit orthopedic clinics, but orthopedic doctors and spine neurosurgeons have limited experience in diagnosing EGPA because of its rarity. We report a consecutive series of patients who were initially diagnosed as having lumbar disc hernia or lumbar spinal stenosis by at least 2 medical institutions from March 2006 to April 2013 but whose final diagnosis was EGPA. All patients had past histories of asthma or eosinophilic pneumonia, and four out of five had peripheral edema. Laboratory data showed abnormally increased eosinophil counts, and nerve conduction studies of all patients revealed axonal damage patterns. All patients recovered from paralysis to a functional level after high-dose steroid treatment. We shortened the duration of diagnosis from 49 days to one day by adopting a diagnostic algorithm after experiencing the first case. PMID:27549670
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
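A minimal linear (central-counter) barrier can be sketched with Python threads (illustrative only; the paper's barriers target the Flex/32 shared-memory machine, and Python's GIL hides the contention effects being measured there):

```python
import threading

class LinearBarrier:
    """Central-counter ('linear depth') barrier: each arrival
    increments a counter under a lock; the last arrival advances the
    generation and wakes everyone."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.n:
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:
                    self.cond.wait()

results = []

def worker(i, barrier):
    results.append(("before", i))
    barrier.wait()
    results.append(("after", i))

b = LinearBarrier(4)
threads = [threading.Thread(target=worker, args=(i, b)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

The generation counter prevents a fast thread from racing through the barrier twice before a slow thread has woken up; a logarithmic barrier replaces the single counter with a tree of pairwise rendezvous.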
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1989-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
Miller, L.F.
1980-11-01
A brief description of the Oak Ridge Reactor Pool Side Facility (ORR-PSF) and of the associated control system is given. The ORR-PSF capsule temperatures are controlled by a digital computer which regulates the percent power delivered to electrical heaters. The total electrical power which can be input to a particular heater is determined by the setting of an associated variac. This report concentrates on the description of the ORR-PSF irradiation experiment computer control algorithm. The algorithm is an implementation of a discrete-time, state variable, optimal control approach. The Riccati equation is solved for a discretized system model to determine the control law. Experiments performed to obtain system model parameters are described. Results of performance evaluation experiments are also presented. The control algorithm maintains both capsule temperatures within a 288°C ±10°C band as required. The pressure vessel capsule temperatures are effectively maintained within a 288°C ±5°C band.
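The discrete-time Riccati approach can be sketched as a fixed-point iteration on a toy first-order thermal model (the model coefficients and weights below are invented; the actual ORR-PSF system model is given in the report):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Fixed-point iteration of the discrete-time Riccati equation;
    returns the optimal state-feedback gain K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Invented first-order thermal model: T[k+1] = 0.9*T[k] + 0.5*u[k]
A = np.array([[0.9]])
B = np.array([[0.5]])
Q = np.array([[1.0]])   # penalty on temperature error
R = np.array([[0.1]])   # penalty on heater effort
K = dlqr_gain(A, B, Q, R)
print(K, A - B @ K)     # gain and (stable) closed-loop pole
```

Once the iteration converges, the gain `K` defines the control law applied at each sampling instant, which is what keeps the capsule temperatures inside the specified band.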
NASA Astrophysics Data System (ADS)
Finnerty, P.; Aguayo, E.; Amman, M.; Avignone, F. T., III; Barabash, A. S.; Barton, P. J.; Beene, J. R.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Chan, Y.-D.; Christofferson, C. D.; Collar, J. I.; Combs, D. C.; Cooper, R. J.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Fields, N.; Fraenkle, F. M.; Galindo-Uribarri, A.; Gehman, V. M.; Giovanetti, G. K.; Green, M. P.; Guiseppe, V. E.; Gusey, K.; Hallin, A. L.; Hazama, R.; Henning, R.; Hoppe, E. W.; Horton, M.; Howard, S.; Howe, M. A.; Johnson, R. A.; Keeter, K. J.; Kidd, M. F.; Knecht, A.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; Luke, P. N.; MacMullin, S.; Marino, M. G.; Martin, R. D.; Merriman, J. H.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; Overman, N. R.; Perumpilly, G.; Phillips, D. G., II; Poon, A. W. P.; Radford, D. C.; Rielage, K.; Robertson, R. G. H.; Ronquest, M. C.; Schubert, A. G.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Steele, D.; Strain, J.; Timkin, V.; Tornow, W.; Varner, R. L.; Vetter, K.; Vorren, K.; Wilkerson, J. F.; Yakushev, E.; Yaver, H.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Majorana Collaboration
2014-03-01
The Majorana Demonstrator will search for the neutrinoless double-beta decay (0νββ) of the 76Ge isotope with a mixed array of enriched and natural germanium detectors. The observation of this rare decay would indicate the neutrino is its own anti-particle, demonstrate that lepton number is not conserved, and provide information on the absolute mass-scale of the neutrino. The Demonstrator is being assembled at the 4850-foot level of the Sanford Underground Research Facility in Lead, South Dakota. The array will be contained in a low-background environment and surrounded by passive and active shielding. The goals for the Demonstrator are: demonstrating a background rate less than 3 counts t⁻¹ y⁻¹ in the 4 keV region of interest (ROI) surrounding the 2039 keV 76Ge endpoint energy; establishing the technology required to build a tonne-scale germanium based double-beta decay experiment; testing the recent claim of observation of 0νββ [1]; and performing a direct search for light WIMPs (3-10 GeV/c2).
Kosten, Therese A; Zhang, Xiang Yang; Kehoe, Priscilla
2004-08-18
Previously, we demonstrated that the early life stress of neonatal isolation enhances extracellular dopamine (DA) levels in ventral striatum in response to psychostimulants in infant rats. Yet, neonatal isolation does not alter baseline DA levels. DA levels are affected by serotonin (5-HT) and striatal levels of this transmitter are also enhanced by cocaine. Other early life stresses are reported to alter various 5-HT neural systems. Thus, the purpose of this study is to test whether neonatal isolation alters ventral striatal 5-HT levels at baseline or in response to cocaine. Litters were subjected to neonatal isolation (ISO; 1-h individual isolation/day on postnatal days 2-9) or to non-handled conditions, and pups were assigned to one of three cocaine dose groups (0, 2.5, or 5.0 mg/kg). On postnatal day 10, probes were implanted in the ventral striatum. Dialysate samples obtained over a 60-min baseline period and for 120 min post cocaine injections were assessed for levels of 5-HT and its metabolite, 5-HIAA. ISO decreased ventral striatal 5-HT levels at baseline and after cocaine administration but did not alter 5-HIAA levels. These data add to the literature on the immediate effects of early life stress on 5-HT systems by showing alterations in the ventral striatal system. Because serotonergic effects in this neural area are associated with reward and with emotion and affect regulation, the results of this study suggest that early life stress may be a risk factor for addiction and other psychiatric disorders. PMID:15283991
Bowman, L C; Williams, R; Sanders, M; Ringwald-Smith, K; Baker, D; Gajjar, A
1998-01-01
The Metabolic and Infusion Support Service (MISS) at St. Jude Children's Research Hospital was established in 1988 to improve the quality of nutritional support given to children undergoing therapy for cancer. This multidisciplinary group, representing each of the clinical services within the hospital, provides a range of services to all patients requiring full enteral or parenteral nutritional support. In 1991, the MISS developed an algorithm for nutritional support which emphasized a demand for a compelling rationale for choosing parenteral over enteral support in patients with functional gastrointestinal tracts. Compliance with the algorithm was monitored annually for 3 years, with full compliance defined as meeting all criteria for initiating support and selection of an appropriate type of support. Compliance rates were 93% in 1992, 95% in 1993 and 100% in 1994. The algorithm was revised in 1994 to include criteria for offering oral supplementation to patients whose body weight was at least 90% of their ideal weight and whose protein stores were considered adequate. Full support was begun if no weight gain occurred. Patients likely to tolerate and absorb food from the gastrointestinal tract were classified into groups defined by the absence of intractable vomiting, severe diarrhea, graft-vs.-host disease affecting the gut, radiation enteritis, strictures, ileus, mucositis and treatment with allogeneic bone marrow transplant. Overall, the adoption of the algorithm has increased the frequency of enteral nutritional support, particularly via gastrostomies, by at least 3-fold. Our current emphasis is to define the time points in therapy at which nutritional intervention is most warranted. PMID:9876485
A Hybrid Monkey Search Algorithm for Clustering Analysis
Chen, Xin; Zhou, Yongquan; Luo, Qifang
2014-01-01
Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods. However, it depends strongly on the initial solution and easily falls into a local optimum. In view of these disadvantages of the k-means method, this paper proposes a hybrid monkey algorithm, based on the search operator of the artificial bee colony algorithm, for clustering analysis. Experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis. PMID:24772039
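Lloyd's k-means, whose initialization sensitivity motivates the hybrid, can be sketched as follows (plain NumPy on synthetic data; the hybrid monkey/bee-colony search itself is not reproduced here):

```python
import numpy as np

def kmeans(X, k, init, iters=50):
    """Plain Lloyd's k-means. The final clustering depends on `init`,
    which is exactly the sensitivity the hybrid search targets."""
    centers = X[np.asarray(init)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # cluster near (0, 0)
               rng.normal(5.0, 0.3, (50, 2))])  # cluster near (5, 5)
labels, centers = kmeans(X, 2, init=[0, 99])    # one seed from each cluster
print(centers)
```

A population-based metaheuristic replaces the single `init` with many candidate center sets and searches over them, which is what lets it escape the local optima that trap plain k-means.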
An improved SIFT algorithm based on KFDA in image registration
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Lijuan; Huo, Jinfeng
2016-03-01
As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Fisher Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to the SIFT descriptors to obtain a feature extraction matrix, uses the new descriptors for feature matching, and finally applies RANSAC to the matches for further purification. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and small pose variations, with higher matching accuracy.
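The RANSAC purification step can be illustrated on a toy model (estimating a pure translation between matched keypoints rather than a full homography; all data below are synthetic):

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, tol=1.0, seed=0):
    """Toy RANSAC: hypothesize a pure translation from one random
    match, count matches consistent with it, keep the best set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))                 # minimal sample: 1 match
        t = p2[i] - p1[i]
        inl = np.linalg.norm(p2 - (p1 + t), axis=1) < tol
        if inl.sum() > best.sum():
            best = inl
    t = (p2[best] - p1[best]).mean(axis=0)        # refit on all inliers
    return t, best

rng = np.random.default_rng(1)
p1 = rng.uniform(0, 100, (60, 2))
p2 = p1 + np.array([3.0, -2.0])                   # true shift
p2[:15] = rng.uniform(0, 100, (15, 2))            # 15 bad matches
t, inliers = ransac_translation(p1, p2)
print(t, int(inliers.sum()))
```

The same hypothesize-and-verify loop is used with a homography model in real registration pipelines: wrong descriptor matches fail the consistency test and are discarded before the transform is refit.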
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
The satisfiability (SAT) problem is NP-complete. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm, and avoids stagnation. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
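The translation into an optimization problem is concrete: the objective is the number of falsified clauses, and a satisfying assignment is exactly a global minimum with value 0. A small sketch (exhaustive search stands in for the hybrid differential evolution):

```python
import itertools

def unsat_count(assignment, clauses):
    """Number of clauses falsified by `assignment`. A satisfying
    assignment is exactly a global minimum with objective value 0.
    Literal v means variable v is true, -v that it is false."""
    return sum(
        not any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [(1, -2), (2, 3), (-1, -3)]

# Exhaustive search stands in for the hybrid differential evolution.
best = min(itertools.product([False, True], repeat=3),
           key=lambda a: unsat_count(a, clauses))
print(best, unsat_count(best, clauses))
```

Differential evolution would explore this same landscape with a population of candidate assignments, with hill-climbing refining each one locally.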
Wrong, Terence; Baumgart, Erica
2013-01-01
The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336
NASA Astrophysics Data System (ADS)
Que, Dashun; Li, Gang; Yue, Peng
2007-12-01
An adaptive optimization watermarking algorithm based on the Genetic Algorithm (GA) and the discrete wavelet transform (DWT) is proposed in this paper. The core of this algorithm is a GA-based fitness function optimization model for digital watermarking. The embedding intensity of the watermark can be modified adaptively, and the algorithm effectively ensures the imperceptibility of the watermark while preserving its robustness. The optimization model may provide a new idea for resisting coalition attacks on digital watermarking algorithms. The paper reports many experiments, including embedding and extraction experiments, experiments on the influence of the weighting factor, embedding the same watermark into different cover images, embedding different watermarks into the same cover image, and a comparative analysis between this optimization algorithm and a human visual system (HVS) based algorithm. The simulation results and further analysis show the effectiveness and advantages of the new algorithm, which is also versatile and extensible, and has better resistance to coalition attacks. Moreover, the robustness and security of the watermarking algorithm are improved by scrambling transformation and chaotic encryption during preprocessing of the watermark.
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second-order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may degrade its overall quality with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.
Double regions growing algorithm for automated satellite image mosaicking
NASA Astrophysics Data System (ADS)
Tan, Yihua; Chen, Chen; Tian, Jinwen
2011-12-01
Feathering is the most widely used method for seamless satellite image mosaicking. A simple but effective algorithm, the double regions growing (DRG) algorithm, which utilizes the shape content of images' valid regions, is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.
A Test Scheduling Algorithm Based on Two-Stage GA
NASA Astrophysics Data System (ADS)
Yu, Y.; Peng, X. Y.; Peng, Y.
2006-10-01
In this paper, we present a new algorithm to co-optimize the core wrapper design and the SOC test scheduling. The SOC test scheduling problem is first formulated as a two-dimensional floorplan problem, and a sequence-pair architecture is used to represent it. We then propose a two-stage GA (genetic algorithm) to solve the SOC test scheduling problem. Experiments on the ITC'02 benchmark show that our algorithm can effectively reduce test time and thereby decrease SOC test cost.
Research on Laser Marking Speed Optimization by Using Genetic Algorithm
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
Laser Marking Machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%. PMID:25955831
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact
NASA Technical Reports Server (NTRS)
Conel, J. E.; Abdou, W. A.; Bruegge, C. J.; Gaitley, B. J.; Helmlinger, M. C.; Ledeboer, W. C.; Pilorz, S. H.; Martonchik, J. V.
1997-01-01
Radiative closure experiments, involving a comparison between surface-measured spectral irradiance and the surface irradiance calculated with a radiative transfer code at a desert site in Nevada under clear skies, yield the result that agreement between the two requires the presence of an absorbing aerosol component with an imaginary refractive index of 0.03 and a 50:50 mix by optical depth of small and large particles with log-normal size distributions.
Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca
2007-02-01
Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast simulated annealing inverse planning algorithm with high-activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs) and 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free and 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high-activity seeds gives a 5-year bFFS comparable with the best published series, with a low toxicity profile.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search algorithm is extended with a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, overcoming the shortcomings of any single algorithm while exploiting the advantages of each. The method is validated on the standard Fibonacci benchmark sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy values, demonstrating an effective way to predict the structure of proteins. PMID:25069136
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
NASA Technical Reports Server (NTRS)
Di Zenzo, Silvano; Degloria, Stephen D.; Bernstein, R.; Kolsky, Harwood G.
1987-01-01
The paper presents the results of a four-factor, two-level analysis-of-variance experiment designed to evaluate the combined effect of improved remote-sensor data quality and the classifier's use of context on classification accuracy. The improvement achievable by using context via relaxation techniques is significantly smaller than that provided by an increase in the radiometric resolution of the sensor from 6 to 8 bits per sample (the relative increase in radiometric resolution of TM over MSS). It is almost equal to that achievable by the increase in spectral coverage provided by TM relative to MSS.
NASA Astrophysics Data System (ADS)
Lalande, Jean-Marie; Waxler, Roger; Velea, Doru
2016-04-01
As infrasonic waves propagate over long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update atmospheric properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter search performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization, and yield estimation.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
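The selection, crossover, and mutation loop that such procedures share can be sketched in a few lines. The following is a generic illustration of a basic genetic algorithm, not the software tool described in the report; all parameter values and the one-max fitness function are illustrative choices.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament(k=3):
        # Tournament selection: best of k randomly chosen individuals.
        return max(rng.sample(pop, k), key=fitness)

    best = max(pop, key=fitness)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                     # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)      # track best-so-far
    return best

# "One-max" toy problem: fitness is the number of 1 bits; optimum is all ones.
solution = genetic_algorithm(fitness=sum)
print(sum(solution))
```

With these settings the search reliably drives the population close to the all-ones optimum on this toy problem.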
Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices
NASA Astrophysics Data System (ADS)
Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly
2010-01-01
In this paper the authors present a heuristic algorithm for precise schedule fulfilment under city traffic conditions, taking traffic lights into account. The algorithm is designed for a programmable logic controller (PLC), which is proposed to be installed in an electric vehicle to control its motion speed in response to traffic light signals. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. Experimental results show high precision of public transport schedule fulfilment using the proposed algorithm.
A Cross Unequal Clustering Routing Algorithm for Sensor Network
NASA Astrophysics Data System (ADS)
Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles
2013-08-01
In clustering routing protocols for wireless sensor networks, the cluster size is generally fixed, which can easily lead to the "hot spot" problem. Furthermore, most routing algorithms barely consider the high energy consumption caused by long-distance communication between adjacent cluster heads. Therefore, this paper proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. To remedy the defects of the EEUC algorithm, the calculation of the competition radius takes both a node's position and its remaining energy into account, making the load of the cluster heads more balanced. At the same time, nodes adjacent to a cluster are used to relay data, reducing the energy loss of the cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy loss of cluster heads, balance the energy consumption among all nodes in the network, and improve the network lifetime.
A multistrategy optimization improved artificial bee colony algorithm.
Liu, Wen
2014-01-01
To address the shortcomings of premature convergence and the slow convergence rate of the artificial bee colony algorithm, an improved algorithm is proposed. Chaotic reverse-learning strategies are used to initialize the swarm, improving the global search ability of the algorithm and preserving its diversity; the similarity between individuals of the population is used to characterize population diversity, and a population diversity measure serves as an indicator to dynamically and adaptively adjust the nectar positions, effectively avoiding premature and local convergence; and a dual-population search mechanism is introduced in the search stage, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions and comparisons with other algorithms show that the improved algorithm converges faster and escapes local optima more readily. PMID:24982924
GASAT: a genetic local search algorithm for the satisfiability problem.
Lardeux, Frédéric; Saubion, Frédéric; Hao, Jin-Kao
2006-01-01
This paper presents GASAT, a hybrid algorithm for the satisfiability problem (SAT). The main feature of GASAT is that it includes a recombination stage based on a specific crossover and a tabu search stage. We have conducted experiments to evaluate the different components of GASAT and to compare its overall performance with state-of-the-art SAT algorithms. These experiments show that GASAT provides very competitive results. PMID:16831107
Hey Teacher, Your Personality's Showing!
ERIC Educational Resources Information Center
Paulsen, James R.
1977-01-01
A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline 1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications 2. Data products and latencies 3. Algorithm highlights 4. SMAP Algorithm Testbed 5. SMAP Working Groups and community engagement
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
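Of the ensemble methods listed, majority voting is the simplest to illustrate. The sketch below is a minimal, hypothetical rendition: in the paper the combined agents are full RL learners (Q-learning, Sarsa, etc.), whereas here the policies are stand-in functions mapping a state to a preferred action.

```python
from collections import Counter

def majority_vote(policies, state):
    """Combine the greedy action choices of several RL agents by majority
    voting.  Each policy maps a state to its preferred action; ties are
    broken by whichever action first reaches the top count."""
    votes = Counter(policy(state) for policy in policies)
    return votes.most_common(1)[0][0]

# Three toy "learned" policies over the action set {0, 1}:
policies = [lambda s: 0, lambda s: 1, lambda s: 1]
print(majority_vote(policies, state=None))  # two of three prefer action 1
```

Rank voting and the Boltzmann variants differ only in how the per-agent preferences are weighted before being summed.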
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
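The authors' own 1/2-approximation is not reproduced here, but the flavor of such algorithms can be seen in the classical greedy variant: scan edges in order of decreasing weight and keep an edge whenever both endpoints are still unmatched, which guarantees at least half the optimal weight. A minimal sketch:

```python
def greedy_matching(edges):
    """Classical 1/2-approximation for maximum-weight matching: scan edges
    in order of decreasing weight and keep an edge whenever both of its
    endpoints are still unmatched."""
    matched = set()
    matching, weight = [], 0.0
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
            weight += w
    return matching, weight

# A 4-cycle with weights; the greedy scan takes (1,2) first, then (3,0).
edges = [(0, 1, 4.0), (1, 2, 5.0), (2, 3, 3.0), (3, 0, 1.0)]
m, w = greedy_matching(edges)
print(m, w)  # → [(1, 2), (3, 0)] 6.0
```

The sort makes this O(E log E) and inherently sequential; the scalable algorithms in the paper avoid the global sort, which is precisely what makes them parallelizable.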
NASA Astrophysics Data System (ADS)
García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón
2016-05-01
Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
Fast Optimal Load Balancing Algorithms for 1D Partitioning
Pinar, Ali; Aykanat, Cevdet
2002-12-09
One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the ''chains-on-chains partitioning'' problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the ''hope'' of good decompositions and the ''myth'' that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms that are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
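The exact algorithms the authors advocate are considerably more refined, but the core ingredient, a greedy feasibility probe combined with a search over candidate bottleneck values, can be sketched as follows for integer task loads. This is an illustrative simplification, not the paper's pseudocode.

```python
def probe(weights, k, bottleneck):
    """Greedy feasibility test: can the chain be cut into at most k
    consecutive parts, each with total load <= bottleneck?"""
    parts, load = 1, 0
    for w in weights:
        if w > bottleneck:
            return False          # a single task already exceeds the bound
        if load + w > bottleneck:
            parts, load = parts + 1, 0   # start a new part
        load += w
    return parts <= k

def optimal_bottleneck(weights, k):
    """Exact chains-on-chains partitioning for integer loads: binary
    search on the bottleneck value, using probe() as the oracle."""
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(weights, k, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(optimal_bottleneck([3, 1, 4, 1, 5, 9, 2, 6], k=3))  # → 14
```

Feasibility is monotone in the bottleneck value, so the binary search returns the exact minimum achievable maximum load; the paper's algorithms refine this idea to reduce the number and cost of probes.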
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-04-01
In the field of additive manufacturing, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated through a series of experiments. The experimental results show that the thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus layer count likewise shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
Television Quiz Show Simulation
ERIC Educational Resources Information Center
Hill, Jonnie Lynn
2007-01-01
This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.
Dual-Byte-Marker Algorithm for Detecting JFIF Header
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat
The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken to analyze the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Experiments on images from hard disks, physical memory, and the data set from the DFRWS 2006 Challenge showed that the dual-byte-marker algorithm gives better performance, with better execution time for header detection, than the single-byte-marker.
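The abstract does not spell out the exact marker bytes the algorithm keys on, but a JFIF header conventionally begins with the SOI marker 0xFFD8 immediately followed by an APP0 segment (0xFFE0) whose identifier field reads "JFIF\0". A hypothetical scanner along those lines, not the paper's implementation:

```python
SOI  = b"\xff\xd8"          # JPEG start-of-image marker
APP0 = b"\xff\xe0"          # application segment that carries "JFIF\x00"

def find_jfif_headers(buf):
    """Scan a raw byte buffer (disk image, memory dump) and return the
    offsets where a JFIF header starts: SOI immediately followed by an
    APP0 segment whose identifier field reads 'JFIF\\x00'."""
    offsets = []
    pos = buf.find(SOI)
    while pos != -1:
        seg = pos + 2                       # APP0 marker follows SOI
        # Skip the 2-byte segment length, then check the identifier.
        if buf[seg:seg + 2] == APP0 and buf[seg + 4:seg + 9] == b"JFIF\x00":
            offsets.append(pos)
        pos = buf.find(SOI, pos + 1)
    return offsets

# Synthetic buffer: 7 bytes of noise, then a minimal JFIF header start.
data = b"garbage" + SOI + APP0 + b"\x00\x10JFIF\x00" + b"\x01" * 20
print(find_jfif_headers(data))  # → [7]
```

Checking two consecutive marker bytes (and the identifier) rather than a single byte is what cuts down false positives when carving files from raw media.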
ERIC Educational Resources Information Center
Watters, Audrey
2012-01-01
As changing student demographics make it harder for today's learners to earn a four-year degree, educators are experimenting with smaller credentialing steps, such as digital badges. Mark Milliron, chancellor of Western Governors University Texas, advocates the creation of a "family of credentials," ranging from digital badges to certifications,…
NASA Technical Reports Server (NTRS)
Petersen, Walter A.; Jensen, Michael P.
2011-01-01
The joint NASA Global Precipitation Measurement (GPM) -- DOE Atmospheric Radiation Measurement (ARM) Midlatitude Continental Convective Clouds Experiment (MC3E) was conducted from April 22-June 6, 2011, centered on the DOE-ARM Southern Great Plains Central Facility site in northern Oklahoma. GPM field campaign objectives focused on the collection of airborne and ground-based measurements of warm-season continental precipitation processes to support refinement of GPM retrieval algorithm physics over land, and to improve the fidelity of coupled cloud resolving and land-surface satellite simulator models. DOE ARM objectives were synergistically focused on relating observations of cloud microphysics and the surrounding environment to feedbacks on convective system dynamics, an effort driven by the need to better represent those interactions in numerical modeling frameworks. More specific topics addressed by MC3E include ice processes and ice characteristics as coupled to precipitation at the surface and radiometer signals measured in space, the correlation properties of rainfall and drop size distributions and impacts on dual-frequency radar retrieval algorithms, the transition of cloud water to rain water (e.g., autoconversion processes) and the vertical distribution of cloud water in precipitating clouds, and vertical draft structure statistics in cumulus convection. The MC3E observational strategy relied on NASA ER-2 high-altitude airborne multi-frequency radar (HIWRAP Ka-Ku band) and radiometer (AMPR, CoSMIR; 10-183 GHz) sampling (a GPM "proxy") over an atmospheric column being simultaneously profiled in situ by the University of North Dakota Citation microphysics aircraft, an array of ground-based multi-frequency scanning polarimetric radars (DOE Ka-W, X and C-band; NASA D3R Ka-Ku and NPOL S-bands) and wind-profilers (S/UHF bands), supported by a dense network of over 20 disdrometers and rain gauges, all nested in the coverage of a six-station mesoscale rawinsonde
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another. PMID:27595019
ERIC Educational Resources Information Center
Anderton, Alice
The Intertribal Wordpath Society is a nonprofit educational corporation formed to promote the teaching, status, awareness, and use of Oklahoma Indian languages. The Society produces "Wordpath," a weekly 30-minute public access television show about Oklahoma Indian languages and the people who are teaching and preserving them. The show aims to…
Memetic algorithm for community detection in networks.
Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng
2011-11-01
Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method. PMID:22181467
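Meme-Net optimizes modularity density rather than plain modularity, but the standard modularity Q that the passage contrasts it with is easy to state and compute: Q sums, over communities, the intra-community edge fraction minus the fraction expected at random. A small illustrative sketch (the two-triangle graph is a made-up example):

```python
def modularity(adj, communities):
    """Newman modularity Q = sum_c (e_c / m - (d_c / 2m)^2), where e_c is
    the number of intra-community edges, d_c the total degree of
    community c, and m the number of edges in the graph."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    q = 0.0
    for comm in communities:
        nodes = set(comm)
        # Each intra-community edge is seen from both endpoints, hence /2.
        e_c = sum(1 for u in nodes for v in adj[u] if v in nodes) / 2
        d_c = sum(len(adj[u]) for u in nodes)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge (2-3):
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(modularity(adj, [[0, 1, 2], [3, 4, 5]]))
```

The resolution limit arises because the null-model term (d_c/2m)^2 shrinks as the network grows, which can make merging small communities look favorable; modularity density's tunable parameter is designed to counter exactly this.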
ERIC Educational Resources Information Center
Kirkpatrick, Larry D.; Rugheimer, Mac
1979-01-01
Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
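The basic single-channel veto algorithm samples a scale distributed as f(t)·exp(−∫₀ᵗ f dt′) when only an overestimate g ≥ f has an invertible integral: propose the next scale from g, accept with probability f/g, and keep evolving from the rejected scale otherwise. A toy sketch with f(t) = 2/(1+t) and a constant overestimate g = 2 (both chosen purely for illustration, not from the paper):

```python
import random

def veto_sample(rng, c=2.0):
    """Sample t with density f(t)*exp(-int_0^t f), f(t) = 2/(1+t),
    via the veto algorithm with constant overestimate g(t) = c >= f(t)."""
    t = 0.0
    while True:
        t += rng.expovariate(c)               # propose next scale from g
        if rng.random() < (2.0 / (1.0 + t)) / c:
            return t                          # accept with probability f(t)/g(t)
        # rejected: veto this emission but keep evolving from t

rng = random.Random(1)
samples = [veto_sample(rng) for _ in range(100000)]
# Analytically P(t <= 1) = 1 - exp(-2 ln 2) = 0.75 for this f
frac = sum(s <= 1.0 for s in samples) / len(samples)
```

The competition variants analyzed in the paper run one such evolution per emission channel and keep the largest proposed scale; the formal result is that several seemingly different ways of organizing that competition produce the same distribution.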
Effect of qubit losses on Grover's quantum search algorithm
NASA Astrophysics Data System (ADS)
Rao, D. D. Bhaktavatsala; Mølmer, Klaus
2012-10-01
We investigate the performance of Grover's quantum search algorithm on a register that is subject to a loss of particles that carry qubit information. Under the assumption that the basic steps of the algorithm are applied correctly on the correspondingly shrinking register, we show that the algorithm converges to mixed states with 50% overlap with the target state in the bit positions still present. As an alternative to error correction, we present a procedure that combines the outcome of different trials of the algorithm to determine the solution to the full search problem. The procedure may be relevant for experiments where the algorithm is adapted as the loss of particles is registered and for experiments with Rydberg blockade interactions among neutral atoms, where monitoring of atom losses is not even necessary.
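For a lossless register, each Grover iteration is just an oracle phase flip on the target followed by inversion about the mean, which is easy to simulate classically with a statevector. A sketch on 3 qubits (the target index and iteration count are chosen for the example):

```python
import math

def grover_amplitudes(n_qubits, target, iterations):
    """Simulate Grover's search: oracle phase flip + inversion about the mean."""
    n = 2 ** n_qubits
    amp = [1.0 / math.sqrt(n)] * n           # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]           # oracle marks the target
        mean = sum(amp) / n
        amp = [2.0 * mean - a for a in amp]  # diffusion (inversion about mean)
    return amp

# ~ pi/4 * sqrt(8) ≈ 2 iterations are optimal for 3 qubits
amp = grover_amplitudes(3, target=5, iterations=2)
success = amp[5] ** 2   # ≈ 0.945
```

The paper's 50%-overlap result concerns what survives of this amplification when qubits are lost mid-run; the sketch above is only the ideal lossless baseline against which that degradation is measured.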
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-01-01
The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to gravity matching navigation. However, the search mechanisms of existing basic ABC algorithms cannot meet the high-accuracy requirements of gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism based on the improved ABC algorithm using external speed information is presented. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate is high enough to obtain a precise matching position. PMID:25046019
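The modified Hausdorff distance (Dubuisson-Jain) used to screen candidate matches replaces the maximum in the directed distance with a mean, which makes it far less sensitive to single outlier points than the classical Hausdorff distance. A minimal sketch for 2-D point sets (the example point sets are invented):

```python
import math

def directed_mhd(A, B):
    """Mean over a in A of the distance from a to its nearest point in B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def modified_hausdorff(A, B):
    """Modified Hausdorff distance: max of the two directed mean distances."""
    return max(directed_mhd(A, B), directed_mhd(B, A))

track = [(0.0, 0.0), (1.0, 0.0)]
candidate = [(0.0, 0.0), (0.0, 2.0)]
print(modified_hausdorff(track, candidate))  # 1.0
```

In a matching-navigation setting, the measured gravity-anomaly track would play the role of one point set and each candidate map trajectory the other; the candidate with the smallest distance is retained.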
ERIC Educational Resources Information Center
Eccleston, Jeff
2007-01-01
Big things come in small packages. This saying came to the mind of the author after he created a simple math review activity for his fourth grade students. Though simple, it has proven to be extremely advantageous in reinforcing math concepts. He uses this activity, which he calls "Show What You Know," often. This activity provides the perfect…
ERIC Educational Resources Information Center
Mathieu, Aaron
2000-01-01
Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)
ERIC Educational Resources Information Center
Moore, Mitzi Ruth
1992-01-01
Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)
ERIC Educational Resources Information Center
Frasier, Debra
2008-01-01
In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…
ERIC Educational Resources Information Center
Cech, Scott J.
2008-01-01
Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. The Rhode Island's high school graduating class became…
An adaptive algorithm for low contrast infrared image enhancement
NASA Astrophysics Data System (ADS)
Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi
2013-08-01
An adaptive enhancement algorithm for low-contrast infrared images is proposed in this paper, to address the problem that conventional enhancement algorithms cannot effectively identify regions of interest when the dynamic range of an image is large. Starting from the characteristics of human visual perception, the algorithm combines global adaptive enhancement with local feature boosting, so that the contrast of the image is raised and its texture also becomes more distinct. Firstly, the global dynamic range is adjusted: a mapping is established between the dynamic range of the original image and the display gray scale, raising the gray level of bright objects while reducing that of dark targets, which improves the overall contrast. Secondly, a filtering operation over each pixel and its neighborhood extracts local texture information, and the brightness of the pixel is adjusted accordingly to enhance local contrast. This overcomes the tendency of traditional edge-detection-based enhancement to blur outlines and preserves the distinctness of texture detail. Lastly, the globally adjusted and locally adjusted images are normalized and combined to ensure a smooth transition of image details. Extensive experiments compare the proposed algorithm with conventional enhancement algorithms on two groups of blurred infrared images. They show that histogram equalization boosts the contrast ratio but leaves details unclear, and that the Retinex algorithm makes details distinguishable, while the proposed adaptive enhancement algorithm yields clearly visible details and markedly better contrast than Retinex.
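The two stages described above can be illustrated with a generic sketch: a global linear stretch of the dynamic range followed by a local 3×3 mean-based detail boost. This is a simplified stand-in for the paper's algorithm, not its actual implementation; the gain alpha and clipping range are invented:

```python
def enhance(img, alpha=0.5):
    """Global stretch to [0, 255] plus a local 3x3 mean-based contrast boost."""
    lo = min(min(r) for r in img)
    hi = max(max(r) for r in img)
    span = (hi - lo) or 1
    g = [[(v - lo) * 255.0 / span for v in row] for row in img]  # global stage
    h, w = len(g), len(g[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            nb = [g[x][y] for x in range(max(0, i - 1), min(h, i + 2))
                          for y in range(max(0, j - 1), min(w, j + 2))]
            local_mean = sum(nb) / len(nb)
            v = g[i][j] + alpha * (g[i][j] - local_mean)  # local detail boost
            row.append(min(255.0, max(0.0, v)))
        out.append(row)
    return out

flat = [[10, 10, 10], [10, 20, 10], [10, 10, 10]]  # tiny low-contrast "image"
enhanced = enhance(flat)
```

The local term raises pixels above their neighborhood mean and lowers pixels below it, which is what sharpens texture relative to the purely global stretch.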
Algorithm for dynamic Speckle pattern processing
NASA Astrophysics Data System (ADS)
Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.
2016-07-01
In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm processes the speckle patterns directly, without other kinds of post-processing (such as THSP and the co-occurrence matrix), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
Fast ordering algorithm for exact histogram specification.
Nikolova, Mila; Steidl, Gabriele
2014-12-01
This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on the ordering procedure based on the specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than the alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms its main competitors by far. PMID:25347881
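Once a strict total order on the pixels exists, exact histogram specification itself is simple: walk the pixels in that order and hand out gray levels according to the target histogram. The sketch below uses a simplified tie-breaker (gray value, then 3×3 neighborhood mean, then position) in place of the paper's variational ordering, purely for illustration:

```python
def exact_hist_spec(img, target_hist):
    """Reassign gray levels so the output histogram equals target_hist exactly.
    Pixels are strictly ordered by (value, 3x3 neighborhood mean, position)."""
    h, w = len(img), len(img[0])

    def local_mean(i, j):
        nb = [img[x][y] for x in range(max(0, i - 1), min(h, i + 2))
                        for y in range(max(0, j - 1), min(w, j + 2))]
        return sum(nb) / len(nb)

    order = sorted((img[i][j], local_mean(i, j), i, j)
                   for i in range(h) for j in range(w))
    levels = [g for g, count in enumerate(target_hist) for _ in range(count)]
    out = [[0] * w for _ in range(h)]
    for (_, _, i, j), g in zip(order, levels):
        out[i][j] = g                 # k-th pixel in the order gets k-th level
    return out

img = [[0, 5], [5, 9]]
uniform = [1, 1, 1, 1] + [0] * 6      # target: one pixel at each of levels 0..3
print(exact_hist_spec(img, uniform))  # [[0, 1], [2, 3]]
```

The quality of the result depends entirely on how faithful the ordering is, which is why the paper invests in a fast minimizer for the variational ordering rather than a heuristic tie-break like the one used here.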
An Artificial Immune Univariate Marginal Distribution Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping
Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of general artificial immune algorithm. Experimental results on deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.
Adaptive path planning: Algorithm and analysis
Chen, Pang C.
1995-03-01
To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved both cost-wise and capability-wise by a faster but less effective planner coupled with experience. We analyze the algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.
Boden, Timothy W
2016-01-01
Many medical practices have cut back on education and staff development expenses, especially those costs associated with conventions and conferences. But there are hard-to-value returns on your investment in these live events--beyond the obvious benefits of acquired knowledge and skills. Major vendors still exhibit their services and wares at many events, and the exhibit hall is a treasure-house of information and resources for the savvy physician or administrator. Make and stick to a purposeful plan to exploit the trade show. You can compare products, gain new insights and ideas, and even negotiate better deals with representatives anxious to realize returns on their exhibition investments. PMID:27249887
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma
2016-04-01
In order to improve the reconstruction accuracy and reduce the workload, the compressive sensing algorithm based on iterative thresholding is combined with a method of adaptive selection of the training samples, and a new adaptive compressive sensing algorithm is put forward. Three kinds of training samples are used to reconstruct the spectral reflectance of the testing sample with both the compressive sensing algorithm and the adaptive compressive sensing algorithm, and the color differences and errors are compared. The experimental results show that the spectral reconstruction precision of the adaptive compressive sensing algorithm is better than that of the plain compressive sensing algorithm.
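The abstract does not spell out its iterative-threshold recovery step; a generic stand-in is iterative soft-thresholding (ISTA) for min ½‖Ax − y‖² + λ‖x‖₁, sketched here on a tiny made-up system (the matrix, λ, and step size are illustrative assumptions, not the paper's setup):

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(A, y, lam, step, iters):
    """Iterative soft-thresholding for  min 0.5*||Ax - y||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [soft(x[j] - step * g[j], lam * step) for j in range(n)]
    return x

x = ista(A=[[2.0, 0.0], [0.0, 1.0]], y=[4.0, 0.2], lam=0.5, step=0.2, iters=100)
# converges to the sparse minimizer x = [1.875, 0.0]
```

The thresholding step is what enforces sparsity: the second coordinate, whose data pull is smaller than λ·step, is driven exactly to zero rather than merely shrunk.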
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
Conventional adaptive optics systems that compensate atmospheric turbulence in free-space optical (FSO) communication rely on wavefront measurements from a Shack-Hartmann sensor, which become unreliable under strong scintillation. Since wavefront sensor-less adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO links, and discuss the algorithm principles, basic flows, and simulation results. The numerical simulations and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration while considerably improving the convergence rate and the coupling efficiency of the receiver.
Walusinski, Olivier
2014-01-01
In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
NASA Astrophysics Data System (ADS)
2007-01-01
With its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass loss in the case of rapidly rotating stars. (ESO PR Photo 06c/07: RS Ophiuchi in Outburst.) Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a star known as a Wolf-Rayet star. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights for the modeling of Wolf-Rayet stars, and also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide. The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension.
AMBER has detected a high velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave
A new image encryption algorithm based on logistic chaotic map with varying parameter.
Liu, Lingfeng; Miao, Suoxia
2016-01-01
In this paper, we propose a new image encryption algorithm based on a parameter-varied logistic chaotic map and a dynamical algorithm. The parameter-varied logistic map cures the weaknesses of the standard logistic map and resists phase-space reconstruction attacks. We use the parameter-varied logistic map to shuffle the plain image, and then use a dynamical algorithm to encrypt the image. We carry out several experiments, including histogram analysis, information entropy analysis, sensitivity analysis, key space analysis, correlation analysis, and computational complexity, to evaluate its performance. The experimental results show that the algorithm has high security and is competitive for image encryption. PMID:27066326
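A parameter-varied logistic map keeps the iterate x_{n+1} = r_n·x_n·(1 − x_n) but cycles r_n through several chaotic-regime values, breaking the fixed-parameter structure that phase-space reconstruction attacks exploit. A toy stream-cipher sketch of the idea (the key, parameter schedule, and byte quantization are invented for illustration; the paper's actual scheme additionally shuffles pixel positions before encrypting):

```python
def keystream(x0, params, n):
    """Byte keystream from a logistic map whose parameter varies each step."""
    x, out = x0, []
    for i in range(n):
        r = params[i % len(params)]      # parameter-varied: r changes each step
        x = r * x * (1.0 - x)            # logistic map iterate, stays in (0, 1)
        out.append(int(x * 256) % 256)   # quantize the chaotic state to a byte
    return out

def crypt(data, key=0.612, params=(3.99, 3.97, 3.95)):
    """XOR stream cipher; the identical call both encrypts and decrypts."""
    ks = keystream(key, params, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

cipher = crypt(b"attack at dawn")
assert crypt(cipher) == b"attack at dawn"   # XOR round trip with the same key
```

The key is the pair (x0, parameter schedule); a tiny change to either produces an entirely different keystream, which is the sensitivity property the paper's experiments measure.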
Multipartite entanglement in quantum algorithms
Bruss, D.; Macchiavello, C.
2011-05-15
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
A fast algorithm for attribute reduction based on Trie tree and rough set theory
NASA Astrophysics Data System (ADS)
Hu, Feng; Wang, Xiao-yan; Luo, Chuan-jiang
2013-03-01
Attribute reduction is an important issue in rough set theory. Many efficient algorithms have been proposed; however, few of them can process huge data sets quickly. In this paper, algorithms for computing the positive region of a decision table are proposed by combining them with the Trie tree. After that, a new Trie-tree-based algorithm for attribute reduction is developed, which can quickly process the attribute reduction of large data sets. Experimental results show its high efficiency.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Rate control algorithm based on frame complexity estimation for MVC
NASA Astrophysics Data System (ADS)
Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang
2010-07-01
Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC by improving the quadratic rate-distortion (R-D) model, which reasonably allocates bit rate among views based on correlation analysis. The proposed algorithm controls bits at four levels for greater accuracy; the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm efficiently implements bit allocation and rate control according to the coding parameters.
An algorithm for prescribed mean curvature using isogeometric methods
NASA Astrophysics Data System (ADS)
Chicco-Ruiz, Aníbal; Morin, Pedro; Pauletti, M. Sebastian
2016-07-01
We present a Newton type algorithm to find parametric surfaces of prescribed mean curvature with a fixed given boundary. In particular, it applies to the problem of minimal surfaces. The algorithm relies on some global regularity of the spaces where it is posed, which is naturally fitted for discretization with isogeometric type of spaces. We introduce a discretization of the continuous algorithm and present a simple implementation using the recently released isogeometric software library igatools. Finally, we show several numerical experiments which highlight the convergence properties of the scheme.
An ant colony algorithm on continuous searching space
NASA Astrophysics Data System (ADS)
Xie, Jing; Cai, Chao
2015-12-01
The ant colony algorithm is heuristic, bionic, and parallel. Because of its positive feedback, its parallelism, and the ease with which it cooperates with other methods, it is widely adopted for planning on discrete spaces, but it remains poorly suited to planning on continuous spaces. After a basic introduction to the standard ant colony algorithm, we propose an ant colony algorithm for continuous spaces. Our method relies on three ideas. First, we search for the next nodes of the route with a fixed step size, to guarantee the continuity of the solution. Second, when storing pheromone, the algorithm discretizes the pheromone field, clusters states, and sums the pheromone values of these states. Third, when updating pheromone, solutions that score well under relative score functions deposit more pheromone, so that the algorithm can find a sub-optimal solution in a shorter time. The simulated experiment shows that our ant colony algorithm can find sub-optimal solutions in relatively short time.
Adaptive image contrast enhancement algorithm for point-based rendering
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Liu, Xiaoping P.
2015-03-01
Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
Extended Relief-F Algorithm for Nominal Attribute Estimation in Small-Document Classification
NASA Astrophysics Data System (ADS)
Park, Heum; Kwon, Hyuk-Chul
This paper presents an extended Relief-F algorithm for nominal attribute estimation, for application to small-document classification. Relief algorithms are general and successful instance-based feature-filtering algorithms for data classification and regression. Many improved Relief algorithms have been introduced as solutions to problems of redundancy and irrelevant noisy features and to the limitations of the algorithms for multiclass datasets. However, these algorithms have only rarely been applied to text classification, because the numerous features in multiclass datasets lead to great time complexity. Therefore, in considering their application to text feature filtering and classification, we presented an extended Relief-F algorithm for numerical attribute estimation (E-Relief-F) in 2007. However, we found limitations and some problems with it. Therefore, in this paper, we introduce additional problems with Relief algorithms for text feature filtering, including the negative influence of computation similarities and weights caused by a small number of features in an instance, the absence of nearest hits and misses for some instances, and great time complexity. We then suggest a new extended Relief-F algorithm for nominal attribute estimation (E-Relief-Fd) to solve these problems, and we apply it to small text-document classification. We used the algorithm in experiments to estimate feature quality for various datasets, its application to classification, and its performance in comparison with existing Relief algorithms. The experimental results show that the new E-Relief-Fd algorithm offers better performance than previous Relief algorithms, including E-Relief-F.
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers in each frequency band, calculated with multiple processes, are then combined into a whole frequency response. Numerical experiments show that the parallel algorithm improves the efficiency of acoustic simulation for complex scenes.
Blind Alley Aware ACO Routing Algorithm
NASA Astrophysics Data System (ADS)
Yoshikawa, Masaya; Otani, Kazuo
2010-10-01
The routing problem arises in various engineering fields and has been widely studied. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys, so it can find the shortest route even when the map data contains blind alleys. Experiments using map data demonstrate its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
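The Dijkstra baseline used for comparison fits in a few lines with a priority queue; a minimal sketch on an invented weighted graph:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances on a non-negative weighted graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a shorter route to v
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```

Dijkstra is exact and never trapped by blind alleys, which is why it makes a natural reference point for the stochastic ACO variant; the ACO side's appeal lies elsewhere, e.g. in adaptivity when edge costs change.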
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape, while Perlin noise and Simplex noise are used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and the regional impact of global warming and rising sea levels on low-lying areas. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (as in Google Earth) and to simulate planetary landscapes; hence they can assist science education. The algorithms used to generate these natural phenomena give scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only produce the terrains themselves but are also capable of simulating weather patterns.
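The diamond-square algorithm mentioned above fills a (2ⁿ+1)×(2ⁿ+1) heightmap by alternately averaging square corners (diamond step) and diamond corners (square step), halving both the lattice spacing and the random perturbation on each pass. A compact sketch (the seed values and roughness factor are arbitrary choices):

```python
import random

def diamond_square(n, roughness=0.5, seed=42):
    """Generate a (2^n + 1)-square heightmap with the diamond-square algorithm."""
    size = 2 ** n + 1
    rng = random.Random(seed)
    g = [[0.0] * size for _ in range(size)]
    for r in (0, size - 1):                      # seed the four corners
        for c in (0, size - 1):
            g[r][c] = rng.uniform(-1.0, 1.0)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        for r in range(half, size, step):        # diamond: centers of squares
            for c in range(half, size, step):
                avg = (g[r - half][c - half] + g[r - half][c + half] +
                       g[r + half][c - half] + g[r + half][c + half]) / 4.0
                g[r][c] = avg + rng.uniform(-scale, scale)
        for r in range(0, size, half):           # square: edge midpoints
            for c in range((r + half) % step, size, step):
                total, cnt = 0.0, 0
                for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    if 0 <= r + dr < size and 0 <= c + dc < size:
                        total += g[r + dr][c + dc]
                        cnt += 1
                g[r][c] = total / cnt + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return g

terrain = diamond_square(4)   # 17 x 17 heightmap
```

Because the perturbation shrinks geometrically with the lattice spacing, large features are laid down first and fine detail is added on top, which is what gives the output its fractal, terrain-like statistics; the seeded corner values are the knobs for controlling the landscape's overall shape.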
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks. PMID:24852272
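The viscous friction-like alignment term the authors describe pulls each agent's velocity toward those of its neighbours, v_i ← v_i + C·Δt·Σ_j (v_j − v_i). A minimal 1-D sketch (the gain, topology, and initial velocities are invented) showing the velocities relaxing to a common value:

```python
def align_step(vels, neighbors, gain=0.1):
    """One damping step: each velocity moves toward its neighbours' velocities."""
    return [v + gain * sum(vels[j] - v for j in neighbors[i])
            for i, v in enumerate(vels)]

# three fully connected agents with different initial 1-D velocities
vels = [1.0, 0.0, -1.0]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(200):
    vels = align_step(vels, neighbors)
# velocities converge to the common, momentum-conserving mean (here 0)
```

Because the pairwise terms are antisymmetric, the total momentum is conserved while velocity differences decay; in the delayed, noisy real system studied in the paper, this damping is what keeps the flock from developing oscillatory instabilities.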
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's excellent characteristics for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, using the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step in computer vision and one of its most active research fields. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated, so a high-accuracy camera calibration algorithm based on images of planar or three-dimensional targets is proposed. Using the algorithm, the internal parameters of the camera are calibrated from the existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, Tsai's general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provides a reference for precise measurement of relative position and attitude.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
NASA Astrophysics Data System (ADS)
Sellitto, P.; Del Frate, F.
2014-07-01
Atmospheric temperature profiles are inferred from passive satellite instruments using thermal infrared or microwave observations. Here we investigate the feasibility of retrieving height-resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320-325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature profile retrieval from space-borne instruments operating in the ultraviolet.
A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.
Ali, Ahmed F; Tawhid, Mohamed A
2016-01-01
The cuckoo search algorithm is a promising metaheuristic population-based method that has been applied to many real-life problems. In this paper, we propose a new algorithm that combines cuckoo search with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying standard cuckoo search for a number of iterations; the best solution obtained is then passed to the Nelder-Mead algorithm as an intensification process, to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of cuckoo search with the deep exploitation of the Nelder-Mead method. We test HCSNM on seven integer programming problems and ten minimax problems, comparing against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time. PMID:27217988
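The two-phase structure (global cuckoo search exploration, then Nelder-Mead intensification from the best nest) can be sketched as below. This is a minimal illustration, not HCSNM itself: the search bounds, population size, Lévy-step scaling and SciPy's Nelder-Mead implementation are all assumptions for the sketch.

```python
import numpy as np
from math import gamma, pi, sin
from scipy.optimize import minimize  # Nelder-Mead intensification (assumed available)

def levy_steps(dim, rng, beta=1.5):
    # Mantegna's algorithm for Levy-flight step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0.0, sigma, dim) / np.abs(rng.normal(0.0, 1.0, dim)) ** (1 / beta)

def hcsnm_sketch(f, dim, n_nests=15, cs_iters=200, pa=0.25, lo=-5.0, hi=5.0, seed=0):
    """Exploration by cuckoo search, then Nelder-Mead refinement of the best nest."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(cs_iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):                      # Levy-flight moves, greedy accept
            cand = nests[i] + 0.01 * levy_steps(dim, rng) * (nests[i] - best)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        worst = fit.argsort()[-max(1, int(pa * n_nests)):]   # abandon worst nests
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fit[worst] = np.array([f(x) for x in nests[worst]])
    res = minimize(f, nests[fit.argmin()], method='Nelder-Mead')  # intensification
    return res.x, res.fun
```

The handoff in the last two lines is the essence of the hybrid: the population method supplies a good starting simplex region, and the simplex method removes the slow final convergence.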
Kernel MAD Algorithm for Relative Radiometric Normalization
NASA Astrophysics Data System (ADS)
Bai, Yang; Tang, Ping; Hu, Changmiao
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. The algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. We therefore first introduce a new version of MAD based on the established method known as kernel canonical correlation analysis (KCCA), which effectively extracts non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China, and analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Efficient algorithms for survivable virtual network embedding
NASA Astrophysics Data System (ADS)
Sun, Gang; Yu, Hongfang; Li, Lemin; Anand, Vishal; di, Hao; Gao, Xiujiao
2010-12-01
Network virtualization serves as an effective method for providing a flexible and highly adaptable shared substrate network that satisfies diverse demands. But efficiently embedding a Virtual Network (VN) onto the substrate network is intractable, since the problem is NP-hard, and guaranteeing survivability of the embedding efficiently is another great challenge. In this paper, we investigate the Survivable Virtual Network Embedding (SVNE) problem and propose two efficient algorithms for solving it. First, we formulate a minimum-cost model of the survivable network virtualization problem as a Mixed Integer Linear Program (MILP). We then devise two efficient relaxation-based algorithms for solving the SVNE problem: (1) a Lagrangian relaxation based algorithm, called LR-SVNE in this paper, and (2) a decomposition based algorithm, called DSVNE. The results of simulation experiments show that both algorithms have good time efficiency, but LR-SVNE can guarantee that the solution converges to the optimal one on small-scale substrate networks.
General lossless planar coupler design algorithms.
Vance, Rod
2015-08-01
This paper reviews and extends two classes of algorithms for the design of planar couplers with any unitary transfer matrix as the design goal. Such couplers find use in optical sensing for fading-free interferometry, coherent optical network demodulation, and quantum state preparation in quantum optical experiments and technology. The two classes are (1) "atomic coupler algorithms," decomposing a unitary transfer matrix into a planar network of 2×2 couplers, and (2) "Lie theoretic algorithms," concatenating unit cell devices with variable phase delay sets that form canonical coordinates for neighborhoods in the Lie group U(N), so that the concatenations realize any transfer matrix in U(N). As well as review, this paper gives (1) a Lie theoretic existence proof showing that both classes of algorithms work and (2) direct proofs of the efficacy of the "atomic coupler" algorithms. The Lie theoretic proof strengthens former results. 5×5 couplers designed by both methods are compared by Monte Carlo analysis, which suggests that atomic rather than Lie theoretic methods yield designs more resilient to manufacturing imperfections. PMID:26367295
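The "atomic coupler" idea of peeling a unitary transfer matrix into a network of 2×2 operations can be illustrated with complex Givens rotations. This is a generic Reck-style sketch under assumed conventions, not the paper's specific algorithms:

```python
import numpy as np

def decompose_into_couplers(U, tol=1e-12):
    """Peel a unitary U into 2x2 'atomic coupler' operations (complex Givens
    rotations) acting on adjacent rows, plus a final diagonal phase layer."""
    U = U.astype(complex).copy()
    n = U.shape[0]
    couplers = []                       # list of (row i, row j, 2x2 unitary)
    for k in range(n):                  # zero column k below the diagonal
        for j in range(n - 1, k, -1):
            a, b = U[j - 1, k], U[j, k]
            if abs(b) < tol:
                continue
            r = np.hypot(abs(a), abs(b))
            G = np.array([[np.conj(a) / r, np.conj(b) / r],
                          [-b / r,          a / r]])      # unitary 2x2 rotation
            U[[j - 1, j], :] = G @ U[[j - 1, j], :]
            couplers.append((j - 1, j, G))
    return couplers, np.diag(U).copy()  # U is now diagonal (residual phases)

def reconstruct(couplers, phases):
    """Rebuild the full unitary from the coupler list and phase layer."""
    n = len(phases)
    U = np.diag(phases)
    for i, j, G in reversed(couplers):
        E = np.eye(n, dtype=complex)
        E[np.ix_([i, j], [i, j])] = G
        U = E.conj().T @ U              # apply the inverse coupler
    return U
```

Each 2×2 factor corresponds to one physical coupler plus phase shifts; the reconstruction check confirms the factorization is exact up to numerical precision.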
A compilation of jet finding algorithms
Flaugher, B.; Meier, K.
1992-12-31
Technical descriptions of jet finding algorithms currently in use in p{anti p} collider experiments (CDF, UA1, UA2), e{sup +}e{sup {minus}} experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clear differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E{sub T} and P{sub T} of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm; five incarnations of this approach are described.
Research on Chord Searching Algorithm Base on Cache Strategy
NASA Astrophysics Data System (ADS)
Jun, Guo; Chen, Chen
How to improve search efficiency is a core problem in P2P networks. Chord is a successful searching algorithm, but its lookup efficiency is low because the finger table carries redundant information. We propose a recently-visited table and improve the finger table to gain more useful information in Chord. The simulation experiments show that this approach can effectively improve routing efficiency.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di
2015-01-01
In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird's nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted in simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
On algorithmic rate-coded AER generation.
Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón
2006-05-01
This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event-representation (AER). We concentrate on rate-coded AER. The problem is addressed as an algorithmic one, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms to (a) perform the frame-based to event-based conversion in real time, and (b) produce event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible for real time and yield reasonably well distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10(7) events per second. PMID:16722179
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
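As a reference point for the baselines mentioned in the abstract, a plain (unaccelerated) fixed-point iteration for the stationary distribution of a Markov chain might look like the sketch below; the multi-level method accelerates this kind of sweep with recursively coarsened representations. The tolerance and iteration cap are illustrative:

```python
import numpy as np

def stationary_power(P, tol=1e-12, max_iter=100000):
    """Power iteration for the stationary distribution pi of a row-stochastic
    transition matrix P, i.e. the fixed point pi = pi @ P.
    A simple baseline sketch, not the paper's multi-level algorithm."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)            # start from the uniform distribution
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi
```

Convergence of this iteration is governed by the chain's subdominant eigenvalue, which is exactly what makes slowly mixing chains expensive and motivates multigrid-style acceleration.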
An Effective Intrusion Detection Algorithm Based on Improved Semi-supervised Fuzzy Clustering
NASA Astrophysics Data System (ADS)
Li, Xueyong; Zhang, Baojian; Sun, Jiaxia; Yan, Shitao
An algorithm for intrusion detection based on improved evolutionary semi-supervised fuzzy clustering is proposed, suited to situations where labeled data are harder to obtain than unlabeled data in intrusion detection systems. The algorithm requires only a small number of labeled data alongside a large number of unlabeled data; the class label information provided by the labeled data is used to guide the evolution of each fuzzy partition on the unlabeled data, where each partition plays the role of a chromosome. The algorithm can deal with fuzzy labels, does not easily fall into local optima, and is suited to implementation on parallel architectures. Experiments show that the algorithm improves classification accuracy and has high detection efficiency.
Analysis and applications of a general boresight algorithm for the DSS-13 beam waveguide antenna
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1992-01-01
A general antenna beam boresight algorithm is presented. Equations for axial pointing error, peak received signal level, and antenna half-power beamwidth are given. A pointing error variance equation is derived that illustrates the dependence of the measurement estimation performance on the various algorithm inputs, including RF signal level uncertainty. Plots showing pointing error uncertainty as a function of algorithm inputs are presented. Insight gained from the performance analysis is discussed in terms of its application to antenna controller and receiver interfacing, pointing error compensation, and antenna calibration. Current and planned applications of the boresight algorithm, including its role in the upcoming Ka-band downlink experiment (KABLE), are highlighted.
Directional algorithms for the frequency isolation problem in undamped vibrational systems
NASA Astrophysics Data System (ADS)
Moro, Julio; Egaña, Juan C.
2016-06-01
A new algorithm is presented to solve the frequency isolation problem for vibrational systems with no damping: given an undamped mass-spring system with resonant eigenvalues, the system must be re-designed, finding some close-by non-resonant system at a reasonable cost. Our approach relies on modifying masses and stiffnesses along directions in parameter space which produce a maximal variation in the resonant eigenvalues, provided the non-resonant ones do not undergo large variations. The algorithm is derived from first principles, implemented, and numerically tested. The numerical experiments show that the new algorithms are considerably faster and more robust than previous algorithms solving the same problem.
Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan
2015-02-23
We propose an algorithm based on fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates point source algorithm and holographic stereogram based algorithm to reconstruct the three-dimensional (3D) scenes. Precise accommodation cue and occlusion effect can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance the image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram, the results show that our proposed algorithm can perform quality reconstructions of 3D scenes with arbitrary depth information. PMID:25836429
ETD: an extended time delay algorithm for ventricular fibrillation detection.
Kim, Jungyoon; Chu, Chao-Hsien
2014-01-01
Ventricular fibrillation (VF) is the most serious type of heart attack, and requires quick detection and first aid to improve patients' survival rates. For wearable devices to be most effective in VF detection, the detection algorithms must be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method; however, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered, and (2) with the same sensitivity setting, ETD improves upon TD by up to 7.64% in three other quality measures, and in terms of aggregate accuracy shows an improvement of 2.6% in the area under the curve (AUC) compared to TD. PMID:25571480
Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.
Semnani, Samaneh Hosseini; Basir, Otman A
2015-01-01
The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely the Flocking algorithm and the Anti-Flocking algorithm. Although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies in their ability to maintain simultaneous robust dynamic area coverage and target coverage, two performance objectives that are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage; such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the other two flocking-based algorithms, once using randomly moving targets and once using a standard walking-pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms. PMID:25014985
Designing experiments through compressed sensing.
Young, Joseph G.; Ridzal, Denis
2013-06-01
In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
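The reconstruction step, recovering a sparse coefficient vector from a small number of measurements, can be sketched with generic orthogonal matching pursuit. The paper develops its own reconstruction algorithm on a finite element basis; OMP here is only a well-known stand-in, and the problem sizes are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.
    Greedily picks the column most correlated with the residual, then
    re-solves least squares on the selected support."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[support] = coef
    return x
```

With noiseless measurements and a well-conditioned random sensing matrix, the exact sparse vector is recovered from far fewer measurements than unknowns, which is the premise of designing fewer physical experiments.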
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image and the computational cost is low, image correlation matching is a basic method of target tracking. This paper mainly studies a matching algorithm for grey-scale images whose precision is at the sub-pixel level. The matching measure used in this paper is SAD (Sum of Absolute Differences), which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms; those fitting algorithms cannot be used in real-time systems because they are too complex. However, target tracking often requires high real-time performance, so based on this consideration we put forward a fitting algorithm named the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; by comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. In order to research the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector, fixed to an arc pendulum table; pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The result shows that the matching error is bigger when the target rotation angle is larger, in an approximately linear relation. Finally, the influence of noise on matching precision was researched. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
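An integer-pixel SAD search followed by a per-axis three-point parabolic fit for sub-pixel refinement might be sketched as below. The three-point parabola formulation is a common choice and is an assumption here, not the paper's exact paraboloidal fitting implementation:

```python
import numpy as np

def sad_match_subpixel(image, template):
    """Exhaustive SAD search for the best template position, then 1-D
    parabolic fitting of the SAD surface along each axis for a sub-pixel
    estimate. Returns (row, col) at sub-pixel precision."""
    H, W = image.shape
    h, w = template.shape
    sad = np.empty((H - h + 1, W - w + 1))
    for i in range(sad.shape[0]):
        for j in range(sad.shape[1]):
            sad[i, j] = np.abs(image[i:i+h, j:j+w] - template).sum()
    i, j = np.unravel_index(sad.argmin(), sad.shape)

    def vertex_offset(fm, f0, fp):
        # vertex of the parabola through (-1, fm), (0, f0), (+1, fp)
        denom = fm - 2.0 * f0 + fp
        return 0.0 if denom == 0 else 0.5 * (fm - fp) / denom

    di = vertex_offset(sad[i-1, j], sad[i, j], sad[i+1, j]) if 0 < i < sad.shape[0]-1 else 0.0
    dj = vertex_offset(sad[i, j-1], sad[i, j], sad[i, j+1]) if 0 < j < sad.shape[1]-1 else 0.0
    return i + di, j + dj
```

The refinement is cheap (two three-point fits per match), which is why parabolic schemes suit the real-time constraint the abstract emphasizes.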
Compound algorithm for restoration of heavy turbulence-degraded image for space target
NASA Astrophysics Data System (ADS)
Wang, Liang-liang; Wang, Ru-jie; Li, Ming; Kang, Zi-qian; Xu, Xiao-qin; Gao, Xin
2012-11-01
Restoration of atmospheric-turbulence-degraded images is a pressing problem in astronomical space technology. The point spread function of turbulence is unknown, changes with time, and is hard to describe with mathematical models; moreover, various noises (such as sensor noise) are introduced during imaging, so images of space targets are edge-blurred and heavily noised, which makes it difficult for any single restoration algorithm to meet the restoration requirements. Focusing on the heavily noisy, turbulence-degraded images of space targets acquired by ground-based optical telescopes, this paper discusses the adjustment and reformation of several algorithm structures as well as the selection of parameters, combining a nonlinear filter algorithm based on the spatial characteristics of noise, a regularization-based restoration algorithm for heavily turbulence-degraded images of space targets, and an EM restoration algorithm based on statistical theory. To test the validity of the algorithm, a series of restoration experiments were performed on heavily noisy turbulence-degraded images of space targets. The results show that the new compound algorithm achieves noise suppression and detail preservation simultaneously, and is effective and practical. The definition measures and relative definition measures also show that the new compound algorithm is better than the traditional algorithms.
An improved robust ADMM algorithm for quantum state tomography
NASA Astrophysics Data System (ADS)
Li, Kezhi; Zhang, Hui; Kuang, Sen; Meng, Fangfang; Cong, Shuang
2016-06-01
In this paper, an improved adaptive weights alternating direction method of multipliers algorithm is developed to implement the optimization scheme for recovering the quantum state in nearly pure states. The proposed approach is superior to many existing methods because it exploits the low-rank property of density matrices, and it can deal with unexpected sparse outliers as well. The numerical experiments are provided to verify our statements by comparing the results to three different optimization algorithms, using both adaptive and fixed weights in the algorithm, in the cases of with and without external noise, respectively. The results indicate that the improved algorithm has better performances in both estimation accuracy and robustness to external noise. The further simulation results show that the successful recovery rate increases when more qubits are estimated, which in fact satisfies the compressive sensing theory and makes the proposed approach more promising.
An improved HMM/SVM dynamic hand gesture recognition algorithm
NASA Astrophysics Data System (ADS)
Zhang, Yi; Yao, Yuanyuan; Luo, Yuan
2015-10-01
In order to improve the recognition rate and stability of dynamic hand gesture recognition, and to address the low accuracy of the classical HMM algorithm in training the B parameter, this paper proposes an improved HMM/SVM dynamic gesture recognition algorithm. In calculating the B parameter of the HMM model, we introduce the SVM algorithm, which has strong classification ability. A sigmoid function converts the state output of the SVM into a probability, which is treated as the observation-state transition probability of the HMM model. This optimizes the B parameter of the HMM model and improves the recognition rate of the system, while also enhancing the accuracy and real-time performance of the human-computer interaction. Experiments show that the algorithm is robust under complex backgrounds and varying illumination. The average recognition rate increased from 86.4% to 97.55%.
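The sigmoid conversion of an SVM decision value into a probability is a one-liner; the slope and offset below are illustrative constants (in Platt-style scaling they would be fitted to held-out data), not the paper's values:

```python
import math

def svm_score_to_prob(score, a=-1.0, b=0.0):
    """Map an SVM decision value to a pseudo-probability via a sigmoid,
    Platt-style. With a=-1, b=0 this is the standard logistic function.
    The resulting value can stand in for an HMM observation probability
    (the B parameter), as described above."""
    return 1.0 / (1.0 + math.exp(a * score + b))
```

Because the mapping is strictly monotone, the SVM's ranking of observations is preserved while its unbounded margins become valid probabilities in (0, 1).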
A Cultural Algorithm for the Urban Public Transportation
NASA Astrophysics Data System (ADS)
Reyes, Laura Cruz; Zezzatti, Carlos Alberto Ochoa Ortíz; Santillán, Claudia Gómez; Hernández, Paula Hernández; Fuerte, Mercedes Villa
In recent years the population of Leon City, located in the state of Guanajuato in Mexico, has increased considerably, causing inhabitants to waste much of their time on public transportation. As a consequence of demographic growth and traffic bottlenecks, users face the daily problem of optimizing their travel so as to reach their destination on time. To solve the problem of obtaining an optimized route between two points on public transportation, a method based on the cultural algorithm technique is proposed. Cultural algorithms, a relatively recent creation, exploit the knowledge generated over a set of time periods for the same population, using a belief space. The proposed method seeks a path that minimizes travel time and the number of transfers. The results of the experiment show that the cultural algorithm technique is applicable to these kinds of multi-objective problems.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using the emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived with the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
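The mixing formulation for the effective surface emissivity is, in essence, a concentration-weighted linear blend of ice and open-water emissivities. A sketch, with illustrative tie-point emissivity values rather than the operational ones:

```python
def effective_emissivity(c_ice, e_ice, e_water):
    """Concentration-weighted linear mixing of ice and open-water
    emissivities: e_eff = C * e_ice + (1 - C) * e_water.
    c_ice is the fractional ice concentration in [0, 1]. A sketch of the
    kind of mixing formulation the abstract describes; the operational
    algorithm's coefficients and channels are not reproduced here."""
    return c_ice * e_ice + (1.0 - c_ice) * e_water
```

Inverting this relation for surface temperature, given a measured brightness temperature and the effective emissivity, is the "temperature correction" step that follows in the procedure.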
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included, along with some of the actual data and the graphs produced from these data.
Algorithm Visualization in Teaching Practice
ERIC Educational Resources Information Center
Törley, Gábor
2014-01-01
This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment is presented as well, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator drivers. One of the main limitations of the classical washout filters is that they are tuned using the worst-case scenario method. This is based on trial and error, and is affected by the driver's and programmer's experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues, and a simulator that fails to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm, with the goal of integrating the advantages of both. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This shows that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. PMID:25880524
Ozone Differential Absorption Lidar Algorithm Intercomparison
NASA Astrophysics Data System (ADS)
Godin, Sophie; Carswell, Allen I.; Donovan, David P.; Claude, Hans; Steinbrecht, Wolfgang; McDermid, I. Stuart; McGee, Thomas J.; Gross, Michael R.; Nakane, Hideaki; Swart, Daan P. J.; Bergwerff, Hans B.; Uchino, Osamu; von der Gathen, Peter; Neuber, Roland
1999-10-01
An intercomparison of ozone differential absorption lidar algorithms was performed in 1996 within the framework of the Network for the Detection of Stratospheric Changes (NDSC) lidar working group. The objective of this research was mainly to test the differentiating techniques used by the various lidar teams involved in the NDSC for the calculation of the ozone number density from the lidar signals. The exercise consisted of processing synthetic lidar signals computed from simple Rayleigh scattering and three initial ozone profiles. Two of these profiles contained perturbations in the low and the high stratosphere to test the vertical resolution of the various algorithms. For the unperturbed profiles the results of the simulations show the correct behavior of the lidar processing methods in the low and the middle stratosphere, with biases of less than 1% with respect to the initial profile up to 30 km in most cases. In the upper stratosphere, significant biases reaching 10% at 45 km are obtained for most of the algorithms. This bias is due to the decrease in the signal-to-noise ratio with altitude, which makes it necessary to increase the number of points of the derivative low-pass filter used for data processing. As a consequence the response of the various retrieval algorithms to perturbations in the ozone profile is much better in the lower stratosphere than in the higher range. These results show the necessity of limiting the vertical smoothing in the ozone lidar retrieval algorithm and question the ability of current lidar systems to detect long-term ozone trends above 40 km. Otherwise the simulations show in general a correct estimation of the ozone profile random error and, as shown by the tests involving the perturbed ozone profiles, some inconsistency in the estimation of the vertical resolution among the lidar teams involved in this experiment.
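The core retrieval shared by all the compared algorithms is the derivative of the log signal ratio, n(z) = (1/(2Δσ)) d/dz ln(P_off/P_on). A minimal sketch with a synthetic constant-ozone profile; np.gradient stands in for the teams' derivative low-pass filters, which is exactly the component the intercomparison varied:

```python
import numpy as np

def dial_number_density(z, p_on, p_off, delta_sigma):
    """Ozone number density from DIAL returns:
    n(z) = 1/(2*delta_sigma) * d/dz ln(P_off / P_on).
    The derivative step is where each team's smoothing filter enters;
    np.gradient is the crudest possible choice."""
    return np.gradient(np.log(p_off / p_on), z) / (2.0 * delta_sigma)

# Synthetic check: constant ozone density n0 and no other attenuation,
# so P_on decays as exp(-2 * delta_sigma * n0 * z) and P_off is flat.
n0, ds = 2.0, 0.25
z = np.linspace(0.0, 5.0, 51)
p_off = np.ones_like(z)
p_on = np.exp(-2.0 * ds * n0 * z)
n_est = dial_number_density(z, p_on, p_off, ds)
```

Longer derivative filters reduce noise at high altitude at the cost of vertical resolution, which is the trade-off behind the biases the exercise found above 40 km.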
Predicting the performance of a spatial gamut mapping algorithm
NASA Astrophysics Data System (ADS)
Bakke, Arne M.; Farup, Ivar; Hardeberg, Jon Y.
2009-01-01
Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well as the amount of areas that contain details in out-of-gamut areas. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previously conducted psychophysical experiment.
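The predictor described above reduces to a weighted difference of two image fractions. A minimal sketch under our own assumptions (gradient threshold, weights, and a boolean in-gamut mask standing in for a real gamut test):

```python
import numpy as np

def spatial_benefit_score(img, in_gamut, flat_thresh=2.0, w_flat=1.0, w_detail=1.0):
    """Weighted difference between the fraction of uniformly colored area
    (where spatial algorithms risk artifacts) and the fraction of detailed
    out-of-gamut area (where they help). A positive score suggests preferring
    a non-spatial algorithm. Thresholds and weights are illustrative."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    flat_fraction = np.mean(grad < flat_thresh)
    oog_detail_fraction = np.mean((grad >= flat_thresh) & ~in_gamut)
    return w_flat * flat_fraction - w_detail * oog_detail_fraction

uniform = np.full((8, 8), 128.0)                      # flat image
score_flat = spatial_benefit_score(uniform, np.ones((8, 8), bool))
ramp = np.outer(np.arange(8.0), np.ones(8)) * 10.0    # detailed, out of gamut
score_detail = spatial_benefit_score(ramp, np.zeros((8, 8), bool))
```

A real implementation would operate per channel in a device-independent color space and calibrate the weights against the psychophysical data, but the sign of the score carries the same meaning.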
Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor
Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo
2014-01-01
Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure a self-interested robot's individual willingness to cooperate in the multirobot task allocation problem. An emotional cooperation factor is introduced into the self-interested robot and updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed: combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally applies certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, comparative experiments have been conducted against the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm. PMID:25152925
Dynamically Incremental K-means++ Clustering Algorithm Based on Fuzzy Rough Set Theory
NASA Astrophysics Data System (ADS)
Li, Wei; Wang, Rujing; Jia, Xiufang; Jiang, Qing
Because the classic K-means++ clustering algorithm handles only static data, a dynamically incremental K-means++ clustering algorithm (DK-Means++) based on fuzzy rough set theory is presented in this paper. Firstly, in the DK-Means++ clustering algorithm, the similarity measure is improved with weights computed from the importance degrees of attributes, which are reduced on the basis of fuzzy rough set theory. Secondly, new data need only be matched against granules already clustered by the K-means++ algorithm; only the few new data that match no granule are clustered by the classic K-means++ algorithm over the global data. In this way, re-clustering all data each time the data set grows is avoided, so the efficiency of clustering is improved. Our experiments show that the DK-Means++ algorithm can objectively and efficiently deal with the clustering of dynamically incremental data.
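The granule-matching step can be sketched as a weighted nearest-centroid test. The weights stand in for the attribute importance degrees from the fuzzy-rough reduction, and the radius rule is our simplification of the paper's matching criterion:

```python
import math

def weighted_distance(a, b, weights):
    """Similarity measure with per-attribute weights, standing in for the
    importance degrees obtained from the fuzzy-rough attribute reduction."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def assign_incremental(centroids, point, weights, radius):
    """Match a new point to an existing granule (cluster centroid) when it
    lies within `radius`; return None to signal that the point must instead
    go through a full K-means++ pass over the global data."""
    dist, idx = min(
        (weighted_distance(c, point, weights), i) for i, c in enumerate(centroids)
    )
    return idx if dist <= radius else None

centroids = [(0.0, 0.0), (10.0, 10.0)]
hit = assign_incremental(centroids, (0.5, 0.5), (1.0, 1.0), radius=2.0)
miss = assign_incremental(centroids, (5.0, 5.0), (1.0, 1.0), radius=2.0)
```

Most incoming points take the cheap `hit` path; only `miss` points trigger the expensive global re-clustering, which is where the efficiency gain comes from.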
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-01-01
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048
Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm
NASA Astrophysics Data System (ADS)
Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei
2007-12-01
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, the correction is often only partial, and deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the Standard R-L Algorithm (SRLA) often suffers from speckling, wraparound artifacts and noise. A Modified R-L Algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct-sampling approach and incorporates noise statistics into the Standard R-L Algorithm. An alternating iterative method is applied to estimate the PSF and the object in the novel algorithm. Comparative experiments on indoor data and AO images are carried out with the SRLA and the MRLA in this paper. Experimental results show that the novel MRLA outperforms the SRLA.
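For reference, the SRLA baseline that the MRLA modifies is compact enough to sketch in one dimension. Magain's correct sampling and the MRLA's noise statistics are not included here; this is only the standard iteration:

```python
import numpy as np

def richardson_lucy(image, psf, iters=20):
    """Standard R-L iteration (1-D, for brevity):
    o <- o * (psf_flipped conv (image / (psf conv o))).
    The MRLA additionally enforces correct sampling and a noise model."""
    estimate = np.full_like(image, image.mean())
    psf_flipped = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # guard against /0
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Sanity check: with a delta-function PSF the iteration returns the image.
img = np.array([1.0, 2.0, 3.0, 4.0])
restored = richardson_lucy(img, np.array([0.0, 1.0, 0.0]))
```

The multiplicative update preserves non-negativity, which is one reason R-L is favored for photon-limited astronomical data.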
[An improved fast algorithm for ray casting volume rendering of medical images].
Tao, Ling; Wang, Huina; Tian, Zhiliang
2006-10-01
The ray casting algorithm can obtain high-quality images in volume rendering; however, it demands substantial computing capacity and renders slowly. Therefore, a new fast algorithm for ray casting volume rendering is proposed in this paper. This algorithm reduces matrix computation by exploiting the matrix transformation characteristics of re-sampling points in the two coordinate systems, so the re-sampling computation is accelerated. By extending the Bresenham algorithm to three dimensions and utilizing a bounding-box technique, this algorithm avoids sampling in empty voxels and greatly improves the efficiency of ray casting. The experimental results show that the improved acceleration algorithm produces images of the required quality while reducing the total number of operations remarkably, thus speeding up volume rendering. PMID:17121341
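The acceleration idea, integer-stepped traversal that skips empty voxels and anything outside the data bounding box, can be sketched as follows (pure-Python toy, not the paper's implementation):

```python
def cast_ray(volume, origin, direction, n_steps, bbox):
    """Integer-stepped ray accumulation (3-D Bresenham in spirit): samples
    outside the bounding box and empty (zero) voxels contribute nothing,
    which is the skipping idea described above."""
    (x0, x1), (y0, y1), (z0, z1) = bbox
    x, y, z = origin
    dx, dy, dz = direction
    acc = 0.0
    for _ in range(n_steps):
        xi, yi, zi = int(round(x)), int(round(y)), int(round(z))
        inside = x0 <= xi <= x1 and y0 <= yi <= y1 and z0 <= zi <= z1
        if inside and volume[xi][yi][zi]:   # skip empty voxels entirely
            acc += volume[xi][yi][zi]
        x, y, z = x + dx, y + dy, z + dz
    return acc

# A 4x4x4 volume with two non-empty voxels along the x axis.
vol = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
vol[1][1][1] = 5.0
vol[2][1][1] = 7.0
total = cast_ray(vol, (0.0, 1.0, 1.0), (1.0, 0.0, 0.0), 4, ((0, 3), (0, 3), (0, 3)))
```

A production renderer would composite color and opacity along the ray rather than summing densities, but the empty-space test is the same.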
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal of the training process is to find an optimal weight set for the network. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, which combines Cuckoo Search with the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt algorithm (LM), to improve the convergence speed of ANN training and avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
Fast, Parallel and Secure Cryptography Algorithm Using Lorenz's Attractor
NASA Astrophysics Data System (ADS)
Marco, Anderson Gonçalves; Martinez, Alexandre Souto; Bruno, Odemir Martinez
A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification, even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, showing the behavior of the method in terms of security and performance. The fast version of the algorithm has performance comparable to AES, a popular cryptography program in commercial use today, but is more secure, which makes it immediately suitable for general-purpose cryptography applications. An internet page has been set up, enabling readers to test the algorithm and also to try to break the cipher.
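The basic idea of keying a stream cipher to a chaotic trajectory can be caricatured in a few lines. This toy is emphatically NOT cryptographically secure and is not the authors' chaotic operation mode (which also binds the message into the dynamics and adds integrity protection); it only shows how a Lorenz trajectory can be turned into a keystream:

```python
def lorenz_keystream(key, n_bytes, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01):
    """Byte keystream from an Euler-integrated Lorenz trajectory seeded by
    the key. Illustrative only; do not use for real cryptography."""
    x, y, z = 1.0 + (key % 97) * 1e-3, 1.0, 1.0

    def step(x, y, z):
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    for _ in range(100):            # burn-in to decorrelate from the seed
        x, y, z = step(x, y, z)
    out = bytearray()
    for _ in range(n_bytes):
        x, y, z = step(x, y, z)
        out.append(int(abs(x) * 1e6) % 256)
    return bytes(out)

def xor_cipher(data, key):
    """Encryption and decryption are the same XOR with the keystream."""
    return bytes(a ^ b for a, b in zip(data, lorenz_keystream(key, len(data))))

msg = b"attack at dawn"
ct = xor_cipher(msg, 1234)
```

Sensitivity to the seed means a one-unit change in the key produces an unrelated keystream, which is the property the cipher builds on.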
Nicolini, G.; Clivio, A.; Vanetti, E.; Cozzi, L.; Fogliata, A.; Krauss, H.; Fenoglietto, P.
2013-11-15
Purpose: To demonstrate the feasibility of portal dosimetry with an amorphous silicon megavoltage imager for flattening filter free (FFF) photon beams by means of the GLAaS methodology, and to validate it for pretreatment quality assurance of volumetric modulated arc therapy (RapidArc). Methods: The GLAaS algorithm, developed for flattened beams, was applied to FFF beams of nominal energy 6 and 10 MV generated by a Varian TrueBeam (TB). The amorphous silicon electronic portal imager [named megavoltage imager (MVI) on TB] was used to generate integrated images that were converted into matrices of absorbed dose to water. To enable GLAaS use under the increased dose-per-pulse and dose-rate conditions of the FFF beams, a new operational source-detector distance (SDD) was identified to solve detector saturation issues. Empirical corrections were defined to account for the shape of the profiles of the FFF beams, expanding the original methodology of beam profile and arm backscattering correction. GLAaS for FFF beams was validated on pretreatment verification of RapidArc plans for three different TB linacs. In addition, the first pretreatment results from clinical experience on 74 arcs are reported in terms of γ analysis. Results: The MVI saturates at 100 cm SDD for FFF beams, but this can be avoided if images are acquired at 150 cm for all nominal dose rates of FFF beams. Rotational stability of the gantry-imager system was tested and resulted in a minimal apparent imager displacement during rotation of 0.2 ± 0.2 mm at SDD = 150 cm. The accuracy of this approach was tested with three different Varian TrueBeam linacs from different institutes. Data were stratified per energy and machine and showed no dependence on beam quality or MLC model. The results from clinical pretreatment quality assurance provided a gamma agreement index (GAI) in the field area for six and ten FFF beams of (99.8 ± 0.3)% and (99.5 ± 0.6)% with distance to agreement and dose difference criteria
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computing the inverse of the objective function's Hessian incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-01-01
Wi-Fi indoor positioning algorithms suffer large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovations include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447
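The trusted-point idea at the heart of TCPF can be caricatured in a few lines: a Wi-Fi fix is only allowed to correct PDR drift when it is consistent with the dead-reckoned track. This is our simplification; the actual TCPF maintains a chain of trust states rather than a single-step test, and the radius below is an arbitrary assumption:

```python
import math

def fuse_step(wifi_fix, pdr_estimate, trust_radius=3.0):
    """One step of a trust-chain-style fusion (simplified): a Wi-Fi fix
    that agrees with the dead-reckoned position becomes a trusted point
    and resets PDR drift; otherwise PDR carries on alone."""
    gap = math.hypot(wifi_fix[0] - pdr_estimate[0], wifi_fix[1] - pdr_estimate[1])
    if gap <= trust_radius:
        return wifi_fix, True       # trusted point: correct the drift
    return pdr_estimate, False      # distrust the unstable Wi-Fi fix

pos1, trusted1 = fuse_step((1.0, 1.0), (1.5, 1.2))      # consistent fix
pos2, trusted2 = fuse_step((10.0, 10.0), (1.5, 1.2))    # outlier fix
```

The asymmetry matters: PDR is smooth but drifts, Wi-Fi is absolute but jumpy, so each trusted point re-anchors the drift while outlier fixes are ignored.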
The Chopthin Algorithm for Resampling
NASA Astrophysics Data System (ADS)
Gandy, Axel; Lau, F. Din-Houn
2016-08-01
Resampling is a standard step in particle filters and more generally in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
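A simplified resampler in the chopthin spirit is easy to sketch. This is not the paper's exact algorithm (which is unbiased and enforces a tuned bound), but it shows the chop/thin mechanics and, for eta ≥ √2, enforces a max/min weight ratio of at most eta²:

```python
import math
import random

def chopthin_like(particles, weights, eta=2.0, seed=0):
    """Simplified chopthin-style resampler: weights above eta * mean are
    chopped into equal-weight copies; weights below mean / eta are thinned
    by rejection, with survivors reweighted. Our sketch, not the paper's
    unbiased algorithm."""
    rng = random.Random(seed)
    mean_w = sum(weights) / len(weights)
    hi, lo = eta * mean_w, mean_w / eta
    out_p, out_w = [], []
    for p, w in zip(particles, weights):
        if w > hi:                          # chop heavy particles
            k = math.ceil(w / hi)
            out_p += [p] * k
            out_w += [w / k] * k
        elif w < lo:                        # thin light particles
            if rng.random() < w / lo:
                out_p.append(p)
                out_w.append(lo)
        else:                               # keep the rest untouched
            out_p.append(p)
            out_w.append(w)
    return out_p, out_w

parts, ws = chopthin_like(list(range(5)), [100.0, 15.0, 0.01, 0.02, 30.0])
```

Bounding the weight ratio, rather than equalizing all weights, is what lets chopthin keep more of the information carried by the weights while still controlling the effective sample size.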
VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS ...
VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS 499-501, MUNOZ HOUSE (AZ-73-37) ON FAR RIGHT - Antonio Bustamente House, 485-489 South Meyer Avenue & 186 West Kennedy Street, Tucson, Pima County, AZ
15. Detail showing lower chord pinconnected to vertical member, showing ...
15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.
Comparison of neuron selection algorithms of wavelet-based neural network
NASA Astrophysics Data System (ADS)
Mei, Xiaodan; Sun, Sheng-He
2001-09-01
Wavelet networks have received considerable attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have become interested in employing wavelet functions as activation functions and have obtained satisfying results in approximating and localizing signals. However, function estimation becomes more and more complex as the input dimension grows. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. An exhaustive search procedure is clearly unsatisfactory when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has faster convergence and a smaller error for signal approximation. To this end, the Genetic Algorithm and the Tabu Search algorithm are studied and compared in several experiments. This paper first presents the structure of the wavelet-based neural network, then introduces the two selection algorithms, discusses their properties and learning processes, and analyzes the experiments and results. Two wavelet functions were used to test the algorithms. The experiments show that the Tabu Search selection algorithm performs better than the Genetic selection algorithm: TSA has a faster convergence rate than GA under the same stopping criterion.
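A generic single-flip tabu search over bit vectors (which hidden neurons to keep) conveys the mechanics of the compared selection approach. This toy, with a trivially checkable cost function, is our stand-in, not the paper's implementation:

```python
import random

def tabu_search(n_bits, cost, iters=100, tenure=2, seed=0):
    """Single-flip tabu search over bit vectors (e.g. hidden-neuron masks):
    flip the best non-tabu bit each iteration; recently flipped positions
    are forbidden for `tenure` steps to escape local minima."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = current[:], cost(current)
    tabu_until = {}
    for t in range(iters):
        candidates = [i for i in range(n_bits) if tabu_until.get(i, -1) <= t]
        if not candidates:
            continue

        def flipped(i):
            nxt = current[:]
            nxt[i] ^= 1
            return nxt

        i = min(candidates, key=lambda i: cost(flipped(i)))
        current = flipped(i)
        tabu_until[i] = t + tenure
        c = cost(current)
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# Toy cost: Hamming distance to a known-good neuron mask.
target = [1, 0, 1, 0, 1]
sol, c = tabu_search(5, lambda bits: sum(b != t for b, t in zip(bits, target)))
```

In the neuron-selection setting the cost function would instead be the network's approximation error after fitting with the selected neurons, which is why evaluation count, and hence convergence speed, is the quantity the paper compares.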
Improved imaging algorithm for bridge crack detection
NASA Astrophysics Data System (ADS)
Lu, Jingxiao; Song, Pingli; Han, Kaihong
2012-04-01
This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, facilitating follow-up processing. In calculating the crack geometry characteristics, we use a skeleton-extraction method for single-crack length. In order to calculate crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. The experimental results show that the errors between this crack detection method and actual manual measurement are within an acceptable range, meeting the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and can provide more valid data for proper planning and performance of bridge maintenance and rehabilitation processes.
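The edge-detection stage builds on the standard two-kernel 3x3 Sobel operator; the paper's contribution is an optimized eight-direction variant (adding diagonal kernels), which is not reproduced here. A plain-Sobel sketch with an arbitrary threshold:

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Plain two-kernel 3x3 Sobel gradient magnitude with thresholding.
    The eight-direction operator extends this with diagonal kernels;
    the threshold value here is arbitrary."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot((kx * patch).sum(), (ky * patch).sum())
    return mag > thresh

# A vertical intensity step should register as an edge; flat areas should not.
step = np.zeros((5, 5))
step[:, 3:] = 255.0
edges = sobel_edges(step)
```

Downstream, the boolean edge map is exactly what the skeleton extraction (for length) and the bitwise-AND area template (for area) described above would consume.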
Genetic Algorithms for Multiple-Choice Problems
NASA Astrophysics Data System (ADS)
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via the proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a stochastic search and optimization method based on natural selection and the genetic mechanism of living beings. In recent years, because of its potential for solving complex problems and its successful application in industrial projects, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of routing selection communication, and designs and implements a new routing selection algorithm based on the genetic algorithm. Experimental simulation results show that this algorithm can obtain better solutions in less time with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...
28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
8. Detail showing concrete abutment, showing substructure of bridge, specifically ...
8. Detail showing concrete abutment, showing substructure of bridge, specifically west side of arch and substructure. - Presumpscot Falls Bridge, Spanning Presumpscot River at Allen Avenue extension, 0.75 mile west of U.S. Interstate 95, Falmouth, Cumberland County, ME
a Genetic Algorithm for Urban Transit Routing Problem
NASA Astrophysics Data System (ADS)
Chew, Joanne Suk Chun; Lee, Lai Soon
The Urban Transit Routing Problem (UTRP), which involves determining a set of transit route networks, is a highly complex multi-constrained problem. In this study, we consider finding an efficient bus route network that meets customer demands, given information on link travel times. An evolutionary optimization technique, the Genetic Algorithm, is proposed to solve the UTRP. The main objective is to minimize passenger costs, where the quality of the route sets is evaluated by a set of parameters. Initial computational experiments show that the proposed algorithm performs better than the benchmark results for Mandl's problems.
A consistent-mode indicator for the eigensystem realization algorithm
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Elliott, Kenny B.; Schenk, Axel
1992-01-01
A new method is described for assessing the consistency of model parameters identified with the Eigensystem Realization Algorithm (ERA). Identification results show varying consistency in practice due to many sources, including high modal density, nonlinearity, and inadequate excitation. Consistency is considered to be a reliable indicator of accuracy. The new method is the culmination of many years of experience in developing a practical implementation of the Eigensystem Realization Algorithm. The effectiveness of the method is illustrated using data from NASA Langley's Controls-Structures-Interaction Evolutionary Model.
Why is Boris Algorithm So Good?
Qin, Hong; et al.
2013-03-03
Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
Why is Boris algorithm so good?
Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.
2013-08-15
Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
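The scheme itself is short enough to sketch. Below is a minimal non-relativistic Boris push in normalized units (charge/mass = 1), a textbook sketch rather than code from either paper: the electric impulse is split into two half kicks around a magnetic rotation, and it is that rotation which preserves phase space volume exactly.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_step(x, v, E, B, dt):
    """One Boris step in normalized units (charge/mass = 1)."""
    # First half electric kick.
    v_minus = [v[i] + 0.5 * dt * E[i] for i in range(3)]
    # Magnetic rotation (exactly norm-preserving).
    t = [0.5 * dt * B[i] for i in range(3)]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    v_prime = [v_minus[i] + cross(v_minus, t)[i] for i in range(3)]
    v_plus = [v_minus[i] + cross(v_prime, s)[i] for i in range(3)]
    # Second half electric kick, then position drift.
    v_new = [v_plus[i] + 0.5 * dt * E[i] for i in range(3)]
    x_new = [x[i] + dt * v_new[i] for i in range(3)]
    return x_new, v_new
```

With E = 0 and a uniform B, the particle gyrates on a bounded orbit with its speed conserved to rounding error for arbitrarily many steps, which is the long-term-accuracy property the paper explains.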
The Application of Baum-Welch Algorithm in Multistep Attack
Zhang, Yanxue; Zhao, Dongmei; Liu, Jinxing
2014-01-01
The biggest difficulty in applying the hidden Markov model to multistep attacks is the determination of observations. Research on the determination of observations is still lacking and shows a certain degree of subjectivity. In this regard, we integrate attack intentions with the hidden Markov model (HMM) and propose a method for forecasting multistep attacks based on the HMM. Firstly, we train the existing hidden Markov models with the Baum-Welch algorithm. Then we recognize the alerts belonging to attack scenarios with the forward algorithm. Finally, we forecast the next possible attack sequence with the Viterbi algorithm. The results of simulation experiments show that the trained hidden Markov models perform better than untrained ones in recognition and prediction. PMID:24991642
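The decoding step can be sketched with a textbook Viterbi implementation. The two-state model below is entirely made up for illustration; it is not the paper's attack HMM:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence."""
    # V[t][s] = (best path probability ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({s: max(((prev[r][0] * trans_p[r][s] * emit_p[s][o], r)
                          for r in states), key=lambda pr: pr[0])
                  for s in states})
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(V) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-state model (all probabilities are made up):
states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
path = viterbi(("x", "y", "y"), states, start, trans, emit)
```

In the paper's setting the hidden states would be attack stages and the observations alert types; Baum-Welch fits the `trans`/`emit` tables from data before decoding.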
Convergence Results on Iteration Algorithms to Linear Systems
Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo
2014-01-01
In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
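The Jacobi half of the comparison is easy to sketch: writing A = D + R with D the diagonal, the iteration x_{k+1} = D^{-1}(b - R x_k) converges exactly when the spectral radius of the iteration matrix is below one, which is guaranteed, for example, for strictly diagonally dominant A. A minimal sketch on a made-up 2x2 system:

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration: x_{k+1}[i] = (b[i] - sum_{j != i} A[i][j] x_k[j]) / A[i][i]."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Made-up strictly diagonally dominant system: 4x + y = 9, 2x + 3y = 8.
x = jacobi([[4.0, 1.0], [2.0, 3.0]], [9.0, 8.0])
```

For this system the iteration-matrix spectral radius is about 0.41, so the error shrinks by more than half per step and 100 iterations reach machine precision (exact solution x = 1.9, y = 1.4).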
Analysis of multigrid algorithms for nonsymmetric and indefinite elliptic problems
Bramble, J.H.; Pasciak, J.E.; Xu, J.
1988-10-01
We prove some new estimates for the convergence of multigrid algorithms applied to nonsymmetric and indefinite elliptic boundary value problems. We provide results for the so-called 'symmetric' multigrid schemes. We show that for the variable V-cycle and the W-cycle schemes, multigrid algorithms with any amount of smoothing on the finest grid converge at a rate that is independent of the number of levels or unknowns, provided that the initial grid is sufficiently fine. We show that the V-cycle algorithm also converges (under appropriate assumptions on the coarsest grid) but at a rate which may deteriorate as the number of levels increases. This deterioration for the V-cycle may occur even in the case of full elliptic regularity. Finally, the results of numerical experiments are given which illustrate the convergence behavior suggested by the theory.
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Dai, Xiangjun; He, Xiaoyuan
2015-08-01
The inverse compositional Gauss-Newton (IC-GN) algorithm is one of the most popular sub-pixel registration algorithms in digital image correlation (DIC). The IC-GN algorithm, compared with the traditional forward additive Newton-Raphson (FA-NR) algorithm, can achieve the same accuracy in less time. However, there are no clear results regarding the noise robustness of the IC-GN algorithm, and its computational efficiency is still in need of further improvement. In this paper, a theoretical model of the IC-GN algorithm was derived based on the sum-of-squared-differences correlation criterion and linear interpolation. The model indicates that the IC-GN algorithm has better noise robustness than the FA-NR algorithm, and shows no noise-induced bias if the gray gradient operator is chosen properly. Both numerical simulations and experiments show good agreement with the theoretical predictions. Furthermore, a seed point-based parallel method is proposed to improve the calculation speed. Compared with the recently proposed path-independent method, our method is feasible and practical, and it can maximize the computing speed using an improved initial guess. Moreover, we compared the computational efficiency of our method with that of the reliability-guided method using a four-point bending experiment, and the results show that the computational efficiency is greatly improved. The proposed parallel IC-GN algorithm has good noise robustness and is expected to be a practical option for real-time DIC.
Schwarz-Based Algorithms for Compressible Flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1996-01-01
We investigate in this paper the application of Schwarz-based algorithms to compressible flows. First we study the combination of these methods with defect-correction procedures. We then study the effect on the Schwarz-based methods of replacing the explicit treatment of the boundary conditions by an implicit one. In the last part of this paper we study the combination of these methods with Newton-Krylov matrix-free methods. Numerical experiments that show the performance of our approaches are then presented.
Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array
NASA Astrophysics Data System (ADS)
Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun
2012-12-01
In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed by self-adaptively selecting the leading orientation vector. Its real-valued formulation, using forward-backward (FB) smoothing and a real-valued inverse covariance matrix, is also proposed, which reduces the computational complexity and can distinguish coherent signals. Simulation results show the better performance of the two new algorithms at low signal-to-noise ratio (SNR) in direction-of-arrival (DOA) estimation compared with the traditional MUSIC algorithm, and experimental results using a MEMS vector hydrophone array in lake trials show the engineering practicability of the two new algorithms.
A statistical-based scheduling algorithm in automated data path synthesis
NASA Technical Reports Server (NTRS)
Jeon, Byung Wook; Lursinsap, Chidchanok
1992-01-01
In this paper, we propose a new heuristic scheduling algorithm based on statistical analysis of the cumulative frequency distribution of operations among control steps. It has a tendency to escape from local minima and therefore to reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. The results of the experiment show that it gives optimal solutions, even though it is greedy in nature.
NASA Astrophysics Data System (ADS)
Vanschoren, Joaquin; Blockeel, Hendrik
Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
Planning a Successful Tech Show
ERIC Educational Resources Information Center
Nikirk, Martin
2011-01-01
Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…
... shows the ranges for blood glucose levels after 8 to 12 hours of fasting (not eating). It shows the normal range and the abnormal ranges that are a sign of prediabetes or diabetes. Plasma Glucose Results (mg/dL)* Diagnosis 70 to 99 ...
Linear Bregman algorithm implemented in parallel GPU
NASA Astrophysics Data System (ADS)
Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping
2015-08-01
At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
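The structure that makes the method GPU-friendly — one matrix-vector product and one componentwise soft-threshold per step — shows up clearly in a pure-Python sketch of the linearized Bregman iteration. The tiny system and the choices mu = delta = 1 below are made up for illustration, not taken from the paper:

```python
def shrink(x, mu):
    """Soft-threshold each component toward zero by mu."""
    return [max(abs(c) - mu, 0.0) * (1.0 if c > 0 else -1.0) for c in x]

def linearized_bregman(A, b, mu=1.0, delta=1.0, iters=200):
    """Linearized Bregman iteration for sparse solutions of A u = b:
    v accumulates A^T times the residual; u is a scaled soft-threshold of v."""
    m, n = len(A), len(A[0])
    u, v = [0.0] * n, [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * u[j] for j in range(n)) for i in range(m)]  # residual
        v = [v[j] + sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # v += A^T r
        u = [delta * s for s in shrink(v, mu)]
    return u
```

On a GPU, the two inner loops become a single batched mat-vec kernel and the threshold a trivially parallel elementwise kernel, which is the point of the paper's implementation.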
Satellite Movie Shows Erika Dissipate
This animation of visible and infrared imagery from NOAA's GOES-West satellite from Aug. 27 to 29 shows Tropical Storm Erika move through the Eastern Caribbean Sea and dissipate near eastern Cuba. ...
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters, such as ocean waves, clouds or sea fog, usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then the self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI is combined to remove false alarms further. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to guarantee accurate target extraction further, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy is backed strongly by experimental data acquired under different environmental conditions. PMID:26192503
Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks
NASA Astrophysics Data System (ADS)
Yang, Qiang; Wu, Chunming; Zhang, Min
The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner which requires maintaining the overall and up-to-date information of the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA) operated in the network central server and within individual sub-networks respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against the centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between GVNMA and LVNMA for all examined networks, while performing almost as well as the centralized solutions.
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.
Cheng, Kin-On; Law, Ngai-Fong; Siu, Wan-Chi; Liew, Alan Wee-Chung
2008-01-01
Background The DNA microarray technology allows the measurement of expression levels of thousands of genes under tens/hundreds of different conditions. In microarray data, genes with similar functions usually co-express under certain conditions only [1]. Thus, biclustering which clusters genes and conditions simultaneously is preferred over the traditional clustering technique in discovering these coherent genes. Various biclustering algorithms have been developed using different bicluster formulations. Unfortunately, many useful formulations result in NP-complete problems. In this article, we investigate an efficient method for identifying a popular type of biclusters called additive model. Furthermore, parallel coordinate (PC) plots are used for bicluster visualization and analysis. Results We develop a novel and efficient biclustering algorithm which can be regarded as a greedy version of an existing algorithm known as pCluster algorithm. By relaxing the constraint in homogeneity, the proposed algorithm has polynomial-time complexity in the worst case instead of exponential-time complexity as in the pCluster algorithm. Experiments on artificial datasets verify that our algorithm can identify both additive-related and multiplicative-related biclusters in the presence of overlap and noise. Biologically significant biclusters have been validated on the yeast cell-cycle expression dataset using Gene Ontology annotations. Comparative study shows that the proposed approach outperforms several existing biclustering algorithms. We also provide an interactive exploratory tool based on PC plot visualization for determining the parameters of our biclustering algorithm. Conclusion We have proposed a novel biclustering algorithm which works with PC plots for an interactive exploratory analysis of gene expression data. Experiments show that the biclustering algorithm is efficient and is capable of detecting co-regulated genes. The interactive analysis enables an optimum
Improvements of HITS Algorithms for Spam Links
NASA Astrophysics Data System (ADS)
Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao
The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find "linkfarms," that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, which uses the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take no more time and memory than the original HITS algorithm requires, and can be executed on a PC with a small amount of main memory.
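The baseline HITS mutual-reinforcement step that all of these variants build on can be sketched as plain power iteration (without the paper's trust scores or linkfarm filtering):

```python
def hits(graph, iters=50):
    """graph: dict node -> list of out-links. Returns (authority, hub) scores."""
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    inlinks = {n: [] for n in nodes}
    for m, outs in graph.items():
        for n in outs:
            inlinks[n].append(m)
    auth = dict.fromkeys(nodes, 1.0)
    hub = dict.fromkeys(nodes, 1.0)
    for _ in range(iters):
        # Authority: sum of hub scores of pages linking in, then normalize.
        auth = {n: sum(hub[m] for m in inlinks[n]) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        # Hub: sum of authority scores of pages linked to, then normalize.
        hub = {n: sum(auth[v] for v in graph.get(n, ())) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return auth, hub
```

A linkfarm exploits exactly this reinforcement: a dense spam subgraph pumps its own hub and authority scores, which is what the trust-score weighting is designed to suppress.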
Meyer, Joerg; /Bonn U.
2007-01-01
measurement of the top quark mass by the D0 experiment at Fermilab in the dilepton final states. The comparison of the measured top quark masses in different final states allows an important consistency check of the Standard Model. Inconsistent results would be a clear hint of a misinterpretation of the analyzed data set. With the exception of the Higgs boson, all particles predicted by the Standard Model have been found. The search for the Higgs boson is one of the main focuses in high energy physics. The theory section will discuss the close relationship between the physics of the Higgs boson and the top quark.
National Orange Show Photovoltaic Demonstration
Dan Jimenez Sheri Raborn, CPA; Tom Baker
2008-03-31
National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's energy demands relating to electrical power consumption, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to serve its community.
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM. PMID:23777526
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Cao, Jingtai; Zhao, Xiaohui; Liu, Wei
2015-03-01
A conventional adaptive optics (AO) system is widely used to compensate atmospheric turbulence in free-space optical (FSO) communication systems, but wavefront measurements based on the phase-conjugation principle are not reliable under strong scintillation conditions. In this study we propose a novel swarm intelligence optimization algorithm, called the modified shuffled frog leaping algorithm (MSFL), to compensate the wavefront aberration. Simulation and experimental results show that the MSFL algorithm performs well in atmospheric compensation and can increase the coupling efficiency at the receiver terminal and significantly improve the performance of FSO communication systems.
An infrared salient object stereo matching algorithm based on epipolar rectification
NASA Astrophysics Data System (ADS)
Zhang, Yi; Wu, Lei; Han, Jing; Bai, Lian-fa
2016-02-01
Due to the higher noise and fewer details in infrared images, general matching algorithms are prone to producing unsatisfactory results. Combining the idea of salient objects, we propose a novel infrared stereo matching algorithm that applies to unconstrained stereo rigs. Firstly, we present an epipolar rectification method introducing particle swarm optimization and K-nearest neighbors to deal with the problem of the epipolar constraint. Then we make use of transition regions to extract salient objects in the rectified infrared image pairs. Finally, the disparity map is generated by matching salient regions. Experiments show that our algorithm handles the infrared stereo matching of unconstrained stereo rigs with better accuracy and higher speed.
Creating Slide Show Book Reports.
ERIC Educational Resources Information Center
Taylor, Harriet G.; Stuhlmann, Janice M.
1995-01-01
Describes the use of "Kid Pix 2" software by fourth grade students to develop slide-show book reports. Highlights include collaboration with education majors from Louisiana State University, changes in attitudes of the education major students and elementary students, and problems with navigation and disk space. (LRW)
Producing Talent and Variety Shows.
ERIC Educational Resources Information Center
Szabo, Chuck
1995-01-01
Identifies key aspects of producing talent shows and outlines helpful hints for avoiding pitfalls and ensuring a smooth production. Presents suggestions concerning publicity, scheduling, and support personnel. Describes types of acts along with special needs and problems specific to each act. Includes a list of resources. (MJP)
Greedy heuristic algorithm for solving series of EEE components classification problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.
2016-04-01
Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic that solves a series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the result decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.
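The agglomerative greedy idea can be sketched in a few lines: seed with more centers than needed (here, every point), then repeatedly drop the center whose removal increases the total squared error the least. The data and sizes below are invented for illustration:

```python
import numpy as np

# Toy greedy agglomerative heuristic for k-means: start with every data
# point as a center, then greedily remove the cheapest-to-remove center
# until only k remain. The three-cluster dataset is made up.
def sse(X, centers):
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return float(d.min(axis=1).sum())          # total squared error

def greedy_kmeans(X, k):
    centers = X.copy()                         # agglomerative start
    while len(centers) > k:
        costs = [sse(X, np.delete(centers, i, axis=0))
                 for i in range(len(centers))]
        centers = np.delete(centers, int(np.argmin(costs)), axis=0)
    return centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in (0.0, 5.0, 10.0)])
centers = greedy_kmeans(X, k=3)
print(sorted(round(float(c[0])) for c in centers))  # [0, 5, 10]
```

Removing a redundant center inside a well-covered cluster costs almost nothing, while removing a cluster's last representative is expensive, which is why the greedy order finds one center per batch.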
Multi-pattern string matching algorithms comparison for intrusion detection system
NASA Astrophysics Data System (ADS)
Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.
2014-12-01
Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) was created and has become an important part of any modern network, protecting the network from attacks. The IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down IDS performance. A new algorithm that overcomes this weakness needs to be developed; improving the multi-pattern matching algorithm ensures that an IDS can work properly and that its limitations can be overcome. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS, and MPH-BMH, with their corresponding original algorithms KR, QS, and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, with MP-KR the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding shows that the multi-pattern matching algorithms are more efficient in high-speed networks.
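As a reference point for the BMH family compared above, here is plain single-pattern Boyer-Moore-Horspool; the MPH-BMH multi-pattern extension itself is not reproduced:

```python
# Single-pattern Boyer-Moore-Horspool: skip ahead using a bad-character
# shift table built from all but the last pattern character.
def horspool(text, pattern):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # shift = distance from a character's last occurrence to pattern end
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i                           # match at offset i
        i += shift.get(text[i + m - 1], m)     # skip by table, or m
    return -1

print(horspool("intrusion detection system", "detection"))  # 10
```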
NASA Astrophysics Data System (ADS)
Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula
The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It is continuously measuring over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year: while the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for calibrating the soil model. The measurement protocol currently running has proven robust during the whole operation time and will be extended as long as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret any anomalies that may conceal hidden sensor biases. In addition, SM and TAU that are currently
An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.
Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan
2015-11-01
The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic (the fluid flow) regions. Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without loss of image quality. Therefore, the temporal resolution can be substantially increased, and fluid flow experiments with faster dynamics can be performed. PMID:26259219
Evaluating and comparing algorithms for respiratory motion prediction.
Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A
2013-06-01
In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real-world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm, which is one of the algorithms currently used in the CyberKnife, is outperformed by all other methods. This is especially true of the wLMS, SVRpred, and MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, and the wLMS algorithm in more than 84%, of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient's respiratory
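A one-step-ahead normalized LMS predictor of the kind used as the baseline above fits in a few lines; the filter length, step size, and synthetic "breathing" trace are illustrative choices:

```python
import numpy as np

# Minimal normalized LMS one-step-ahead predictor: at each step predict
# the next sample from the last few samples, then update the weights
# with a step normalized by the input energy.
def nlms_predict(signal, taps=8, mu=0.5, eps=1e-8):
    w = np.zeros(taps)
    preds = np.zeros_like(signal)
    for t in range(taps, len(signal)):
        x = signal[t - taps:t][::-1]           # most recent sample first
        preds[t] = w @ x                       # predict next sample
        err = signal[t] - preds[t]
        w += mu * err * x / (eps + x @ x)      # normalized update
    return preds

t = np.arange(2000)
s = np.sin(2 * np.pi * t / 40.0)               # synthetic breathing trace
p = nlms_predict(s)
rms = lambda e: float(np.sqrt(np.mean(e[1000:] ** 2)))
print(rms(s - p) < rms(s))  # beats the zero (no-prediction) baseline: True
```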
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms have no obvious effect on online courses, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm's output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
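The TF-IDF scoring at the heart of such a pipeline can be sketched directly; the toy course "documents" below are invented, and the ranking rule is the standard tf x idf product rather than the paper's optimized weighting:

```python
import math
from collections import Counter

# Bare TF-IDF: term frequency within a document times the log inverse
# document frequency across the corpus. Documents are token lists.
def tfidf(docs):
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({term: (count / len(doc)) * math.log(n / df[term])
                       for term, count in tf.items()})
    return scores

docs = [["pointer", "array", "pointer"],
        ["array", "loop"],
        ["loop", "loop", "struct"]]
s = tfidf(docs)
# "pointer" appears only in doc 0, so it outranks the shared "array"
print(s[0]["pointer"] > s[0]["array"])  # True
```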
Service Discovery Framework Supported by EM Algorithm and Bayesian Classifier
NASA Astrophysics Data System (ADS)
Peng, Yanbin
Service-oriented computing has become a mainstream research field. Meanwhile, machine learning is a promising AI technology that can enhance the performance of traditional algorithms. Therefore, aiming to solve the service discovery problem, this paper introduces a Bayesian classifier into a web service discovery framework, which improves service querying speed. In this framework, the services in the service library become the training set of the Bayesian classifier, and a service query becomes a testing sample. Service matchmaking can then be executed within the relevant service class, which contains fewer services and thus saves time. Because the classes of the services in the training set are unknown, the EM algorithm is used to estimate the prior probabilities and likelihood functions. Experimental results show that the method supported by the EM algorithm and Bayesian classifier outperforms other methods in time complexity.
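A minimal multinomial naive Bayes matcher illustrates the routing step: score a query against each service class and search only the winning class. The service classes and keyword "descriptions" below are made up, and class priors are omitted for brevity:

```python
import math
from collections import Counter, defaultdict

# Multinomial naive Bayes with Laplace smoothing over labeled keyword
# bags; classify() returns the class with the highest log-likelihood.
def train(labeled_docs):
    class_docs = defaultdict(list)
    for words, label in labeled_docs:
        class_docs[label].extend(words)
    vocab = {w for words, _ in labeled_docs for w in words}
    model = {}
    for label, words in class_docs.items():
        counts = Counter(words)
        total = len(words)
        model[label] = {w: (counts[w] + 1) / (total + len(vocab))
                        for w in vocab}            # Laplace smoothing
    return model

def classify(model, words):
    def log_prob(label):
        probs = model[label]
        return sum(math.log(probs.get(w, 1e-9)) for w in words)
    return max(model, key=log_prob)

services = [(["book", "flight", "ticket"], "travel"),
            (["weather", "forecast", "city"], "weather"),
            (["flight", "hotel", "booking"], "travel")]
model = train(services)
print(classify(model, ["flight", "booking"]))  # travel
```

In the abstract's setting the labels themselves are latent, so EM alternates between soft-assigning services to classes and re-estimating these same counts.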
Experimental study on subaperture testing with iterative triangulation algorithm.
Yan, Lisong; Wang, Xiaokun; Zheng, Ligong; Zeng, Xuefeng; Hu, Haixiang; Zhang, Xuejun
2013-09-23
Applying the iterative triangulation stitching algorithm, we provide an experimental demonstration by testing a Φ120 mm flat mirror, a Φ1450 mm off-axis parabolic mirror, and a convex hyperboloid mirror. Comparing the stitching results with the self-examined subapertures shows that the reconstructions are consistent with the subaperture tests. Since all the experiments were conducted on a 5-DOF adjustment platform with large adjustment errors, this demonstrates that, with the above algorithm, subaperture stitching can easily be performed without a precise positioning system. In addition, the algorithm unifies the coordinates of testing and processing, making it possible to guide the processing by the stitching result. PMID:24104151
Logit Model based Performance Analysis of an Optimization Algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. A.; Ospina, J. D.; Villada, D.
2011-09-01
In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To guarantee that the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters (number of iterations and number of potential solutions) is considered. These parameters are varied sequentially while increasing the dimension of several test functions, and performance curves are obtained. The MAGO was originally designed to perform well with small populations; the self-adaptation task with small populations therefore becomes more challenging as the problem dimension grows. The results showed that the convergence probability to an optimal solution increases with growing numbers of iterations and potential solutions. However, the success rates slow down when the dimension of the problem escalates. A logit model is used to determine the mutual effects between the parameters of the algorithm.
NASA Technical Reports Server (NTRS)
2004-01-01
The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity-increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity-increments clustering method on a massive computer-user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can gain more knowledge from it. PMID:26221133
A New Phase Unwrapping Algorithm Based on Relative Distance Oriented
NASA Astrophysics Data System (ADS)
Zhang, Qican; Su, Xianyu; Xiang, Liqun; Yu, Liang
2010-04-01
A relative distance-oriented phase unwrapping algorithm is presented in this paper. Considering the wrapped phase values and modulation distribution of neighboring pixels, a relative distance between two adjacent pixels is calculated and placed in complex coordinates, where all the relative distances compose a relative-distance tree used to determine the optimized phase unwrapping path. A small relative distance indicates that the phase difference between the two corresponding pixels is small and that the phase data awaiting unwrapping are more reliable; the closer the relative distance, the more likely the phase unwrapping is to succeed. Combined with the minimum spanning tree algorithm, the unwrapping order of each pixel can be determined and the whole unwrapping path obtained; the path is always directed from the minimum distance value to greater ones. Consequently, the algorithm avoids error propagation in the phase unwrapping: errors are confined to a minimal local region in the worst case, and the error probability of the unwrapping is kept as low as possible. The physical significance of the relative distance and the full phase unwrapping algorithm are presented in this paper. Experimental results show that this new algorithm is feasible and effective; it guides the path to avoid crossing poles, branch cuts, and shadows during phase unwrapping.
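The minimum-spanning-tree ordering can be sketched with a Prim-style traversal that always expands the smallest pending relative distance; the four-node graph and its weights below are invented:

```python
import heapq

# Prim-style ordering over a quality graph: repeatedly unwrap the
# unvisited pixel reachable through the smallest "relative distance",
# so unreliable (large-distance) edges are crossed as late as possible.
def unwrap_order(edges, start):
    # edges: {node: [(relative_distance, neighbor), ...]}
    visited, order = {start}, [start]
    heap = list(edges[start])
    heapq.heapify(heap)
    while heap:
        dist, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        order.append(node)                 # unwrap this pixel next
        for e in edges[node]:
            heapq.heappush(heap, e)
    return order

graph = {"A": [(0.1, "B"), (0.9, "C")],
         "B": [(0.1, "A"), (0.2, "C")],
         "C": [(0.9, "A"), (0.2, "B"), (0.3, "D")],
         "D": [(0.3, "C")]}
print(unwrap_order(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Note how C is reached through B (weight 0.2) rather than directly from A (weight 0.9): the path routes around the unreliable edge, which is the error-containment property the abstract describes.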
Myers, Timothy
2006-09-01
The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065
Advanced spectral signature discrimination algorithm
NASA Astrophysics Data System (ADS)
Chakravarty, Sumit; Cao, Wenjie; Samat, Alim
2013-05-01
This paper presents a novel approach to hyperspectral signature analysis. Hyperspectral signature analysis has been studied extensively in the literature, and many different algorithms have been developed to discriminate between hyperspectral signatures. Binary coding approaches like SPAM and SFBC use basic statistical thresholding operations to binarize a signature, and the binarized signatures are then compared using Hamming distance. This framework has been extended in techniques like SDFC, wherein a set of primitive structures characterizes local variations in a signature together with overall statistical measures such as the mean. Such structures harness only local variations and do not exploit any covariation of spectrally distinct parts of the signature. The approach of this research is to harvest such information using a technique similar to circular convolution. We consider the signature as cyclic by joining its two ends, and then create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic signatures. Texture features as in SDFC can be used to study the local structural variation at each circular shift, and different measures can be created by building histograms from the shifts and then applying different information-extraction techniques to the histograms. Depending on the technique used, different variants of the proposed algorithm are obtained. Experiments using the proposed technique show the viability of the proposed methods and their performance compared to current binary signature coding techniques.
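The cyclic-shift comparison can be sketched with an array roll; the "local structure" measure below (how often the signature exceeds its shifted copy) is an invented stand-in for the SDFC-style texture features:

```python
import numpy as np

# Treat the signature as circular and, for every shift k, record a
# simple local comparison against the rolled copy; the resulting
# histogram over shifts captures covariation between distant bands.
def shift_histogram(signature):
    sig = np.asarray(signature, dtype=float)
    n = len(sig)
    hist = np.zeros(n)
    for k in range(n):
        rolled = np.roll(sig, k)               # circular shift by k bands
        hist[k] = np.count_nonzero(sig > rolled)
    return hist / n                            # fraction per shift

sig = [0.1, 0.4, 0.9, 0.3, 0.2]
h = shift_histogram(sig)
print(h[0])  # 0.0: a signature never exceeds its unshifted self
```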
Dongarra, J.J.; Hewitt, T.
1985-08-01
This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the ''fastest'' execution rate for LU decomposition, 718 MFLOPS for a matrix of order 1000.
NASA Astrophysics Data System (ADS)
Bolognesi, Tommaso
2011-07-01
In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user and classify the context into public and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile users with the algorithm; we can classify the mobile users into basic service, E-service, plus service, and total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and is simpler. PMID:24688389
ENVITEC shows off air technologies
McIlvaine, R.W.
1995-08-01
The ENVITEC International Trade Fair for Environmental Protection and Waste Management Technologies, held in June in Duesseldorf, Germany, is the largest air pollution exhibition in the world and may be the largest environmental technology show overall. Visitors saw thousands of environmental solutions from 1,318 companies representing 29 countries and occupying roughly 43,000 square meters of exhibit space. Many innovations were displayed under the category "thermal treatment of air pollutants." New technologies include the following: regenerative thermal oxidizers; wet systems for removing pollutants; biological scrubbers; electrostatic precipitators; selective adsorption systems; activated-coke adsorbers; optimization of scrubber systems; and air pollution monitors.
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solves.
An enhanced algorithm to estimate BDS satellite's differential code biases
NASA Astrophysics Data System (ADS)
Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei
2016-02-01
This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on the three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for the different types of satellite DCBs is introduced. The algorithm was tested on GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to estimate BDS satellite DCBs precisely: the mean day-to-day scatter is about 0.19 ns, and the RMS of the difference with respect to the MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm from the Institute of Geodesy and Geophysics, China (IGGDCB) was also used to process the same dataset. The difference between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the difference between the results of the IGGDCB algorithm and the same products. In addition, we find that the day-to-day scatter of the BDS IGSO satellites is clearly lower than that of the GEO and MEO satellites, and that a significant bias exists in the daily DCB values of the GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimating the satellite DCBs of multiple GNSS systems.
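The estimation step reduces to a weighted least-squares solve; a generic sketch with an invented three-parameter system and per-observation weights (not the BDS estimator itself):

```python
import numpy as np

# Weighted least squares via the normal equations: x = (A'WA)^-1 A'Wy,
# with weights set to inverse observation variances.
def wls(A, y, w):
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0, 0.5])
A = rng.normal(size=(50, 3))
sigma = rng.uniform(0.01, 0.1, size=50)        # per-observation noise level
y = A @ x_true + rng.normal(scale=sigma)
x_hat = wls(A, y, 1.0 / sigma ** 2)
print(np.allclose(x_hat, x_true, atol=0.1))    # True
```

Weighting by inverse variance is what "considering the precision of the ionospheric observables" amounts to: precise observations pull the estimate harder than noisy ones.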
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Sinclair, Michael B
2012-01-05
ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image, select any pixel or region from the displayed image, and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
Parameter incremental learning algorithm for neural networks.
Wan, Sheng; Banta, Larry E
2006-11-01
In this paper, a novel stochastic (online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is a combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as, and as easy to use as, the BP algorithm. It can therefore be applied, with better performance, in any situation where the standard online BP algorithm is applicable. PMID:17131658
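The preservation-plus-adaptation objective can be made concrete for a single linear unit, where it has a closed-form minimizer. The objective form, the trade-off parameter mu, and the data below are illustrative assumptions, not the paper's actual MLP derivation.

```python
import numpy as np

def pil_step(w_old, x, y, mu=1.0):
    """One 'preserve + adapt' update for a linear unit:
    argmin_w (y - w@x)^2 + mu * ||w - w_old||^2.
    Setting the gradient to zero gives (mu*I + x x^T) w = mu*w_old + y*x."""
    n = len(x)
    A = mu * np.eye(n) + np.outer(x, x)
    return np.linalg.solve(A, mu * w_old + y * x)

# Two orthogonal training patterns; the weights drift toward the exact
# solution [1, -1] while each step stays close to the previous weights.
w = np.zeros(2)
data = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), -1.0)] * 20
for x, y in data:
    w = pil_step(w, x, y, mu=0.5)
print(np.round(w, 2))
```

Each update balances fitting the new pattern against staying near the previous weights; as mu grows the step becomes more conservative, and as mu shrinks it approaches a pure fit to the latest pattern.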
Pea Plants Show Risk Sensitivity.
Dener, Efrat; Kacelnik, Alex; Shemesh, Hagai
2016-07-11
Sensitivity to variability in resources has been documented in humans, primates, birds, and social insects, but the fit between empirical results and the predictions of risk sensitivity theory (RST), which aims to explain this sensitivity in adaptive terms, is weak [1]. RST predicts that agents should switch between risk proneness and risk aversion depending on state and circumstances, especially according to the richness of the least variable option [2]. Unrealistic assumptions about agents' information processing mechanisms and poor knowledge of the extent to which variability imposes specific selection in nature are strong candidates to explain the gap between theory and data. RST's rationale also applies to plants, where it has not hitherto been tested. Given the differences between animals' and plants' information processing mechanisms, such tests should help unravel the conflicts between theory and data. Measuring root growth allocation by split-root pea plants, we show that they favor variability when mean nutrient levels are low and the opposite when they are high, supporting the most widespread RST prediction. However, the combination of non-linear effects of nitrogen availability at local and systemic levels may explain some of these effects as a consequence of mechanisms not necessarily evolved to cope with variance [3, 4]. This resembles animal examples in which properties of perception and learning cause risk sensitivity even though they are not risk adaptations [5]. PMID:27374342
TaDb: A time-aware diffusion-based recommender algorithm
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan
2015-02-01
Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.
Projection Classification Based Iterative Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Ruiqiu; Li, Chen; Gao, Wenhua
2015-05-01
Iterative algorithms perform well in 3-D image reconstruction because they do not need complete projection data. This makes them applicable to the inspection of BGA solder joints, which is usually performed with X-ray laminography and yields poorer reconstructed images than conventional tomography, but their convergence speed is low. This paper explores a projection-classification-based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstructed value decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater the advantage.
Sparse subspace clustering: algorithm, theory, and applications.
Elhamifar, Ehsan; Vidal, René
2013-11-01
Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering. PMID:24051734
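The self-expression idea, that a point in a union of subspaces is best represented by points from its own subspace, can be illustrated on toy data. A greedy one-atom pick stands in here for the paper's l1 sparse optimization program (an assumption made for brevity).

```python
import numpy as np

# Six points on two orthogonal lines (1-D subspaces) in R^3. For each point,
# pick the single other point with the highest normalized correlation, a crude
# stand-in for solving the sparse self-expression program; the pick should
# land in the point's own subspace.
d1 = np.array([1.0, 0.0, 0.0])
d2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
X = np.stack([c * d for d in (d1, d2) for c in (1.0, -2.0, 0.5)])
labels = np.array([0, 0, 0, 1, 1, 1])

picks = []
for i in range(len(X)):
    others = np.delete(np.arange(len(X)), i)
    corr = np.abs(X[others] @ X[i]) / (
        np.linalg.norm(X[others], axis=1) * np.linalg.norm(X[i]) + 1e-12)
    picks.append(others[np.argmax(corr)])

same = [labels[i] == labels[j] for i, j in enumerate(picks)]
print(all(same))  # prints True
```

Because the two subspaces are orthogonal, every point's best representative comes from its own line, which is exactly the property the spectral-clustering step of the full algorithm exploits.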
Computational algorithms to predict Gene Ontology annotations
2015-01-01
Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from being definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation of novel annotations is a costly procedure, in both economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of the genes of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures led to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Out of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
NASA Astrophysics Data System (ADS)
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm, and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate, since some light paths between the source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm, which combines the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in delay-constrained environments such as IPTV applications. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm over the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of light-tree request blocking.
NASA Technical Reports Server (NTRS)
2005-01-01
False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.
During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).
The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.
The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.
Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in
Breadth-first search-based single-phase algorithms for bridge detection in wireless sensor networks.
Akram, Vahid Khalilpour; Dagdeviren, Orhan
2013-01-01
Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms and analyze their proof of correctness and their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms consume fewer resources, with energy savings of up to 5.5 times. PMID:23845930
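For reference, the classical centralized bridge-finding algorithm (DFS with low-link values) is easy to state; the paper's contribution is distributed, single-phase, BFS-integrated variants of this task, which the sketch below does not reproduce.

```python
from collections import defaultdict

def find_bridges(edges):
    """Return the bridges of an undirected graph via Tarjan's low-link DFS:
    edge (u, v) is a bridge iff no back edge from v's subtree reaches u
    or an ancestor of u, i.e. low[v] > disc[u]."""
    g = defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    disc, low, bridges = {}, {}, []

    def dfs(u, parent, t=[0]):
        disc[u] = low[u] = t[0]
        t[0] += 1
        for v in g[u]:
            if v not in disc:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))
            elif v != parent:
                low[u] = min(low[u], disc[v])

    for u in list(g):
        if u not in disc:
            dfs(u, None)
    return bridges

# Triangle 0-1-2 plus a pendant edge 2-3: only (2, 3) is a bridge.
print(find_bridges([(0, 1), (1, 2), (2, 0), (2, 3)]))  # prints [(2, 3)]
```

The cycle edges are protected by the back edge 2-0, while removing the pendant edge disconnects node 3, matching the definition of a bridge in the abstract.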
A Synthesized Heuristic Task Scheduling Algorithm
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: first, critical tasks have the highest priority; second, tasks with a longer path to the exit task are selected; and then tasks with fewer predecessors are chosen for scheduling. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance. PMID:25254244
Saving Resources with Plagues in Genetic Algorithms
de Vega, F F; Cantu-Paz, E; Lopez, J I; Manzano, T
2004-06-15
The population size of genetic algorithms (GAs) affects the quality of the solutions and the time required to find them. While progress has been made in estimating the population sizes required to reach a desired solution quality for certain problems, in practice populations are still usually sized by trial and error. These trials might find a population that is large enough to reach a satisfactory solution, but there may still be opportunities to reduce the computational cost by shrinking the population. This paper presents a technique called plague that periodically removes a number of individuals from the population as the GA executes. Recently, the usefulness of the plague has been demonstrated for genetic programming. The objective of this paper is to extend the study of plagues to genetic algorithms. We experiment with deceptive trap functions, a tunably difficult problem for GAs, and the experiments show that plagues can save computational time while maintaining solution quality and reliability.
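A minimal sketch of the plague idea on the OneMax problem: a standard GA whose population shrinks by a fixed amount each generation down to a floor. All parameter values are illustrative, not the paper's.

```python
import random

random.seed(0)
L, POP, FLOOR, GENS = 20, 40, 10, 60
fit = lambda ind: sum(ind)           # OneMax: count of 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
start_best = max(map(fit, pop))

for _ in range(GENS):
    pop.sort(key=fit, reverse=True)
    if len(pop) > FLOOR:             # plague: drop the two worst individuals
        pop = pop[:-2]
    nxt = [pop[0][:]]                # elitism keeps the best found so far
    while len(nxt) < len(pop):
        # truncation selection from the fitter half, one-point crossover
        a, b = random.sample(pop[: max(2, len(pop) // 2)], 2)
        cut = random.randrange(1, L)
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:    # occasional bit-flip mutation
            child[random.randrange(L)] ^= 1
        nxt.append(child)
    pop = nxt

print(len(pop), max(map(fit, pop)))
```

Each generation evaluates fewer individuals than the last until the floor is reached, which is the computational saving the plague targets, while elitism guarantees the best solution never degrades.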
An accurate and efficient algorithm for Peptide and ptm identification by tandem mass spectrometry.
Ning, Kang; Ng, Hoong Kee; Leong, Hon Wai
2007-01-01
Peptide identification by tandem mass spectrometry (MS/MS) is one of the most important problems in proteomics. Recent advances in high-throughput MS/MS experiments produce huge amounts of spectra. Unfortunately, identification of these spectra is relatively slow, and the accuracy of current algorithms is not high in the presence of noise and post-translational modifications (PTMs). In this paper, we strive to achieve high accuracy and efficiency for the peptide identification problem, with particular attention to the identification of peptides with PTMs. This paper expands our previous work on PepSOM with the introduction of two accurate modified scoring functions: Slambda for peptide identification and Slambda* for identification of peptides with PTMs. Experiments showed that our algorithm is both fast and accurate for peptide identification. Experiments on spectra with simulated and real PTMs confirmed that our algorithm is accurate for identifying PTMs. PMID:18546510
An improved algorithm of a priori based on geostatistics
NASA Astrophysics Data System (ADS)
Chen, Jiangping; Wang, Rong; Tang, Xuehua
2008-12-01
One of the classical algorithms in data mining is Apriori, which was developed for association rule mining in large transaction databases, but it cannot be used directly for spatial association rule mining. The main difference between data mining in relational databases and in spatial databases is that attributes of the neighbors of an object of interest may have an influence on the object and therefore have to be considered as well. The explicit location and extension of spatial objects define implicit relations of spatial neighborhood (such as topological, distance and direction relations) which are used by spatial data mining algorithms. Therefore, new techniques are required for effective and efficient spatial data mining. Geostatistics comprises statistical methods used to describe spatial relationships among sample data and to apply this analysis to the prediction of spatial and temporal phenomena; it is used to explain spatial patterns and to interpolate values at unsampled locations. This paper puts forward an improved Apriori algorithm for mining association rules with geostatistics. First, the spatial autocorrelation of attributes with location is estimated with geostatistical methods such as kriging and the Spatial Autoregressive Model (SAR). Then a spatial autocorrelation model of the attributes is built. Next, an improved Apriori algorithm combined with the spatial autocorrelation model is offered to mine spatial association rules. Finally, the new algorithm is tested in an experiment on hay-fever incidence and climate factors in the UK. The results show that the output rules match the references.
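For context, here is plain Apriori on a toy transaction set: level-wise generation of candidate itemsets, keeping only those whose support clears a threshold. The candidate-pruning refinement and the paper's geostatistical spatial-autocorrelation extension are omitted.

```python
def apriori(transactions, min_support):
    """Return {frequent itemset: support} by level-wise candidate generation.
    Omits the subset-pruning step of full Apriori for brevity."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def support(s):
        return sum(s <= t for t in transactions) / len(transactions)

    frequent = {}
    k_sets = [frozenset([i]) for i in items]
    while k_sets:
        kept = [s for s in k_sets if support(s) >= min_support]
        frequent.update({s: support(s) for s in kept})
        # candidates for the next level: unions of frequent k-sets that
        # produce (k+1)-sets
        k_sets = list({a | b for a in kept for b in kept
                       if len(a | b) == len(a) + 1})
    return frequent

freq = apriori([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b"}], 0.5)
print(sorted((tuple(sorted(s)), sup) for s, sup in freq.items()))
```

With a 0.5 threshold, the singletons a, b, c and the pairs {a, b} and {a, c} survive, while {b, c} (support 0.25) does not; the loop then terminates because no frequent triple exists.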
Multithreaded Algorithms for Graph Coloring
Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex
2012-10-21
Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implications of these features are exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
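The serial baseline both parallel variants aim to match in color count is first-fit greedy coloring: each vertex takes the smallest color not used by its already-colored neighbors. A minimal sketch:

```python
from collections import defaultdict

def greedy_color(edges, order):
    """First-fit greedy coloring: visit vertices in the given order and
    assign each the smallest color absent among its colored neighbors."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for u in order:
        taken = {color[v] for v in adj[u] if v in color}
        c = 0
        while c in taken:
            c += 1
        color[u] = c
    return color

# 5-cycle: an odd cycle needs 3 colors, and greedy finds such a coloring.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = greedy_color(edges, order=[0, 1, 2, 3, 4])
print(coloring, max(coloring.values()) + 1)
```

The parallel versions in the paper speculatively color vertices concurrently and then iterate to repair conflicts; the sketch above is only the sequential kernel they emulate.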
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least square regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
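The fusion step can be sketched with synthetic similarity scores. Ordinary least squares stands in here for the paper's partial least squares regression (an assumption; PLSR handles correlated predictors differently), and all data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
truth = rng.integers(0, 2, n).astype(float)   # 1 = same-face pair, 0 = different

# Three imperfect "algorithms": the same signal with differing noise levels.
S = np.stack([truth + rng.normal(0, s, n) for s in (0.5, 1.0, 2.0)], axis=1)

# Learn a weighting of the scorers (plus intercept) against the labels.
D = np.c_[S, np.ones(n)]
w, *_ = np.linalg.lstsq(D, truth, rcond=None)
fused = D @ w

def accuracy(scores):
    return np.mean((scores > 0.5) == (truth > 0.5))

print(accuracy(S[:, 2]), accuracy(fused))
```

The learned weights down-weight the noisiest scorer, so the fused score is far more accurate than the worst individual scorer, which is the qualitative effect the paper reports for PLSR fusion.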
A novel evaluation metric based on visual perception for moving target detection algorithm
NASA Astrophysics Data System (ADS)
Huang, Wei; Liu, Lei; Cui, Minjie; Li, He
2016-05-01
Traditional performance evaluation indices for moving target detection algorithms each emphasize a different aspect, which makes it difficult to evaluate an algorithm's performance comprehensively and objectively. In particular, when the detection results of different algorithms contain the same numbers of foreground and background points, all of the traditional indices coincide and cannot discriminate between the algorithms; this is a drawback of indices computed pixel by pixel. To solve this problem, and drawing on features of the human visual perception system, this paper presents a new evaluation index, Visual Fluctuation (VF), computed on image blocks, to evaluate the performance of moving target detection algorithms. Experiments showed that the new perception-based evaluation index makes up for the deficiencies of the traditional ones: its results not only accord with human visual perception but also evaluate the performance of moving target detection algorithms more objectively.
Ear feature region detection based on a combined image segmentation algorithm-KRM
NASA Astrophysics Data System (ADS)
Jiang, Jingying; Zhang, Hao; Zhang, Qi; Lu, Junsheng; Ma, Zhenhe; Xu, Kexin
2014-02-01
The Scale Invariant Feature Transform (SIFT) algorithm is widely used for ear feature matching and recognition. However, its application is usually hampered by non-target areas within the image, and this interference then degrades the matching and recognition of ear features. To solve this problem, a combined image segmentation algorithm, KRM, is introduced in this paper as a pretreatment method for human ear recognition. First, the target ear areas are extracted by the KRM algorithm, and then the SIFT algorithm is applied to feature detection and matching. The KRM algorithm follows three steps: (1) the image is preliminarily segmented into foreground target and background areas using K-means clustering; (2) region growing is used to merge over-segmented areas; (3) morphological erosion filtering is applied to obtain the final segmented regions. The experimental results showed that the KRM method effectively improves the accuracy and robustness of SIFT-based ear feature matching and recognition.
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors would otherwise be idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of the global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm, for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
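The kernel being pipelined is the serial Thomas algorithm for tridiagonal systems: a forward elimination sweep followed by a back-substitution sweep. A minimal sketch:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal,
    b the main diagonal, and c the super-diagonal (a[0] and c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back-substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson-like system [[2,-1,0],[-1,2,-1],[0,-1,2]] with rhs [1,0,1];
# the exact solution is all ones.
a = [0.0, -1.0, -1.0]
b = [2.0, 2.0, 2.0]
c = [-1.0, -1.0, 0.0]
x = thomas(a, b, c, d=[1.0, 0.0, 1.0])
print([round(v, 6) for v in x])  # prints [1.0, 1.0, 1.0]
```

The data dependence that motivates pipelining is visible here: each forward-sweep step needs the previous one, and each backward-sweep step needs the next, so a line split across processors serializes unless the two sweeps are overlapped as the paper describes.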
NASA Astrophysics Data System (ADS)
Huang, Yin; Chen, Jianhua; Xiong, Shaojun
2009-07-01
Mobile learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Web-based Mobile-Learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem and causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. This paper therefore focuses on a new data-mining algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), which combines the advantages of the genetic algorithm and the simulated annealing algorithm to mine association rules. The paper first presents a parallel genetic simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
Motion object tracking algorithm using multi-cameras
NASA Astrophysics Data System (ADS)
Kong, Xiaofang; Chen, Qian; Gu, Guohua
2015-09-01
Motion object tracking is one of the most important research directions in computer vision. Challenges in designing a robust tracking method usually arise from partial or complete occlusion of targets. However, a tracking algorithm based on multiple cameras and the homography relation among three views can deal with this issue effectively, since combining information from cameras in different views makes the target more complete and accurate. In this paper, a robust visual tracking algorithm based on the homography relations of three cameras in different views is presented to cope with occlusion. First, and as the main contribution of this paper, a tracking algorithm based on low-rank matrix representation within the particle filter framework is applied to track the same target in the public region in each view. The target model and the occlusion model are established, and an alternating optimization algorithm is used to solve the proposed optimization formulation during tracking. Then, the plane in which the target has the largest occlusion weight is taken as the principal plane, and the homography is calculated to obtain the mapping relations between the views. Finally, the images of the other two views are projected into the main plane. By making use of the homography relation between views, complete information about the occluded target can be obtained. The proposed algorithm has been examined on several challenging image sequences, and experiments show that it avoids tracking failure, especially under occlusion, and improves tracking accuracy compared with other state-of-the-art algorithms.
Ordered subsets algorithms for transmission tomography.
Erdogan, H; Fessler, J A
1999-11-01
The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested. PMID:10588288
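The ordered-subsets idea is independent of the tomographic details: each sub-iteration uses the gradient of only one subset of the data, scaled up by the number of subsets. As a hedged illustration (on a plain least-squares problem rather than the transmission log-likelihood, and with invented step sizes), one can write:

```python
def os_least_squares(A, y, n_subsets=4, epochs=60, step=0.01):
    """Ordered-subsets gradient descent on ||Ax - y||^2: each sub-iteration
    uses one subset of the rows, with its gradient scaled by n_subsets to
    approximate the full gradient (the OSEM/OSTR acceleration idea)."""
    n = len(A[0])
    x = [0.0] * n
    rows = list(range(len(A)))
    subsets = [rows[s::n_subsets] for s in range(n_subsets)]
    for _ in range(epochs):
        for sub in subsets:            # cycle through the ordered subsets
            grad = [0.0] * n
            for i in sub:
                r = sum(A[i][j] * x[j] for j in range(n)) - y[i]
                for j in range(n):
                    grad[j] += 2.0 * r * A[i][j]
            for j in range(n):
                x[j] -= step * n_subsets * grad[j]   # scaled subset gradient
    return x
```

For consistent data every subset gradient vanishes at the solution, so the iteration settles there; with noisy data a constant step only reaches a limit cycle, mirroring the loss of global convergence mentioned in the abstract.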
NASA Astrophysics Data System (ADS)
Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen
Traditional genetic algorithms (GAs) suffer from premature convergence when applied to scheduling problems. By adjusting the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA aimed at multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its ability to handle complex task scheduling optimization.
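The abstract does not give the adaptation rule, but a common way to make crossover and mutation probabilities self-adaptive is the Srinivas-Patnaik scheme, where above-average individuals get smaller operator rates. A minimal sketch under that assumption (all constants invented):

```python
def adaptive_prob(f, f_avg, f_max, k_max=0.5, k_min=0.05):
    """Self-adaptive operator probability: individuals at the population
    maximum get k_min (preserve good genes), individuals at or below the
    average get k_max (explore), with linear interpolation in between."""
    if f_max == f_avg:                 # degenerate population: explore
        return k_max
    if f >= f_avg:
        return k_min + (k_max - k_min) * (f_max - f) / (f_max - f_avg)
    return k_max
```

Each generation the GA would recompute `f_avg` and `f_max` and call this per individual, so the operator rates track the population's convergence state instead of staying fixed.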
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as recurrence equations and solving them. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Voronoi particle merging algorithm for PIC codes
NASA Astrophysics Data System (ADS)
Luu, Phuc T.; Tückmantel, T.; Pukhov, A.
2016-05-01
We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist of only particles that are in close proximity in the phase space to each other. We show the performance of our algorithm in the case of the two-stream instability and the magnetic shower.
Sequential and Parallel Algorithms for Spherical Interpolation
NASA Astrophysics Data System (ADS)
De Rossi, Alessandra
2007-09-01
Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
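The Inverse Distance Weighted part of the method can be sketched directly; on the sphere the distances are great-circle (angular) distances. This is a minimal serial version (the paper's local spherical-RBF interpolation and its parallelization are not reproduced):

```python
import math

def gc_dist(lat1, lon1, lat2, lon2):
    """Great-circle (angular) distance between two points, angles in radians."""
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.acos(min(1.0, max(-1.0, c)))   # clamp against round-off

def idw_sphere(query, samples, power=2):
    """Inverse Distance Weighting; samples is a list of ((lat, lon), value)."""
    num = den = 0.0
    for (lat, lon), val in samples:
        d = gc_dist(query[0], query[1], lat, lon)
        if d == 0.0:
            return val                 # exact hit: interpolation condition
        w = 1.0 / d ** power
        num += w * val
        den += w
    return num / den
```

A point equidistant from all samples receives their plain average, and the weights blow up near a data site, which is exactly what enforces the interpolation conditions mentioned in the abstract.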
Interpreting the Standard Division Algorithm in a "Candy Factory" Context
ERIC Educational Resources Information Center
Gregg, Jeff; Gregg, Diana Underwood
2007-01-01
This article discusses the difficulties preservice teachers experience when they try to make sense of the standard long-division algorithm, describes a realistic context that we have found productive in helping our students think about why the algorithm works and the role of place value in the algorithm, and suggests the applicability of this…
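The standard algorithm the article discusses can be spelled out in a few lines, where the remainder carried to the next digit plays the role of the leftover candies unbundled into the next smaller place value:

```python
def long_division(dividend, divisor):
    """Standard long-division algorithm, one base-10 digit at a time."""
    quotient_digits = []
    carry = 0
    for digit in str(dividend):
        chunk = carry * 10 + int(digit)    # unbundle the leftover into this place
        quotient_digits.append(chunk // divisor)
        carry = chunk % divisor            # leftover "candies" passed down
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, carry                 # carry ends up as the remainder
```

Tracing 7523 ÷ 6 digit by digit (7 → q1 r1, 15 → q2 r3, 32 → q5 r2, 23 → q3 r5) reproduces exactly the marks a student writes in the standard algorithm.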
Educational Outreach: The Space Science Road Show
NASA Astrophysics Data System (ADS)
Cox, N. L. J.
2002-01-01
The poster presented will give an overview of a study towards a "Space Road Show". The topic of this show is space science. The target group is adolescents, aged 12 to 15, at Dutch high schools. The show and its accompanying experiments would be supported with suitable educational material. Science teachers at schools can decide for themselves if they want to use this material in advance, afterwards or not at all. The aims of this outreach effort are: to motivate students for space science and engineering, to help them understand the importance of (space) research, to give them a positive feeling about the possibilities offered by space and in the process give them useful knowledge on space basics. The show revolves around three main themes: applications, science and society. First the students will get some historical background on the importance of space/astronomy to civilization. Secondly they will learn more about novel uses of space. On the one hand they will learn of "Views on Earth" involving technologies like Remote Sensing (or Spying), Communication, Broadcasting, GPS and Telemedicine. On the other hand they will experience "Views on Space" illustrated by past, present and future space research missions, like the space exploration missions (Cassini/Huygens, Mars Express and Rosetta) and the astronomy missions (Soho and XMM). Meanwhile, the students will learn more about the technology of launchers and satellites needed to accomplish these space missions. Throughout the show and especially towards the end attention will be paid to the third theme "Why go to space"? Other reasons for people to get into space will be explored. An important question in this is the commercial (manned) exploration of space. Thus, the questions of benefit of space to society are integrated in the entire show. It raises some fundamental questions about the effects of space travel on our environment, poverty and other moral issues. The show attempts to connect scientific with
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
Algorithms for computing and integrating physical maps using unique probes.
Jain, M; Myers, E W
1997-01-01
Current physical mapping projects based on STS-probes involve additional clues such as the fact that some probes are anchored to a known map and that others come from the ends of clones. Because of the disparate combinatorial contributions of these varied data items, it is difficult to design a "tailored" algorithm that incorporates them all. Moreover, it is inevitable that new experiments will provide new kinds of data, making obsolete any such algorithm. We show how to convert the physical mapping problem into a 0/1 linear programming (LP) problem. We further show how one can incorporate additional clues as additional constraints in the LP formulation. We give a simple relaxation of the 0/1 LP problem, which solves problems of the same scale as previously reported tailored algorithms, to equal or greater optimization levels. We also present a theorem proving that when the data is 100% accurate, then the relaxed and integer solutions coincide. The LP algorithm suffices to solve problems on the order of 80-100 probes--the typical size of the 2- or 3-connected contigs of Arratia et al. (1991). We give a heuristic algorithm which attempts to order and link the set of LP-solved contigs. Unlike previous work, this algorithm only links and orders contigs when the join is 90% or more likely to be correct. It is our view that there is no value in computing an optimal solution with respect to some criteria over very noisy data as this optimal solution rarely corresponds to the true solution. The paper involves extensive empirical trials over real and simulated data. PMID:9385539
Implementing the Deutsch-Jozsa algorithm with macroscopic ensembles
NASA Astrophysics Data System (ADS)
Semenenko, Henry; Byrnes, Tim
2016-05-01
Quantum computing implementations under consideration today typically deal with systems with microscopic degrees of freedom such as photons, ions, cold atoms, and superconducting circuits. The quantum information is stored typically in low-dimensional Hilbert spaces such as qubits, as quantum effects are strongest in such systems. It has, however, been demonstrated that quantum effects can be observed in mesoscopic and macroscopic systems, such as nanomechanical systems and gas ensembles. While few-qubit quantum information demonstrations have been performed with such macroscopic systems, a quantum algorithm showing exponential speedup over classical algorithms is yet to be shown. Here, we show that the Deutsch-Jozsa algorithm can be implemented with macroscopic ensembles. The encoding that we use avoids the detrimental effects of decoherence that normally plagues macroscopic implementations. We discuss two mapping procedures which can be chosen depending upon the constraints of the oracle and the experiment. Both methods have an exponential speedup over the classical case, and only require control of the ensembles at the level of the total spin of the ensembles. It is shown that both approaches reproduce the qubit Deutsch-Jozsa algorithm, and are robust under decoherence.
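For readers who want to see the circuit's outcome without a quantum device, the Deutsch-Jozsa result can be checked with a classical state-vector argument: after H applied to every qubit, the phase oracle, and H again, the amplitude of the all-zeros state equals the mean of (-1)^f(x), which is ±1 for a constant f and 0 for a balanced f. A small simulation of that fact (exponential classically, which is precisely the point of the quantum speedup):

```python
import itertools

def deutsch_jozsa(oracle, n):
    """Classically evaluate the final |0...0> amplitude of the DJ circuit.
    oracle(x) -> 0 or 1 for an n-bit tuple x, promised constant or balanced."""
    amp = sum((-1) ** oracle(x)
              for x in itertools.product([0, 1], repeat=n)) / 2 ** n
    return "constant" if abs(amp) > 0.5 else "balanced"
```

The ensemble implementation in the paper encodes the same interference pattern in collective spin variables; this sketch only fixes what the circuit computes, not how the ensembles realize it.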
A total variation denoising algorithm for hyperspectral data
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-mei; Xue, Bo; Li, Qian-qian; Ni, Guo-qiang
2010-11-01
Since noise can undermine the effectiveness of information extracted from hyperspectral imagery, noise reduction is a prerequisite for many classification-based applications of hyperspectral imagery. In this paper, an effective three-dimensional total variation (TV) denoising algorithm for hyperspectral imagery is introduced. First, a three-dimensional objective function for the total variation denoising model is derived from classical two-dimensional TV algorithms. Since the noise of hyperspectral imagery shows different characteristics in the spatial and spectral domains, the objective function is further improved by using separate spatial and spectral terms with their own regularization parameters, which adjust the trade-off between the two terms. The improved objective function is then discretized by approximating gradients with local differences, optimized by a quadratic convex function, and finally solved by a majorization-minimization based iteration algorithm. The performance of the new algorithm was evaluated on a set of Hyperion images acquired over a desert-dominated area in 2007. Experimental results show that, with properly chosen parameter values, the new approach removes the indention and restores the spectral absorption peaks more effectively, while achieving an improvement in signal-to-noise ratio similar to that of the minimum noise fraction (MNF) method.
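The structure of a TV objective and its minimization can be shown in one dimension (the paper works in three dimensions with separate spatial and spectral terms and a majorization-minimization solver; the plain gradient descent and all parameter values below are simplifications for illustration):

```python
import math

def tv_denoise_1d(f, lam=0.1, step=0.1, iters=500, eps=1e-6):
    """Gradient descent on the smoothed 1-D total variation model
    0.5 * ||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - f[i] for i in range(n)]        # data-fidelity term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)            # smoothed sign of the jump
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u
```

The TV penalty shrinks small oscillations (noise) much more than it penalizes the single large jump of a genuine edge, which is why the denoised signal keeps sharp transitions while its total variation drops.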
A Short Survey of Document Structure Similarity Algorithms
Buttler, D
2004-02-27
This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
An ROLAP Aggregation Algorithm with the Rules Being Specified
NASA Astrophysics Data System (ADS)
Zhengqiu, Weng; Tai, Kuang; Lina, Zhang
This paper introduces the basic theory of data warehousing and ROLAP, and presents a new kind of ROLAP aggregation algorithm that supports calculation rules. It addresses the low accuracy of the traditional aggregation algorithm, which aggregates only by addition. The ROLAP aggregation algorithm with calculation rules, which can aggregate according to business rules, improves accuracy. Key designs and procedures are presented. Compared with the traditional method, its efficiency is demonstrated in an experiment.
NASA Astrophysics Data System (ADS)
Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng
2016-02-01
The complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both the intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in massive computational load. To mitigate such a problem, in this work, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm is demonstrated using both flow phantom and live animal experiments.
Algorithm of weak edge detection based on the Nilpotent minimum fusion
NASA Astrophysics Data System (ADS)
Sun, Genyun; Zhang, Aizhu; Han, Xujun
2011-11-01
To overcome the shortcomings of traditional edge detection, such as the loss of weak edges and overly rough detected edges, a new edge detection method is proposed in this paper. The new algorithm is based on Nilpotent minimum fusion. First, based on the spatial fuzzy relation of weak edges, the algorithm performs decision fusion with the Nilpotent minimum operator to improve the structure of weak edges. Second, edges are detected from the fusion results, so that weak edges are recovered. Experiments on a variety of weak-edge images show that the new algorithm overcomes the shortcomings of traditional edge detection, with results much better than those of traditional methods: some of the weak edges of complex images, such as medical images, are detected, and the detected edges are thinner.
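The fusion operator itself is standard fuzzy-logic machinery: the Nilpotent minimum t-norm returns min(a, b) when the two pieces of evidence jointly exceed the threshold a + b > 1, and 0 otherwise. A sketch of pixel-wise fusion of two normalized edge-strength maps (the paper's spatial fuzzy-relation step is not reproduced):

```python
def nilpotent_min(a, b):
    """Nilpotent minimum t-norm on [0, 1]."""
    return min(a, b) if a + b > 1 else 0.0

def fuse_edge_maps(m1, m2):
    """Pixel-wise Nilpotent-minimum fusion of two edge-strength maps
    (nested lists of floats in [0, 1] with identical shapes)."""
    return [[nilpotent_min(p, q) for p, q in zip(r1, r2)]
            for r1, r2 in zip(m1, m2)]
```

Unlike the plain minimum, the nilpotent variant suppresses pixels where the combined evidence is weak (a + b ≤ 1), which is what keeps the fused edges thin.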
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals were recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm was evaluated in classification experiments via the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
NASA Astrophysics Data System (ADS)
Sai, Yaozhang; Jiang, Mingshun; Sui, Qingmei; Lu, Shizeng; Jia, Lei
2015-08-01
This paper proposes an impact localization system using a fiber Bragg grating (FBG) network, based on the quasi-Newton algorithm and the particle swarm optimization (PSO) algorithm. The FBG sensing network, formed by eight FBGs, is used to detect impact signals, and the Shannon wavelet transform is employed to extract time differences. From the time differences and the coordinates of the FBGs, a nonlinear-equation model of impact localization is established. Based on the quasi-Newton algorithm and the PSO algorithm, the nonlinear equations are solved to obtain the coordinates of the impact source. Testing experiments were carried out on a composite plate with a 400 mm × 400 mm monitoring area. The experimental results showed that the maximum and average errors are 3.2 mm and 1.73 mm, respectively. The computational time is less than 2 s.
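The localization model is the usual time-difference-of-arrival system: each sensor pair constrains the source through the difference of its propagation distances. The sketch below solves the same kind of nonlinear system by brute-force grid search instead of the paper's quasi-Newton/PSO solver; the sensor layout and wave speed are invented for the example:

```python
import math

SENSORS = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.4), (0.0, 0.4)]  # m (example layout)
SPEED = 1500.0   # assumed wave speed in the plate, m/s (illustrative value)

def time_differences(src):
    """TDOA of each sensor relative to sensor 0 for a source at `src`."""
    d = [math.dist(src, s) for s in SENSORS]
    return [(di - d[0]) / SPEED for di in d]

def localize(tdoa, step=0.004):
    """Minimize the squared TDOA residual over a grid on the 0.4 m plate
    (a stand-in for the quasi-Newton/PSO solver in the paper)."""
    best, best_err = None, float("inf")
    n = 101                          # 101 nodes = 4 mm spacing over 0.4 m
    for i in range(n):
        for j in range(n):
            p = (i * step, j * step)
            err = sum((a - b) ** 2 for a, b in zip(time_differences(p), tdoa))
            if err < best_err:
                best, best_err = p, err
    return best
```

Gradient-based solvers such as quasi-Newton converge far faster on this residual but can stall in local minima, which is exactly why the paper pairs them with PSO.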
Multi-Core Parallel Implementation of Data Filtering Algorithm for Multi-Beam Bathymetry Data
NASA Astrophysics Data System (ADS)
Liu, Tianyang; Xu, Weiming; Yin, Xiaodong; Zhao, Xiliang
In order to improve multi-beam bathymetry data processing speed, we propose a parallel filtering algorithm based on multithreading. The algorithm consists of two parts. The first is a parallel data reordering step, in which the survey area is divided into a regular grid and the discrete bathymetry data are assigned to grid cells in parallel. The second is the parallel filtering step, which divides the grid into blocks and executes the filtering process in each block in parallel. In our experiment, the speedup of the proposed algorithm reaches about 3.67 on an 8-core computer. The results show that the method improves computing efficiency significantly compared with the traditional algorithm.
The convergence analysis of parallel genetic algorithm based on allied strategy
NASA Astrophysics Data System (ADS)
Lin, Feng; Sun, Wei; Chang, K. C.
2010-04-01
Genetic algorithms (GAs) have been applied to many difficult optimization problems such as track assignment and hypothesis management for multisensor integration and data fusion. However, premature convergence has been a main problem for GAs. In order to prevent premature convergence, we introduce an allied strategy based on biological evolution and present a parallel genetic algorithm with the allied strategy (PGAAS). The PGAAS can prevent premature convergence, increases optimization speed, and has been successfully applied in a few applications. In this paper, we first present a Markov chain model of the PGAAS. Based on this model, we analyze the convergence property of PGAAS and then present a proof of global convergence for the PGAAS algorithm. The experimental results show that PGAAS is an efficient and effective parallel genetic algorithm. Finally, we discuss several potential applications of the proposed methodology.
Study on algorithm and real-time implementation of infrared image processing based on FPGA
NASA Astrophysics Data System (ADS)
Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe
2010-10-01
With the fast development of infrared focal plane array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for better visual effect and good performance, we find the FPGA an ideal hardware choice for realizing image processing algorithms, taking full advantage of its high speed, high reliability, and ability to process large amounts of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware implementation of the dynamic linear extension algorithm enhances the visual effect of infrared images effectively.
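The core of a dynamic linear extension (a linear gray-level stretch whose range is found automatically) is simple enough to sketch in software; the FPGA version pipelines the same arithmetic. The tail-clipping fraction below is an invented choice for illustration, not the paper's rule for finding the extension range:

```python
def dynamic_linear_stretch(pixels, out_max=255, clip=0.01):
    """Linear extension of gray levels; the extension range [lo, hi] is
    found automatically by clipping `clip` tails of the sorted histogram."""
    s = sorted(pixels)
    n = len(s)
    lo = s[int(clip * (n - 1))]
    hi = s[int((1 - clip) * (n - 1))]
    if hi == lo:
        return [0 for _ in pixels]         # flat image: nothing to stretch
    def stretch(v):
        v = min(max(v, lo), hi)            # clamp outliers into the range
        return round((v - lo) * out_max / (hi - lo))
    return [stretch(v) for v in pixels]
```

Infrared scenes often occupy a narrow band of a 14-bit range; mapping that band onto the full display range is what produces the improved visual effect the abstract reports.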
NASA Astrophysics Data System (ADS)
Muramatsu, Daigo
Attacks using hill-climbing methods have been reported as a vulnerability of biometric authentication systems. In this paper, we propose an online signature verification algorithm that is robust against such attacks. Specifically, the attack considered is a hill-climbing forged-data attack: artificial forgeries are generated offline using the hill-climbing method and input to the target system. We analyze the threat of hill-climbing forged-data attacks using six types of hill-climbing forged data and propose a robust algorithm by incorporating the hill-climbing method into an online signature verification algorithm. Experiments to evaluate the proposed system were performed using a public online signature database. The proposed algorithm showed improved performance against this kind of attack.
Application of ant colony algorithm in plant leaves classification based on infrared spectroscopy
NASA Astrophysics Data System (ADS)
Guo, Tiantai; Hong, Bo; Kong, Ming; Zhao, Jun
2014-04-01
This paper proposes using an ant colony algorithm in the analysis of spectral data of plant leaves to achieve the best classification of different plants within a short time. Intelligent classification is realized according to the different components of featured information included in the near-infrared spectrum data of plants. The near-infrared diffuse emission spectrum curves of the leaves of Cinnamomum camphora and Acer saccharum Marsh were acquired, 75 leaves of each species, divided into two groups. The acquired data were then processed using the ant colony algorithm, and leaves of the same species were grouped into one class by ant colony clustering, so the two groups of data were classified into two classes. Experimental results show that the algorithm distinguishes the species with 100% accuracy. The classification of plant leaves has important application value in agricultural development, research on species invasion, floriculture, etc.
Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue
Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan
2015-01-01
Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise degrades image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector and is compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA assesses the motion vector more efficiently, with almost equal estimation quality, compared to the traditional IFSA method. PMID:25873987
The analysis of multigrid algorithms for pseudodifferential operators of order minus one
Bramble, J.H.; Leyk, Z.; Pasciak, J.E.
1994-10-01
Multigrid algorithms are developed to solve the discrete systems approximating the solutions of operator equations involving pseudodifferential operators of order minus one. Classical multigrid theory deals with the case of differential operators of positive order. The pseudodifferential operator gives rise to a coercive form on H^{-1/2}(Ω). Effective multigrid algorithms are developed for this problem. These algorithms are novel in that they use the inner product on H^{-1}(Ω) as a base inner product for the multigrid development. The authors show that the resulting rate of iterative convergence can, at worst, depend linearly on the number of levels in these novel multigrid algorithms. In addition, it is shown that the convergence rate is independent of the number of levels (and unknowns) in the case of a pseudodifferential operator defined by a single-layer potential. Finally, the results of numerical experiments illustrating the theory are presented. 19 refs., 1 fig., 2 tabs.
Fast computing global structural balance in signed networks based on memetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng
2014-12-01
Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, is to measure how unbalanced a signed network is and it is a nondeterministic polynomial-time hard problem. Many approaches are developed to compute global balance. However, the results obtained by them are partial and unsatisfactory. In this study, the computation of global structural balance is solved as an optimization problem by using the Memetic Algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, energy function, which is used to compute a distance to exact balance. Our proposed algorithm combines Genetic Algorithm and a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
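The energy function for a two-faction partition has a standard closed form: an edge is frustrated when a positive tie crosses factions or a negative tie stays inside one. Below is that energy plus a single-flip greedy descent, the "local search" half of a memetic scheme (the genetic half is omitted); the graph encoding is invented for the example:

```python
def energy(edges, spins):
    """Count frustrated edges. edges: (u, v, sign) with sign = +1 or -1;
    spins[u] = +1 or -1 assigns node u to one of two factions."""
    return sum((1 - sign * spins[u] * spins[v]) // 2 for u, v, sign in edges)

def greedy_local_search(edges, spins):
    """Flip single nodes while any flip strictly lowers the energy."""
    improved = True
    while improved:
        improved = False
        for node in spins:
            before = energy(edges, spins)
            spins[node] *= -1
            if energy(edges, spins) < before:
                improved = True
            else:
                spins[node] *= -1          # revert a non-improving flip
    return spins
```

The minimum of this energy over all spin assignments is the network's distance to exact balance; since that minimization is NP-hard, the memetic algorithm wraps a population-based search around exactly this kind of local descent.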
A Speech Endpoint Detection Algorithm Based on BP Neural Network and Multiple Features
NASA Astrophysics Data System (ADS)
Shi, Yong-Qiang; Li, Ru-Wei; Zhang, Shuang; Wang, Shuai; Yi, Xiao-Qun
Focusing on the sharp decline in the performance of endpoint detection algorithms in complicated noise environments, a new speech endpoint detection method based on a BP (back-propagation) neural network and multiple features is presented. First, the maximum of the short-time autocorrelation function and the spectrum variance of the speech signal are extracted. Second, these feature vectors are used as the input of the BP neural network for training and modeling, and a genetic algorithm is used to optimize the network. Finally, the signal type is determined according to the output of the neural network. Experiments show that the correct detection rate of the proposed algorithm is improved, because this method has better robustness and adaptability than algorithms based on the maximum of the short-time autocorrelation function or the spectrum variance alone.
Thermostat algorithm for generating target ensembles
NASA Astrophysics Data System (ADS)
Bravetti, A.; Tapias, D.
2016-02-01
We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.
Practical algorithmic probability: an image inpainting example
NASA Astrophysics Data System (ADS)
Potapov, Alexey; Scherbakov, Oleg; Zhdanov, Innokentii
2013-12-01
The possibility of practical application of algorithmic probability is analyzed using the example of the image inpainting problem, which corresponds precisely to the prediction problem. Such consideration is fruitful both for the theory of universal prediction and for practical image inpainting methods. Efficient application of algorithmic probability implies that its computation is essentially optimized for some specific data representation. In this paper, we consider one image representation, namely the spectral representation, for which an image inpainting algorithm is proposed based on a spectrum entropy criterion. This algorithm shows promising results in spite of the very simple representation. The same approach can be used to introduce ALP-based criteria for more powerful image representations.
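The straight-line estimator described above can be sketched as an ordinary least-squares fit of input power against AGC reading and temperature. The coefficients and characterization data below are synthetic stand-ins, not the SCAN Testbed's actual calibration:

```python
import numpy as np

def fit_power_estimator(agc, temp, power):
    """Fit power ~ c0 + c1*agc + c2*temp by least squares."""
    A = np.column_stack([np.ones_like(agc), agc, temp])
    coeffs, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coeffs

def estimate_power(coeffs, agc, temp):
    return coeffs[0] + coeffs[1] * agc + coeffs[2] * temp

# Synthetic characterization data (hypothetical linear response)
rng = np.random.default_rng(0)
agc = rng.uniform(0, 100, 200)
temp = rng.uniform(10, 40, 200)
power = -120.0 + 0.8 * agc - 0.05 * temp   # "true" response for the demo
c = fit_power_estimator(agc, temp, power)
```

A piecewise version of the same fit, restricted to a narrower input power section, matches the "straight line estimator" idea; the adaptive-filter and neural-network variants trade this simplicity for wider operating range.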
Xu, Zhenzhen; Zou, Yongxing; Kong, Xiangjie
2015-01-01
To our knowledge, this paper presents the first application of meta-heuristic algorithms to the parallel machines scheduling problem with weighted late work criterion and common due date ([Formula: see text]). The late work criterion is one of the performance measures of scheduling problems; it considers the lengths of the late parts of particular jobs when evaluating the quality of a schedule. Since this problem is known to be NP-hard, three meta-heuristic algorithms, namely ant colony system (ACS), genetic algorithm (GA), and simulated annealing (SA), are designed and implemented. We also propose a novel algorithm named LDF (largest density first), which improves on LPT (longest processing time first). Computational experiments compared these meta-heuristic algorithms with LDF, LPT and LS (list scheduling); the results show that SA performs the best in most cases. However, LDF is better than SA in some conditions; moreover, the running time of LDF is much shorter than that of SA. PMID:26702371
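The abstract does not spell out LDF, but the LPT baseline it improves on is standard: sort jobs by decreasing processing time and always assign the next job to the least-loaded machine. A minimal sketch with an illustrative job list:

```python
import heapq

def lpt_schedule(jobs, m):
    """Longest Processing Time first: sort jobs descending,
    assign each to the machine with the least current load."""
    loads = [(0, i) for i in range(m)]          # (load, machine id)
    heapq.heapify(loads)
    assignment = {i: [] for i in range(m)}
    for p in sorted(jobs, reverse=True):
        load, i = heapq.heappop(loads)          # least-loaded machine
        assignment[i].append(p)
        heapq.heappush(loads, (load + p, i))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

assignment, makespan = lpt_schedule([7, 5, 4, 3, 2], m=2)
```

For this instance LPT reaches makespan 11, which is also optimal (total work 21 on 2 machines cannot finish before 11).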
A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems
Tuo, Shouheng; Yong, Longquan; Deng, Fang'an
2014-01-01
To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. A further improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, effectively maintaining a proper balance between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905
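As a rough illustration of how the basic HS loop (not the HSTL variant) applies to a 0-1 knapsack, with the conventional parameter names (HMS, HMCR, PAR) and a hypothetical small instance:

```python
import random

def knapsack_value(x, values, weights, capacity):
    w = sum(wi for xi, wi in zip(x, weights) if xi)
    if w > capacity:
        return 0                     # infeasible vectors scored as worthless
    return sum(vi for xi, vi in zip(x, values) if xi)

def harmony_search(values, weights, capacity,
                   hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    random.seed(seed)
    n = len(values)
    # Harmony memory: hms random binary vectors
    hm = [[random.randint(0, 1) for _ in range(n)] for _ in range(hms)]
    score = lambda x: knapsack_value(x, values, weights, capacity)
    for _ in range(iters):
        new = []
        for j in range(n):
            if random.random() < hmcr:          # harmony memory consideration
                bit = random.choice(hm)[j]
                if random.random() < par:       # pitch adjustment: flip the bit
                    bit = 1 - bit
            else:                               # random selection
                bit = random.randint(0, 1)
            new.append(bit)
        worst = min(range(hms), key=lambda i: score(hm[i]))
        if score(new) > score(hm[worst]):       # replace worst harmony
            hm[worst] = new
    return max(hm, key=score)

values  = [10, 5, 15, 7, 6, 18, 3]
weights = [ 2, 3,  5, 7, 1,  4, 1]
best = harmony_search(values, weights, capacity=15)
```

HSTL adds the dynamic dimension adjustment, teaching-learning updates, and parameter schedules on top of this skeleton.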
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
Heuristic-based tabu search algorithm for folding two-dimensional AB off-lattice model proteins.
Liu, Jingfa; Sun, Yuanyuan; Li, Gang; Song, Beibei; Huang, Weibo
2013-12-01
The protein structure prediction problem is a classical NP-hard problem in bioinformatics. The lack of an effective global optimization method is the key obstacle in solving this problem. As one of the global optimization algorithms, the tabu search (TS) algorithm has been successfully applied to many optimization problems. We define a new neighborhood conformation, tabu object and acceptance criterion for the current conformation based on the original TS algorithm, and put forward an improved TS algorithm. By integrating a heuristic initialization mechanism, a heuristic conformation updating mechanism, and the gradient method into the improved TS algorithm, a heuristic-based tabu search (HTS) algorithm is presented for predicting the two-dimensional (2D) protein folding structure in the AB off-lattice model, which consists of hydrophobic (A) and hydrophilic (B) monomers. The tabu search minimization leads to the basins of local minima, near which a local search mechanism is then proposed to further search for lower-energy conformations. To test the performance of the proposed algorithm, experiments are performed on four Fibonacci sequences and two real protein sequences. The experimental results show that the proposed algorithm has found the lowest-energy conformations so far for three shorter Fibonacci sequences and improved the best-known results for the longest one, as well as for two real protein sequences, demonstrating that the HTS algorithm is quite promising in finding the ground states of AB off-lattice model proteins. PMID:24077543
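The energy that such searches minimize is, in the usual 2D AB off-lattice formulation, a backbone bending term plus a species-dependent Lennard-Jones-like term over non-adjacent monomer pairs. A sketch with unit bond lengths; the coefficient table (AA: 1, BB: 0.5, AB: -0.5) follows the common convention and should be checked against the paper:

```python
import math

def C(si, sj):
    """Species-dependent attraction coefficient."""
    if si == 'A' and sj == 'A':
        return 1.0
    if si == 'B' and sj == 'B':
        return 0.5
    return -0.5

def ab_energy(angles, seq):
    """Energy of a 2D AB off-lattice chain with unit bond lengths.
    angles[i] is the bend angle at interior residue i+1."""
    n = len(seq)
    # Build coordinates from the bend angles
    coords = [(0.0, 0.0), (1.0, 0.0)]
    heading = 0.0
    for a in angles:
        heading += a
        x, y = coords[-1]
        coords.append((x + math.cos(heading), y + math.sin(heading)))
    # Backbone bending term
    e1 = sum(0.25 * (1.0 - math.cos(a)) for a in angles)
    # Lennard-Jones-like non-bonded term over pairs |i-j| >= 2
    e2 = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            r2 = (coords[i][0] - coords[j][0]) ** 2 + (coords[i][1] - coords[j][1]) ** 2
            e2 += 4.0 * (r2 ** -6 - C(seq[i], seq[j]) * r2 ** -3)
    return e1 + e2
```

A search method like HTS would propose moves in the angle vector and accept or reject them based on this energy.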
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques, reimplementing two of them with the same parameters for the sake of a fair comparison: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on both the binary and multiclass datasets, compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
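The full-replication technique can be illustrated with a simple threaded count: each thread accumulates into a private copy of the reduction object with no synchronization, and the copies are merged sequentially after the join. A minimal Python sketch (the paper's runtime operates at the C/C++ level; this only shows the idea):

```python
from threading import Thread
from collections import Counter

def count_items(data, nthreads=4):
    """Full replication: per-thread private Counters, merged at the end."""
    chunks = [data[i::nthreads] for i in range(nthreads)]
    locals_ = [Counter() for _ in range(nthreads)]

    def work(chunk, acc):
        for item in chunk:
            acc[item] += 1          # private update, no locks needed

    threads = [Thread(target=work, args=(c, a))
               for c, a in zip(chunks, locals_)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = Counter()
    for acc in locals_:              # sequential merge phase
        total.update(acc)
    return total

counts = count_items(list("abracadabra") * 10)
```

The locking variants in the paper trade this per-thread memory cost for shared updates guarded at different granularities.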
An improved Camshift algorithm for target recognition
NASA Astrophysics Data System (ADS)
Fu, Min; Cai, Chao; Mao, Yusu
2015-12-01
The Camshift algorithm and the three frame difference algorithm are popular target recognition and tracking methods. The Camshift algorithm requires manual initialization of the search window, which introduces subjective error and inconsistency, and it computes the color histogram only at initialization, so the color probability model cannot be updated continuously. The three frame difference method, on the other hand, does not require manual initialization of a search window and can make full use of the target's motion information, but only to determine its range of motion: it cannot determine the contours of the object and cannot make use of the target's color information. Therefore, an improved Camshift algorithm is proposed to overcome the disadvantages of the original algorithm: the three frame difference operation is combined with the object's motion information and color information to identify the target object. The improved Camshift algorithm is implemented and shows better performance in recognition and tracking of the target.
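The three frame difference operation itself is simple: threshold the two absolute inter-frame differences and AND them, which localizes the object at its middle-frame position. A minimal NumPy sketch with a synthetic moving pixel:

```python
import numpy as np

def three_frame_diff(f1, f2, f3, thresh=20):
    """Motion mask from three consecutive grayscale frames:
    AND of the two thresholded absolute inter-frame differences."""
    d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16)) > thresh
    d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16)) > thresh
    return (d1 & d2).astype(np.uint8)

# A bright one-pixel "object" moving right across a dark background
f1 = np.zeros((5, 5), np.uint8); f1[2, 1] = 255
f2 = np.zeros((5, 5), np.uint8); f2[2, 2] = 255
f3 = np.zeros((5, 5), np.uint8); f3[2, 3] = 255
mask = three_frame_diff(f1, f2, f3)
```

In the improved scheme described above, a mask like this would seed the Camshift search window instead of a manual selection.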
Spatial search algorithms on Hanoi networks
NASA Astrophysics Data System (ADS)
Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan
2013-01-01
We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role in speeding up the search algorithm. If we do not implement Tulsi's method, we can still improve the algorithm's efficiency by choosing a non-Groverian coin. Our conclusions are based on numerical implementations.
Modified OMP Algorithm for Exponentially Decaying Signals
Kazimierczuk, Krzysztof; Kasprzak, Paweł
2015-01-01
A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
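Standard OMP, the algorithm LPMP modifies, greedily selects the dictionary atom most correlated with the current residual and re-fits on the selected support. A sketch using an orthonormal dictionary, for which recovery of a strictly sparse signal is exact (LPMP instead matches whole Lorentzian line shapes per iteration):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: k greedy atom selections,
    each followed by a least-squares re-fit on the chosen support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(32, 32)))       # orthonormal dictionary
x_true = np.zeros(32)
x_true[[3, 11, 27]] = [1.5, -2.0, 0.7]               # strictly 3-sparse signal
y = Q @ x_true
x_hat = omp(Q, y, k=3)
```

The NMR setting violates the strict-sparsity assumption (Lorentzian peaks have tails), which is exactly the gap LPMP is designed to close.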
Basic firefly algorithm for document clustering
NASA Astrophysics Data System (ADS)
Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza
2015-12-01
Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to become trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
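The ADDC objective can be sketched directly from its name: the average, over clusters, of each document's distance to its cluster centroid. Exact weighting conventions vary between papers; this version averages within each cluster and then across clusters:

```python
import math

def addc(clusters):
    """Average Distance to Document Centroid for a list of clusters,
    each cluster being a list of equal-length document vectors."""
    total = 0.0
    for docs in clusters:
        dim = len(docs[0])
        centroid = [sum(d[i] for d in docs) / len(docs) for i in range(dim)]
        # Mean Euclidean distance of the cluster's documents to its centroid
        total += sum(math.dist(d, centroid) for d in docs) / len(docs)
    return total / len(clusters)

score = addc([[(0.0, 0.0), (2.0, 0.0)], [(5.0, 5.0)]])
```

A clustering search such as the firefly algorithm would move candidate centroid sets so as to minimize this value.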
Intelligent perturbation algorithms for space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.
A new algorithm for coding geological terminology
NASA Astrophysics Data System (ADS)
Apon, W.
The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
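The three-step direct method can be sketched with a toy code table; the Survey's actual terminology tables, codes, and compatibility rules are not reproduced in the abstract, so everything below is hypothetical:

```python
# Hypothetical code table; longest phrases must win over their substrings.
CODES = {
    "fine sand":   "Z1",
    "coarse sand": "Z3",
    "sand":        "Z2",
    "clay":        "K1",
    "peat":        "V1",
}

def code_log(description):
    """Direct-method coding in three steps: (1) match defined word
    combinations and assign codes; (2) delete duplicated codes;
    (3) correct incompatible combinations (illustrative rule)."""
    text = description.lower()
    codes = []
    # Step 1: try longest phrases first so "fine sand" beats "sand"
    for phrase in sorted(CODES, key=len, reverse=True):
        while phrase in text:
            codes.append(CODES[phrase])
            text = text.replace(phrase, " ", 1)
    # Step 2: remove duplicated codes, keeping the first occurrence
    seen, unique = set(), []
    for c in codes:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    # Step 3: drop an incompatible combination (made-up example rule)
    if "V1" in unique and "Z3" in unique:
        unique.remove("Z3")
    return unique
```

The real algorithm additionally consults its two auxiliary files for the phrase definitions and the correction rules.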
Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis
2014-01-01
The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
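The elementary building block behind these algorithms, the ideal 3-spin compression step, boosts the polarization of the target spin from ε to (3ε − ε³)/2, approaching the ideal gain factor of 3/2 at low polarization. A numeric sketch of that standard formula (the paper's SOPAC recursion composes many such steps):

```python
def compress3(eps):
    """Target-spin polarization after one ideal 3-spin compression
    (majority-vote) step, starting from three spins at polarization eps."""
    return (3 * eps - eps ** 3) / 2

# At low polarization the single-step boost is close to a factor of 3/2
eps = 0.01
boosted = compress3(eps)
```

Recursive levels apply this step to spins that were themselves boosted (and refreshed) earlier, which is how the algorithms pass Shannon's closed-system bound.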
Draijer, Matthijs; Hondebrink, Erwin; van Leeuwen, Ton; Steenbergen, Wiendelt
2009-10-01
Advances in optical array sensor technology allow for the real-time acquisition of dynamic laser speckle patterns generated by tissue perfusion, which, in principle, allows for real-time laser Doppler perfusion imaging (LDPI). Exploitation of these developments is enhanced with the introduction of faster algorithms to transform photo currents into perfusion estimates using the first moment of the power spectrum. A time domain (TD) algorithm is presented for determining the first-order spectral moment. Experiments are performed to compare this algorithm with the widely used Fast Fourier Transform (FFT). This study shows that the TD-algorithm is twice as fast as the FFT-algorithm without loss of accuracy. Compared to FFT, the TD-algorithm is efficient in terms of processor time, memory usage and data transport. PMID:19820976
Global structural optimizations of surface systems with a genetic algorithm
Chuang, Feng-Chuan
2005-05-01
Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic clusters observed in scanning tunneling microscopy (STM) experiments consist of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. Optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chain (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.
Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization
NASA Astrophysics Data System (ADS)
Sang, Hongyan; Gao, Liang; Pan, Quanke
2012-09-01
Unlike a traditional flowshop problem where a job is assumed to be indivisible, in the lot-streaming flowshop problem a job is allowed to overlap its operations between successive machines by splitting it into a number of smaller sub-lots and moving the completed portions of the sub-lots to the downstream machine. In this way, production is accelerated. This paper presents a discrete artificial bee colony (DABC) algorithm for a lot-streaming flowshop scheduling problem with the total flowtime criterion. Unlike the basic ABC algorithm, the proposed DABC algorithm represents a solution as a discrete job permutation. An efficient initialization scheme based on the extended Nawaz-Enscore-Ham heuristic is utilized to produce an initial population with a certain level of quality and diversity. Employed and onlooker bees generate new solutions in their neighborhood, whereas scout bees generate new solutions by applying insert and swap operators to the best solution found so far. Moreover, a simple but effective local search is embedded in the algorithm to enhance its local exploitation capability. A comparative experiment is carried out with existing discrete particle swarm optimization, hybrid genetic algorithm, threshold accepting, simulated annealing and ant colony optimization algorithms on a total of 160 randomly generated instances. The experimental results show that the proposed DABC algorithm is quite effective for the lot-streaming flowshop with the total flowtime criterion in terms of search quality, robustness and efficiency. This research provides a reference for optimization research on the lot-streaming flowshop.
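The Nawaz-Enscore-Ham (NEH) heuristic used for initialization can be sketched in its basic flowshop form: order jobs by decreasing total processing time, then insert each job at the position that minimizes the objective, here total flowtime (the lot-streaming extension with sub-lots is omitted):

```python
def flowtime(seq, p):
    """Total flowtime of permutation seq on a flowshop with
    processing times p[job][machine]."""
    m = len(p[0])
    comp = [0.0] * m                  # completion time on each machine
    total = 0.0
    for job in seq:
        for k in range(m):
            start = comp[k] if k == 0 else max(comp[k], comp[k - 1])
            comp[k] = start + p[job][k]
        total += comp[-1]             # this job's completion on the last machine
    return total

def neh(p):
    """NEH construction: jobs by decreasing total processing time,
    each inserted at the best position found so far."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for job in jobs[1:]:
        seq = min((seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: flowtime(s, p))
    return seq

p = [[3, 4], [2, 1], [5, 2]]          # 3 jobs, 2 machines (illustrative)
seq = neh(p)
```

For this tiny instance NEH reaches total flowtime 24, which brute force confirms is optimal; DABC uses such a sequence only to seed its population.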
Du, Tingsong; Hu, Yang; Ke, Xianting
2015-01-01
An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms: it uses quantum bits to encode the artificial fish, and updates them through the quantum revolving gate, preying behavior, following behavior, and variation of the quantum artificial fish while searching for the optimal value. We then apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems; the simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA. PMID:26447713
A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations.
Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao
2015-01-01
In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method which converts the MMV problem to an SMV problem is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem: the MMV problem can be changed to a single measurement vector (SMV) problem by using the linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for the sparse solution calculation. The complexity of the proposed algorithm is therefore lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of conventional sparsity-based DOA estimation approaches, namely that the unknown directions must belong to a predefined discrete angular grid, so it can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on the SRBWEV algorithm. In this proposed algorithm, the largest and second-largest inner products between the measurement vector (or its residual) and the atoms in the dictionary are utilized to further refine the DOA estimate, following the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521
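The Rife principle invoked here originates in frequency estimation: take the coarse estimate from the largest DFT bin, then refine it with the magnitude ratio to the larger neighboring bin. A sketch in that original setting (not the DOA adaptation), using a complex tone to avoid the negative-frequency leakage of a real sinusoid:

```python
import numpy as np

def rife_freq(x, fs):
    """Rife estimator: coarse peak from the largest DFT bin, fine
    fractional-bin offset from the ratio with the larger neighbor."""
    N = len(x)
    X = np.abs(np.fft.fft(x))
    k = int(np.argmax(X[: N // 2]))          # coarse estimate (positive freqs)
    if X[(k + 1) % N] >= X[k - 1]:           # pick the larger adjacent bin
        k2, sign = (k + 1) % N, 1.0
    else:
        k2, sign = k - 1, -1.0
    delta = X[k2] / (X[k] + X[k2])           # fractional offset in bins
    return (k + sign * delta) * fs / N

fs, N = 1000.0, 1024
f_true = (100 + 0.3) * fs / N                # deliberately off-grid frequency
t = np.arange(N) / fs
x = np.exp(2j * np.pi * f_true * t)
f_hat = rife_freq(x, fs)
```

MRife-DOA applies the same coarse-to-fine idea with dictionary inner products standing in for the DFT bins.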
A new constrained fixed-point algorithm for ordering independent components
NASA Astrophysics Data System (ADS)
Zhang, Hongjuan; Guo, Chonghui; Shi, Zhenwei; Feng, Enmin
2008-10-01
Independent component analysis (ICA) aims to recover a set of unknown mutually independent components (ICs) from their observed mixtures without knowledge of the mixing coefficients. In the classical ICA model there exists an indeterminacy of the ICs with respect to permutation and dilation. Constrained ICA is one method for solving this problem by introducing constraints into the classical ICA model. In this paper we first present a new constrained ICA model composed of three parts: a maximum likelihood criterion as the objective function, statistical measures as inequality constraints, and the normalization of the demixing matrix as equality constraints. Next, we incorporate the new fixed-point (newFP) algorithm into this constrained ICA model to construct a new constrained fixed-point algorithm. Computational simulations on synthesized signals and speech signals demonstrate that this combination can both eliminate the ICs' indeterminacy to a certain extent and provide better performance. Moreover, comparison results with the existing algorithm further verify the efficiency of our new algorithm, and show that it is simpler to implement than the existing algorithm because it does not use a learning rate. Finally, the new algorithm is also applied to real-world fetal ECG data, and the experimental results further indicate the efficiency of the new constrained fixed-point algorithm.
Solving the depth of the repeated texture areas based on the clustering algorithm
NASA Astrophysics Data System (ADS)
Xiong, Zhang; Zhang, Jun; Tian, Jinwen
2015-12-01
Reconstructing the 3D scene in monocular stereo vision requires the depth of the scene points in the image. Image matching, however, inevitably produces mismatches, especially when the images contain many repeated texture areas. At present, the multiple-baseline stereo algorithm is commonly used to eliminate matching errors in repeated texture areas; it can remove the ambiguity caused by common repeated textures, but it places restrictions on the baseline and is slow. In this paper, we propose an algorithm for calculating the depth of matching points in repeated texture areas based on clustering. First, we preprocess the images with a Gaussian filter. Second, we segment the repeated texture regions into image blocks using a superpixel-based spectral clustering segmentation algorithm and label the blocks. Then we match the two images and solve for the depth. Finally, the depth of each image block is taken as the median of all depth values computed for points in the block, yielding the depth of the repeated texture areas. Extensive image experiments show that our algorithm calculates the depth of repeated texture areas effectively.
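The final depth-assignment step, taking the median of the per-point depths inside each segmented block, can be sketched directly; the labels and depth values below are synthetic stand-ins for real segmentation and matching output.

```python
import numpy as np

# Final step of the pipeline: every matched point carries a (possibly
# noisy) depth estimate and a block label; each block's depth is the
# median over its points, which suppresses gross mismatches.

def block_depths(labels, depths):
    """Median depth per segmented block; robust to mismatched outliers."""
    labels, depths = np.asarray(labels), np.asarray(depths)
    return {int(b): float(np.median(depths[labels == b]))
            for b in np.unique(labels)}

labels = [0, 0, 0, 0, 1, 1, 1]
depths = [2.1, 2.0, 2.2, 9.9,      # 9.9 is a gross mismatch, damped by the median
          5.0, 5.1, 4.9]
print(block_depths(labels, depths))
```

The median is what makes the block estimate tolerant of the residual error matches that survive segmentation and matching.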
Introduction to systolic algorithms and architectures
Bentley, J.L.; Kung, H.T.
1983-01-01
The authors survey the class of systolic special-purpose computer architectures and algorithms, which are particularly well-suited for implementation in very large scale integrated circuitry (VLSI). They give a brief introduction to systolic arrays for a reader with a broad technical background and some experience in using a computer, but who is not necessarily a computer scientist. In addition they briefly survey the technological advances in VLSI that led to the development of systolic algorithms and architectures. 38 references.
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using the TDOA (Time Difference of Arrival) tracking algorithm. UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least squares method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional space and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project, Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid in surveillance around the International Space Station (ISS).
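The TDOA equations can be turned into a least-squares problem; the sketch below shows the standard single-stage linearization, a simplified stand-in for the two-stage weighted least squares solver the presentation describes, with anchor positions and the emitter location chosen purely for illustration.

```python
import numpy as np

# TDOA localization sketch.  With anchors p_i and range differences
# d_i = ||x - p_i|| - ||x - p_1||, squaring and subtracting gives the
# linear system (in the unknowns x and r_1 = ||x - p_1||):
#   2 (p_i - p_1) . x + 2 d_i r_1 = ||p_i||^2 - ||p_1||^2 - d_i^2

def tdoa_solve(anchors, d):
    """anchors: (N,2) array; d: (N,) range differences w.r.t. anchor 0."""
    p1 = anchors[0]
    A = np.hstack([2 * (anchors[1:] - p1),      # coefficients of x
                   2 * d[1:, None]])            # coefficient of r_1
    b = np.sum(anchors[1:] ** 2, axis=1) - np.sum(p1 ** 2) - d[1:] ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]                              # estimated position

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.0, 4.0])
r = np.linalg.norm(anchors - x_true, axis=1)
d = r - r[0]                                    # noiseless TDOAs * c
print(tdoa_solve(anchors, d))
```

With noisy TDOAs the same system is solved in weighted form, and a second stage refines the estimate by exploiting the known relation between x and r_1, which is where the two-stage method gains its accuracy.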
A semisimultaneous inversion algorithm for SAGE III
NASA Astrophysics Data System (ADS)
Ward, Dale M.
2002-12-01
The Stratospheric Aerosol and Gas Experiment (SAGE) III instrument was successfully launched into orbit on 10 December 2001. The planned operational species separation inversion algorithm will utilize a stepwise retrieval strategy. This paper presents an alternative, semisimultaneous species separation inversion that simultaneously retrieves all species over user-specified vertical intervals or blocks. By overlapping these vertical blocks, retrieved species profiles over the entire vertical range of the measurements are obtained. The semisimultaneous retrieval approach provides a more straightforward method for evaluating the error coupling that occurs among the retrieved profiles due to various types of input uncertainty. Simulation results are presented to show how the semisimultaneous inversion can enhance understanding of the SAGE III retrieval process. In the future, the semisimultaneous inversion algorithm will be used to help evaluate the results and performance of the operational inversion. Compared to SAGE II, SAGE III will provide expanded and more precise spectral measurements. This alone is shown to significantly reduce the uncertainties in the retrieved ozone, nitrogen dioxide, and aerosol extinction profiles for SAGE III. Additionally, the well-documented concern that SAGE II retrievals are biased by the level of volcanic aerosol is greatly alleviated for SAGE III.
Algorithm design of liquid lens inspection system
NASA Astrophysics Data System (ADS)
Hsieh, Lu-Lin; Wang, Chun-Chieh
2008-08-01
In the mobile lens domain, glass lenses are often used where high resolution is required, but a glass zoom lens must be paired with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molding component technology, the liquid lens has become a focus of mobile phone and digital camera companies. A liquid lens set, combined with solid optical lenses and a driving circuit, replaces the original components; as a result, the volume requirement drops to merely 50% of the original design. Moreover, with its fast focus adjustment, low energy requirement, high durability, and low-cost manufacturing process, the liquid lens has clear advantages in a competitive market. In the past, only scrape defects caused by external force had to be inspected on glass lenses; for the liquid lens, the state of four different structural layers must be inspected owing to its different design and structure. In this paper, we apply machine vision and digital image processing technology to inspect a particular layer according to user needs. Experimental results show that the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and efficiently find and analyze defects in the particular layer. In the future, we will combine the algorithm with autofocus technology to implement internal inspection according to product inspection demands.
CUDT: A CUDA Based Decision Tree Algorithm
Sheu, Ruey-Kai; Chiu, Chun-Chieh
2014-01-01
The decision tree is one of the best-known classification methods in data mining, and many approaches have been proposed to improve its performance. However, those algorithms were developed to run on traditional distributed systems, whose latency cannot keep up with the huge data generated by ubiquitous sensing nodes without the help of new technology. To reduce the processing latency of mining huge data, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We conducted extensive experiments to evaluate the performance of CUDT against the traditional CPU version. The results show that CUDT is 5-55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets. PMID:25140346
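The per-node computation a GPU decision-tree builder parallelizes is the scan over candidate split thresholds. A vectorized CPU sketch of that computation for one numeric attribute and binary labels (the CUDA kernel itself is not reproduced; the toy data are illustrative):

```python
import numpy as np

# Evaluate every candidate threshold of one numeric attribute and pick
# the split with minimum weighted Gini impurity -- the inner loop that
# decision-tree builders like CUDT offload to parallel hardware.

def best_gini_split(x, y):
    """Return (threshold, weighted Gini impurity) of the best binary split."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    n = len(y)
    # class-1 counts to the left of each split position 1..n-1
    left_pos = np.cumsum(y)[:-1].astype(float)
    left_n = np.arange(1, n, dtype=float)
    right_pos = y.sum() - left_pos
    right_n = n - left_n
    gini_l = 1 - (left_pos / left_n) ** 2 - (1 - left_pos / left_n) ** 2
    gini_r = 1 - (right_pos / right_n) ** 2 - (1 - right_pos / right_n) ** 2
    weighted = (left_n * gini_l + right_n * gini_r) / n
    i = int(np.argmin(weighted))
    return (x[i] + x[i + 1]) / 2, float(weighted[i])

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_gini_split(x, y))   # -> (6.5, 0.0): perfectly separable
```

Every threshold is scored independently of the others, which is why the computation maps so naturally onto thousands of GPU threads.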
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when the standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718
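For reference, the E-step/M-step structure being partitioned is the textbook EM iteration. A minimal sketch for a two-component 1-D Gaussian mixture follows; this is the standard algorithm, not the authors' measurement-error extension, and the data are synthetic.

```python
import numpy as np

# Textbook EM for a two-component 1-D Gaussian mixture: the E-step
# computes posterior responsibilities, the M-step re-estimates the
# weighted maximum-likelihood parameters.

def em_gmm(x, iters=100):
    mu = np.array([x.min(), x.max()])        # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, sigma, pi = em_gmm(x)
print(np.sort(mu))
```

The proposal in the abstract amounts to running several such self-contained loops in sequence, each over a simpler sub-model, rather than one monolithic iteration over the full parameter vector.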
Dictionary Learning Algorithms for Sparse Representation
Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
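The alternation the abstract describes, a sparse-coding step followed by a dictionary update, can be sketched compactly. Here plain orthogonal matching pursuit stands in for the FOCUSS variants and a least-squares (MOD-style) update stands in for the paper's dictionary rules; the sizes, sparsity level, and synthetic data are illustrative assumptions.

```python
import numpy as np

# Dictionary learning by alternating (1) sparse coding of each signal
# and (2) a least-squares dictionary update D <- X S^+ with column
# renormalization.  OMP and the MOD-style update are simplified
# stand-ins for the paper's FOCUSS-based steps.

def normalize(D):
    n = np.linalg.norm(D, axis=0)
    return D / np.where(n > 1e-12, n, 1.0)

def omp(D, x, k):
    """Greedy k-sparse code of x in dictionary D (orthogonal matching pursuit)."""
    residual, support, coef = x.astype(float), [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

def learn_dictionary(X, n_atoms, k, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    D = normalize(rng.standard_normal((X.shape[0], n_atoms)))
    for _ in range(iters):
        S = np.column_stack([omp(D, x, k) for x in X.T])  # sparse step
        D = normalize(X @ np.linalg.pinv(S))              # dictionary update
    S = np.column_stack([omp(D, x, k) for x in X.T])
    return D, S

# synthetic exact-sparse data: X = D_true S_true
rng = np.random.default_rng(2)
D_true = rng.standard_normal((10, 20))
S_true = np.zeros((20, 200))
for j in range(200):
    S_true[rng.choice(20, 3, replace=False), j] = rng.standard_normal(3)
X = D_true @ S_true
D, S = learn_dictionary(X, n_atoms=20, k=3)
err = np.linalg.norm(X - D @ S) / np.linalg.norm(X)
print(err)
```

As in the paper's experiments, the learned dictionary is evaluated by how accurately a few atoms per signal reconstruct the data.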
Survey shows successes, failures of horizontal wells
Deskins, W.G.; McDonald, W.J.; Reid, T.B.
1995-06-19
Industry's experience now shows that horizontal well technology must be applied thoughtfully and be site-specific to attain technical and economic success. This article, based on a comprehensive study done by Maurer Engineering for the US Department of Energy (DOE), addresses the success of horizontal wells in less-publicized formations, that is, other than the Austin chalk. Early excitement within the industry about the new technology reached a fever pitch at times, leaving some with the impression that horizontal drilling is a panacea for all drilling environments. This work gauges the overall success of horizontal technology in US and Canadian oil and gas fields, defines the applications where horizontal technology is most appropriate, and assesses its impact on oil recovery and reserves.
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Evaluation of TCP congestion control algorithms.
Long, Robert Michael
2003-12-01
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, Wide Area Network links to permit remote access to their Supercomputer systems. The current TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to determine whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
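The throughput gap between these algorithms comes largely from how fast the congestion window regrows after a loss. A toy per-RTT model illustrates the recovery-time difference: standard TCP adds 1 MSS per RTT and halves on loss, while Scalable TCP adds 0.01 MSS per ACK (roughly 1% window growth per RTT) and backs off to 0.875. The window size below is an arbitrary example.

```python
# Count the RTTs needed to climb back to the pre-loss window under a
# given per-RTT growth rule and loss backoff factor.  Standard TCP's
# additive increase takes time proportional to the window; Scalable
# TCP's multiplicative increase takes roughly constant (logarithmic)
# time, independent of the window size.

def recovery_rtts(w_max, grow, backoff):
    w, rtts = w_max * backoff, 0
    while w < w_max:
        w, rtts = grow(w), rtts + 1
    return rtts

std = recovery_rtts(1000, lambda w: w + 1, 0.5)        # standard TCP
stcp = recovery_rtts(1000, lambda w: w * 1.01, 0.875)  # Scalable TCP
print(std, stcp)   # -> 500 14
```

On a high-delay, large-bandwidth path a 500-RTT recovery leaves the link underutilized for tens of seconds, which is exactly the regime the evaluation targets.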
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. It aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy compared with the basic DNA computing algorithm. PMID:23935409
A Partitioning Algorithm for Block-Diagonal Matrices With Overlap
Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina
2008-02-02
We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form, such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition Ω_i has at most two neighbors Ω_{i-1} and Ω_{i+1}. First, an ordering algorithm, such as the reverse Cuthill-McKee algorithm, that reduces the matrix profile is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
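The ordering-then-slicing structure of the first phase can be sketched as follows. The Cuthill-McKee routine here is a minimal pure-Python BFS version (production codes would call an optimized library routine), and the block count and overlap width are illustrative.

```python
from collections import deque

# Phase 1 of the pipeline: a Cuthill-McKee ordering to reduce the
# matrix profile, then slicing the reordered index range into
# consecutive overlapping diagonal blocks.

def cuthill_mckee(adj):
    """BFS ordering from a minimum-degree node, visiting neighbors by degree."""
    deg = [len(a) for a in adj]
    start = min(range(len(adj)), key=deg.__getitem__)
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u], key=deg.__getitem__):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order            # reverse it for RCM; the bandwidth is the same

def overlapped_blocks(n, k, overlap):
    """Split range(n) into k consecutive blocks, neighbors sharing `overlap`."""
    step = n // k
    return [range(max(0, i * step - overlap),
                  n if i == k - 1 else (i + 1) * step)
            for i in range(k)]

# demo: a path graph 0-1-2-...-9 with shuffled labels
perm = [3, 7, 0, 9, 4, 1, 8, 2, 6, 5]
adj = [[] for _ in range(10)]
for a, b in zip(perm, perm[1:]):
    adj[a].append(b)
    adj[b].append(a)
order = cuthill_mckee(adj)
print(order, overlapped_blocks(10, 3, 1))
```

On the shuffled path graph the ordering recovers bandwidth 1, so the overlapped blocks cut cleanly across the reordered profile, which is the situation the refinement phase then improves for general matrices.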
A New Algorithm to Optimize Maximal Information Coefficient.
Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that ChiMIC maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, ChiMIC reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical power of MIC calculated by ChiMIC is higher than that of ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
An Incremental High-Utility Mining Algorithm with Transaction Insertion
Gan, Wensheng; Zhang, Binbin
2015-01-01
Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications, since the items purchased by a customer carry other factors, such as profit or quantity. High-utility mining was designed to overcome this limitation by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle a static database. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computation without candidate generation, based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computation. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038
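The utility computation that high-utility mining builds on can be shown with a brute-force reference; the item profits, quantities, and threshold below are toy values, and the paper's utility-list algorithm computes the same answers incrementally and far more efficiently.

```python
from itertools import combinations

# High-utility itemset mining in its simplest form: each transaction
# records item quantities, each item has a unit profit, and an itemset
# is high-utility when its summed utility over all transactions that
# contain every one of its items reaches a minimum-utility threshold.

profit = {"a": 4, "b": 1, "c": 3}
transactions = [          # item -> purchased quantity
    {"a": 2, "b": 5},
    {"a": 1, "c": 2},
    {"b": 2, "c": 4},
]

def utility(itemset, tx):
    """Utility of an itemset in one transaction (0 if not fully present)."""
    if not set(itemset) <= tx.keys():
        return 0
    return sum(profit[i] * tx[i] for i in itemset)

def high_utility_itemsets(minutil):
    items = sorted(profit)
    result = {}
    for r in range(1, len(items) + 1):
        for iset in combinations(items, r):
            u = sum(utility(iset, tx) for tx in transactions)
            if u >= minutil:
                result[iset] = u
    return result

print(high_utility_itemsets(minutil=12))
```

Unlike frequency, utility is not anti-monotone (a superset can have higher utility than its subsets), which is why naive enumeration explodes and why utility-list structures are needed for incremental maintenance.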
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
GPS-Free Localization Algorithm for Wireless Sensor Networks
Wang, Lei; Xu, Qingzheng
2010-01-01
Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694
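The coordinate-merge step, recovering the homogeneous transformation between two local coordinate systems from nodes they share, can be sketched with the SVD-based orthogonal Procrustes fit; the node coordinates below are synthetic, and reflections (one source of the flip ambiguity) are explicitly excluded.

```python
import numpy as np

# Recover the 2-D homogeneous transformation (rotation + translation)
# mapping one local coordinate system onto another, from the positions
# of shared nodes expressed in both systems.  Least-squares fit via
# SVD (orthogonal Procrustes); the determinant correction rejects
# reflections, avoiding flipped merges.

def rigid_transform(P, Q):
    """3x3 homogeneous matrix T with Q ~= T @ P (points as columns)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T
    U, _, Vt = np.linalg.svd(H)
    R = U @ np.diag([1, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    t = cq - R @ cp
    T = np.eye(3)
    T[:2, :2], T[:2, 2:] = R, t
    return T

theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.array([[0.0, 1.0, 2.0, 0.5], [0.0, 0.0, 1.0, 2.0]])   # shared nodes
Q = R_true @ P + np.array([[3.0], [-1.0]])
T = rigid_transform(P, Q)
P_h = np.vstack([P, np.ones(4)])                             # homogeneous form
print(np.allclose(T @ P_h, np.vstack([Q, np.ones(4)])))
```

Once each pairwise transformation is known, applying it to a cluster's local coordinates chains the clusters into one global coordinate system.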
Dual autofocusing algorithm for optical lens measurement system
NASA Astrophysics Data System (ADS)
Zhang, Zhenjiu; Hu, Hong
2010-08-01
In order to develop an on-line optical lens measurement system, a dual autofocusing algorithm consisting of coarse autofocusing and fine autofocusing is proposed to realize accurate and automatic image grabbing. In the coarse autofocusing procedure, the variance function of the whole image is selected as the sharpness evaluation function (SEF), and a mean comparison method is applied to realize the hill-climbing algorithm for focal plane searching. A 3-point method is used to initialize the searching direction. In the fine autofocusing procedure, a conditional dilation method based on mathematical morphology is used to extract the region of interest (ROI), namely, the target images, and a shape factor is employed to eliminate the disturbance regions. The Brenner function within the ROI is selected as the SEF, and a single-point comparison method is used to find the focal plane accurately. Compared to traditional methods, this dual autofocusing algorithm not only achieves high-precision focusing but also has a large autofocusing range. The experimental results show that the dual autofocusing technique guarantees that the focusing position lies within the depth of field. The proposed algorithm is suitable for on-line optical lens measurement.
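The two sharpness evaluation functions named above, image variance for the coarse stage and the Brenner function for the fine stage, are easy to state; the checkerboard pattern and box-blur "defocus" below are illustrative stand-ins for real through-focus images.

```python
import numpy as np

# The two SEFs of the dual autofocusing algorithm: image variance
# (coarse stage) and the Brenner gradient, i.e. the sum of squared
# two-pixel differences (fine stage).  Both score a focused image
# higher than a defocused one.

def variance_sef(img):
    return float(np.var(img))

def brenner_sef(img):
    d = img[2:, :] - img[:-2, :]
    return float(np.sum(d ** 2))

def box_blur(img, k=5):
    """Crude defocus model: k x k mean filter (naive loops, edge padding)."""
    pad = np.pad(img.astype(float), k // 2, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

# sharp block-checkerboard test pattern vs its blurred (defocused) version
sharp = np.kron(np.indices((8, 8)).sum(axis=0) % 2, np.ones((4, 4)))
blurred = box_blur(sharp)
print(variance_sef(sharp) > variance_sef(blurred),
      brenner_sef(sharp) > brenner_sef(blurred))
```

A hill-climbing search simply moves the stage in the direction that increases the SEF until it passes the maximum, which is why a monotone, sharply peaked SEF matters more than its absolute scale.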
Accelerating universal Kriging interpolation algorithm using CUDA-enabled GPU
NASA Astrophysics Data System (ADS)
Cheng, Tangpei
2013-04-01
Kriging algorithms are a group of important interpolation methods that are very useful in many geological applications. However, implementations based on traditional general-purpose processors can be computationally expensive, especially when the problem scale expands. Inspired by the current trend in graphics processing technology, we propose an efficient parallel scheme to accelerate the universal Kriging algorithm on the NVIDIA CUDA platform. Some high-performance mathematical functions have been introduced to calculate the compute-intensive steps in the Kriging algorithm, such as matrix-vector multiplication and matrix-matrix multiplication. To further optimize performance, we reduced the memory transfer overhead by reconstructing the time-consuming loops, specifically for execution on the GPU. In the numerical experiment, we compared the performance of different multi-core CPU and GPU implementations in interpolating a geological site. The improved CUDA implementation shows a nearly 18× speedup with respect to the sequential program and is 6.32 times faster than the OpenMP-based version running on an Intel Xeon E5320 quad-core CPU, and it scales well with the size of the system.
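The linear-algebra core that the GPU accelerates is the dense kriging system. An ordinary-kriging CPU sketch follows; the constant unknown mean and the exponential covariance model are simplifying assumptions relative to the paper's universal kriging, and the sample points are synthetic.

```python
import numpy as np

# Ordinary kriging: solve the dense system
#   [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
# for the weights w, then predict w . vals.  Building and solving this
# system is exactly the matrix-heavy step that benefits from CUDA.

def cov(a, b, range_=1.0):
    """Exponential covariance model between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.exp(-d / range_)

def ordinary_kriging(pts, vals, query):
    n = len(pts)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(pts, pts)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(pts, query[None, :])[:, 0]
    w = np.linalg.solve(K, rhs)[:n]
    return float(w @ vals)

rng = np.random.default_rng(3)
pts = rng.random((20, 2))
vals = np.sin(3 * pts[:, 0]) + pts[:, 1]
# kriging (with no nugget) is an exact interpolator: predicting at a
# data point reproduces its value
print(ordinary_kriging(pts, vals, pts[0]), vals[0])
```

For thousands of sample points the covariance matrix assembly and solve dominate the run time, so mapping them to GPU matrix kernels is where the reported speedup comes from.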
A quantum algorithm for Viterbi decoding of classical convolutional codes
NASA Astrophysics Data System (ADS)
Grice, Jon R.; Meyer, David A.
2015-07-01
We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint lengths and short decode frames. Other applications of the classical Viterbi algorithm with large state spaces (e.g., speech processing) could also experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states corresponding to all legal paths through the decoding lattice, with phase a function of the probability of the path being taken given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition in which the most probable path has a high probability of being measured.
Innovations in Lattice QCD Algorithms
Konstantinos Orginos
2006-06-25
Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiments in discovering new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I review these algorithms and their impact on the nature of lattice QCD calculations performed today.
A new frame-based registration algorithm.
Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834
On modified boosting algorithm for geographic data applications
NASA Astrophysics Data System (ADS)
Iwanowski, Michal; Mulawka, Jan
2015-09-01
Boosting algorithms constitute one of the essential tools in modern machine learning, one of their primary applications being the improvement of classifier accuracy in supervised learning. The most widespread realization of boosting, known as AdaBoost, is based upon the concept of building a complex predictive model out of a group of simple base models. We present an approach for local assessment of base model accuracy and for improved weighting that captures the inhomogeneity present in real-life datasets, in particular those that contain geographic information. The experiments conducted show improvements in the classification accuracy and F-scores of the modified algorithm; however, more experimentation is required to confirm the exact scope of these improvements.
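As background for the boosting framework discussed above, a minimal AdaBoost over decision stumps can be sketched as follows. This is a generic illustration, not the paper's modified algorithm; the 1-D toy data, the exhaustive stump search, and all names are illustrative assumptions.

```python
import math

def stump_predict(threshold, polarity, x):
    # Decision stump: compare one scalar feature against a threshold.
    return polarity if x >= threshold else -polarity

def train_adaboost(X, y, n_rounds=5):
    """Classical AdaBoost: reweight samples so later stumps focus on
    the examples earlier stumps got wrong."""
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []                          # (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for threshold in X:                # exhaustive stump search
            for polarity in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if stump_predict(threshold, polarity, xi) != yi)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)     # stump vote weight
        ensemble.append((alpha, threshold, polarity))
        # Up-weight misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(threshold, polarity, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]      # toy 1-D feature
y = [-1, -1, -1, 1, 1, 1]               # labels separable at x = 3
model = train_adaboost(X, y)
```

The paper's modification would replace the single global weighted error above with a locally assessed base-model accuracy; the sketch shows only the classical global reweighting.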
A hierarchical algorithm for molecular similarity (H-FORMS).
Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel
2015-07-15
A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel rotation-invariant local structural descriptors for each atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, atom matching based on local similarity indexes decreases the number of test trials and significantly reduces the dimensionality of the Hungarian assignment problem. Experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of required computational time and accuracy. PMID:26037060
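The two ingredients the abstract names, a rigid alignment and a Hungarian (assignment) matching of atoms, can be sketched generically with NumPy/SciPy. This is not the paper's hierarchical descriptor method: the sketch assumes an initial index correspondence for the alignment step (Kabsch), which is precisely the assumption the paper's local descriptors are designed to remove.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def kabsch(P, Q):
    """Optimal rotation aligning centered point set P onto Q (SVD-based)."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def match_atoms(A, B):
    """Center both coordinate sets, align with Kabsch (assuming an initial
    index correspondence), then match atoms by solving the assignment
    problem on the inter-atomic distance matrix."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    R = kabsch(A, B)
    A_aligned = A @ R.T
    cost = np.linalg.norm(A_aligned[:, None, :] - B[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian-style matching
    return cols, cost[rows, cols].sum()
```

In a full pipeline the alignment and matching steps could be iterated: the correspondence recovered by the assignment is fed back into a new alignment until the residual stops improving.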
An improved algorithm of mask image dodging for aerial image
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi
2011-12-01
Mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but it is deficient in local contrast; meanwhile, the ratio method works better for local contrast, but sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, building on a study of both, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
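The difference and ratio corrections, and a balance between them, can be sketched on a grayscale float image. A windowed mean stands in for the Fourier-domain low-pass used by Mask dodging, and the blend weight `alpha` and all names are illustrative, not the paper's scheme.

```python
import numpy as np

def low_pass_background(img, radius=8):
    """Estimate the slowly varying luminance with a windowed mean
    (a stand-in for the Fourier low-pass used by Mask dodging)."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def dodge_difference(img, target):
    # Additive correction: keeps overall brightness uniform.
    return img - low_pass_background(img) + target

def dodge_ratio(img, target):
    # Multiplicative correction: preserves local contrast better.
    return img * (target / np.maximum(low_pass_background(img), 1e-6))

def dodge_balanced(img, target, alpha=0.5):
    # The balanced idea: blend the two corrections to temper their defects.
    return alpha * dodge_difference(img, target) + (1 - alpha) * dodge_ratio(img, target)

# A horizontal luminance ramp as a toy "unevenly lit" image.
img = np.tile(np.linspace(50.0, 150.0, 32), (32, 1))
flat = dodge_difference(img, 100.0)
```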
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure, defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
Classification of Bent-Double Galaxies: Experiences with Ensembles of Decision Trees
Kamath, C; Cantu-Paz, E
2002-01-08
In earlier work, we have described our experiences with the use of decision tree classifiers to identify radio-emitting galaxies with a bent-double morphology in the FIRST astronomical survey. We now extend this work to include ensembles of decision tree classifiers, including two algorithms developed by us. These algorithms randomize the decision at each node of the tree, and because they consider fewer candidate splitting points, are faster than other methods for creating ensembles. The experiments presented in this paper with our astronomy data show that our algorithms are competitive in accuracy, but faster than other ensemble techniques such as Boosting, Bagging, and Arcx4 with different split criteria.
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
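The first type of subalgorithm described above, trying shift and mask combinations until every key hashes to a unique value, can be sketched as follows; the search bounds and the key list are illustrative, not taken from the NASA implementation.

```python
def synthesize_hash(keys, max_shift=16, mask_bits=8):
    """Search shift/mask combinations until (k >> shift) & mask is unique
    for every key, yielding a collision-free constant-time lookup."""
    for shift in range(max_shift + 1):
        for bits in range(1, mask_bits + 1):
            mask = (1 << bits) - 1
            hashed = [(k >> shift) & mask for k in keys]
            if len(set(hashed)) == len(keys):     # no collisions
                return shift, mask, dict(zip(hashed, keys))
    return None                                   # no solution in the bounds

def member(table, shift, mask, x):
    # One shift, one mask, one compare: no probing, no secondary hash.
    return table.get((x >> shift) & mask) == x

keys = [1024, 2048, 3072, 4096, 5120]
shift, mask, table = synthesize_hash(keys)
```

Membership testing then runs in constant time regardless of the key, which is the guarantee the abstract emphasizes; a production version would also rank the found solutions by how compactly they pack the hashed sequence.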
PACS model based on digital watermarking and its core algorithms
NASA Astrophysics Data System (ADS)
Que, Dashun; Wen, Xianlin; Chen, Bi
2009-10-01
A PACS model based on digital watermarking is proposed by analyzing medical image features and PACS requirements from the point of view of information security, its core components being a digital watermarking server and the corresponding processing module. Two kinds of digital watermarking algorithm are studied: one is a non-region-of-interest (NROI) digital watermarking algorithm based on the wavelet domain and block means; the other is a reversible watermarking algorithm based on extended difference and a pseudo-random matrix. The former is a robust lossy watermarking scheme, which, embedded in the NROI by wavelet transform, provides a good way of protecting the focus area (ROI) of images, the introduction of the block-mean approach being a good scheme for enhancing the anti-attack capability; the latter is a fragile lossless watermarking scheme, which is simple to implement and can realize tamper localization effectively, the pseudo-random matrix enhancing the correlation and security between pixels. Extensive experimental research has been completed in this paper, including the realization of the digital watermarking PACS model, the watermarking processing module and its anti-attack experiments, the digital watermarking server, and network transmission simulation experiments with medical images. Theoretical analysis and experimental results show that the designed PACS model can effectively ensure the confidentiality, authenticity, integrity and security of medical image information.
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
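The paper's contribution is the parallel algorithms themselves; for reference, the classical sequential solution to the first listed problem, scheduling to minimize the number of tardy jobs, is the Moore-Hodgson algorithm, sketched here with illustrative data.

```python
import heapq

def min_tardy_jobs(jobs):
    """Moore-Hodgson: schedule jobs given as (processing_time, deadline) to
    minimize the number of tardy jobs. Process in deadline order; whenever
    the running schedule is late, evict the longest job scheduled so far."""
    jobs = sorted(jobs, key=lambda j: j[1])      # earliest deadline first
    heap, t = [], 0                              # max-heap via negated times
    for p, d in jobs:
        t += p
        heapq.heappush(heap, -p)
        if t > d:                                # deadline missed: drop longest
            t += heapq.heappop(heap)             # popped value is negative
    return len(heap)                             # number of on-time jobs

# Four jobs as (processing_time, deadline); total work exceeds the last
# deadline, so at least one job must be tardy.
on_time = min_tardy_jobs([(2, 3), (3, 5), (2, 6), (4, 9)])
```

The eviction step is what makes the greedy optimal: dropping the longest scheduled job frees the most time for the remaining deadlines.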
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming
2013-05-01
An orthogonal experiment was conducted by means of a multivariate nonlinear regression equation to assess the influence of an external transverse magnetic field and the Ar flow rate on welding quality in the process of welding condenser pipe by high-speed argon tungsten-arc welding (TIG for short). The magnetic induction and the Ar flow rate were used as the optimization variables, and the tensile strength of the weld was set as the objective function on the basis of genetic algorithm theory; an optimal design was then conducted. According to the requirements of physical production, the optimization variables were constrained. The genetic algorithm in MATLAB was used for the computation. A comparison between the optimum results and the experiment parameters was made. The results showed that the optimum technological parameters could be chosen by means of a genetic algorithm even with many optimization variables in the process of high-speed welding, and that the optimum technological parameters of welding coincided with the experimental results.
Adaptive color image watermarking algorithm
NASA Astrophysics Data System (ADS)
Feng, Gui; Lin, Qiwei
2008-03-01
As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. But due to the problems of data volume and color shift, watermarking techniques for color images have not been studied as widely, although color images are the principal medium for multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images, the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark, the number of embedded bits is adaptively changed with the complexity of the host image. As for the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted so as to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks, while maintaining good perceptual quality.
A 2D vector map watermarking algorithm resistant to simplification attack
NASA Astrophysics Data System (ADS)
Wang, Chuanjian; Liang, Bin; Zhao, Qingzhan; Qiu, Zuqi; Peng, Yuwei; Yu, Liang
2009-12-01
Vector maps are a valuable asset of data producers. How to protect the copyright of vector maps effectively using digital watermarking is a hot research issue. In this paper, we propose a new robust and blind watermarking algorithm resilient to simplification attack. We prove that the spatial topological relation between map objects bears an important property of approximate simplification invariance. We choose spatial topological relations as the watermark feature domain and embed watermarks by slightly modifying the spatial topological relation between map objects. Experiments show that our algorithm has good performance in resisting simplification attack, and a good tradeoff between robustness and data fidelity is achieved.
Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Irhamah
2010-11-01
This paper presents a problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world problem (a case) is used in this research. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these produced the best possible results. The problem data are inspired by a real case of VRPSD in waste collection. Results from the experiment show the advantages of the proposed algorithm, namely its robustness and better solution quality.
Pedestrian navigation algorithm based on MIMU with building heading/magnetometer
NASA Astrophysics Data System (ADS)
Meng, Xiang-bin; Pan, Xian-fei; Chen, Chang-hao; Hu, Xiao-ping
2016-01-01
In order to improve the accuracy of a low-cost MIMU inertial navigation system in pedestrian navigation, and to reduce the heading error caused by the low accuracy of the MIMU components, a novel algorithm is put forward that fuses the building heading constraint information with the magnetic heading information to combine their advantages. We analysed the application conditions and the corrective effect of the building heading and the magnetic heading. Experiments were then conducted in an indoor environment. The results show that the proposed algorithm better restricts the heading drift problem and achieves higher navigation precision.
A lossless compression algorithm for aurora spectral data using online regression prediction
NASA Astrophysics Data System (ADS)
Kong, Wanqiu; Wu, Jiaji
2015-10-01
Lossless compression algorithms are available for the preservation of aurora images. For aurora image compression, linear-prediction-based methods have outstanding compression performance. However, this performance is limited by the prediction order, and the time complexity of linear prediction is relatively high. This paper describes how to resolve the conflict between high prediction order and low compression bit rate with an online linear regression with recursive least squares (OLR-RLS) algorithm. Experimental results show that OLR-RLS achieves an average 7%~11.8% improvement in compression gain and a 2.8x speedup in computation time over linear prediction.
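The core of the OLR-RLS idea, predicting each sample from its predecessors with recursively updated least-squares weights and keeping only the residual, can be sketched as follows; the order, forgetting factor, initialization, and test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rls_predictor(signal, order=4, lam=0.99, delta=100.0):
    """Online recursive-least-squares linear prediction: predict each sample
    from the previous `order` samples, updating the weights per sample."""
    w = np.zeros(order)
    P = np.eye(order) * delta            # inverse correlation estimate
    residuals = []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]    # most recent sample first
        e = signal[n] - w @ x            # a priori prediction residual
        k = P @ x / (lam + x @ P @ x)    # RLS gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        residuals.append(e)
    return np.array(residuals)

# A sinusoid is exactly linearly predictable, so residuals shrink rapidly.
residuals = rls_predictor(np.sin(np.linspace(0.0, 20.0, 400)))
```

In a compressor, the small residuals (rather than the raw samples) would be entropy-coded; decompression runs the identical predictor and adds the residuals back, making the scheme lossless.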
Applications and accuracy of the parallel diagonal dominant algorithm
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1993-01-01
The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
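For reference, the sequential kernel that PDD parallelizes is an ordinary tridiagonal solve; below is a sketch of the classical Thomas algorithm applied to a symmetric Toeplitz system of the kind analyzed in the paper. The PDD partitioning and interface correction themselves are not shown, and the array conventions are illustrative.

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused), and
    right-hand side d, by forward elimination and back substitution."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Symmetric Toeplitz system [-1, 2, -1] with known solution [1, 2, 3, 4].
x = solve_tridiagonal([0.0, -1.0, -1.0, -1.0], [2.0, 2.0, 2.0, 2.0],
                      [-1.0, -1.0, -1.0, 0.0], [0.0, 0.0, 0.0, 5.0])
```

PDD's contribution is to let each processor run a sweep like this on its own block of rows and then reconcile the few unknowns on the block interfaces, which is what makes the solver scalable.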
Shock capturing data assimilation algorithm for 1D shallow water equations
NASA Astrophysics Data System (ADS)
Tirupathi, Seshu; T. Tchrakian, Tigran; Zhuk, Sergiy; McKenna, Sean
2016-02-01
We propose a new data assimilation algorithm for shallow water equations in one dimension. The algorithm is based upon Discontinuous Galerkin spatial discretization of shallow water equations (DG-SW model) and the continuous formulation of the minimax filter. The latter allows for construction of a robust estimation of the state of the DG-SW model and computes worst-case bounds for the estimation error, provided the uncertain parameters belong to a given bounding set. Numerical studies show that, given sparse observations from numerical or physical experiments, the proposed algorithm quickly reconstructs the true solution even in the presence of shocks, rarefaction waves and unknown values of model parameters. The minimax filter is compared against the ensemble Kalman filter (EnKF) for a benchmark dam-break problem and the results show that the minimax filter converges faster to the true solution for sparse observations.
Show Horse Welfare: The Viewpoints of Judges, Stewards, and Show Managers.
Voigt, Melissa; Hiney, Kristina; Croney, Candace; Waite, Karen; Borron, Abigail; Brady, Colleen
2016-01-01
The purpose of this study was to gain a better understanding of the current state of stock-type show horse welfare based on the perceptions of show officials and to identify potential means of preventing and intervening in compromises to show horse welfare. Thirteen horse show officials, including judges, stewards, and show managers, were interviewed. Findings revealed the officials had an incomplete understanding of nonhuman animal welfare and a high level of concern regarding the public's perception of show horse welfare. The officials attributed most of the frequently observed compromises to show horse welfare to (a) novices', amateurs', and young trainers' lack of experience or expertise, and (b) trainers' and owners' unrealistic expectations and prioritization of winning over horse welfare. The officials emphasized a need for distribution of responsibility among associations, officials, and individuals within the industry. Although the officials noted recent observable positive changes in the industry, they emphasized the need for continued improvements in equine welfare and greater educational opportunities for stakeholders. PMID:26742585
NASA Astrophysics Data System (ADS)
Schaart, Dennis R.; Jansen, Jan Th M.; Zoetelief, Johannes; de Leege, Piet F. A.
2002-05-01
The condensed-history electron transport algorithms in the Monte Carlo code MCNP4C are derived from ITS 3.0, which is a well-validated code for coupled electron-photon simulations. This, combined with its user-friendliness and versatility, makes MCNP4C a promising code for medical physics applications. Such applications, however, require a high degree of accuracy. In this work, MCNP4C electron depth-dose distributions in water are compared with published ITS 3.0 results. The influences of voxel size, substeps and choice of electron energy indexing algorithm are investigated at incident energies between 100 keV and 20 MeV. Furthermore, previously published dose measurements for seven beta emitters are simulated. Since MCNP4C does not allow tally segmentation with the *F8 energy deposition tally, even a homogeneous phantom must be subdivided in cells to calculate the distribution of dose. The repeated interruption of the electron tracks at the cell boundaries significantly affects the electron transport. An electron track length estimator of absorbed dose is described which allows tally segmentation. In combination with the ITS electron energy indexing algorithm, this estimator appears to reproduce ITS 3.0 and experimental results well. If, however, cell boundaries are used instead of segments, or if the MCNP indexing algorithm is applied, the agreement is considerably worse.
ERIC Educational Resources Information Center
Robertson, Alexander M.; Willett, Peter
1996-01-01
Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…
Negative Selection Algorithm for Aircraft Fault Detection
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
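A real-valued negative selection scheme of the kind investigated can be sketched in two dimensions: candidate detectors are random points kept only if they do not cover any "self" (normal behavior) sample, and new data is flagged as anomalous when some detector covers it. The unit-square setup, radii, and data below are illustrative, not the C-17 simulator data.

```python
import random

def train_detectors(self_samples, n_detectors=200, radius=0.1, seed=1):
    """Real-valued negative selection: keep random candidate detectors in
    the unit square only if they lie farther than `radius` from every
    'self' (normal behavior) sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = (rng.random(), rng.random())
        if all(((cand[0] - s[0]) ** 2 + (cand[1] - s[1]) ** 2) ** 0.5 > radius
               for s in self_samples):
            detectors.append(cand)
    return detectors

def is_anomalous(detectors, x, radius=0.1):
    # Flag a sample if any detector covers it.
    return any(((x[0] - d[0]) ** 2 + (x[1] - d[1]) ** 2) ** 0.5 <= radius
               for d in detectors)

# "Self" set: normal samples clustered along a short diagonal segment,
# standing in for normal body-axes angular-rate patterns.
self_samples = [(0.5 + 0.02 * i, 0.5 + 0.015 * i) for i in range(-5, 6)]
detectors = train_detectors(self_samples)
```

By construction, no detector covers the normal region, so normal samples are never flagged; detection coverage of the abnormal region depends on how many detectors are generated.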
Genetic Algorithm for Initial Orbit Determination with Too Short Arc
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-01-01
The sky surveys of space objects have obtained a huge quantity of too-short-arc (TSA) observation data. However, the classical method of initial orbit determination (IOD) can hardly get reasonable results for the TSAs. The IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selection of optimizing variables as well as the corresponding genetic operator for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.
A Collaborative Recommend Algorithm Based on Bipartite Community
Fu, Yuchen; Liu, Quan; Cui, Zhiming
2014-01-01
The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which proves that considering the network topology of recommendation systems can help us improve recommendation results. However, existing algorithms mainly focus on the overall topological structure, while local characteristics could also play an important role in collaborative recommendation processing. Therefore, based on the data characteristics and application requirements of collaborative recommendation systems, we propose a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then designed numerical experiments to verify the algorithm's validity on benchmark and real databases. PMID:24955393
Basis for spectral curvature algorithms in remote sensing of chlorophyll
NASA Technical Reports Server (NTRS)
Campbell, J. W.; Esaias, W. E.
1983-01-01
A simple, empirically derived algorithm for estimating oceanic chlorophyll concentrations from spectral radiances measured by a low-flying spectroradiometer has proved highly successful in field experiments in 1980-82. The sensor used was the Multichannel Ocean Color Sensor, and the originator of the algorithm was Grew (1981). This paper presents an explanation for the algorithm based on the optical properties of waters containing chlorophyll and other phytoplankton pigments and the radiative transfer equations governing the remotely sensed signal. The effects of varying solar zenith, atmospheric transmittance, and interfering substances in the water on the chlorophyll algorithm are characterized, and applicability of the algorithm is discussed.
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh
2015-07-01
This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, as the data size increases the resource requirements of the algorithm also increase. This challenges its practical implementation even on current-generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post- and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
NASA Astrophysics Data System (ADS)
Yin, Jiale; Liu, Lei; Li, He; Liu, Qiankun
2016-07-01
This paper presents infrared moving object detection and related security detection algorithms for video surveillance, based on the classical W4 algorithm and frame differencing. The classical W4 algorithm is a powerful background subtraction algorithm for infrared images which can detect moving objects accurately, completely and quickly. However, the classical W4 algorithm can only cope with slight movement of the background. The error grows over time in a long-term surveillance system, since the background model is unchanged once established. In this paper, we present a detection algorithm based on classical W4 and frame differencing. It can not only overcome the false detections caused by sudden changes of state in the background, but also eliminate the holes caused by frame differencing. Based on this, we further design various security detection algorithms such as illegal intrusion alarm, illegal persistence alarm and illegal displacement alarm. We compare our method with classical W4, frame differencing, and other state-of-the-art methods. The experiments detailed in this paper show that the proposed method outperforms classical W4 and frame differencing, and serves the security detection algorithms well.
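The frame-difference component, and an interval-style background test in the spirit of W4, can be sketched with NumPy. The simplified per-pixel [min, max, max-interframe-difference] model below is a common reduction of W4's background model, and the thresholds and synthetic frames are illustrative.

```python
import numpy as np

def frame_difference_mask(prev, curr, threshold=25):
    """Flag pixels whose absolute intensity change between consecutive
    frames exceeds a threshold (the frame-difference component)."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return diff > threshold

def w4_like_mask(frame, bg_min, bg_max, bg_maxdiff):
    """Interval-style background test in the spirit of W4: foreground if the
    pixel falls outside the per-pixel [min, max] model by more than the
    largest observed inter-frame change."""
    f = frame.astype(np.int32)
    low = (bg_min.astype(np.int32) - f) > bg_maxdiff
    high = (f - bg_max.astype(np.int32)) > bg_maxdiff
    return low | high

# Synthetic 8x8 scene: flat background, a bright 2x2 object enters.
bg = np.full((8, 8), 50, dtype=np.uint8)
frame = bg.copy()
frame[3:5, 3:5] = 200
mask = frame_difference_mask(bg, frame)
```

A hybrid detector along the lines of the paper would combine the two masks, letting the background-model test fill the holes that pure frame differencing leaves inside slowly moving objects.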
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Self-adaptive parameters in genetic algorithms
NASA Astrophysics Data System (ADS)
Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain
2004-04-01
Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive adjustment of a GA's parameters. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.
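The idea of encoding control parameters in the chromosome can be sketched with a toy GA on OneMax, where each individual carries and evolves its own mutation rate; the population sizes, rates, and self-adaptation rule below are illustrative assumptions, not the paper's design.

```python
import random

def evolve(pop_size=30, genes=10, generations=60, seed=3):
    """GA maximizing the number of 1-bits (OneMax), with each individual's
    mutation rate stored alongside its bits and evolved with them."""
    rng = random.Random(seed)
    # An individual is (bits, mutation_rate); the rate is a self-adapted gene.
    pop = [([rng.randint(0, 1) for _ in range(genes)], rng.uniform(0.01, 0.3))
           for _ in range(pop_size)]
    fitness = lambda ind: sum(ind[0])
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (b1, r1), (b2, r2) = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)
            bits = b1[:cut] + b2[cut:]         # one-point crossover
            # The mutation rate first mutates itself, then drives bit mutation.
            rate = 0.5 * (r1 + r2)
            rate = min(0.5, max(0.001, rate * rng.uniform(0.8, 1.25)))
            bits = [bit ^ (rng.random() < rate) for bit in bits]
            children.append((bits, rate))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the rate travels with the chromosome, individuals whose rate suits the current stage of the search produce fitter offspring, and that rate spreads: the selection pressure that tunes the solution also tunes the parameter.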
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Smooth transitions between bump rendering algorithms
Becker, B.G.; Max, N.L.
1993-01-04
A method is described for switching smoothly between rendering algorithms as required by the amount of visible surface detail. The result will be more realism with less computation for displaying objects whose surface detail can be described by one or more bump maps. The three rendering algorithms considered are the bidirectional reflectance distribution function (BRDF), bump mapping, and displacement mapping. The bump mapping has been modified to make it consistent with the other two. For a given viewpoint, one of these algorithms will show a better trade-off between quality, computation time, and aliasing than the other two. Thus, it must be determined, for any given viewpoint, which regions of the object(s) will be rendered with each algorithm. The decision as to which algorithm is appropriate is a function of distance, viewing angle, and the frequency of bumps in the bump map.
Implementation of the phase gradient algorithm
Wahl, D.E.; Eichel, P.H.; Jakowatz, C.V. Jr.
1990-01-01
The recently introduced Phase Gradient Autofocus (PGA) algorithm is a non-parametric autofocus technique which has been shown to be quite effective for phase correction of Synthetic Aperture Radar (SAR) imagery. This paper will show that this powerful algorithm can be executed at near real-time speeds and also be implemented in a relatively small piece of hardware. A brief review of the PGA will be presented along with an overview of some critical implementation considerations. In addition, a demonstration of the PGA algorithm running on a 7 in. × 10 in. printed circuit board containing a TMS320C30 digital signal processing (DSP) chip will be given. With this system, using only the 20 range bins which contain the brightest points in the image, the algorithm can correct a badly degraded 256 × 256 image in as little as 3 seconds. Using all range bins, the algorithm can correct the image in 9 seconds. 4 refs., 2 figs.
Comparison of update algorithms for pure Gauge SU(3)
Gupta, R.; Kilcup, G.W.; Patel, A.; Sharpe, S.R.; Deforcrand, P.
1988-10-01
The authors show that the overrelaxed algorithm of Creutz and of Brown and Woch is the optimal local update algorithm for simulation of pure gauge SU(3). The authors' comparison criterion includes computer efficiency and decorrelation times. They also investigate the rate of decorrelation for the Hybrid Monte Carlo algorithm.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
A Revision of the NASA Team Sea Ice Algorithm
NASA Technical Reports Server (NTRS)
Markus, T.; Cavalieri, Donald J.
1998-01-01
In a recent paper, two operational algorithms to derive ice concentration from satellite multichannel passive microwave sensors have been compared. Although the results of these, known as the NASA Team algorithm and the Bootstrap algorithm, have been validated and are generally in good agreement, there are areas where the ice concentrations differ by up to 30%. These differences can be explained by shortcomings in one or the other algorithm. Here, we present an algorithm which, in addition to the 19 and 37 GHz channels used by both the Bootstrap and NASA Team algorithms, makes use of the 85 GHz channels as well. Atmospheric effects, particularly at 85 GHz, are reduced by using a forward atmospheric radiative transfer model. Comparisons with the NASA Team and Bootstrap algorithms show that the individual shortcomings of these algorithms are not apparent in this new approach. The results further show better quantitative agreement with ice concentrations derived from NOAA AVHRR infrared data.
Recent Advancements in Lightning Jump Algorithm Work
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2010-01-01
In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., the High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise among prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41%, and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments remain problematic for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on an image of the ground. By holding the angular velocity constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height above the ground.
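The descent strategy described above can be illustrated with an idealized kinematic simulation: holding time-to-collision (tau = height / vertical speed) constant makes the height decay exponentially, and holding the angular velocity of ground features constant makes horizontal speed proportional to height, so both vanish smoothly at touchdown. The numbers (tau, omega, initial height) are invented, and this sketches only the geometry of the strategy, not the paper's control law.

```python
# Idealized constant time-to-collision (tau) descent.  tau = h / v_z is
# what optical flow can measure without knowing absolute distance.
dt = 0.01                  # integration step, seconds (assumed)
tau = 2.0                  # commanded time-to-collision, seconds (assumed)
omega = 0.5                # constant angular rate of ground features (assumed)
h, x, t = 10.0, 0.0, 0.0   # height, horizontal position, elapsed time

while h > 0.01:
    v_z = h / tau          # vertical speed that keeps tau constant
    v_x = omega * h        # constant angular rate => horizontal speed ~ height
    h -= v_z * dt
    x += v_x * dt
    t += dt

print(round(h, 3), round(x, 2), round(t, 2))
```

Both velocities shrink in proportion to the remaining height, which is why the touchdown is soft even though no absolute altitude was ever measured.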
A comprehensive evaluation of alignment algorithms in the context of RNA-seq.
Lindner, Robert; Friedel, Caroline C
2012-01-01
Transcriptome sequencing (RNA-Seq) overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of possible alignment algorithms have been developed also including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have the choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus, essentially making hash-based aligners obsolete. PMID:23300661
Swarm-based algorithm for phase unwrapping.
da Silva Maciel, Lucas; Albertazzi, Armando G
2014-08-20
A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method. PMID:25321125
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
Fluid-structure-coupling algorithm. [BWR]
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and from the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
A comprehensive review of swarm optimization algorithms.
Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham
2015-01-01
Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
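As a point of reference for the comparison above, here is a minimal sketch of the classic DE/rand/1/bin scheme on a sphere benchmark. The parameter values (F=0.5, CR=0.9, population 20) are conventional defaults, not taken from the paper's experiments.

```python
import random

random.seed(2)

def sphere(x):
    # Benchmark objective: sum of squares, minimum 0 at the origin
    return sum(v * v for v in x)

def differential_evolution(f, dim=5, pop_size=20, F=0.5, CR=0.9, iters=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # DE/rand/1: mutant = a + F * (b - c), from three distinct others
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # forces at least one crossed gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (random.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):        # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(sphere)
print(round(sphere(best), 6))
```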
Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories
NASA Technical Reports Server (NTRS)
Burchett, Bradley T.
2003-01-01
The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three degree of freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani-type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.
Adaptive path planning: Algorithm and analysis
Chen, Pang C.
1993-03-01
Path planning has to be fast to support real-time robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours of computation. To alleviate this problem, we present a learning algorithm that uses past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful subgoals is learned to support faster planning. The algorithm is suitable for both stationary and incrementally-changing environments. To analyze our algorithm, we use a previously developed stochastic model that quantifies experience utility. Using this model, we characterize the situations in which the adaptive planner is useful, and provide quantitative bounds to predict its behavior. The results are demonstrated with problems in manipulator planning. Our algorithm and analysis are sufficiently general that they may also be applied to task planning or other planning domains in which experience is useful.
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. In this way, one qubit may be cooled repeatedly without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
Contents of this book include the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS, a recursive layout computing system; and Parallel linear conflict-free subtree access.
An advanced dispatch simulator with advanced dispatch algorithm
Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.
1989-01-01
This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.
GOES-West Shows U.S. West's Record Rainfall
A new time-lapse animation of data from NOAA's GOES-West satellite provides a good picture of why the U.S. West Coast continues to experience record rainfall. The new animation shows the movement o...
Freer, Phoebe E; Slanetz, Priscilla J; Haas, Jennifer S; Tung, Nadine M; Hughes, Kevin S; Armstrong, Katrina; Semine, A Alan; Troyan, Susan L; Birdwell, Robyn L
2015-09-01
Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST) a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review, the Cochrane review, National Comprehensive Cancer Network guidelines, American Cancer Society recommendations, and American College of Radiology appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts and otherwise low risk (<15% lifetime risk), do not routinely require supplemental screening per the expert consensus. Women of high risk (>20% lifetime) should consider supplemental screening MRI in addition to routine mammography regardless of breast density. We report the development of the multi-disciplinary collaborative approach to density notification. We propose a risk stratification algorithm to assess personal level of risk to determine the need for supplemental screening for an individual woman. PMID:26290416
A Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. Extracting the various types of road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads differ in elevation near their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane using the RANSAC algorithm, and then extract from it multiple refined planes that contain only pavement. The road edges are extracted from these refined planes. In practice, a serious problem is that the rough and refined planes are often extracted poorly because of rough road surfaces and the varying density of the point cloud. To eliminate the influence of rough surfaces, we use a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model), and we also propose a method that adjusts the point clouds to a similar density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.
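The rough-plane step can be illustrated with a generic RANSAC plane fit. The synthetic flat-pavement-plus-clutter scene, the inlier threshold, and the iteration count below are assumptions for demonstration, not the paper's data or parameters.

```python
import random

random.seed(3)

def plane_from_points(p1, p2, p3):
    # Plane normal via the cross product of two in-plane vectors
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:                          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n + [d]                         # plane: a*x + b*y + c*z + d = 0

def ransac_plane(points, threshold=0.05, iters=200):
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*random.sample(points, 3))
        if model is None:
            continue
        a, b, c, d = model
        inliers = [p for p in points
                   if abs(a*p[0] + b*p[1] + c*p[2] + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Synthetic scene: flat pavement near z = 0 plus scattered off-road clutter
pavement = [[random.uniform(0, 10), random.uniform(0, 10),
             random.gauss(0, 0.01)] for _ in range(200)]
clutter = [[random.uniform(0, 10), random.uniform(0, 10),
            random.uniform(0.5, 3.0)] for _ in range(50)]
model, inliers = ransac_plane(pavement + clutter)
print(len(inliers))
```

The consensus set recovers essentially the pavement points while the elevated clutter is rejected, which is the property the refined-plane extraction builds on.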
Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns
Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen
2015-01-01
Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10–15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance. PMID:26181628
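A minimal sketch of the two stages described above: building a co-access correlation matrix from a toy log, then greedily spreading strongly correlated blocks across nodes so they can be fetched in parallel. The tile names, log, and penalty heuristic are illustrative assumptions, not the paper's algorithm.

```python
from collections import defaultdict
from itertools import combinations

# Each log entry lists the image blocks fetched by one request session
access_log = [
    ["tile_a", "tile_b", "tile_c"],
    ["tile_a", "tile_b"],
    ["tile_c", "tile_d"],
    ["tile_a", "tile_b", "tile_d"],
]

def access_correlation(log):
    # corr[x][y] counts how often blocks x and y were requested together
    corr = defaultdict(lambda: defaultdict(int))
    for session in log:
        for x, y in combinations(sorted(set(session)), 2):
            corr[x][y] += 1
            corr[y][x] += 1
    return corr

def place_blocks(corr, blocks, n_nodes):
    # Greedy heuristic: put each block on the node where it is least
    # correlated with the blocks already stored there, so co-accessed
    # blocks end up on different nodes and can be read in parallel.
    nodes = [[] for _ in range(n_nodes)]
    for block in sorted(blocks):
        def penalty(node):
            return sum(corr[block][other] for other in node)
        target = min(nodes, key=lambda node: (penalty(node), len(node)))
        target.append(block)
    return nodes

corr = access_correlation(access_log)
nodes = place_blocks(corr, ["tile_a", "tile_b", "tile_c", "tile_d"], 2)
print(nodes)
```

Here tile_a and tile_b, which co-occur in three sessions, land on different nodes, so a request touching both is served by two disks at once.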
Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm
He, Xiaoqi; Zheng, Zizhao; Hu, Chao
2015-01-01
The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose a nonlinear optimization algorithm using a random complex method, applied to the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the antinoise ability of the algorithm are compared with those of the Levenberg–Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher “denoise” capacity, with a larger range of initial guess values. PMID:25914561
VDA, a Method of Choosing a Better Algorithm with Fewer Validations
Kluger, Yuval
2011-01-01
The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
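The core selection idea, choosing validation items that maximize the minimum Hamming distance between the algorithms' predictions, can be sketched with a greedy pass. The prediction data, algorithm names, and greedy strategy below are invented for illustration and are not the VDA implementation.

```python
from itertools import combinations

# Binary predictions of three hypothetical algorithms on ten candidate items
predictions = {
    "alg_A": [0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "alg_B": [0, 1, 0, 0, 1, 1, 0, 1, 0, 0],
    "alg_C": [1, 1, 1, 0, 0, 1, 0, 0, 1, 0],
}

def min_pairwise_hamming(preds, chosen):
    # Smallest Hamming distance between any two algorithms on chosen items
    return min(sum(preds[a][i] != preds[b][i] for i in chosen)
               for a, b in combinations(preds, 2))

def select_validation_set(preds, budget):
    # Greedy: repeatedly add the candidate that most increases the minimum
    # pairwise disagreement, so few validations discriminate all methods.
    n = len(next(iter(preds.values())))
    chosen = []
    for _ in range(budget):
        best = max((i for i in range(n) if i not in chosen),
                   key=lambda i: min_pairwise_hamming(preds, chosen + [i]))
        chosen.append(best)
    return chosen

subset = select_validation_set(predictions, budget=4)
print(subset, min_pairwise_hamming(predictions, subset))
```

A validation set chosen this way guarantees that every pair of algorithms disagrees on several of the validated items, so the experimental results can actually rank them.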
Efficient algorithm for sparse coding and dictionary learning with applications to face recognition
NASA Astrophysics Data System (ADS)
Zhao, Zhong; Feng, Guocan
2015-03-01
Sparse representation has been successfully applied to pattern recognition problems in recent years. The most common way of producing sparse coding is to use l1-norm regularization. However, l1-norm regularization only favors sparsity and does not consider locality: it may select quite different bases for similar samples in order to favor sparsity, which is disadvantageous for classification. Moreover, solving the l1-minimization problem is time consuming, which limits its applications in large-scale problems. We propose an improved algorithm for sparse coding and dictionary learning that takes both sparsity and locality into consideration. It selects the part of the dictionary columns that are close to the input sample for coding and imposes a locality constraint on these selected columns to obtain discriminative coding for classification. Because an analytic solution of the coding is derived using only part of the dictionary columns, the proposed algorithm is much faster than l1-based algorithms for classification. We also derive an analytic solution for updating the dictionary in the training process. Experiments conducted on five face databases show that the proposed algorithm outperforms the competing algorithms in terms of accuracy and efficiency.
A novel algorithm combining oversampling and digital lock-in amplifier of high speed and precision
NASA Astrophysics Data System (ADS)
Li, Gang; Zhou, Mei; He, Feng; Lin, Ling
2011-09-01
Because of the large amount of arithmetic in standard digital lock-in detection, a high performance processor is needed to implement the algorithm in real time. This paper presents a novel algorithm that integrates oversampling and high-speed lock-in detection. The algorithm sets the sampling frequency to a whole-number multiple of four times the input signal frequency, and then uses common downsampling technology to lower the sampling frequency to four times the input signal frequency. This effectively removes noise interference and improves detection accuracy. The phase sensitive detector is then implemented by simply adding and subtracting the four points of fixed phase within each period, replacing almost all multiplication operations and substantially speeding up the digital lock-in calculation. Furthermore, a correction factor is introduced to improve the calculation accuracy of the amplitude, so that the error introduced by the algorithm can, in theory, be eliminated completely. The results of simulations and actual experiments show that the novel algorithm combining digital lock-in detection and oversampling achieves not only high precision but also unprecedented speed. The new algorithm is suitable for real-time weak signal detection on a general-purpose microprocessor, not just a digital signal processor.
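A sketch of the four-points-per-period idea: when the sampling rate is exactly four times the signal frequency, consecutive samples sit at 0°, 90°, 180°, and 270° of phase, so the phase-sensitive detection reduces to additions and subtractions. The signal parameters are invented, and the simple factor of 2 stands in for the paper's correction factor.

```python
import math
import random

random.seed(4)

def four_point_lockin(samples):
    # Assumes fs = 4 * f exactly.  Per period, only adds and subtracts:
    # x0 - x2 is proportional to A*sin(phi), x1 - x3 to A*cos(phi).
    I = Q = 0.0
    periods = len(samples) // 4
    for k in range(periods):
        x0, x1, x2, x3 = samples[4 * k: 4 * k + 4]
        I += x0 - x2
        Q += x1 - x3
    amplitude = math.hypot(I, Q) / (2 * periods)
    phase = math.atan2(I, Q)
    return amplitude, phase

# Noisy test signal: amplitude 0.5, phase 0.3 rad, plus DC offset and noise
A, phi, f, fs = 0.5, 0.3, 1000.0, 4000.0
samples = [A * math.sin(2 * math.pi * f * n / fs + phi) + 1.0
           + random.gauss(0, 0.05) for n in range(4000)]
amplitude, phase = four_point_lockin(samples)
print(round(amplitude, 3), round(phase, 3))
```

Note that the DC offset cancels in both differences, and averaging over many periods suppresses the additive noise, which is why the scheme stays accurate despite using no multiplications in the detection loop.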
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
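The log-linear behavior selection step can be sketched as a softmax over weighted behavior features; the behaviors, features, and weights below are illustrative assumptions, not values from the paper.

```python
import math, random

def select_behavior(features, weights, rng=random.Random(42)):
    """Pick a behavior with probability proportional to exp(score),
    where each score is a weighted sum of that behavior's features."""
    scores = {b: sum(w * f for w, f in zip(weights[b], feats))
              for b, feats in features.items()}
    z = sum(math.exp(s) for s in scores.values())
    probs = {b: math.exp(s) / z for b, s in scores.items()}
    r, acc = rng.random(), 0.0
    for b, p in probs.items():
        acc += p
        if r <= acc:
            return b, probs
    return b, probs

# illustrative behaviors with (food-concentration, crowding) features
features = {"prey": [0.9, 0.1], "swarm": [0.4, 0.6], "follow": [0.2, 0.8]}
weights  = {"prey": [1.0, 0.5], "swarm": [0.8, 1.2], "follow": [0.5, 1.0]}
behavior, probs = select_behavior(features, weights)
print(behavior, round(sum(probs.values()), 6))       # probabilities sum to 1
```

The log-linear form keeps the selection probabilistic rather than greedy, which is what gives the decision step its exploration component.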
A hybrid algorithm for robust acoustic source localization in noisy and reverberant environments
NASA Astrophysics Data System (ADS)
Rajagopalan, Ramesh; Dessonville, Timothy
2014-09-01
Acoustic source localization using microphone arrays is widely used in videoconferencing and surveillance systems. However, it remains a challenging task to develop efficient algorithms for accurate estimation of the source location using distributed data processing. In this work, we propose a new algorithm for efficient localization of a speaker in noisy and reverberant environments such as videoconferencing. The hybrid algorithm combines the generalized cross-correlation phase transform method (GCC-PHAT) with Tabu search to obtain a robust and accurate estimate of the speaker location. The Tabu search iteratively improves the time difference of arrival (TDOA) estimate of GCC-PHAT by examining neighboring solutions until the TDOA value converges. Experiments were performed on real-world data recorded in a meeting room in the presence of noise sources such as computers and fans. Our results demonstrate that the proposed hybrid algorithm outperforms GCC-PHAT, especially when the noise level is high, showing its robustness in noisy, realistic videoconferencing systems.
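A minimal sketch of the GCC-PHAT component follows (the Tabu-search refinement is omitted): the cross-power spectrum of the two microphone signals is whitened so that only phase information remains, and the TDOA is read off the peak of the resulting correlation. The signals and sampling rate below are synthetic.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """TDOA estimate (seconds) via the GCC-PHAT weighting."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12        # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

fs = 16000
rng = np.random.default_rng(1)
ref = rng.normal(size=2048)                         # white-noise "speech"
delay = 23                                          # samples
sig = np.concatenate((np.zeros(delay), ref))[:2048] # delayed copy
print(gcc_phat(sig, ref, fs) * fs)                  # close to 23 samples
```

The PHAT whitening is what makes the peak sharp under reverberation; a search method such as Tabu search can then refine the estimate across frames.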
VDA, a method of choosing a better algorithm with fewer validations.
Strino, Francesco; Parisi, Fabio; Kluger, Yuval
2011-01-01
The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to designing validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset that allows reliable comparisons between the performances of different algorithms. Our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
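The selection criterion can be sketched as a greedy search that, at each step, adds the prediction maximizing the minimum pairwise Hamming distance between the algorithms' answer vectors on the chosen set; the synthetic data and the greedy strategy are illustrative, not the authors' exact optimization.

```python
import numpy as np

rng = np.random.default_rng(7)
preds = rng.integers(0, 2, size=(4, 200))   # 4 algorithms x 200 binary predictions

def greedy_validation_set(preds, budget):
    """Greedily build a validation set whose columns keep the algorithms
    maximally distinguishable (max-min pairwise Hamming distance)."""
    chosen = []
    for _ in range(budget):
        best_col, best_score = None, -1
        for col in range(preds.shape[1]):
            if col in chosen:
                continue
            sub = preds[:, chosen + [col]]
            # minimum pairwise Hamming distance over algorithm pairs
            dmin = min(np.sum(sub[i] != sub[j])
                       for i in range(len(sub))
                       for j in range(i + 1, len(sub)))
            if dmin > best_score:
                best_score, best_col = dmin, col
        chosen.append(best_col)
    return chosen

sel = greedy_validation_set(preds, budget=10)
print(len(sel), len(set(sel)))   # 10 distinct validation experiments
```

Columns on which all algorithms agree contribute nothing to the minimum distance, so the greedy step naturally skips them, which is the cost-saving intuition.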
A multiobjective evolutionary algorithm to find community structures based on affinity propagation
NASA Astrophysics Data System (ADS)
Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng
2016-07-01
Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA takes the method of affinity propagation (AP) to initially divide the network. To accelerate its convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto optimal front through constantly selecting nondominated solutions from the population after crossover and mutation in iterations, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the process of searching using the multiobjective evolutionary algorithm. According to this strategy, the preliminary partitioning results obtained by AP will be archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and has faster convergence speed compared with seven other state-of-the-art algorithms.
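The nondominated-selection step common to such multiobjective evolutionary algorithms can be sketched generically; the two-objective toy problem below is illustrative, not from the paper.

```python
def nondominated(population, objectives):
    """Return the nondominated solutions (Pareto front) of a population,
    assuming all objectives are to be minimized."""
    front = []
    for i, p in enumerate(population):
        fi = objectives(p)
        dominated = False
        for j, q in enumerate(population):
            if i == j:
                continue
            fj = objectives(q)
            # q dominates p if it is no worse everywhere and better somewhere
            if all(a <= b for a, b in zip(fj, fi)) and \
               any(a < b for a, b in zip(fj, fi)):
                dominated = True
                break
        if not dominated:
            front.append(p)
    return front

# minimize (x^2, (x-2)^2): every x in [0, 2] is Pareto-optimal
pop = [-1.0, 0.0, 0.5, 1.0, 2.0, 3.0]
print(nondominated(pop, lambda x: (x * x, (x - 2) ** 2)))
```

In APMOEA this filter is applied both to the AP-seeded initial population and, via the external archive, to the offspring after crossover and mutation.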
Use of Algorithm of Changes for Optimal Design of Heat Exchanger
NASA Astrophysics Data System (ADS)
Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.
2010-05-01
For economic reasons, the optimal design of heat exchangers is required. Design of a heat exchanger is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost, the process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to designing heat exchangers, and the results outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, based on the I Ching (易經), was developed originally by the author. In the algorithm, the hexagram operations of the I Ching have been generalized to the binary string case, and an iterative procedure that imitates I Ching inference is defined. On the basis of [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was used as the objective function. Through the case study, the results show that the algorithm of changes is comparable to the GA method: both methods can find the optimal solution in a short time. However, without interchanging information between binary strings, the algorithm of changes has an advantage over GA in parallel computation.
NASA Astrophysics Data System (ADS)
Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu
2016-09-01
Genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) correction models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy, and reduce the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and an experience-based spectral region were established in order to assess the feasibility of genetic-algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to the 12 characteristic bands as variables, a PLS model of the SPAD values of the corn leaves was established, with modeling results of r = 0.7825. These results were better than those of the PLS models established on the full spectrum and the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on Pareto front. Infeasible individuals nearby feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591
Optimisation algorithms for microarray biclustering.
Perrin, Dimitri; Duhamel, Christophe
2013-01-01
In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been widely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight into biological processes. A common approach is to consider the genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on the expression levels. In this paper, using a previously evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756
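The "Propagate" idea can be sketched as a hill climb over sets of active conditions, evaluating every neighbour with one more or one fewer condition; the scoring function and the planted data below are illustrative assumptions, not the paper's weighting scheme.

```python
import numpy as np

def propagate(data, rows, active, score):
    """Hill-climb over sets of active conditions (columns): repeatedly
    evaluate every neighbour with one more or one fewer condition and
    move while the score improves."""
    best = set(active)
    best_score = score(data, rows, best)
    improved = True
    while improved:
        improved = False
        for c in range(data.shape[1]):
            cand = best ^ {c}              # toggle condition c on/off
            if not cand:
                continue
            s = score(data, rows, cand)
            if s > best_score:
                best, best_score = cand, s
                improved = True
    return best, best_score

def gain_score(data, rows, cols):
    """Reward columns whose mean expression over the rows exceeds 1.0."""
    r = sorted(rows)
    return sum(data[r, c].mean() - 1.0 for c in cols)

data = np.zeros((10, 8))
data[:4, :3] = 2.0                          # planted bicluster
cols, s = propagate(data, rows={0, 1, 2, 3}, active={0}, score=gain_score)
print(sorted(cols), s)                      # recovers columns [0, 1, 2]
```

Each toggle is a neighbour solution in the sense of the heuristic; in the paper the score would come from the weighted bipartite graph rather than this toy threshold.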
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
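One classic square-root-only iteration for cube roots runs as follows (the article's exact algorithm is not reproduced here, so this is a plausible sketch): the fixed point of y → √(x/y) satisfies y³ = x, and the error roughly halves at each step.

```python
import math

def cuberoot(x, iters=60):
    """Approximate x**(1/3) for x > 0 using only square roots and division.

    The fixed point of y -> sqrt(x / y) satisfies y**2 = x / y, i.e.
    y**3 = x, and the iteration converges for any positive start value
    with the error shrinking by about half per step.
    """
    y = x
    for _ in range(iters):
        y = math.sqrt(x / y)
    return y

print(cuberoot(27.0))   # close to 3.0
```

On a calculator the same loop is just "divide by the display, press square root" repeated until the display stops changing.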
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
Improving Search Algorithms by Using Intelligent Coordinates
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar
2004-01-01
We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent eta is self-interested; it sets its variable to maximize its own function g (sub eta). Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.
Quantum hyperparallel algorithm for matrix multiplication
NASA Astrophysics Data System (ADS)
Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan
2016-04-01
Hyperentangled states, entangled states with more than one degree of freedom, are considered as promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N²), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated to N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning
NASA Astrophysics Data System (ADS)
Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei
2013-03-01
In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.
NASA Technical Reports Server (NTRS)
Whitlock, Charles H.; Cox, Stephen K.; Lecroy, Stuart R.
1990-01-01
Tables are presented which show data from five sites in the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment (FIRE)/Surface Radiation Budget (SRB) Wisconsin regional experiment from October 12 through November 2, 1986. A discussion of intercomparison results is also included. The field experiment was conducted for the purposes of both intensive cirrus-cloud measurements and SRB algorithm validation activities.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
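The βk ≥ 0 safeguard mentioned above (often called PRP+) can be sketched as follows; the fixed step length stands in for the paper's line-search-free conditions, so this is an illustrative implementation rather than the authors' exact method.

```python
import numpy as np

def prp_plus_cg(grad, x0, lr=1e-2, tol=1e-8, max_iter=5000):
    """Conjugate gradient iteration with the PRP beta clipped at zero."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        x_new = x + lr * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        # PRP beta, clipped at zero: the "beta_k >= 0" safeguard
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d          # new search direction
        x, g = x_new, g_new
    return x

# convex quadratic test problem: minimize 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = prp_plus_cg(lambda x: A @ x - b, np.zeros(2))
print(np.round(x_star, 4))    # approaches the solution of A x = b
```

Clipping at zero discards the negative-beta steps that can break convergence of plain PRP, effectively restarting with steepest descent when the method would otherwise misbehave.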
An Iterative Soft-Decision Decoding Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao
1996-01-01
This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically (or near) optimal error performance with significant reduction in decoding computational complexity. The average number of search iterations is also small even for low signal-to-noise ratio.
Dolphin shows and interaction programs: benefits for conservation education?
Miller, L J; Zeigler-Hill, V; Mellen, J; Koeppel, J; Greer, T; Kuczaj, S
2013-01-01
Dolphin shows and dolphin interaction programs are two types of education programs within zoological institutions used to educate visitors about dolphins and the marine environment. The current study examined the short- and long-term effects of these programs on visitors' conservation-related knowledge, attitude, and behavior. Participants of both dolphin shows and interaction programs demonstrated a significant short-term increase in knowledge, attitudes, and behavioral intentions. Three months following the experience, participants of both dolphin shows and interaction programs retained the knowledge learned during their experience and reported engaging in more conservation-related behaviors. Additionally, the number of dolphin shows attended in the past was a significant predictor of recent conservation-related behavior suggesting that repetition of these types of experiences may be important in inspiring people to conservation action. These results suggest that both dolphin shows and dolphin interaction programs can be an important part of a conservation education program for visitors of zoological facilities. PMID:22622768
Improved bat algorithm applied to multilevel image thresholding.
Alihodzic, Adis; Tuba, Milan
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
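The objective typically optimized here, Otsu's between-class variance, can be sketched together with the exhaustive two-threshold search whose cost motivates the metaheuristic; the histogram is synthetic and the bat algorithm itself is omitted.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's multilevel objective: the weighted variance of class means,
    to be maximized over the threshold choices."""
    edges = [0] + [t + 1 for t in thresholds] + [len(hist)]
    total = hist.sum()
    mu_total = (np.arange(len(hist)) * hist).sum() / total
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum() / total
        if w == 0:
            continue
        mu = (np.arange(lo, hi) * hist[lo:hi]).sum() / hist[lo:hi].sum()
        var += w * (mu - mu_total) ** 2
    return var

# synthetic 64-level histogram with three well-separated modes
hist = np.zeros(64)
hist[8:13] = hist[28:33] = hist[48:53] = 10.0

# exhaustive search over two thresholds; this is the cost that grows
# exponentially with the number of thresholds and that a metaheuristic
# such as the bat algorithm is used to avoid
best, best_var = None, -1.0
for t1 in range(63):
    for t2 in range(t1 + 1, 63):
        v = between_class_variance(hist, [t1, t2])
        if v > best_var:
            best, best_var = (t1, t2), v
print(best)    # thresholds falling between the three modes
```

A swarm method evaluates the same objective, but only at the candidate threshold vectors carried by its population instead of at every combination.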
NASA Astrophysics Data System (ADS)
Jiang, Tianzi; Cui, Qinghua; Shi, Guihua; Ma, Songde
2003-08-01
In this paper, a novel hybrid algorithm combining genetic algorithms and tabu search is presented. In the proposed hybrid algorithm, the idea of tabu search is applied to the crossover operator. We demonstrate that the hybrid algorithm can be applied successfully to the protein folding problem based on a hydrophobic-hydrophilic lattice model. The results show that in all cases the hybrid algorithm works better than a genetic algorithm alone. A comparison with other methods is also made.
Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.
2016-04-01
The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they work with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger shift in case of a mismatch between pattern and string characters. The article provides an example which illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt, and combined algorithms and shows the advantage of the latter in solving the string searching problem.
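The combined shift rule can be sketched as follows: on a mismatch, take the larger of a KMP prefix-based shift and a Horspool-style bad-character shift (a common variant of the Boyer-Moore rule). Both shifts are individually safe, so their maximum is safe; matched-prefix information is simply discarded after shifting. This is an illustrative reading of the combination, not necessarily the authors' exact scheme.

```python
def build_prefix(p):
    """KMP failure function: length of the longest proper prefix of
    p[:i+1] that is also its suffix."""
    fail = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        while k and p[i] != p[k]:
            k = fail[k - 1]
        if p[i] == p[k]:
            k += 1
        fail[i] = k
    return fail

def combined_search(text, p):
    """Find all occurrences of p in text, shifting by the larger of the
    KMP prefix shift and the Horspool bad-character shift on mismatch."""
    m, n = len(p), len(text)
    fail = build_prefix(p)
    last = {c: i for i, c in enumerate(p[:-1])}   # rightmost occurrence in p[:-1]
    hits, pos = [], 0
    while pos <= n - m:
        j = 0
        while j < m and text[pos + j] == p[j]:
            j += 1
        if j == m:
            hits.append(pos)
            pos += 1
            continue
        kmp_shift = max(1, j - (fail[j - 1] if j else 0))
        bc = text[pos + m - 1]                    # char under pattern's last cell
        bm_shift = m - 1 - last[bc] if bc in last else m
        pos += max(kmp_shift, bm_shift)
    return hits

print(combined_search("HERE IS A SIMPLE EXAMPLE", "EXAMPLE"))   # [17]
print(combined_search("abababab", "abab"))                      # [0, 2, 4]
```

When the text character aligned with the pattern's last cell does not occur in the pattern at all, the bad-character rule jumps the full pattern length, which is where the combination beats plain KMP.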
Quantum Adiabatic Algorithms and Large Spin Tunnelling
NASA Technical Reports Server (NTRS)
Boulatov, A.; Smelyanskiy, V. N.
2003-01-01
We provide a theoretical study of the quantum adiabatic evolution algorithm with the different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2, and we provide a complete characterization of such paths.
Implementations of back propagation algorithm in ecosystems applications
NASA Astrophysics Data System (ADS)
Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed
2015-05-01
Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that have no algorithmic solution, or whose algorithmic solution is too complex to be found. Because of their abstraction from the biological brain, ANNs developed from concepts that evolved in late-twentieth-century neurophysiological experiments on the cells of the human brain, and they were adopted to overcome the perceived inadequacies of conventional ecological data-analysis methods. ANNs have gained increasing attention in ecosystem applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to the analysis of an ecological system. The networks use the well-known Back Propagation (BP) algorithm with the Delta Rule for adaptation of the system. The BP algorithm uses supervised learning: the algorithm is provided with examples of the inputs and the outputs the network should compute, and the error between the network output and the target is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of artificial neural network (ANN) techniques in ecological system analysis and modeling. The experimental results from this research demonstrate that an artificial neural network system can be trained to act as an expert
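A minimal sketch of back propagation with the delta rule, assuming a toy dataset (XOR) and a single hidden layer; the paper's actual network architecture and ecological data are not specified in the abstract, so everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem with no linear solution.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer; training begins with random weights, as the abstract says.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))
lr = 1.0

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: delta rule, propagating the error layer by layer.
    err = out - y                        # derivative of squared error w.r.t. out
    d_out = err * out * (1 - out)        # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # delta at the hidden layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

final_error = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

The two backward-pass lines are the whole idea of BP: the output-layer error is pushed through the transposed weights to assign blame to hidden units, and each weight moves opposite its error gradient.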
Systematic studies of the Richardson-Lucy deconvolution algorithm applied to VHE gamma data
NASA Astrophysics Data System (ADS)
Heinz, S.; Jung, I.; Stegmann, C.
2012-08-01
The Richardson-Lucy deconvolution algorithm was applied to astronomical images in the very high-energy regime with photon energies above 100 GeV. Through a systematic study with respect to source significance, background level and source morphology we were able to derive optimal deconvolution parameters. The results presented show that deconvolution makes it possible to study structural details well below the angular resolution of the very high-energy γ-ray experiment.
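The Richardson-Lucy iteration itself is compact. The following 1-D sketch uses a synthetic Gaussian PSF and two point sources of our own choosing, not the instrument response or data of the study.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Multiplicative Richardson-Lucy update in 1-D (toy version)."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    for _ in range(iterations):
        # Blur the current estimate with the PSF.
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        # Correlate the ratio with the flipped PSF and update multiplicatively.
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic test: two point sources blurred by a normalized Gaussian PSF.
truth = np.zeros(64)
truth[20] = 1.0
truth[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=200)
```

The update preserves non-negativity and concentrates flux back into compact sources, which is what allows structure below the instrument's angular resolution to be recovered; in practice the iteration count is a regularization parameter that must be tuned, as the systematic study in the paper does.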
Comparative Study of Two Automatic Registration Algorithms
NASA Astrophysics Data System (ADS)
Grant, D.; Bethel, J.; Crawford, M.
2013-10-01
The Iterative Closest Point (ICP) algorithm is prevalent for the automatic fine registration of overlapping pairs of terrestrial laser scanning (TLS) data. This method, along with its vast number of variants, obtains the least-squares parameters necessary to align the TLS data by minimizing some distance metric between the scans. The ICP algorithm uses a "model-data" concept in which the scans receive differential treatment in the registration process depending on whether they are assigned to be the "model" or the "data": for each of the "data" points, corresponding points are sought from the "model". A different concept of "symmetric correspondence" was proposed in the Point-to-Plane (P2P) algorithm, where both scans are treated equally in the registration process. The P2P method establishes correspondences on both scans and minimizes the point-to-plane distances between the scans while simultaneously considering the stochastic properties of both. This paper studies both the ICP and P2P algorithms in terms of the consistency of their registration parameters for pairs of TLS data. The question investigated is: if scan A is registered to scan B, will the parameters be the same as when scan B is registered to scan A? Experiments were conducted with eight pairs of real TLS data, which were registered by the two algorithms in the forward (scan A to scan B) and backward (scan B to scan A) modes, and the results were compared. The P2P algorithm was found to be more consistent than the ICP algorithm. The differences in registration accuracy between the forward and backward modes were negligible with the P2P algorithm (mean difference of 0.03 mm), whereas the ICP algorithm had a mean difference of 4.26 mm. Each scan was also transformed by the forward and backward parameters of the two algorithms and the misclosure computed. The mean misclosure for the P2P algorithm was 0.80 mm, while that for the ICP algorithm was 5.39 mm. The conclusion from this study is that the P2P algorithm provides more consistent registration parameters than the ICP algorithm.
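A minimal point-to-point ICP sketch in 2-D illustrating the asymmetric "model-data" concept, with brute-force nearest neighbours and synthetic data of our own; real TLS registration uses point-to-plane metrics, outlier rejection, and k-d trees.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(data, model, iterations=30):
    """Register 'data' to 'model': only the data points seek correspondences."""
    current = data.copy()
    for _ in range(iterations):
        # For each data point, the closest model point (brute force).
        d2 = ((current[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(axis=1)]
        R, t = best_rigid_transform(current, matched)
        current = current @ R.T + t
    # Recover the accumulated transform with one exact fit.
    return best_rigid_transform(data, current)

# Synthetic scan pair: rotate/translate a point set and recover the motion.
rng = np.random.default_rng(1)
model = rng.uniform(0, 10, size=(100, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
data = (model - model.mean(0)) @ R_true.T + model.mean(0) + np.array([0.1, -0.05])
R_est, t_est = icp(data, model)
```

The forward/backward asymmetry studied in the paper comes from the correspondence step: swapping which scan plays "model" changes which points seek neighbours, so the recovered parameters need not be exact inverses of each other.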
Khaji, Erfan; Karami, Masoumeh; Garkani-Nejad, Zahra
2016-02-21
Predicting the native structure of proteins based on half-sphere exposure and contact numbers has been studied extensively in recent years. Online predictors of these vectors and of the secondary structures of amino-acid sequences have made it possible to design a function for the folding process. By choosing variant structures and directions for each secondary structure, a random conformation can be generated and a potential function assigned to it. Minimizing the potential function with meta-heuristic algorithms is the final step in finding the native structure of a given amino-acid sequence. In this work, the Imperialist Competitive Algorithm was used to accelerate the minimization. Moreover, we applied an adaptive procedure for the revolutionary changes, and we used a more accurate tool for the prediction of secondary structure. The results of computational experiments on a standard benchmark show the superiority of the new algorithm over previous methods with a similar potential function. PMID:26718864
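A highly simplified sketch of the Imperialist Competitive Algorithm on a stand-in quadratic potential, retaining only the assimilation and revolution steps and omitting imperialistic competition between empires; the paper's folding potential and adaptive revolution schedule are not reproduced, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def potential(x):
    # Stand-in for a folding potential; minimum at the origin.
    return np.sum(x ** 2, axis=-1)

dim, n_countries, n_imperialists, n_iters = 5, 40, 4, 400
countries = rng.uniform(-5, 5, size=(n_countries, dim))
initial_best = float(potential(countries).min())

for _ in range(n_iters):
    order = np.argsort(potential(countries))
    imperialists = countries[order[:n_imperialists]]   # elite, kept unchanged
    colonies = countries[order[n_imperialists:]]
    # Assimilation: each colony moves toward a randomly assigned imperialist.
    owners = rng.integers(0, n_imperialists, size=len(colonies))
    beta = 2.0 * rng.random((len(colonies), 1))
    colonies = colonies + beta * (imperialists[owners] - colonies)
    # Revolution: a small fraction of colonies jump to random positions.
    mask = rng.random(len(colonies)) < 0.1
    colonies[mask] = rng.uniform(-5, 5, size=(int(mask.sum()), dim))
    countries = np.vstack([imperialists, colonies])

best = countries[np.argmin(potential(countries))]
best_cost = float(potential(best))
```

Because the imperialists are carried over unchanged each iteration, the best cost found is monotonically non-increasing, while the revolution step maintains the diversity needed to escape local minima of the potential.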
The Search for Effective Algorithms for Recovery from Loss of Separation
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narawicz, Anthony J.
2012-01-01
Our previous work presented an approach for developing high confidence algorithms for recovering aircraft from loss of separation situations. The correctness theorems for the algorithms relied on several key assumptions, namely that state data for all local aircraft is perfectly known, that resolution maneuvers can be achieved instantaneously, and that all aircraft compute resolutions using exactly the same data. Experiments showed that these assumptions were adequate in cases where the aircraft are far away from losing separation, but are insufficient when the aircraft have already lost separation. This paper describes the results of this experimentation and proposes a new criteria specification for loss of separation recovery that preserves the formal safety properties of the previous criteria while overcoming some key limitations. Candidate algorithms that satisfy the new criteria are presented.
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
Electrical impedance tomography (EIT) has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation, which would help track the size and shape of the lungs of patients suffering from diseases such as pulmonary edema and acute respiratory failure (ARF). Dynamic estimation of the lung boundary can also be used to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties that arise in Kalman-type filters from inaccurate selection of model parameters are overcome with the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method, and its performance is compared with the extended Kalman filter (EKF). Results show the superior performance of EM in estimating the lung boundary.
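As a self-contained illustration of the EM principle the authors rely on (though on a much simpler problem than EIT boundary estimation), the following alternates expectation and maximization steps to estimate the means and mixing weights of a two-component 1-D Gaussian mixture with known variance; all data and values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic data: two Gaussian components with unit variance.
data = np.concatenate([rng.normal(-2.0, 1.0, 500),
                       rng.normal(3.0, 1.0, 500)])

mu = np.array([-1.0, 1.0])            # initial guesses for the means
sigma = 1.0                           # known, fixed standard deviation
weight = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    lik = weight * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate the means and mixing weights from responsibilities.
    mu = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)
    weight = resp.mean(axis=0)

mu_sorted = np.sort(mu)
```

The appeal for the EIT setting is the same: parameters that a Kalman-type filter would require a priori are instead re-estimated from the data at each M-step, removing the sensitivity to an inaccurate initial choice.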