Casimir experiments showing saturation effects
Sernelius, Bo E.
2009-10-15
We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.
SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Experiments showing dynamics of materials interfaces
Benjamin, R.F.
1997-02-01
The discipline of materials science and engineering often involves understanding and controlling properties of interfaces. The authors address the challenge of educating students about properties of interfaces, particularly dynamic properties and effects of unstable interfaces. A series of simple, inexpensive, hands-on activities about fluid interfaces provides students with a testbed to develop intuition about interface dynamics. The experiments highlight the essential role of initial interfacial perturbations in determining the dynamic response of the interface. The experiments produce dramatic, unexpected effects when initial perturbations are controlled and inhibited. These activities help students to develop insight about unstable interfaces that can be applied to analogous problems in materials science and engineering. The lessons examine ``Rayleigh-Taylor instability,`` an interfacial instability that occurs when a higher-density fluid is above a lower-density fluid.
Worldwide experience shows horizontal well success
Karlsson, H.; Bitto, R.
1989-03-01
The convergence of technology and experience has made horizontal drilling an important tool in increasing production and solving a variety of completion problems. Since the early 1980s, horizontal drilling has been used to improve production on more than 700 oil and gas wells throughout the world. Approximately 200 horizontal wells were drilled in 1988 alone. Interest in horizontal drilling has been accelerating rapidly as service companies have developed and offered new technology for drilling and producing horizontal wells. Simultaneously, oil companies have developed better methods for evaluating reservoirs for potential horizontal applications, while their production departments have gained experience at completing and producing them. To date, most horizontal wells have been drilled in the United States. A major application is to complete naturally fractured formations, such as the Austin chalk in Texas, the Bakken shale in the Williston basin, the Spraberry in West Texas and the Devonian shale in the Eastern states. In addition, many horizontal wells have been drilled to produce the Niagaran reefs and the irregular Antrim shale reservoirs in Michigan.
Children's Art Show: An Educational Family Experience
ERIC Educational Resources Information Center
Bakerlis, Julienne
2007-01-01
In a time of seemingly rampant budget cuts in the arts in school systems throughout the country, a children's art show reaps many rewards. It can strengthen family-school relationships and community ties and stimulate questions and comments about the benefits of art and its significance in the development of young children. In this photo essay of…
Retrieval Algorithms for the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Thompson, Robert E.; Gordley, Larry L.
2009-01-01
The Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) provided high quality measurements of key middle atmosphere constituents, aerosol characteristics, and temperature for 14 years (1991-2005). This report is an outline of the Level 2 retrieval algorithms, and it also describes the great care that was taken in characterizing the instrument prior to launch and throughout its mission life. It represents an historical record of the techniques used to analyze the data and of the steps that must be considered for the development of a similar experiment for future satellite missions.
Algorithmic Animation in Education--Review of Academic Experience
ERIC Educational Resources Information Center
Esponda-Arguero, Margarita
2008-01-01
This article is a review of the pedagogical experience obtained with systems for algorithmic animation. Algorithms consist of a sequence of operations whose effect on data structures can be visualized using a computer. Students learn algorithms by stepping the animation through the different individual operations, possibly reversing their effect.…
Patient Experience Shows Little Relationship with Hospital Quality Management Strategies
Groene, Oliver; Arah, Onyebuchi A.; Klazinga, Niek S.; Wagner, Cordula; Bartels, Paul D.; Kristensen, Solvejg; Saillour, Florence; Thompson, Andrew; Thompson, Caroline A.; Pfaff, Holger; DerSarkissian, Maral; Sunol, Rosa
2015-01-01
Objectives Patient-reported experience measures are increasingly being used to routinely monitor the quality of care. With the increasing attention on such measures, hospital managers seek ways to systematically improve patient experience across hospital departments, in particular where outcomes are used for public reporting or reimbursement. However, it is currently unclear whether hospitals with more mature quality management systems or stronger focus on patient involvement and patient-centered care strategies perform better on patient-reported experience. We assessed the effect of such strategies on a range of patient-reported experience measures. Materials and Methods We employed a cross-sectional, multi-level study design randomly recruiting hospitals from the Czech Republic, France, Germany, Poland, Portugal, Spain, and Turkey between May 2011 and January 2012. Each hospital contributed patient level data for four conditions/pathways: acute myocardial infarction, stroke, hip fracture and deliveries. The outcome variables in this study were a set of patient-reported experience measures including a generic 6-item measure of patient experience (NORPEQ), a 3-item measure of patient-perceived discharge preparation (Health Care Transition Measure) and two single item measures of perceived involvement in care and hospital recommendation. Predictor variables included three hospital management strategies: maturity of the hospital quality management system, patient involvement in quality management functions and patient-centered care strategies. We used directed acyclic graphs to detail and guide the modeling of the complex relationships between predictor variables and outcome variables, and fitted multivariable linear mixed models with random intercept by hospital, and adjusted for fixed effects at the country level, hospital level and patient level. Results Overall, 74 hospitals and 276 hospital departments contributed data on 6,536 patients to this study (acute
Adaptive experiments with a multivariate Elo-type algorithm.
Doebler, Philipp; Alavash, Mohsen; Giessing, Carsten
2015-06-01
The present article introduces the multivariate Elo-type algorithm (META), which is inspired by the Elo rating system, a tool for the measurement of the performance of chess players. The META is intended for adaptive experiments with correlated traits. The relationship of the META to other existing procedures is explained, and useful variants and modifications are discussed. The META was investigated within three simulation studies. The gain in efficiency of the univariate Elo-type algorithm was compared to standard univariate procedures; the impact of using correlational information in the META was quantified; and the adaptability to learning and fatigue was investigated. Our results show that the META is a powerful tool to efficiently control task performance in a short time period and to assess correlated traits. The R code of the simulations, the implementation of the META in MATLAB, and an example of how to use the META in the context of neuroscience are provided in supplemental materials. PMID:24878597
Experiments on Supervised Learning Algorithms for Text Categorization
NASA Technical Reports Server (NTRS)
Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.
2005-01-01
Modern information society is facing the challenge of handling massive volume of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: Reuter's- 21578 dataset about corporate mergers and data acquisitions (ACQ), WebKB and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
MREIT conductivity imaging based on the local harmonic Bz algorithm: Animal experiments
NASA Astrophysics Data System (ADS)
Jeon, Kiwan; Lee, Chang-Ock; Woo, Eung Je; Kim, Hyung Joong; Seo, Jin Keun
2010-04-01
From numerous numerical and phantom experiments, MREIT conductivity imaging based on harmonic Bz algorithm shows that it could be yet another useful medical imaging modality. However, in animal experiments, the conventional harmonic Bz algorithm gives poor results near boundaries of problematic regions such as bones, lungs, and gas-filled stomach, and the subject boundary where electrodes are not attached. Since the amount of injected current is low enough for the safety for in vivo animal, the measured Bz data is defected by severe noise. In order to handle such problems, we use the recently developed local harmonic Bz algorithm to obtain conductivity images in our ROI(region of interest) without concerning the defected regions. Furthermore we adopt a denoising algorithm that preserves the ramp structure of Bz data, which informs of the location and size of anomaly. Incorporating these efficient techniques, we provide the conductivity imaging of post-mortem and in vivo animal experiments with high spatial resolution.
A new map-making algorithm for CMB polarization experiments
NASA Astrophysics Data System (ADS)
Wallis, Christopher G. R.; Bonaldi, A.; Brown, Michael L.; Battye, Richard A.
2015-10-01
With the temperature power spectrum of the cosmic microwave background (CMB) at least four orders of magnitude larger than the B-mode polarization power spectrum, any instrumental imperfections that couple temperature to polarization must be carefully controlled and/or removed. Here we present two new map-making algorithms that can create polarization maps that are clean of temperature-to-polarization leakage systematics due to differential gain and pointing between a detector pair. Where a half-wave plate is used, we show that the spin-2 systematic due to differential ellipticity can also be removed using our algorithms. The algorithms require no prior knowledge of the imperfections or temperature sky to remove the temperature leakage. Instead, they calculate the systematic and polarization maps in one step directly from the time-ordered data (TOD). The first algorithm is designed to work with scan strategies that have a good range of crossing angles for each map pixel and the second for scan strategies that have a limited range of crossing angles. The first algorithm can also be used to identify if systematic errors that have a particular spin are present in a TOD. We demonstrate the use of both algorithms and the ability to identify systematics with simulations of TOD with realistic scan strategies and instrumental noise.
Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer
NASA Technical Reports Server (NTRS)
Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw
2000-01-01
Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation engaging up to 128 processors confirmed expectations of radical elapsed time reductions comparing to a conventional single processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
Development of clustering algorithms for Compressed Baryonic Matter experiment
NASA Astrophysics Data System (ADS)
Kozlov, G. E.; Ivanov, V. V.; Lebedev, A. A.; Vassiliev, Yu. O.
2015-05-01
A clustering problem for the coordinate detectors in the Compressed Baryonic Matter (CBM) experiment is discussed. Because of the high interaction rate and huge datasets to be dealt with, clustering algorithms are required to be fast and efficient and capable of processing events with high track multiplicity. At present there are two different approaches to the problem. In the first one each fired pad bears information about its charge, while in the second one a pad can or cannot be fired, thus rendering the separation of overlapping clusters a difficult task. To deal with the latter, two different clustering algorithms were developed, integrated into the CBMROOT software environment, and tested with various types of simulated events. Both of them are found to be highly efficient and accurate.
Experience with CANDID: Comparison algorithm for navigating digital image databases
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
Experience with imaging algorithms on multiple core CPUs
NASA Astrophysics Data System (ADS)
Moore, Richard
2011-01-01
With the release of an eight core Xeon processor by Intel and a twelve core Opteron processor by AMD in the spring of 2010, the increase of multiple cores per chip package continues. Multiple core processors are common place in most workstations sold today and are an attractive option for increasing imaging performance. Visual attention models are very compute intensive, requiring many imaging algorithms to be run on images such as large difference of Gaussian filters, segmentation, and region finding. In this paper we present our experience in optimizing the performance of a visual attention model on standard multi-core Windows workstations.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
Pile-Up Discrimination Algorithms for the HOLMES Experiment
NASA Astrophysics Data System (ADS)
Ferri, E.; Alpert, B.; Bennett, D.; Faverzani, M.; Fowler, J.; Giachero, A.; Hays-Wehle, J.; Maino, M.; Nucciotti, A.; Puiu, A.; Ullom, J.
2016-07-01
The HOLMES experiment is a new large-scale experiment for the electron neutrino mass determination by means of the electron capture decay of ^{163}Ho. In such an experiment, random coincidence events are one of the main sources of background which impair the ability to identify the effect of a non-vanishing neutrino mass. In order to resolve these spurious events, detectors characterized by a fast response are needed as well as pile-up recognition algorithms. For that reason, we have developed a code for testing the discrimination efficiency of various algorithms in recognizing pile up events in dependence of the time separation between two pulses. The tests are performed on simulated realistic TES signals and noise. Indeed, the pulse profile is obtained by solving the two coupled differential equations which describe the response of the TES according to the Irwin-Hilton model. To these pulses, a noise waveform which takes into account all the noise sources regularly present in a real TES is added. The amplitude of the generated pulses is distributed as the ^{163}Ho calorimetric spectrum. Furthermore, the rise time of these pulses has been chosen taking into account the constraints given by both the bandwidth of the microwave multiplexing read out with a flux ramp demodulation and the bandwidth of the ADC boards currently available for ROACH2. Among the different rejection techniques evaluated, the Wiener Filter technique, a digital filter to gain time resolution, has shown an excellent pile-up rejection efficiency. The obtained time resolution closely matches the baseline specifications of the HOLMES experiment. We report here a description of our simulation code and a comparison of the different rejection techniques.
Pile-Up Discrimination Algorithms for the HOLMES Experiment
NASA Astrophysics Data System (ADS)
Ferri, E.; Alpert, B.; Bennett, D.; Faverzani, M.; Fowler, J.; Giachero, A.; Hays-Wehle, J.; Maino, M.; Nucciotti, A.; Puiu, A.; Ullom, J.
2016-01-01
The HOLMES experiment is a new large-scale experiment for the electron neutrino mass determination by means of the electron capture decay of ^{163} Ho. In such an experiment, random coincidence events are one of the main sources of background which impair the ability to identify the effect of a non-vanishing neutrino mass. In order to resolve these spurious events, detectors characterized by a fast response are needed as well as pile-up recognition algorithms. For that reason, we have developed a code for testing the discrimination efficiency of various algorithms in recognizing pile up events in dependence of the time separation between two pulses. The tests are performed on simulated realistic TES signals and noise. Indeed, the pulse profile is obtained by solving the two coupled differential equations which describe the response of the TES according to the Irwin-Hilton model. To these pulses, a noise waveform which takes into account all the noise sources regularly present in a real TES is added. The amplitude of the generated pulses is distributed as the ^{163} Ho calorimetric spectrum. Furthermore, the rise time of these pulses has been chosen taking into account the constraints given by both the bandwidth of the microwave multiplexing read out with a flux ramp demodulation and the bandwidth of the ADC boards currently available for ROACH2. Among the different rejection techniques evaluated, the Wiener Filter technique, a digital filter to gain time resolution, has shown an excellent pile-up rejection efficiency. The obtained time resolution closely matches the baseline specifications of the HOLMES experiment. We report here a description of our simulation code and a comparison of the different rejection techniques.
Experiences and evolutions of the ALICE DAQ Detector Algorithms framework
NASA Astrophysics Data System (ADS)
Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy
2012-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are regularly calibrated in order to achieve most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration results can be directly used for detector electronics configuration before physics data taking, at run time for online event monitoring, and offline for data analysis. A framework was designed to collect statistics and compute calibration parameters, and has been used in production since 2008. This paper focuses on the recent features developed to benefit from the multi-cores architecture of CPUs, and to optimize the processing power available for the calibration tasks. It involves some C++ base classes to effectively implement detector specific code, with independent processing of events in parallel threads and aggregation of partial results. The Detector Algorithm (DA) framework provides utility interfaces for handling of input and output (configuration, monitored physics data, results, logging), and self-documentation of the produced executable. New algorithms are created quickly by inheritance of base functionality and implementation of few ad-hoc virtual members, while the framework features are kept expandable thanks to the isolation of the detector calibration code. The DA control system also handles unexpected processes behaviour, logs execution status, and collects performance statistics.
STS-42 closeup view shows SE 81-09 Convection in Zero Gravity experiment
NASA Technical Reports Server (NTRS)
1992-01-01
STS-42 closeup view shows Student Experiment 81-09 (SE 81-09), Convection in Zero Gravity experiment, with radial pattern caused by convection induced by heating an oil and aluminum powder mixture in the weightlessness of space. While the STS-42 crewmembers activated the Shuttle Student Involvement Program (SSIP) experiment on Discovery's, Orbiter Vehicle (OV) 103's, middeck, Scott Thomas, the student who designed the experiment, was able to observe the procedures via downlinked television (TV) in JSC's Mission Control Center (MCC). Thomas, now a physics doctoral student at the University of Texas, came up with the experiment while he participated in the SSIP as a student at Richland High School in Johnstown, Pennsylvia.
Experiments with conjugate gradient algorithms for homotopy curve tracking
NASA Technical Reports Server (NTRS)
Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.
1991-01-01
There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
Eigensystem realization algorithm modal identification experiences with mini-mast
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Schenk, Axel; Noll, Christopher
1992-01-01
This paper summarizes work performed under a collaborative research effort between the National Aeronautics and Space Administration (NASA) and the German Aerospace Research Establishment (DLR, Deutsche Forschungsanstalt fur Luft- und Raumfahrt). The objective is to develop and demonstrate system identification technology for future large space structures. Recent experiences using the Eigensystem Realization Algorithm (ERA), for modal identification of Mini-Mast, are reported. Mini-Mast is a 20 m long deployable space truss used for structural dynamics and active vibration-control research at the Langley Research Center. A comprehensive analysis of 306 frequency response functions (3 excitation forces and 102 displacement responses) was performed. Emphasis is placed on two topics of current research: (1) gaining an improved understanding of ERA performance characteristics (theory vs. practice); and (2) developing reliable techniques to improve identification results for complex experimental data. Because of nonlinearities and numerous local modes, modal identification of Mini-Mast proved to be surprisingly difficult. Methods were available, ERA, for obtaining detailed, high-confidence results.
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Brown, Jonathan L.
2008-01-01
Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…
A field experiment shows that subtle linguistic cues might not affect voter behavior.
Gerber, Alan S; Huber, Gregory A; Biggers, Daniel R; Hendry, David J
2016-06-28
One of the most important recent developments in social psychology is the discovery of minor interventions that have large and enduring effects on behavior. A leading example of this class of results is in the work by Bryan et al. [Bryan CJ, Walton GM, Rogers T, Dweck CS (2011) Proc Natl Acad Sci USA 108(31):12653-12656], which shows that administering a set of survey items worded so that subjects think of themselves as voters (noun treatment) rather than as voting (verb treatment) substantially increases political participation (voter turnout) among subjects. We revisit these experiments by replicating and extending their research design in a large-scale field experiment. In contrast to the 11 to 14% point greater turnout among those exposed to the noun rather than the verb treatment reported in the work by Bryan et al., we find no statistically significant difference in turnout between the noun and verb treatments (the point estimate of the difference is approximately zero). Furthermore, when we benchmark these treatments against a standard get out the vote message, we estimate that both are less effective at increasing turnout than a much shorter basic mobilization message. In our conclusion, we detail how our study differs from the work by Bryan et al. and discuss how our results might be interpreted. PMID:27298362
Experiences with the PGAPack Parallel Genetic Algorithm library
Levine, D.; Hallstrom, P.; Noelle, D.; Walenz, B.
1997-07-01
PGAPack is the first widely distributed parallel genetic algorithm library. Since its release, several thousand copies have been distributed worldwide to interested users. In this paper we discuss the key components of the PGAPack design philosophy and present a number of application examples that use PGAPack.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper it is investigated the effect of tuning of control parameters of the Lozi Chaotic Map employed as a chaotic pseudo-random number generator for the particle swarm optimization algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
Nørskov, Natalja P; Hedemann, Mette S; Theil, Peter K; Fomsgaard, Inge S; Laursen, Bente B; Knudsen, Knud Erik Bach
2013-09-18
The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study the absorption of phenolic acids. The difference between the artery and the vein for all phenolic acids was small, indicating that the release of phenolic acids in the large intestine was not sufficient to create a porto-arterial concentration difference. Although, the porto-arterial difference was small, their concentrations in the plasma and the absorption profiles differed between cinnamic and benzoic acid derivatives. Cinnamic acids derivatives such as ferulic acid and caffeic acid had maximum plasma concentration of 82 ± 20 and 200 ± 7 nM, respectively, and their absorption profiles differed depending on the diet consumed. Benzoic acid derivatives showed low concentration in the plasma (<30 nM) and in the diets. The exception was p-hydroxybenzoic acid, with a plasma concentration (4 ± 0.4 μM), much higher than the other plant phenolic acids, likely because it is an intermediate in the phenolic acid metabolism. It was concluded that plant phenolic acids undergo extensive interconversion in the colon and that their absorption profiles reflected their low bioavailability in the plant matrix. PMID:23971623
NASA Astrophysics Data System (ADS)
Diaz, Alejandro; Álvarez, Isaac; De la Torre, Ángel; García, Luz; Benítez, Ma Carmen; Cortés, Guillermo
2014-05-01
The detection of the arrival time of seismic waves or picking is of great importance in many seismology applications. Traditionally, picking has been carried out by human operators. This process is not systematic and relies completely on the expertise and judgment of the analysts. The limitations of manual picking and the increasing amount of data daily stored in the seismic networks worldwide distributed and in active seismic experiments lead to the development of automatic picking algorithms. Current conventional algorithms work with single signals, such as the "short-term average over long-term average" (STA/LTA) algorithm, autoregressive methods or the recently developed "Adaptive Multiband Picking Algorithm" (AMPA). This work proposes a correlation-based picking algorithm, whose main advantage is the fact of using the information of a set of signals, improving the signal to noise ratio and therefore the picking accuracy. The main advantage of this approach is that the algorithm does not require to set up sophisticated parameters, in contrast to other automatic algorithms. The accuracy of the conventional STA/LTA algorithm, the recently developed AMPA algorithm, an autoregressive method, and a preliminary version of the cross correlation-based picking algorithm were assessed using a huge data set composed by active seismic signals from experiments in Tenerife Island (January 2007, Spain). The experiment consisted of the deployment of a dense seismic network on Tenerife Island (125 seismometers in total) and the shooting of air-guns around the island with the Spanish oceanographic vessel Hespérides (6459 air shots in total). Only 110937 signals (13.74% of the total) had the signal to noise ratio enough to be manually picked. Results showed that the use of the cross correlation-based picking algorithm significantly increases the number of signals that can be considered in the tomography. A new active seismic experiment will cover Sicily and Aeolian Islands (TOMO
Video tracking algorithm of long-term experiment using stand-alone recording system
NASA Astrophysics Data System (ADS)
Chen, Yu-Jen; Li, Yan-Chay; Huang, Ke-Nung; Jen, Sun-Lon; Young, Ming-Shing
2008-08-01
Many medical and behavioral applications require the ability to monitor and quantify the behavior of small animals. In general these animals are confined in small cages. Often these situations involve very large numbers of cages. Modern research facilities commonly monitor simultaneously thousands of animals over long periods of time. However, conventional systems require one personal computer per monitoring platform, which is too complex, expensive, and increases power consumption for large laboratory applications. This paper presents a simplified video tracking algorithm for long-term recording using a stand-alone system. The feature of the presented tracking algorithm revealed that computation speed is very fast data storage requirements are small, and hardware requirements are minimal. The stand-alone system automatically performs tracking and saving acquired data to a secure digital card. The proposed system is designed for video collected at a 640×480 pixel with 16 bit color resolution. The tracking result is updated every 30 frames/s. Only the locomotive data are stored. Therefore, the data storage requirements could be minimized. In addition, detection via the designed algorithm uses the Cb and Cr values of a colored marker affixed to the target to define the tracked position and allows multiobject tracking against complex backgrounds. Preliminary experiment showed that such tracking information stored by the portable and stand-alone system could provide comprehensive information on the animal's activity.
F-18 SRA closeup of nose cap showing Advanced L-Probe Air Data Integration experiment
NASA Technical Reports Server (NTRS)
1997-01-01
This L-shaped probe mounted on the forward fuselage of a modified F-18 Systems Research Aircraft was the focus of an air data collection experiment flown at NASA's Dryden Flight Research Center, Edwards, California. The Advanced L-Probe Air Data Integration (ALADIN) experiment focused on providing pilots with angle-of-attack and angle-of-sideslip information as well as traditional airspeed and altitude data from a single system. For the experiment, the probes--one mounted on either side of the F-18's forward fuselage--were hooked to a series of four transducers, which relayed pressure measurements to an on-board research computer.
Kry, Stephen F.; Alvarez, Paola; Molineu, Andrea; Amador, Carrie; Galvin, James; Followill, David S.
2012-01-01
Purpose To determine the impact of treatment planning algorithm on the accuracy of heterogeneous dose calculations in the Radiological Physics Center (RPC) thorax phantom. Methods and Materials We retrospectively analyzed the results of 304 irradiations of the RPC thorax phantom at 221 different institutions as part of credentialing for RTOG clinical trials; the irradiations were all done using 6-MV beams. Treatment plans included those for intensity-modulated radiation therapy (IMRT) as well as 3D conformal therapy (3D CRT). Heterogeneous plans were developed using Monte Carlo (MC), convolution/superposition (CS) and the anisotropic analytic algorithm (AAA), as well as pencil beam (PB) algorithms. For each plan and delivery, the absolute dose measured in the center of a lung target was compared to the calculated dose, as was the planar dose in 3 orthogonal planes. The difference between measured and calculated dose was examined as a function of planning algorithm as well as use of IMRT. Results PB algorithms overestimated the dose delivered to the center of the target by 4.9% on average. Surprisingly, CS algorithms and AAA also showed a systematic overestimation of the dose to the center of the target, by 3.7% on average. In contrast, the MC algorithm dose calculations agreed with measurement within 0.6% on average. There was no difference observed between IMRT and 3D CRT calculation accuracy. Conclusion Unexpectedly, advanced treatment planning systems (those using CS and AAA algorithms) overestimated the dose that was delivered to the lung target. This issue requires attention in terms of heterogeneity calculations and potentially in terms of clinical practice. PMID:23237006
Lullaby Light Shows: Everyday Musical Experience among Under-Two-Year-Olds
ERIC Educational Resources Information Center
Young, Susan
2008-01-01
This article reports on information gathered from a set of interviews carried out with 88 mothers of under-two-year-olds. The interviews enquired about the everyday musical experiences of their babies and very young children in the home. From the process of analysis, the responses to the interviews were grouped into three main areas: musical…
Real Science: MIT Reality Show Tracks Experiences, Frustrations of Chemistry Lab Students
ERIC Educational Resources Information Center
Cooper, Kenneth J.
2012-01-01
A reality show about a college course--a chemistry class no less? That's what "ChemLab Boot Camp" is. The 14-part series of short videos is being released one episode at a time on the online learning site of the Massachusetts Institute of Technology. The novel show follows a diverse group of 14 freshmen as they struggle to master the laboratory…
Multiagent pursuit-evasion games: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Kim, Hyounjin
Deployment of intelligent agents has been made possible through advances in control software, microprocessors, sensor/actuator technology, communication technology, and artificial intelligence. Intelligent agents now play important roles in many applications where human operation is too dangerous or inefficient. There is little doubt that the world of the future will be filled with intelligent robotic agents employed to autonomously perform tasks, or embedded in systems all around us, extending our capabilities to perceive, reason and act, and replacing human efforts. There are numerous real-world applications in which a single autonomous agent is not suitable and multiple agents are required. However, after years of active research in multi-agent systems, current technology is still far from achieving many of these real-world applications. Here, we consider the problem of deploying a team of unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV) to pursue a second team of UGV evaders while concurrently building a map in an unknown environment. This pursuit-evasion game encompasses many of the challenging issues that arise in operations using intelligent multi-agent systems. We cast the problem in a probabilistic game theoretic framework and consider two computationally feasible pursuit policies: greedy and global-max. We also formulate this probabilistic pursuit-evasion game as a partially observable Markov decision process and employ a policy search algorithm to obtain a good pursuit policy from a restricted class of policies. The estimated value of this policy is guaranteed to be uniformly close to the optimal value in the given policy class under mild conditions. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent yet allows for coordinated team efforts. We then describe our implementation on a fleet of UGVs and UAVs, detailing components such
Sinaiko, Anna D.; Ross-Degnan, Dennis; Soumerai, Stephen B.; Lieu, Tracy; Galbraith, Alison
2014-01-01
In 2022 twenty-five million people are expected to purchase health insurance through exchanges to be established under the Affordable Care Act. Understanding how people seek information and make decisions about the insurance plans that are available to them may improve their ability to select a plan and their satisfaction with it. We conducted a survey in 2010 of enrollees in one plan offered through Massachusetts’s unsubsidized health insurance exchange to analyze how a sample of consumers selected their plans. More than 40 percent found plan information difficult to understand. Approximately one-third of respondents had help selecting plans—most commonly from friends or family members. However, one-fifth of respondents wished they had had help narrowing plan choices; these enrollees were more likely to report negative experiences related to plan understanding, satisfaction with affordability and coverage, and unexpected costs. Some may have been eligible for subsidized plans. Exchanges may need to provide more resources and decision-support tools to improve consumers’ experiences in selecting a health plan. PMID:23297274
Sinaiko, Anna D; Ross-Degnan, Dennis; Soumerai, Stephen B; Lieu, Tracy; Galbraith, Alison
2013-01-01
In 2022 twenty-five million people are expected to purchase health insurance through exchanges to be established under the Affordable Care Act. Understanding how people seek information and make decisions about the insurance plans that are available to them may improve their ability to select a plan and their satisfaction with it. We conducted a survey in 2010 of enrollees in one plan offered through Massachusetts's unsubsidized health insurance exchange to analyze how a sample of consumers selected their plans. More than 40 percent found plan information difficult to understand. Approximately one-third of respondents had help selecting plans-most commonly from friends or family members. However, one-fifth of respondents wished they had had help narrowing plan choices; these enrollees were more likely to report negative experiences related to plan understanding, satisfaction with affordability and coverage, and unexpected costs. Some may have been eligible for subsidized plans. Exchanges may need to provide more resources and decision-support tools to improve consumers' experiences in selecting a health plan. PMID:23297274
Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR
NASA Astrophysics Data System (ADS)
Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz,
2015-12-01
P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility in construction in Darmstadt, Germany. In contrast to the majority of current experiments, PANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks, optimized online data processing applications, using General-Purpose Graphic Processing Units (GPU). Two algorithms for trackfinding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of- the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.
Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent
WALL, DIANA H; BRADFORD, MARK A; ST JOHN, MARK G; TROFYMOW, JOHN A; BEHAN-PELLETIER, VALERIE; BIGNELL, DAVID E; DANGERFIELD, J MARK; PARTON, WILLIAM J; RUSEK, JOSEF; VOIGT, WINFRIED; WOLTERS, VOLKMAR; GARDEL, HOLLEY ZADEH; AYUKE, FRED O; BASHFORD, RICHARD; BELJAKOVA, OLGA I; BOHLEN, PATRICK J; BRAUMAN, ALAIN; FLEMMING, STEPHEN; HENSCHEL, JOH R; JOHNSON, DAN L; JONES, T HEFIN; KOVAROVA, MARCELA; KRANABETTER, J MARTY; KUTNY, LES; LIN, KUO-CHUAN; MARYATI, MOHAMED; MASSE, DOMINIQUE; POKARZHEVSKII, ANDREI; RAHMAN, HOMATHEVI; SABARÁ, MILLOR G; SALAMON, JOERG-ALFRED; SWIFT, MICHAEL J; VARELA, AMANDA; VASCONCELOS, HERALDO L; WHITE, DON; ZOU, XIAOMING
2008-01-01
Climate and litter quality are primary drivers of terrestrial decomposition and, based on evidence from multisite experiments at regional and global scales, are universally factored into global decomposition models. In contrast, soil animals are considered key regulators of decomposition at local scales but their role at larger scales is unresolved. Soil animals are consequently excluded from global models of organic mineralization processes. Incomplete assessment of the roles of soil animals stems from the difficulties of manipulating invertebrate animals experimentally across large geographic gradients. This is compounded by deficient or inconsistent taxonomy. We report a global decomposition experiment to assess the importance of soil animals in C mineralization, in which a common grass litter substrate was exposed to natural decomposition in either control or reduced animal treatments across 30 sites distributed from 43°S to 68°N on six continents. Animals in the mesofaunal size range were recovered from the litter by Tullgren extraction and identified to common specifications, mostly at the ordinal level. The design of the trials enabled faunal contribution to be evaluated against abiotic parameters between sites. Soil animals increase decomposition rates in temperate and wet tropical climates, but have neutral effects where temperature or moisture constrain biological activity. Our findings highlight that faunal influences on decomposition are dependent on prevailing climatic conditions. We conclude that (1) inclusion of soil animals will improve the predictive capabilities of region- or biome-scale decomposition models, (2) soil animal influences on decomposition are important at the regional scale when attempting to predict global change scenarios, and (3) the statistical relationship between decomposition rates and climate, at the global scale, is robust against changes in soil faunal abundance and diversity.
A new FOD recognition algorithm based on multi-source information fusion and experiment analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Xiao, Gang
2011-08-01
Foreign Object Debris (FOD) is a kind of substance, debris or article alien to an aircraft or system, which would potentially cause huge damage when it appears on the airport runway. Due to the airport's complex circumstance, quick and precise detection of FOD target on the runway is one of the important protections for airplane's safety. A multi-sensor system including millimeter-wave radar and Infrared image sensors is introduced and a developed new FOD detection and recognition algorithm based on inherent feature of FOD is proposed in this paper. Firstly, the FOD's location and coordinate can be accurately obtained by millimeter-wave radar, and then according to the coordinate IR camera will take target images and background images. Secondly, in IR image the runway's edges which are straight lines can be extracted by using Hough transformation method. The potential target region, that is, runway region, can be segmented from the whole image. Thirdly, background subtraction is utilized to localize the FOD target in runway region. Finally, in the detailed small images of FOD target, a new characteristic is discussed and used in target classification. The experiment results show that this algorithm can effectively reduce the computational complexity, satisfy the real-time requirement and possess of high detection and recognition probability.
Pest control experiments show benefits of complexity at landscape and local scales.
Chaplin-Kramer, Rebecca; Kremen, Claire
2012-10-01
Farms benefit from pest control services provided by nature, but management of these services requires an understanding of how habitat complexity within and around the farm impacts the relationship between agricultural pests and their enemies. Using cage experiments, this study measures the effect of habitat complexity across scales on pest suppression of the cabbage aphid Brevicoryne brassicae in broccoli. Our results reveal that proportional reduction of pest density increases with complexity both at the landscape scale (measured by natural habitat cover in the 1 km around the farm) and at the local scale (plant diversity). While high local complexity can compensate for low complexity at landscape scales and vice versa, a delay in natural enemy arrival to locally complex sites in simple landscapes may compromise the enemies' ability to provide adequate control. Local complexity in simplified landscapes may only provide adequate top-down pest control in cooler microclimates with relatively low aphid colonization rates. Even so, strong natural enemy function can be overwhelmed by high rates of pest reproduction or colonization from nearby source habitat. PMID:23210310
Specific yield - laboratory experiments showing the effect of time on column drainage
Prill, Robert C.; Johnson, A.I.; Morris, Donald Arthur
1965-01-01
The increasing use of ground water from many major aquifers in the United States has required a more thorough understanding of gravity drainage, or specific yield. This report describes one phase of specific yield research by the U.S. Geological Survey's Hydrologic Laboratory in cooperation with the California Department of Water Resources. An earlier phase of the research concentrated on the final distribution of moisture retained after drainage of saturated columns of porous media. This report presents the phase that concentrated on the distribution of moisture retained in similar columns after drainage for various periods of time. Five columns, about 4 cm in diameter by 170 cm long, were packed with homogenous sand of very fine, medium, and coarse sizes, and one column was packed with alternating layers of coarse and medium sand. The very fine materials were more uniform in size range than were the medium materials. As the saturated columns drained, tensiometers installed throughout the length recorded changes in moisture tension. The relation of tension to moisture content, determined for each of the materials, was then used to convert the tension readings to moisture content. Data were then available on the distribution of retained moisture for different periods of drainage from 1 to 148 hours. Data also are presented on the final distribution of moisture content by weight and volume and on the degree of saturation. The final zone of capillary saturation was approximately 12 cm for coarse sand, 13 cm for medium sand, and 52 cm for very fine sand. The data showed these zones were 92 to 100 percent saturated. Most of the outflow from the columns occurred in the earlier hours of drainage--90 percent in 1 hour for the coarse materials, 50 percent for the medium, and 60 percent for the very fine. Although the largest percentage of the specific yield was reached during the early hours of .drainage, this study amply demonstrates that a very long time would be
Santamaría, A; Merino, A; Viñas, O; Arrizabalaga, P
2009-02-01
Have invisible barriers for women been broken in 2007, or do we still have to break through medicine's glass ceiling? Data from two of the most prestigious university hospitals in Barcelona with 700-800 beds, Hospital Clínic (HC) and Hospital de la Santa Creu i Sant Pau (HSCSP) address this issue. In the HSCSP, 87% of the department chairs are men and 85% of the department unit chiefs are also men. With respect to women, only 5 (13%) are in the top position (department chair) and 4 (15%) are department unit chiefs. Similar statistics are also found at the HC: 87% of the department chairs and 89% of the department unit chiefs are men. Currently, only 6 women (13%) are in the top position and 6 (11%) are department unit chiefs. Analysis of the 2002 data of internal promotions in HC showed that for the first level (senior specialist) sex distribution was similar. Nevertheless, for the second level (consultant) only 25% were women, and for the top level (senior consultant) only 8% were women. These proportions have not changed in 2007 in spite of a 10% increase in leadership positions during this period. Similar proportions were found in HSCSP where 68% of the top promotions were held by men. The data obtained from these two different medical institutions in Barcelona are probably representative of other hospitals in Spain. It would be ethically desirable to have males and females in leadership positions in the medical profession. PMID:19181883
Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid
2016-01-01
Objective(s): This study addresses feature selection for breast cancer diagnosis. The present process uses a wrapper approach using GA-based on feature selection and PS-classifier. The results of experiment show that the proposed model is comparable to the other models on Wisconsin breast cancer datasets. Materials and Methods: To evaluate effectiveness of proposed feature selection method, we employed three different classifiers artificial neural network (ANN) and PS-classifier and genetic algorithm based classifier (GA-classifier) on Wisconsin breast cancer datasets include Wisconsin breast cancer dataset (WBC), Wisconsin diagnosis breast cancer (WDBC), and Wisconsin prognosis breast cancer (WPBC). Results: For WBC dataset, it is observed that feature selection improved the accuracy of all classifiers expect of ANN and the best accuracy with feature selection achieved by PS-classifier. For WDBC and WPBC, results show feature selection improved accuracy of all three classifiers and the best accuracy with feature selection achieved by ANN. Also specificity and sensitivity improved after feature selection. Conclusion: The results show that feature selection can improve accuracy, specificity and sensitivity of classifiers. Result of this study is comparable with the other studies on Wisconsin breast cancer datasets. PMID:27403253
NASA Technical Reports Server (NTRS)
Adams, J. H., Jr.; Andreev, Valeri; Christl, M. J.; Cline, David B.; Crawford, Hank; Judd, E. G.; Pennypacker, Carl; Watts, J. W.
2007-01-01
The JEM-EUSO collaboration intends to study high energy cosmic ray showers using a large downward looking telescope mounted on the Japanese Experiment Module of the International Space Station. The telescope focal plane is instrumented with approx.300k pixels operating as a digital camera, taking snapshots at approx. 1MHz rate. We report an investigation of the trigger and reconstruction efficiency of various algorithms based on time and spatial analysis of the pixel images. Our goal is to develop trigger and reconstruction algorithms that will allow the instrument to detect energies low enough to connect smoothly to ground-based observations.
Data Association and Bullet Tracking Algorithms for the Fight Sight Experiment
Breitfeller, E; Roberts, R
2005-10-07
Previous LLNL investigators developed a bullet and projectile tracking system over a decade ago. Renewed interest in the technology has spawned research that culminated in a live-fire experiment, called Fight Sight, in September 2005. The experiment was more complex than previous LLNL bullet tracking experiments in that it included multiple shooters with simultaneous fire, new sensor-shooter geometries, large amounts of optical clutter, and greatly increased sensor-shooter distances. This presentation describes the data association and tracking algorithms for the Fight Sight experiment. Image processing applied to the imagery yields a sequence of bullet features which are input to a data association routine. The data association routine matches features with existing tracks, or initializes new tracks as needed. A Kalman filter is used to smooth and extrapolate existing tracks. The Kalman filter is also used to back-track bullets to their point of origin, thereby revealing the location of the shooter. It also provides an error ellipse for each shooter, quantifying the uncertainty of shooter location. In addition to describing the data association and tracking algorithms, several examples from the Fight Sight experiment are also presented.
F-18 SRA closeup of nose cap showing L-Probe experiment and standard air data sensors
NASA Technical Reports Server (NTRS)
1997-01-01
This under-the-nose view of a modified F-18 Systems Research Aircraft at NASA's Dryden Flight Research Center, Edwards, California, shows three critical components of the aircraft's air data systems which are mounted on both sides of the forward fuselage. Furthest forward are two L-probes that were the focus of the recent Advanced L-probe Air Data Integration (ALADIN) experiment. Behind the L-probes are angle-of-attack vanes, while below them are the aircraft's standard pitot-static air data probes. The ALADIN experiment focused on providing pilots with angle-of-attack and angle-of-sideslip air data as well as traditional airspeed and altitude information, all from a single system. Once fully developed, the new L-probes have the potential to give pilots more accurate air data information with less hardware.
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and using of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allow the algorithm to avoid the local optimums. Also, the agents can move faster in search space to obtain better exploration during the first stage of the searching process and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to adaptive beamforming problem as a practical issue to improve the weight vectors computed by minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verified the proposed method in both reaching to optimal solutions and robustness. PMID:27399904
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and using of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allow the algorithm to avoid the local optimums. Also, the agents can move faster in search space to obtain better exploration during the first stage of the searching process and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to adaptive beamforming problem as a practical issue to improve the weight vectors computed by minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verified the proposed method in both reaching to optimal solutions and robustness. PMID:27399904
Model-independent nonlinear control algorithm with application to a liquid bridge experiment
Petrov, V.; Haaning, A.; Muehlner, K.A.; Van Hook, S.J.; Swinney, H.L.
1998-07-01
We present a control method for high-dimensional nonlinear dynamical systems that can target remote unstable states without {ital a priori} knowledge of the underlying dynamical equations. The algorithm constructs a high-dimensional look-up table based on the system{close_quote}s responses to a sequence of random perturbations. The method is demonstrated by stabilizing unstable flow of a liquid bridge surface-tension-driven convection experiment that models the float zone refining process. Control of the dynamics is achieved by heating or cooling two thermoelectric Peltier devices placed in the vicinity of the liquid bridge surface. The algorithm routines along with several example programs written in the MATLAB language can be found at ftp://ftp.mathworks.com/pub/contrib/v5/control/nlcontrol. {copyright} {ital 1998} {ital The American Physical Society}
Analysis of soil moisture extraction algorithm using data from aircraft experiments
NASA Technical Reports Server (NTRS)
Burke, H. H. K.; Ho, J. H.
1981-01-01
A soil moisture extraction algorithm is developed using a statistical parameter inversion method. Data sets from two aircraft experiments are utilized for the test. Multifrequency microwave radiometric data surface temperature, and soil moisture information are contained in the data sets. The surface and near surface ( or = 5 cm) soil moisture content can be extracted with accuracy of approximately 5% to 6% for bare fields and fields with grass cover by using L, C, and X band radiometer data. This technique is used for handling large amounts of remote sensing data from space.
Fast parallel tracking algorithm for the muon detector of the CBM experiment at fair
NASA Astrophysics Data System (ADS)
Lebedev, A.; Höhne, C.; Kisel, I.; Ososkov, G.
2010-07-01
Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions. Therefore, the speed of the tracking software is extremely important for data analysis. In this contribution, a fast parallel track reconstruction algorithm which uses available features of modern processors is presented. These features comprise a SIMD instruction set (SSE) and multithreading. The first allows one to pack several data items into one register and to operate on all of them in parallel thus achieving more operations per cycle. The second feature enables the routines to exploit all available CPU cores and hardware threads. This parallel version of the tracking algorithm has been compared to the initial serial scalar version which uses a similar approach for tracking. A speed-up factor of 487 was achieved (from 730 to 1.5 ms/event) for a computer with 2 × Intel Core i7 processors at 2.66 GHz.
Lee, Wonbae; von Hippel, Peter H.; Marcus, Andrew H.
2014-01-01
DNA constructs labeled with cyanine fluorescent dyes are important substrates for single-molecule (sm) studies of the functional activity of protein–DNA complexes. We previously studied the local DNA backbone fluctuations of replication fork and primer–template DNA constructs labeled with Cy3/Cy5 donor–acceptor Förster resonance energy transfer (FRET) chromophore pairs and showed that, contrary to dyes linked ‘externally’ to the bases with flexible tethers, direct ‘internal’ (and rigid) insertion of the chromophores into the sugar-phosphate backbones resulted in DNA constructs that could be used to study intrinsic and protein-induced DNA backbone fluctuations by both smFRET and sm Fluorescent Linear Dichroism (smFLD). Here we show that these rigidly inserted Cy3/Cy5 chromophores also exhibit two additional useful properties, showing both high photo-stability and minimal effects on the local thermodynamic stability of the DNA constructs. The increased photo-stability of the internal labels significantly reduces the proportion of false positive smFRET conversion ‘background’ signals, thereby simplifying interpretations of both smFRET and smFLD experiments, while the decreased effects of the internal probes on local thermodynamic stability also make fluctuations sensed by these probes more representative of the unperturbed DNA structure. We suggest that internal probe labeling may be useful in studies of many DNA–protein interaction systems. PMID:24627223
SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)
Pokharel, S; Rana, S
2014-06-01
Purpose: The purpose of this study is to validate Eclipse Electron Monte Carlo (Algorithm for routine clinical uses. Methods: The PTW inhomogeneity phantom (T40037) with different combination of heterogeneous slabs has been CT-scanned with Philips Brilliance 16 slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, Polystyrene (Tissue), PTFE (Bone) and PMAA. The phantom has 30×30×2.5 cm base plate with 2cm recesses to insert inhomogeneity. The detector systems used in this study are diode, tlds and Gafchromic EBT2 films. The diode and tlds were included in CT scans. The CT sets are transferred to Eclipse treatment planning system. Several plans have been created with Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements have been carried out in Varian TrueBeam machine for energy from 6–22mev. Results: The measured and calculated doses agreed very well for tissue like media. The agreement was reasonably okay for the presence of lung inhomogeneity. The point dose agreement was within 3.5% and Gamma passing rate at 3%/3mm was greater than 93% except for 6Mev(85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is due to eclipse reporting dose to the medium as opposed to the dose to the water as in conventional calculation engines. Conclusion: Care must be taken when using Varian Eclipse EMC algorithm for dose calculation for routine clinical uses. The algorithm dose not report dose to water in which most of the clinical experiences are based on rather it just reports dose to medium directly. In the presence of inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more depending on the location of normalization point or volume. As Radiation oncology as an empirical science, care must be taken before using EMC reported monitor units for clinical uses.
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical scheme, which is an important condition for increasing the effciency of the algorithms developed by numerical procedures with natural parallelism. The paper examines the main objects and operations that let you manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, direct and inverse transformation in these systems. Particular attention is paid to the use of hardware and software of modern computer systems.
Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir
2014-04-01
Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99{sub GITV} = 7.4 Gy, ΔD99{sub PTV} = 10.4 Gy, ΔV90{sub GITV} = 13.7%, ΔV90{sub PTV} = 37.6%, ΔD95{sub PTV} = 9.8 Gy, and ΔD{sub ISO} = 3.4 Gy. GITV = gross internal tumor volume. Conclusions: Local control in patients receiving who were planned to the same nominal dose with PB and CCC algorithms were statistically significantly different. Possible alternative
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
Zimmerman, K; Levitis, D; Addicott, E; Pringle, A
2016-02-01
We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets. PMID:26419337
NASA Astrophysics Data System (ADS)
Li, Liang; Chen, Zhiqiang; Zhao, Ziran; Wu, Dufan
2013-01-01
At present, there are mainly three x-ray imaging modalities for dental clinical diagnosis: radiography, panorama and computed tomography (CT). We develop a new x-ray digital intra-oral tomosynthesis (IDT) system for quasi-three-dimensional dental imaging which can be seen as an intermediate modality between traditional radiography and CT. In addition to normal x-ray tube and digital sensor used in intra-oral radiography, IDT has a specially designed mechanical device to complete the tomosynthesis data acquisition. During the scanning, the measurement geometry is such that the sensor is stationary inside the patient's mouth and the x-ray tube moves along an arc trajectory with respect to the intra-oral sensor. Therefore, the projection geometry can be obtained without any other reference objects, which makes it be easily accepted in clinical applications. We also present a compressed sensing-based iterative reconstruction algorithm for this kind of intra-oral tomosynthesis. Finally, simulation and experiment were both carried out to evaluate this intra-oral imaging modality and algorithm. The results show that IDT has its potentiality to become a new tool for dental clinical diagnosis.
Level 3 trigger algorithm and Hardware Platform for the HADES experiment
NASA Astrophysics Data System (ADS)
Kirschner, Daniel Georg; Agakishiev, Geydar; Liu, Ming; Perez, Tiago; Kühn, Wolfgang; Pechenov, Vladimir; Spataro, Stefano
2009-01-01
A next generation real time trigger method to improve the enrichment of lepton events in the High Acceptance DiElectron Spectrometer (HADES) trigger system has been developed. In addition, a flexible Hardware Platform (Gigabit Ethernet-Multi-Node, GE-MN) was developed to implement and test the trigger method. The trigger method correlates the ring information of the HADES Ring Imaging Cherenkov (RICH) detector with the fired wires (drift cells) of the HADES Mini Drift Chamber (MDC) detector. It is demonstrated that this Level 3 trigger method can enhance the number of events which contain leptons by a factor of up to 50 at efficiencies above 80%. The performance of the correlation method in terms of the events analyzed per second has been studied with the GE-MN prototype in a lab test setup by streaming previously recorded experiment data to the module. This paper is a compilation from Kirschner [Level 3 trigger algorithm and Hardware Platform for the HADES experiment, Ph.D. Thesis, II. Physikalisches Institut der Justus-Liebig-Universität Gießen, urn:nbn:de:hebis:26-opus-50784, October 2007 [1
F-15B on ramp showing closeup of the Supersonic Natural Laminar Flow (SS-NLF) experiment attached ve
NASA Technical Reports Server (NTRS)
1999-01-01
A close up of the Supersonic Natural Laminar Flow (SS-NLF) experiment on the F-15B. The wing shape - designed by the Reno Aeronautical Corp. - had only minimal sweep and a short span. The low sweep angle gave this airfoil better take off and landing characteristics, as well as better subsonic cruise efficiency, than wings with a greater sweep angle. Engineers had reason to believe that improvements in aerodynamic efficiency from supersonic natural laminar flow might actually render a supersonic aircraft more economical to operate than slower, subsonic designs. To gather substantiate data, the SS-NLF experiment used an advanced, non-intrusive collection technique. Rather than instrumentation built into the wing, a high resolution infrared camera mounted on the F-15B fuselage recorded the data, a system with possible applications for future research aircraft.
F-15B in flight showing Supersonic Natural Laminar Flow (SS-NLF) experiment attached vertically to t
NASA Technical Reports Server (NTRS)
1999-01-01
In-flight photo of the F-15B equipped with the Supersonic Natural Laminar Flow (SS-NLF) experiment. During four research flights, laminar flow was achieved over 80 percent of the test wing at speeds approaching Mach 2. This was accomplished as the sole result of the shape of the wing, without the use of suction gloves, such as on the F-16XL. Laminar flow is a condition in which air passes over a wing in smooth layers, rather than being turbulent The greater the area of laminar flow, the lower the amount of friction drag on the wing, thus increasing an aircraft's range and fuel economy. Increasing the area of laminar flow on a wing has been the subject of research by engineers since the late 1940s, but substantial success has proven elusive. The SS-NLF experiment was intended to provide engineers with the data by which to design natural laminar flow wings.
Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector
NASA Astrophysics Data System (ADS)
Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.
2014-12-01
Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, a limited set of double differential carbon fragmentation cross-section is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on the CMOS technology. The performances achieved using this device for hadrontherapy purpose are discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performances and the accuracy on reconstructed observables are evaluated on the basis of simulated and experimental data.
Thermal weapon sights with integrated fire control computers: algorithms and experiences
NASA Astrophysics Data System (ADS)
Rothe, Hendrik; Graswald, Markus; Breiter, Rainer
2008-04-01
The HuntIR long range thermal weapon sight of AIM is deployed in various out of area missions since 2004 as a part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as upgrade with integrated laser Range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40mm GMG requires most accurate FCU-solutions specifically for air burst ammunition (ABM) and is most sensitive to faint effects like levelling or firing up/downhill. This weapon was therefore selected to validate the quality of the FCU hard- and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motions are solved numerically, the two point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typical in the range from 0.1 - 0.5 seconds. RangIR provided outstanding hit accuracy including ABM fuze timing in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sights (TWS) in a variety of light supporting weapon systems.
Tracking at CDF: algorithms and experience from Run I and Run II
Snider, F.D.; /Fermilab
2005-10-01
The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.
Biology, the way it should have been, experiments with a Lamarckian algorithm
Brown, F.M.; Snider, J.
1996-12-31
This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.
An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1999-01-01
The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.
NASA Astrophysics Data System (ADS)
Matthews, James; Wright, Matthew; Bacak, Asan; Silva, Hugo; Priestley, Michael; Martin, Damien; Percival, Carl; Shallcross, Dudley
2016-04-01
Cyclic perfluorocarbons (PFCs) have been used to measure the passage of air in urban and rural settings as they are chemically inert, non-toxic and have low background concentrations. The use of pre-concentrators and chemical ionisation gas chromatography enables concentrations of a few parts per quadrillion (ppq) to be measured in bag samples. Three PFC tracers were used in Manchester, UK in the summer of 2015 to map airflow in the city and ingress into buildings: perfluomethylcyclohexane (PMCH), perfluoro-2-4-dimethylcyclohexane (mPDMCH) and perfluoro-2-methyl-3-ethylpentene (PMEP). A known quantity of each PFC was released for 15 minutes from steel canisters using pre-prepared PFC mixtures. Release points were chosen to be upwind of the central sampling location (Simon Building, University of Manchester) and varied in distance up to 2.2 km. Six releases using one or three tracers in different configurations and under different conditions were undertaken in the summer. Three further experiments were conducted in the Autumn, to more closely investigate the rate of ingress and decay of tracer indoors. In each experiment, 10 litre samples were made over 30 minutes into Tedlar bags, starting at the same time the as PFC release. Samples were taken in 11 locations chosen from 15 identified areas including three in public parks, three outside within the University of Manchester area, seven inside and five outside of the Simon building and two outside a building nearby. For building measurements, receptors were placed inside the buildings on different floors; outside measurements were achieved through a sample line out of the window. Three of the sample positions inside the Simon building were paired with samplers outside to allow indoor-outdoor comparisons. PFC concentrations varied depending on location and height. The highest measured concentrations occurred when the tracer was released at sunrise; up to 330 ppq above background (11 ppq) of PMCH was measured at the 6
NASA Astrophysics Data System (ADS)
Domínguez-Rué, Emma; Mrotzek, Maximilian
2014-01-01
Previous research has shown that using tools from systems science for teaching and learning in the Humanities offers innovative insights that can prove helpful for both students and lecturers. Our contention here is that a method used in systems science, namely the influence matrix, can be a suitable tool to facilitate the understanding of elementary notions in Aesthetics by means of systematizing this process. As we will demonstrate in the upcoming sections, the influence matrix can help us to understand the nature and function of the basic elements that take part in the aesthetic experience and their evolving relevance in the history of Aesthetics. The implementation of these elements to an influence matrix will contribute to a more detailed understanding of (i) the nature of each element, (ii) the interrelation between them and (iii) the influence each element has on all the others.
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Hopman, Jeroen C. W.; Liem, K. Djien; de Roode, Rowland; Verdaasdonk, Rudolf M.; Thijssen, Johan M.
2008-02-01
Continuous wave Near Infrared Spectroscopy is a well known non invasive technique for measuring changes in tissue oxygenation. Absorption changes (ΔO2Hb and ΔHHb) are calculated from the light attenuations using the modified Lambert Beer equation. Generally, the concentration changes are calculated relative to the concentration at a starting point in time (delta time method). It is also possible, under certain assumptions, to calculate the concentrations by subtracting the equations at different wavelengths (delta wavelength method). We derived a new algorithm and will show the possibilities and limitations. In the delta wavelength method, the assumption is that the oxygen independent attenuation term will be eliminated from the formula even if its value changes in time, we verified the results with the classical delta time method using extinction coefficients from different literature sources for the wavelengths 767nm, 850nm and 905nm. The different methods of calculating concentration changes were applied to the data collected from animal experiments. The animals (lambs) were in a stable normoxic condition; stepwise they were made hypoxic and thereafter they returned to normoxic condition. The two algorithms were also applied for measuring two dimensional blood oxygen saturation changes in human skin tissue. The different oxygen saturation levels were induced by alterations in the respiration and by temporary arm clamping. The new delta wavelength method yielded in a steady state measurement the same changes in oxy and deoxy hemoglobin as the classical delta time method. The advantage of the new method is the independence of eventual variation of the oxygen independent attenuations in time.
NASA Astrophysics Data System (ADS)
Cetinić, I.; Perry, M. J.; D'Asaro, E.; Briggs, N.; Poulton, N.; Sieracki, M. E.; Lee, C. M.
2015-04-01
The ratio of two in situ optical measurements - chlorophyll fluorescence (Chl F) and optical particulate backscattering (bbp) - varied with changes in phytoplankton community composition during the North Atlantic Bloom Experiment in the Iceland Basin in 2008. Using ship-based measurements of Chl F, bbp, chlorophyll a (Chl), high-performance liquid chromatography (HPLC) pigments, phytoplankton composition and carbon biomass, we found that oscillations in the ratio varied with changes in plankton community composition; hence we refer to Chl F/bbp as an "optical community index". The index varied by more than a factor of 2, with low values associated with pico- and nanophytoplankton and high values associated with diatom-dominated phytoplankton communities. Observed changes in the optical index were driven by taxa-specific chlorophyll-to-autotrophic carbon ratios and by physiological changes in Chl F associated with the silica limitation. A Lagrangian mixed-layer float and four Seagliders, operating continuously for 2 months, made similar measurements of the optical community index and followed the evolution and later demise of the diatom spring bloom. Temporal changes in optical community index and, by implication, the transition in community composition from diatom to post-diatom bloom communities were not simultaneous over the spatial domain surveyed by the ship, float and gliders. The ratio of simple optical properties measured from autonomous platforms, when carefully validated, provides a unique tool for studying phytoplankton patchiness on extended temporal scales and ecologically relevant spatial scales and should offer new insights into the processes regulating patchiness.
Finnerty, P.; Aguayo, Estanislao; Amman, M.; Avignone, Frank T.; Barabash, Alexander S.; Barton, P. J.; Beene, Jim; Bertrand, F.; Boswell, M.; Brudanin, V.; Busch, Matthew; Chan, Yuen-Dat; Christofferson, Cabot-Ann; Collar, J. I.; Combs, Dustin C.; Cooper, R. J.; Detwiler, Jason A.; Doe, P. J.; Efremenko, Yuri; Egorov, Viatcheslav; Ejiri, H.; Elliott, S. R.; Esterline, James H.; Fast, James E.; Fields, N.; Fraenkle, Florian; Galindo-Uribarri, A.; Gehman, Victor M.; Giovanetti, G. K.; Green, M.; Guiseppe, Vincente; Gusey, K.; Hallin, A. L.; Hazama, R.; Henning, Reyco; Hoppe, Eric W.; Horton, Mark; Howard, Stanley; Howe, M. A.; Johnson, R. A.; Keeter, K.; Kidd, M. F.; Knecht, A.; Kochetov, Oleg; Konovalov, S.; Kouzes, Richard T.; LaFerriere, Brian D.; Leon, Jonathan D.; Leviner, L.; Loach, J. C.; Looker, Q.; Luke, P.; MacMullin, S.; Marino, Michael G.; Martin, R. D.; Merriman, Jason H.; Miller, M. L.; Mizouni, Leila; Nomachi, Masaharu; Orrell, John L.; Overman, Nicole R.; Perumpilly, Gopakumar; Phillips, David; Poon, Alan; Radford, D. C.; Rielage, Keith; Robertson, R. G. H.; Ronquest, M. C.; Schubert, Alexis G.; Shima, T.; Shirchenko, M.; Snavely, Kyle J.; Steele, David; Strain, J.; Timkin, V.; Tornow, Werner; Varner, R. L.; Vetter, Kai; Vorren, Kris R.; Wilkerson, J. F.; Yakushev, E.; Yaver, Harold; Young, A.; Yu, Chang-Hong; Yumatov, Vladimir
2014-03-24
The Majorana Demonstrator will search for the neutrinoless double-beta decay (0*) of the 76Ge isotope with a mixed array of enriched and natural germanium detectors. The observation of this rare decay would indicate the neutrino is its own anti-particle, demonstrate that lepton number is not conserved, and provide information on the absolute mass-scale of the neutrino. The Demonstrator is being assembled at the 4850 foot level of the Sanford Underground Research Facility in Lead, South Dakota. The array will be contained in a lowbackground environment and surrounded by passive and active shielding. The goals for the Demonstrator are: demonstrating a background rate less than 3 counts tonne -1 year-1 in the 4 keV region of interest (ROI) surrounding the 2039 keV 76Ge endpoint energy; establishing the technology required to build a tonne-scale germanium based double-beta decay experiment; testing the recent claim of observation of 0; and performing a direct search for lightWIMPs (3-10 GeV/c2).
NASA Technical Reports Server (NTRS)
Hancock, G. D.; Waite, W. P.
1984-01-01
Two experiments were performed employing swept frequency microwaves for the purpose of investigating the reflectivity from soil volumes containing both discontinuous and continuous changes in subsurface soil moisture content. Discontinuous moisture profiles were artificially created in the laboratory while continuous moisture profiles were induced into the soil of test plots by the environment of an agricultural field. The reflectivity for both the laboratory and field experiments was measured using bi-static reflectometers operated over the frequency ranges of 1.0 to 2.0 GHz and 4.0 to 8.0 GHz. Reflectivity models that considered the discontinuous and continuous moisture profiles within the soil volume were developed and compared with the results of the experiments. This comparison shows good agreement between the smooth surface models and the measurements. In particular the comparison of the smooth surface multi-layer model for continuous moisture profiles and the yield experiment measurements points out the sensitivity of the specular component of the scattered electromagnetic energy to the movement of moisture in the soil.
NASA Astrophysics Data System (ADS)
Meijer, Y. J.; Swart, D. P. J.; Baier, F.; Bhartia, P. K.; Bodeker, G. E.; Casadio, S.; Chance, K.; Del Frate, F.; Erbertseder, T.; Felder, M. D.; Flynn, L. E.; Godin-Beekmann, S.; Hansen, G.; Hasekamp, O. P.; Kaifel, A.; Kelder, H. M.; Kerridge, B. J.; Lambert, J.-C.; Landgraf, J.; Latter, B.; Liu, X.; McDermid, I. S.; Pachepsky, Y.; Rozanov, V.; Siddans, R.; Tellmann, S.; van der A, R. J.; van Oss, R. F.; Weber, M.; Zehner, C.
2006-11-01
An evaluation is made of ozone profiles retrieved from measurements of the nadir-viewing Global Ozone Monitoring Experiment (GOME) instrument. Currently, four different approaches are used to retrieve ozone profile information from GOME measurements, which differ in the use of external information and a priori constraints. In total nine different algorithms will be evaluated exploiting the optimal estimation (Royal Netherlands Meteorological Institute, Rutherford Appleton Laboratory, University of Bremen, National Oceanic and Atmospheric Administration, Smithsonian Astrophysical Observatory), Phillips-Tikhonov regularization (Space Research Organization Netherlands), neural network (Center for Solar Energy and Hydrogen Research, Tor Vergata University), and data assimilation (German Aerospace Center) approaches. Analysis tools are used to interpret data sets that provide averaging kernels. In the interpretation of these data, the focus is on the vertical resolution, the indicative altitude of the retrieved value, and the fraction of a priori information. The evaluation is completed with a comparison of the results to lidar data from the Network for Detection of Stratospheric Change stations in Andoya (Norway), Observatoire Haute Provence (France), Mauna Loa (Hawaii), Lauder (New Zealand), and Dumont d'Urville (Antarctic) for the years 1997-1999. In total, the comparison involves nearly 1000 ozone profiles and allows the analysis of GOME data measured in different global regions and hence observational circumstances. The main conclusion of this paper is that unambiguous information on the ozone profile can at best be retrieved in the altitude range 15-48 km with a vertical resolution of 10 to 15 km, precision of 5-10%, and a bias up to 5% or 20% depending on the success of recalibration of the input spectra. The sensitivity of retrievals to ozone at lower altitudes varies from scheme to scheme and includes significant influence from a priori assumptions.
NASA Astrophysics Data System (ADS)
Didkovsky, L.; Judge, D.; Wieman, S.; Woods, T.; Jones, A.
2012-01-01
The Extreme ultraviolet SpectroPhotometer (ESP) is one of five channels of the Extreme ultraviolet Variability Experiment (EVE) onboard the NASA Solar Dynamics Observatory (SDO). The ESP channel design is based on a highly stable diffraction transmission grating and is an advanced version of the Solar Extreme ultraviolet Monitor (SEM), which has been successfully observing solar irradiance onboard the Solar and Heliospheric Observatory (SOHO) since December 1995. ESP is designed to measure solar Extreme UltraViolet (EUV) irradiance in four first-order bands of the diffraction grating centered around 19 nm, 25 nm, 30 nm, and 36 nm, and in a soft X-ray band from 0.1 to 7.0 nm in the zeroth-order of the grating. Each band’s detector system converts the photo-current into a count rate (frequency). The count rates are integrated over 0.25-second increments and transmitted to the EVE Science and Operations Center for data processing. An algorithm for converting the measured count rates into solar irradiance and the ESP calibration parameters are described. The ESP pre-flight calibration was performed at the Synchrotron Ultraviolet Radiation Facility of the National Institute of Standards and Technology. Calibration parameters were used to calculate absolute solar irradiance from the sounding-rocket flight measurements on 14 April 2008. These irradiances for the ESP bands closely match the irradiance determined for two other EUV channels flown simultaneously: EVE’s Multiple EUV Grating Spectrograph (MEGS) and SOHO’s Charge, Element and Isotope Analysis System/ Solar EUV Monitor (CELIAS/SEM).
Good-Enough Brain Model: Challenges, Algorithms, and Discoveries in Multisubject Experiments.
Papalexakis, Evangelos E; Fyshe, Alona; Sidiropoulos, Nicholas D; Talukdar, Partha Pratim; Mitchell, Tom M; Faloutsos, Christos
2014-12-01
Given a simple noun such as apple, and a question such as "Is it edible?," what processes take place in the human brain? More specifically, given the stimulus, what are the interactions between (groups of) neurons (also known as functional connectivity) and how can we automatically infer those interactions, given measurements of the brain activity? Furthermore, how does this connectivity differ across different human subjects? In this work, we show that this problem, even though originating from the field of neuroscience, can benefit from big data techniques; we present a simple, novel good-enough brain model, or GeBM in short, and a novel algorithm Sparse-SysId, which are able to effectively model the dynamics of the neuron interactions and infer the functional connectivity. Moreover, GeBM is able to simulate basic psychological phenomena such as habituation and priming (whose definition we provide in the main text). We evaluate GeBM by using real brain data. GeBM produces brain activity patterns that are strikingly similar to the real ones, where the inferred functional connectivity is able to provide neuroscientific insights toward a better understanding of the way that neurons interact with each other, as well as detect regularities and outliers in multisubject brain activity measurements. PMID:27442756
Mutual Algorithm-Architecture Analysis for Real - Parallel Systems in Particle Physics Experiments.
NASA Astrophysics Data System (ADS)
Ni, Ping
Data acquisition from particle colliders requires real-time detection of tracks and energy clusters from collision events occurring at intervals of tens of mus. Beginning with the specification of a benchmark track-finding algorithm, parallel implementations have been developed. A revision of the routing scheme for performing reductions such as a tree sum, called the reduced routing distance scheme, has been developed and analyzed. The scheme reduces inter-PE communication time for narrow communication channel systems. A new parallel algorithm, called the interleaved tree sum, for parallel reduction problems has been developed that increases efficiency of processor use. Detailed analysis of this algorithm with different routing schemes is presented. Comparable parallel algorithms are analyzed, also taking into account the architectural parameters that play an important role in this parallel algorithm analysis. Computation and communication times are analyzed to guide the design of a custom system based on a massively parallel processing component. Developing an optimal system requires mutual analysis of algorithm and architecture parameters. It is shown that matching a processor array size to the parallelism of the problem does not always produce the best system design. Based on promising benchmark simulation results, an application specific hardware prototype board, called Dasher, has been built using two Blitzen chips. The processing array is a mesh-connected SIMD system with 256 PEs. Its design is discussed, with details on the software environment.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
NASA Astrophysics Data System (ADS)
Khatri, Rishi
2015-08-01
We present an efficient algorithm for least-squares parameter fitting, optimized for component separation in multifrequency cosmic microwave background (CMB) experiments. We sidestep some of the problems associated with non-linear optimization by taking advantage of the quasi-linear nature of the foreground model. We demonstrate our algorithm, linearized iterative least-squares (LIL), on the publicly available Planck sky model FFP6 simulations and compare our results with those of other algorithms. We work at full Planck resolution and show that degrading the resolution of all channels to that of the lowest frequency channel is not necessary. Finally, we present results for publicly available Planck data. Our algorithm is extremely fast, fitting six parameters to the seven lowest Planck channels at full resolution (50 million pixels) in less than 160 CPU minutes (or a few minutes running in parallel on a few tens of cores). LIL is therefore easily scalable to future experiments, which may have even higher resolution and more frequency channels. We also, naturally, propagate the uncertainties in different parameters due to noise in the maps, as well as the degeneracies between the parameters, to the final errors in the parameters using the Fisher matrix. One indirect application of LIL could be a front-end for Bayesian parameter fitting to find the maximum likelihood to be used as the starting point for Gibbs sampling. We show that for rare components, such as carbon monoxide emission, present in a small fraction of sky, the optimal approach should combine parameter fitting with model selection. LIL may also be useful in other astrophysical applications that satisfy quasi-linearity criteria.
Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.
2008-04-01
We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of, non-collocated but otherwise identical, sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The direction-of-arrivals determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very-short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
K (transverse) jet algorithms in hadron colliders: The D0 experience
V. Daniel Elvira
2002-12-05
D0 has implemented and studied a k{sub {perpendicular}} jet algorithm for the first time in a hadron collider. The authors have submitted two physics results for publication: the subjet multiplicity in quark and gluon jets and the central inclusive jet cross section measurements. A third result, a measurement of thrust distributions in jet events, is underway. A combination of measurements using several types of algorithms and samples taken at different center-of-mass energies is desirable to understand and distinguish with higher accuracy between instrumentation and physics effects.
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered by inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented proposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By freeing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
ERIC Educational Resources Information Center
Beddard, Godfrey S.
2011-01-01
Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
ERIC Educational Resources Information Center
Gehring, John
2004-01-01
For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Wood, S. A.; Morris, M.
1990-01-01
Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva
2016-05-01
Genetic algorithms have been used for matching temperature values generated using thermal mathematical models against actual temperatures measured in thermal testing of spacecrafts and space instruments. Up to now, results for small models have been very encouraging. This work will examine the correlation of a small-medium size model, whose thermal test results were available, by means of genetic algorithms. The thermal mathematical model reviewed herein corresponds to Tribolab, a materials experiment deployed on board the International Space Station and subjected to preflight thermal testing. This paper will also discuss in great detail the influence of both the number of reference temperatures available and the number of thermal parameters included in the correlation, taking into account the presence of heat sources and the maximum range of temperature mismatch. Conclusions and recommendations for the thermal test design will be provided, as well as some indications for future improvements.
Pre-Mrna Introns as a Model for Cryptographic Algorithm:. Theory and Experiments
NASA Astrophysics Data System (ADS)
Regoli, Massimo
2010-01-01
The RNA-Crypto System (shortly RCS) is a symmetric key algorithm to cipher data. The idea for this new algorithm starts from the observation of nature. In particular from the observation of RNA behavior and some of its properties. In particular the RNA sequences have some sections called Introns. Introns, derived from the term "intragenic regions", are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs, that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and the role of Introns in the pre-mRNA is not clear and it is under ponderous researches by Biologists but, in our case, we will use the presence of Introns in the RNA-Crypto System output as a strong method to add chaotic non coding information and an unnecessary behaviour in the access to the secret key to code the messages. In the RNA-Crypto System algorithm the introns are sections of the ciphered message with non-coding information as well as in the precursor mRNA.
NASA Astrophysics Data System (ADS)
Hanlon, C. J.; Small, A.; Bose, S.; Young, G. S.; Verlinde, J.
2013-12-01
In airborne field campaigns, investigators confront complex decision challenges concerning when and where to deploy aircraft to meet scientific objectives within constraints of time and budgeted flight hours. An automated flight decision recommendation system was developed to assist investigators leading the Deep Convective Clouds and Chemistry (DC3) campaign in spring--summer 2012. In making flight decisions, DC3 investigators needed to integrate two distinct, potentially competing objectives: to maximize the total harvest of data collected, and also to maintain an approximate balance of data collected from each of three geographic study regions. Choices needed to satisfy several constraint conditions including, most prominently, a limit on the total number of flight hours, and a bound on the number of calendar days in the field. An automated recommendation system was developed by translating these objectives and bounds into a formal problem of constrained optimization. In this formalization, a key step involved the mathematical representation of investigators' scientific preferences over the set of possible data collection outcomes. Competing objectives were integrated into a single metric by means of a utility function, which served to quantify the value of alternative data portfolios. Flight recommendations were generated to maximize the expected utility of each daily decision, conditioned on that day's forecast. A calibrated forecast probability of flight success in each study region was generated according to a forecasting system trained on numerical weather prediction model output, as well as expected climatological probability of flight success on future days. System performance was evaluated by comparing the data yielded by the actual DC3 campaign, compared with the yield that would have been realized had the algorithmic recommendations been followed. It was found that the algorithmic system would have achieved 19%--59% greater utility than the decisions
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions when the algorithm is convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
NASA Astrophysics Data System (ADS)
Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes
2014-10-01
Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension to each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that a strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10 and 57%, depending on the metrics used to score success, while improving portfolio balance.
Diagnostic ANCA algorithms in daily clinical practice: evidence, experience, and effectiveness.
Avery, T Y; Bons, J; van Paassen, P; Damoiseaux, J
2016-07-01
Detection of antineutrophil cytoplasmic antibodies (ANCA) for ANCA-associated vasculitides (AAV) is based on indirect immunofluorescence (IIF) on ethanol-fixed neutrophils and reactivity toward myeloperoxidase (MPO) and proteinase 3 (PR3). According to the international consensus for ANCA testing, presence of ANCA should at least be screened for by IIF and, if positive, followed by antigen-specific immunoassays. Optimally, all samples are analyzed by both IIF and quantitative antigen-specific immunoassays. Since the establishment of this consensus many new technologies have become available and this has challenged the positioning of IIF in the testing algorithm for AAV. In the current paper, we summarize the novelties in ANCA diagnostics and discuss the possible implications of these developments for the different ANCA algorithms that are currently applied in routine diagnostic laboratories. Possible consequences of replacing ANCA assays by novel methods are illustrated by our data obtained in daily clinical practice. Eventually, it is questioned if there is a need to change the consensus, and if so, whether IIF can be discarded completely, or be used as a confirmation assay instead of a screening assay. Both alternative options require that ANCA requests for AAV can be separated from ANCA requests for gastrointestinal autoimmune diseases. PMID:27252270
A clinical algorithm for the management of abnormal mammograms. A community hospital's experience.
Gist, D; Llorente, J; Mayer, J
1997-01-01
Mammography is an important tool in the early detection of breast cancer, but its use has been criticized for stimulating the performance of unnecessary breast biopsies. We retrospectively reviewed the results of breast biopsies preceded by abnormal mammograms at a community hospital for three 5-month periods--baseline, postintervention, and follow-up--to determine the effectiveness of algorithm-based care for patients with an abnormal mammogram. Cases in which there was a definite or implied recommendation for biopsy by a radiologist revealed a baseline positive predictive value of 4% (2/45), a postintervention positive predictive value of 21% (9/42), and a follow-up phase positive predictive value of 18% (5/28). A Fisher's exact test of the preintervention and postintervention positive predictive values after an abnormal mammogram with a "recommendation for biopsy" was significant (n = 87, P = .023). A Kruskal-Wallis analysis of variance to determine if there had been an increase in the mean lesion size of breast cancers detected over the 3 study periods was not significant. The results of this study suggest that developing a clinical algorithm under the leadership of an opinion leader combined with continuing medical education efforts may be efficacious in reducing the incidence of unnecessary surgical procedures. PMID:9074335
Boggs, P.; Tolle, J.; Kearsley, A.
1994-12-31
We have developed a large scale sequential quadratic programming (SQP) code based on an interior-point method for solving general (convex or nonconvex) quadratic programs (QP). We often halt this QP solver prematurely by employing a trust-region strategy. This procedure typically reduces the overall cost of the code. In this talk we briefly review the algorithm and some of its theoretical justification and then discuss recent enhancements including automatic procedures for both increasing and decreasing the parameter in the merit function, a regularization procedure for dealing with linearly dependent active constraint gradients, and a method for modifying the linearized equality constraints. Some numerical results on a significant set of {open_quotes}real-world{close_quotes} problems will be presented.
NASA Astrophysics Data System (ADS)
Peeling, S. M.
1985-12-01
It is demonstrated that the ZIP algorithm is capable of perfect alignment of annotated and unannotated speech files in the majority of cases, even when the files are from different speakers. It can therefore form the basis of an automatic annotation system. It is unlikely that ZIP can completely remove the need for human inspection. However, in cases where misalignment occurs it frequently only affects a small portion of the two files so that a minimal amount of human correction is required. Experiments suggest that a reduced representation of two channels is adequate. For the particular 2 channel representation used, a beamwidth of 200 and deletion penalties of 15 are suitable parameters. By calculating the sum of the differences between the minimum scores in consecutive windows, and averaging this sum over the whole file it is possible to automatically determine the quality of the alignment.
Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P
2015-01-01
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591
Fernandes, Edite M. G. P.
2015-01-01
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as ‘erf’, is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591
NASA Astrophysics Data System (ADS)
schipper, peter; stuyt, lodewijk; straat, van der, andre; schans, van der, martin
2014-05-01
processes in the soil have been modelled with simulation model SWAP. The experiment started in 2010 and is ongoing. Data, collected so far show that the plots with controlled drainage (all compared with plots equipped with conventional drainage) conserve more rain water (higher groundwater tables in early spring), lower discharges under average weather conditions and storm events, reduce N-loads and saline seepage to surface waters, enhance denitrification, show a different 'first flush' effect and show similar crop yields. The results of the experiments will contribute to a better understanding of the impact of controlled drainage on complex hydrological en geochemical processes in agricultural clay soils, the interaction between ground- en surface water and its effects on drain water quantity, quality and crop yield.
On the Juno radio science experiment: models, algorithms and sensitivity analysis
NASA Astrophysics Data System (ADS)
Tommei, G.; Dimare, L.; Serra, D.; Milani, A.
2015-01-01
Juno is a NASA mission launched in 2011 with the goal of studying Jupiter. The probe will arrive to the planet in 2016 and will be placed for one year in a polar high-eccentric orbit to study the composition of the planet, the gravity and the magnetic field. The Italian Space Agency (ASI) provided the radio science instrument KaT (Ka-Band Translator) used for the gravity experiment, which has the goal of studying the Jupiter's deep structure by mapping the planet's gravity: such instrument takes advantage of synergies with a similar tool in development for BepiColombo, the ESA cornerstone mission to Mercury. The Celestial Mechanics Group of the University of Pisa, being part of the Juno Italian team, is developing an orbit determination and parameters estimation software for processing the real data independently from NASA software ODP. This paper has a twofold goal: first, to tell about the development of this software highlighting the models used, secondly, to perform a sensitivity analysis on the parameters of interest to the mission.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Algorithmic Self-Assembly of DNA: Theoretical Motivations and 2D Assembly Experiments.
Winfree, E
2000-01-01
Abstract Biology makes things far smaller and more complex than anything produced by human engineering. The biotechnology revolution has for the first time given us the tools necessary to consider engineering on the molecular level. Research in DNA computation, launched by Len Adleman, has opened the door for experimental study of programmable biochemical reactions. Here we focus on a single biochemical mechanism, the self-assembly of DNA structures, that is theoretically sufficient for Turing-universal computation. The theory combines Hao Wang's purely mathematical Tiling Problem with the branched DNA constructions of Ned Seeman. In the context of mathematical logic, Wang showed how jigsaw-shaped tiles can be designed to simulate the operation of any Turing Machine. For a biochemical implementation, we will need molecular Wang tiles. DNA molecular structures and intermolecular interactions are particularly amenable to design and are sufficient for the creation of complex molecular objects. The structure of individual molecules can be designed by maximizing desired and minimizing undesired Watson-Crick complementarity. Intermolecular interactions are programmed by the design of sticky ends that determine which molecules associate, and how. The theory has been demonstrated experimentally using a system of synthetic DNA double-crossover molecules that self-assemble into two-dimensional crystals that have been visualized by atomic force microscopy. This experimental system provides an excellent platform for exploring the relationship between computation and molecular self-assembly, and thus represents a first step toward the ability to program molecular reactions and molecular structures. PMID:22607433
Comparative study of damage identification algorithms applied to a bridge: I. Experiment
NASA Astrophysics Data System (ADS)
Farrar, Charles R.; Jauregui, David A.
1998-10-01
Over the past 30 years detecting damage in a structure from changes in global dynamic parameters has received considerable attention from the civil, aerospace and mechanical engineering communities. The basis for this approach to damage detection is that changes in the structure's physical properties (i.e., boundary conditions, stiffness, mass and/or damping) will, in turn, alter the dynamic characteristics (i.e., resonant frequencies, modal damping and mode shapes) of the structure. Changes in properties such as the flexibility or stiffness matrices derived from measured modal properties and changes in mode shape curvature have shown promise for locating structural damage. However, to date there has not been a study reported in the technical literature that directly compares these various methods. The experimental results reported in this paper and the results of a numerical study reported in an accompanying paper attempt to fill this void in the study of damage detection methods. Five methods for damage assessment that have been reported in the technical literature are summarized and compared using experimental modal data from an undamaged and damaged bridge. For the most severe damage case investigated, all methods can accurately locate the damage. The methods show varying levels of success when applied to less severe damage cases. This paper concludes by summarizing some areas of the damage identification process that require further study.
Parallelized Dilate Algorithm for Remote Sensing Image
Zhang, Suli; Hu, Haoran; Pan, Xin
2014-01-01
As an important algorithm, dilate algorithm can give us more connective view of a remote sensing image which has broken lines or objects. However, with the technological progress of satellite sensor, the resolution of remote sensing image has been increasing and its data quantities become very large. This would lead to the decrease of algorithm running speed or cannot obtain a result in limited memory or time. To solve this problem, our research proposed a parallelized dilate algorithm for remote sensing Image based on MPI and MP. Experiments show that our method runs faster than traditional single-process algorithm. PMID:24955392
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes SAR imaging algorithm with largest coherence based on the existing SAR imaging algorithm. The basic idea of SAR imaging algorithm in imaging processing is that output signal can have maximum signal-to-noise ratio (SNR) by using the optimal imaging parameters. Traditional imaging algorithm can acquire the best focusing effect, but would bring the decoherence phenomenon in subsequent interference process. Algorithm proposed in this paper is that SAR echo adopts consistent imaging parameters in focusing processing. Although the SNR of the output signal is reduced slightly, their coherence is ensured greatly, and finally the interferogram with high quality is obtained. In this paper, two scenes of Envisat ASAR data in Zhangbei are employed to conduct experiment for this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
Yoo, Changkyoo; Kim, Min Han
2009-06-01
This paper presents industrial experience of process identification, monitoring, and control in a full-scale wastewater treatment plant. The objectives of this study were (1) to apply and compare different process-identification methods of proportional-integral-derivative (PID) autotuning for stable dissolved oxygen (DO) control, (2) to implement a process monitoring method that estimates the respiration rate simultaneously during the process-identification step, and (3) to propose a simple set-point decision algorithm for determining the appropriate set point of the DO controller for optimal operation of the aeration basin. The proposed method was evaluated in the industrial wastewater treatment facility of an iron- and steel-making plant. Among the process-identification methods, the control signal of the controller's set-point change was best for identifying low-frequency information and enhancing the robustness to low-frequency disturbances. Combined automatic control and set-point decision method reduced the total electricity consumption by 5% and the electricity cost by 15% compared to the fixed gain PID controller, when considering only the surface aerators. Moreover, as a result of improved control performance, the fluctuation of effluent quality decreased and overall effluent water quality was better. PMID:19428173
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
NASA Astrophysics Data System (ADS)
Arpinar, V. E.; Hamamura, M. J.; Degirmenci, E.; Muftuler, L. T.
2012-07-01
Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object and the resulting magnetic flux density is measured using magnetic resonance imaging (MRI) and the conductivity distribution is reconstructed using these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current and tested the performance of two MREIT reconstruction algorithms. These reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization and the harmonic BZ proposed by Oh et al (2003 Magn. Reason. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that 200 µA total injected current into a cylindrical phantom generates only 14.7 µA current in imaging slice. Similarly, 5 mA total injected current results in 367 µA in imaging slice. Total acquisition time
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
NASA Astrophysics Data System (ADS)
Garcia-Yeguas, A.; Granados, M.; Garcia, L.; Benitez, C.; De la Torre, A.; Alvarez, I.; Diaz, A.; Ibañez, J.
2013-12-01
The detection of the arrival time of seismic waves or picking is of great importance in many seismology applications. Traditionally, picking has been carried out by human operators. This process is not systematic and relies completely on the expertise and judgment of the analysts. The limitations of manual picking and the increasing amount of data daily stored in the seismic networks worldwide distributed led to the development of automatic picking algorithms. The accuracy of conventional 'short-term average over long-term average' (STA/LTA) algorithm, the recently developed 'Adaptive Multiband Picking Algorithm' (AMPA) and the proposed cross correlation-based picking algorithm have been assessed using a huge data set composed by active seismic signals from experiments in Tenerife Island (Canary Islands, Spain). The experiment consisted of the deployment of a dense seismic network on Tenerife Island (125 seismometers in total) and the shooting of air-guns around the island with the Spanish Oceanographic Vessel Hespérides (6459 air shots in total). Thus, more than 800.000 signals were recorded and subsequently manually picked. Since the sources and receivers locations are known and considering that the ship travelled a small distance between two consecutive shots, a picking algorithm based on cross-correlation has been proposed. The main advantage of this approach is that the algorithm does not require to set up sophisticated parameters, in contrast to other automatic algorithms. This work was supported in part by the CEI BioTic Granada project (COD55), the Spanish mineco project APASVO (TEC2012-31551), the Spanish micinn project EPHESTOS (CGL2011-29499-C02-01) and the EU project MED-SUV.
ERIC Educational Resources Information Center
Petalas, Michael A.; Hastings, Richard P.; Nash, Susie; Dowey, Alan; Reilly, Deirdre
2009-01-01
Semi-structured interviews were used to explore the perceptions and experiences of eight typically developing siblings in middle childhood who had a brother with autism spectrum disorder (ASD). The interviews were analysed using interpretative phenomenological analysis (IPA). The analysis yielded five main themes: (i) siblings' perceptions of the…
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hotspot electronic business, and a NP hard problem. A Hybrid Artificial Fish Swarm Algorithm(HAFSA), which is combined with First Suite Heuristic Algorithm (FSHA) and Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after probing it base on the theories of AFSA. Experiment results show that the HAFSA is a rapidly and efficient algorithm for The problem of winner determining. Compared with Ant colony Optimization Algorithm, it has a good performance with broad and prosperous application.
Honored Teacher Shows Commitment.
ERIC Educational Resources Information Center
Ratte, Kathy
1987-01-01
Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)
NASA Astrophysics Data System (ADS)
Bobashev, S. V.; Mende, N. P.; Popov, P. A.; Sakharov, V. A.; Berdnikov, V. A.; Viktorov, V. A.; Oseeva, S. I.; Sadchikov, G. D.
2009-04-01
In part 1 of this paper, an algorithm for numerically solving the inverse problem of motion of a solid through the atmosphere is described that constitutes the basis for identifying the aerodynamic characteristics of an object from trajectory data and the respective identification procedure is presented. In part 2, methods evaluating the significance of desired parameters and adequacy of a mathematical model of motion, approaches to metrological certification of experimental equipment, and results of testing the algorithm are discussed.
NASA Astrophysics Data System (ADS)
Gabrovšek, F.; Grašič, B.; Božnar, M. Z.; Mlakar, P.; Udén, M.; Davies, E.
2013-10-01
The paper presents an experiment demonstrating a novel and successful application of Delay- and Disruption-Tolerant Networking (DTN) technology for automatic data transfer in a karst cave Early Warning and Measuring System. The experiment took place inside the Postojna Cave in Slovenia, which is open to tourists. Several automatic meteorological measuring stations are set up inside the cave, as an adjunct to the surveillance infrastructure; the regular data transfer provided by the DTN technology allows the surveillance system to take on the role of an Early Warning System (EWS). One of the stations is set up alongside the railway tracks, which allows the tourist to travel inside the cave by train. The experiment was carried out by placing a DTN "data mule" (a DTN-enabled computer with WiFi connection) on the train and by upgrading the meteorological station with a DTN-enabled WiFi transmission system. When the data mule is in the wireless drive-by mode, it collects measurement data from the station over a period of several seconds as the train passes the stationary equipment, and delivers data at the final train station by the cave entrance. This paper describes an overview of the experimental equipment and organisation allowing the use of a DTN system for data collection and an EWS inside karst caves where there is a regular traffic of tourists and researchers.
NASA Astrophysics Data System (ADS)
Gabrovšek, F.; Grašič, B.; Božnar, M. Z.; Mlakar, P.; Udén, M.; Davies, E.
2014-02-01
The paper presents an experiment demonstrating a novel and successful application of delay- and disruption-tolerant networking (DTN) technology for automatic data transfer in a karst cave early warning and measuring system. The experiment took place inside the Postojna Cave in Slovenia, which is open to tourists. Several automatic meteorological measuring stations are set up inside the cave, as an adjunct to the surveillance infrastructure; the regular data transfer provided by the DTN technology allows the surveillance system to take on the role of an early warning system (EWS). One of the stations is set up alongside the railway tracks, which allows the tourist to travel inside the cave by train. The experiment was carried out by placing a DTN "data mule" (a DTN-enabled computer with WiFi connection) on the train and by upgrading the meteorological station with a DTN-enabled WiFi transmission system. When the data mule is in the wireless drive-by mode, it collects measurement data from the station over a period of several seconds as the train without stopping passes the stationary equipment, and delivers data at the final train station by the cave entrance. This paper describes an overview of the experimental equipment and organization allowing the use of a DTN system for data collection and an EWS inside karst caves where there is regular traffic of tourists and researchers.
Nagata, Kosei; Yamamoto, Shinichi; Miyoshi, Kota; Sato, Masaki; Arino, Yusuke; Mikami, Yoji
2016-08-01
Eosinophilic granulomatosis with polyangiitis (EGPA, Churg-Strauss syndrome) is a rare systemic vasculitis and is difficult to diagnose. EGPA has a number of symptoms including peripheral dysesthesia caused by mononeuropathy multiplex, which is similar to radiculopathy due to lumbar disc hernia or lumbar spinal stenosis. Therefore, EGPA patients with mononeuropathy multiplex often visit orthopedic clinics, but orthopedic doctors and spine neurosurgeons have limited experience in diagnosing EGPA because of its rarity. We report a consecutive series of patients who were initially diagnosed as having lumbar disc hernia or lumbar spinal stenosis by at least 2 medical institutions from March 2006 to April 2013 but whose final diagnosis was EGPA. All patients had past histories of asthma or eosinophilic pneumonia, and four out of five had peripheral edema. Laboratory data showed abnormally increased eosinophil counts, and nerve conduction studies of all patients revealed axonal damage patterns. All patients recovered from paralysis to a functional level after high-dose steroid treatment. We shortened the duration of diagnosis from 49 days to one day by adopting a diagnostic algorithm after experiencing the first case. PMID:27549670
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1989-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.
Miller, L.F.
1980-11-01
A brief description of the Oak Ridge Reactor Pool Side Facility (ORR-PSF) and of the associated control system is given. The ORR-PSF capsule temperatures are controlled by a digital computer which regulates the percent power delivered to electrical heaters. The total electrical power which can be input to a particular heater is determined by the setting of an associated variac. This report concentrates on the description of the ORR-PSF irradiation experiment computer control algorithm. The algorithm is an implementation of a discrete-time, state variable, optimal control approach. The Riccati equation is solved for a discretized system model to determine the control law. Experiments performed to obtain system model parameters are described. Results of performance evaluation experiments are also presented. The control algorithm maintains both capsule temperatures within a 288/sup 0/C +-10/sup 0/C band as required. The pressure vessel capsule temperatures are effectively maintained within a 288/sup 0/C +-5/sup 0/C band.
NASA Astrophysics Data System (ADS)
Finnerty, P.; Aguayo, E.; Amman, M.; Avignone, F. T., Iii; Barabash, A. S.; Barton, P. J.; Beene, J. R.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Chan, Y.-D.; Christofferson, C. D.; Collar, J. I.; Combs, D. C.; Cooper, R. J.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Fields, N.; Fraenkle, F. M.; Galindo-Uribarri, A.; Gehman, V. M.; Giovanetti, G. K.; Green, M. P.; Guiseppe, V. E.; Gusey, K.; Hallin, A. L.; Hazama, R.; Henning, R.; Hoppe, E. W.; Horton, M.; Howard, S.; Howe, M. A.; Johnson, R. A.; Keeter, K. J.; Kidd, M. F.; Knecht, A.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; Luke, P. N.; MacMullin, S.; Marino, M. G.; Martin, R. D.; Merriman, J. H.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; Overman, N. R.; Perumpilly, G.; Phillips, D. G., Ii; Poon, A. W. P.; Radford, D. C.; Rielage, K.; Robertson, R. G. H.; Ronquest, M. C.; Schubert, A. G.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Steele, D.; Strain, J.; Timkin, V.; Tornow, W.; Varner, R. L.; Vetter, K.; Vorren, K.; Wilkerson, J. F.; Yakushev, E.; Yaver, H.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Majorana Collaboration
2014-03-01
The Majorana Demonstrator will search for the neutrinoless double-beta decay (0vββ) of the 76Ge isotope with a mixed array of enriched and natural germanium detectors. The observation of this rare decay would indicate the neutrino is its own anti-particle, demonstrate that lepton number is not conserved, and provide information on the absolute mass-scale of the neutrino. The Demonstrator is being assembled at the 4850 foot level of the Sanford Underground Research Facility in Lead, South Dakota. The array will be contained in a low-background environment and surrounded by passive and active shielding. The goals for the Demonstrator are: demonstrating a background rate less than 3 t-1 y-1 in the 4 keV region of interest (ROI) surrounding the 2039 keV 76Ge endpoint energy; establishing the technology required to build a tonne-scale germanium based double-beta decay experiment; testing the recent claim of observation of 0vββ [1]; and performing a direct search for light WIMPs (3-10 GeV/c2).
Kosten, Therese A; Zhang, Xiang Yang; Kehoe, Priscilla
2004-08-18
Previously, we demonstrated that the early life stress of neonatal isolation enhances extracellular dopamine (DA) levels in ventral striatum in response to psychostimulants in infant rats. Yet, neonatal isolation does not alter baseline DA levels. DA levels are affected by serotonin (5-HT) and striatal levels of this transmitter are also enhanced by cocaine. Other early life stresses are reported to alter various 5-HT neural systems. Thus, the purpose of this study is to test whether neonatal isolation alters ventral striatal 5-HT levels at baseline or in response to cocaine. Litters were subjected to neonatal isolation (1-h individual isolation/day on postnatal days 2-9) or to non-handled conditions and pups assigned to one of three cocaine doses (0, 2.5, or 5.0 mg/kg) groups. On postnatal day 10, probes were implanted in the ventral striatum. Dialysate samples obtained over a 60-min baseline period and for 120 min post cocaine injections were assessed for levels of 5-HT and its metabolite, 5-HIAA. ISO decreased ventral striatal 5-HT levels at baseline and after cocaine administration but did not alter 5-HIAA levels. These data add to the literature on the immediate effects of early life stress on 5-HT systems by showing alterations in the ventral striatal system. Because serotonergic effects in this neural area are associated with reward and with emotion and affect regulation, the results of this study suggest that early life stress may be a risk factor for addiction and other psychiatric disorders. PMID:15283991
Bowman, L C; Williams, R; Sanders, M; Ringwald-Smith, K; Baker, D; Gajjar, A
1998-01-01
The Metabolic and Infusion Support Service (MISS) at St. Jude Children's Research Hospital was established in 1988 to improve the quality of nutritional support given to children undergoing therapy for cancer. This multidisciplinary group, representing each of the clinical services within the hospital, provides a range of services to all patients requiring full enteral or parenteral nutritional support. In 1991, the MISS developed an algorithm for nutritional support which emphasized a demand for a compelling rationale for choosing parenteral over enteral support in patients with functional gastrointestinal tracts. Compliance with the algorithm was monitored annually for 3 years, with full compliance defined as meeting all criteria for initiating support and selection of an appropriate type of support. Compliance rates were 93% in 1992, 95% in 1993 and 100% in 1994. The algorithm was revised in 1994 to include criteria for offering oral supplementation to patients whose body weight was at least 90% of their ideal weight and whose protein stores were considered adequate. Full support was begun if no weight gain occurred. Patients likely to tolerate and absorb food from the gastrointestinal tract were classified into groups defined by the absence of intractable vomiting, severe diarrhea, graft-vs.-host disease affecting the gut, radiation enteritis, strictures, ileus, mucositis and treatment with allogeneic bone marrow transplant. Overall, the adoption of the algorithm has increased the frequency of enteral nutritional support, particularly via gastrostomies, by at least 3-fold. Our current emphasis is to define the time points in therapy at which nutritional intervention is most warranted. PMID:9876485
A Hybrid Monkey Search Algorithm for Clustering Analysis
Chen, Xin; Zhou, Yongquan; Luo, Qifang
2014-01-01
Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods. However, it highly depends on the initial solution and is easy to fall into local optimum solution. In view of the disadvantages of the k-means method, this paper proposed a hybrid monkey algorithm based on search operator of artificial bee colony algorithm for clustering analysis and experiment on synthetic and real life datasets to show that the algorithm has a good performance than that of the basic monkey algorithm for clustering analysis. PMID:24772039
An improved SIFT algorithm based on KFDA in image registration
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Lijuan; Huo, Jinfeng
2016-03-01
As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm uses KFDA to SIFT descriptors for feature extraction matrix, and uses the new descriptors to conduct the feature matching, finally chooses RANSAC to deal with the matches for further purification. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and tiny pose with higher matching accuracy.
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
Satisfiability (SAT) problem is an NP-complete problem. Based on the analysis about it, SAT problem is translated equally into an optimization problem on the minimum of objective function. A hybrid differential evolution algorithm is proposed to solve the Satisfiability problem. It makes full use of strong local search capacity of hill-climbing algorithm and strong global search capability of differential evolution algorithm, which makes up their disadvantages, improves the efficiency of algorithm and avoids the stagnation phenomenon. The experiment results show that the hybrid algorithm is efficient in solving SAT problem.
Wrong, Terence; Baumgart, Erica
2013-01-01
The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336
NASA Astrophysics Data System (ADS)
Que, Dashun; Li, Gang; Yue, Peng
2007-12-01
An adaptive optimization watermarking algorithm based on Genetic Algorithm (GA) and discrete wavelet transform (DWT) is proposed in this paper. The core of this algorithm is the fitness function optimization model for digital watermarking based on GA. The embedding intensity for digital watermarking can be modified adaptively, and the algorithm can effectively ensure the imperceptibility of watermarking while the robustness is ensured. The optimization model research may provide a new idea for anti-coalition attacks of digital watermarking algorithm. The paper has fulfilled many experiments, including the embedding and extracting experiments of watermarking, the influence experiments by the weighting factor, the experiments of embedding same watermarking to the different cover image, the experiments of embedding different watermarking to the same cover image, the comparative analysis experiments between this optimization algorithm and human visual system (HVS) algorithm and etc. The simulation results and the further analysis show the effectiveness and advantage of the new algorithm, which also has versatility and expandability. And meanwhile it has better ability of anti-coalition attacks. Moreover, the robustness and security of watermarking algorithm are improved by scrambling transformation and chaotic encryption while preprocessing the watermarking.
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations provide that the discrete equation system can be solved in an efficient way. Numerical experiments show a convergence rate of order Omicron(1) for the multilevel algorithm.
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regards to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be base lined to whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information theory based image metrics is presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
Double regions growing algorithm for automated satellite image mosaicking
NASA Astrophysics Data System (ADS)
Tan, Yihua; Chen, Chen; Tian, Jinwen
2011-12-01
Feathering is a most widely used method in seamless satellite image mosaicking. A simple but effective algorithm - double regions growing (DRG) algorithm, which utilizes the shape content of images' valid regions, is proposed for generating robust feathering-line before feathering. It works without any human intervention, and experiment on real satellite images shows the advantages of the proposed method.
A Test Scheduling Algorithm Based on Two-Stage GA
NASA Astrophysics Data System (ADS)
Yu, Y.; Peng, X. Y.; Peng, Y.
2006-10-01
In this paper, we present a new algorithm to co-optimize the core wrapper design and the SOC test scheduling. The SOC test scheduling problem is first formulated into a twodimension floorplan problem and a sequence pair architecture is used to represent it. Then we propose a two-stage GA (Genetic Algorithm) to solve the SOC test scheduling problem. Experiments on ITC'02 benchmark show that our algorithm can effectively reduce test time so as to decrease SOC test cost.
Research on Laser Marking Speed Optimization by Using Genetic Algorithm
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
Laser Marking Machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%. PMID:25955831
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact
NASA Technical Reports Server (NTRS)
Conel, J. E.; Abdou, W. A.; Bruegge, C. J.; Gaitley, B. J.; Helmlinger, M. C.; Ledeboer, W. C.; Pilorz, S. H.; Martonchik, J. V.
1997-01-01
Radiative closure experiments involving a comparison between surface-measured spectral irradiance and the surface irradiance calculated according to a radiative transfer code at a desert site in Nevada under clear skies, yield the result that agreement between the two requires presence of an absorbing aerosol component with an imaginary refractive index equal to 0.03 and a 50:50 mix by optical depth of small and large particles with log-normal size distributions.
Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca
2007-02-01
Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason {<=}6, initial prostate specific antigen (PSA) {<=}10 and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is of 60 months (maximum, 136 months). All patients were planned with fast-simulated annealing inverse planning algorithm with high activity seeds ([gt] 0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was of 88.5% (90.5%) according to the ASTRO (Houston) definition and, of 91.4% (94.6%) in the low risk group (initial PSA {<=}10 and Gleason {<=}6 and Stage {<=}T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was of 151 {+-} 36.1 Gy. The mean V100 was of 85.4 {+-} 8.5% with a mean V150 of 60.1 {+-} 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free, 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: The inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS, which is comparable with the best published series with a low toxicity profile.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm - PGATS algorithm, which is based on toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines the particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS) algorithms. Otherwise, we also take some different improved strategies. The factor of stochastic disturbance is joined in the particle swarm optimization to improve the search ability; the operations of crossover and mutation that are in the genetic algorithm are changed to a kind of random liner method; at last tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, the protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is an NP-hard problem, but the problem can be attributed to a global optimization problem of multi-extremum and multi-parameters. This is the theoretical principle of the hybrid optimization algorithm that is proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcoming of a single algorithm, giving full play to the advantage of each algorithm. In the current universal standard sequences, Fibonacci sequences and real protein sequences are certified. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which is proved to be an effective way to predict the structure of proteins. PMID:25069136
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
NASA Technical Reports Server (NTRS)
Di Zenzo, Silvano; Degloria, Stephen D.; Bernstein, R.; Kolsky, Harwood G.
1987-01-01
The paper presents the results of a four-factor two-level analysis of a variance experiment designed to evaluate the combined effect of the improved quality of remote-sensor data and the use of context by the classifier on classification accuracy. The improvement achievable by using the context via relaxation techniques is significantly smaller than that provided by an increase of the radiometric resolution of the sensor from 6 to 8 bits per sample (the relative increase in radiometric resolution of TM relative to MSS). It is almost equal to that achievable by an increase in the spectral coverage as provided by TM relative to MSS.
NASA Astrophysics Data System (ADS)
Lalande, Jean-Marie; Waxler, Roger; Velea, Doru
2016-04-01
As infrasonic waves propagate at long ranges through atmospheric ducts it has been suggested that observations of such waves can be used as a remote sensing techniques in order to update properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach as the advantage of searching for the full Probability Density Function in the parameter space at a lower computational cost than extensive parameters search performed by the standard Monte Carlo approach. We apply this inverse methods to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices
NASA Astrophysics Data System (ADS)
Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly
2010-01-01
In this paper authors present heuristics algorithm for precise schedule fulfilment in city traffic conditions taking in account traffic lights. The algorithm is proposed for programmable controller. PLC is proposed to be installed in electric vehicle to control its motion speed and signals of traffic lights. Algorithm is tested using real controller connected to virtual devices and real functional models of real tram devices. Results of experiments show high precision of public transport schedule fulfilment using proposed algorithm.
A Cross Unequal Clustering Routing Algorithm for Sensor Network
NASA Astrophysics Data System (ADS)
Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles
2013-08-01
In the routing protocol for wireless sensor network, the cluster size is generally fixed in clustering routing algorithm for wireless sensor network, which can easily lead to the "hot spot" problem. Furthermore, the majority of routing algorithms barely consider the problem of long distance communication between adjacent cluster heads that brings high energy consumption. Therefore, this paper proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. In order to solve the defects of EEUC algorithm, this algorithm calculating of competition radius takes the node's position and node's remaining energy into account to make the load of cluster heads more balanced. At the same time, cluster adjacent node is applied to transport data and reduce the energy-loss of cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy-loss of cluster heads and balance the energy consumption among all nodes in the network and improve the network lifetime
A multistrategy optimization improved artificial bee colony algorithm.
Liu, Wen
2014-01-01
Being prone to the shortcomings of premature and slow convergence rate of artificial bee colony algorithm, an improved algorithm was proposed. Chaotic reverse learning strategies were used to initialize swarm in order to improve the global search ability of the algorithm and keep the diversity of the algorithm; the similarity degree of individuals of the population was used to characterize the diversity of population; population diversity measure was set as an indicator to dynamically and adaptively adjust the nectar position; the premature and local convergence were avoided effectively; dual population search mechanism was introduced to the search stage of algorithm; the parallel search of dual population considerably improved the convergence rate. Through simulation experiments of 10 standard testing functions and compared with other algorithms, the results showed that the improved algorithm had faster convergence rate and the capacity of jumping out of local optimum faster. PMID:24982924
GASAT: a genetic local search algorithm for the satisfiability problem.
Lardeux, Frédéric; Saubion, Frédéric; Hao, Jin-Kao
2006-01-01
This paper presents GASAT, a hybrid algorithm for the satisfiability problem (SAT). The main feature of GASAT is that it includes a recombination stage based on a specific crossover and a tabu search stage. We have conducted experiments to evaluate the different components of GASAT and to compare its overall performance with state-of-the-art SAT algorithms. These experiments show that GASAT provides very competitive results. PMID:16831107
Hey Teacher, Your Personality's Showing!
ERIC Educational Resources Information Center
Paulsen, James R.
1977-01-01
A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline 1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications 2. Data products and latencies 3. Algorithm highlights 4. SMAP Algorithm Testbed 5. SMAP Working Groups and community engagement
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1 2 -approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1 2 -approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
NASA Astrophysics Data System (ADS)
García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón
2016-05-01
Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
Fast Optimal Load Balancing Algorithms for 1D Partitioning
Pinar, Ali; Aykanat, Cevdet
2002-12-09
One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as ''chains-on-chains partitioning'' problem. Despite extensive research efforts, heuristics are still used in parallel computing community with the ''hope'' of good decompositions and the ''myth'' of exact algorithms being hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of literature on chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms, which are asymptotically and runtime efficient. We experimented with data sets from two different applications: Sparse matrix computations and Direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse-matrix vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with efficient implementations discussed in this paper can effectively replace heuristics.
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-04-01
In Additive Manufacturing field, the current researches of data processing mainly focus on a slicing process of large STL files or complicated CAD models. To improve the efficiency and reduce the slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In the paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm. And the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, effects of threads number and layers number are investigated by a serial of experiments. The experimental results show that the threads number and layers number are two remarkable factors to the speedup ratio. The tendency of speedup versus threads number reveals a positive relationship which greatly agrees with the Amdahl's law, and the tendency of speedup versus layers number also keeps a positive relationship agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method of speedup. Another parallel algorithm based on data parallel is used in experiments to show that pipeline parallel mode is more efficient. A case study at last shows a suspending performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of the multi-core CPU hardware, accelerate the slicing process, and compared with the data parallel slicing algorithm, the new slicing algorithm in this paper adopts a pipeline parallel model, and a much higher speedup ratio and efficiency is achieved.
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In Additive Manufacturing field, the current researches of data processing mainly focus on a slicing process of large STL files or complicated CAD models. To improve the efficiency and reduce the slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In the paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm. And the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, effects of threads number and layers number are investigated by a serial of experiments. The experimental results show that the threads number and layers number are two remarkable factors to the speedup ratio. The tendency of speedup versus threads number reveals a positive relationship which greatly agrees with the Amdahl's law, and the tendency of speedup versus layers number also keeps a positive relationship agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method of speedup. Another parallel algorithm based on data parallel is used in experiments to show that pipeline parallel mode is more efficient. A case study at last shows a suspending performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of the multi-core CPU hardware, accelerate the slicing process, and compared with the data parallel slicing algorithm, the new slicing algorithm in this paper adopts a pipeline parallel model, and a much higher speedup ratio and efficiency is achieved.
Television Quiz Show Simulation
ERIC Educational Resources Information Center
Hill, Jonnie Lynn
2007-01-01
This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.
Dual-Byte-Marker Algorithm for Detecting JFIF Header
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat
The use of efficient algorithm to detect JPEG file is vital to reduce time taken for analyzing ever increasing data in hard drive or physical memory. In the previous paper, single-byte-marker algorithm is proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Based on the experiments done on images from hard disk, physical memory and data set from DFRWS 2006 Challenge, results showed that dual-byte-marker algorithm gives better performance with better execution time for header detection as compared to single-byte-marker.
ERIC Educational Resources Information Center
Watters, Audrey
2012-01-01
As changing student demographics make it harder for today's learners to earn a four-year degree, educators are experimenting with smaller credentialing steps, such as digital badges. Mark Milliron, chancellor of Western Governors University Texas, advocates the creation of a "family of credentials," ranging from digital badges to certifications,…
NASA Technical Reports Server (NTRS)
Petersen, Walter A.; Jensen, Michael P.
2011-01-01
The joint NASA Global Precipitation Measurement (GPM) -- DOE Atmospheric Radiation Measurement (ARM) Midlatitude Continental Convective Clouds Experiment (MC3E) was conducted from April 22-June 6, 2011, centered on the DOE-ARM Southern Great Plains Central Facility site in northern Oklahoma. GPM field campaign objectives focused on the collection of airborne and ground-based measurements of warm-season continental precipitation processes to support refinement of GPM retrieval algorithm physics over land, and to improve the fidelity of coupled cloud resolving and land-surface satellite simulator models. DOE ARM objectives were synergistically focused on relating observations of cloud microphysics and the surrounding environment to feedbacks on convective system dynamics, an effort driven by the need to better represent those interactions in numerical modeling frameworks. More specific topics addressed by MC3E include ice processes and ice characteristics as coupled to precipitation at the surface and radiometer signals measured in space, the correlation properties of rainfall and drop size distributions and impacts on dual-frequency radar retrieval algorithms, the transition of cloud water to rain water (e.g., autoconversion processes) and the vertical distribution of cloud water in precipitating clouds, and vertical draft structure statistics in cumulus convection. The MC3E observational strategy relied on NASA ER-2 high-altitude airborne multi-frequency radar (HIWRAP Ka-Ku band) and radiometer (AMPR, CoSMIR; 10-183 GHz) sampling (a GPM "proxy") over an atmospheric column being simultaneously profiled in situ by the University of North Dakota Citation microphysics aircraft, an array of ground-based multi-frequency scanning polarimetric radars (DOE Ka-W, X and C-band; NASA D3R Ka-Ku and NPOL S-bands) and wind-profilers (S/UHF bands), supported by a dense network of over 20 disdrometers and rain gauges, all nested in the coverage of a six-station mesoscale rawinsonde
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another. PMID:27595019
ERIC Educational Resources Information Center
Anderton, Alice
The Intertribal Wordpath Society is a nonprofit educational corporation formed to promote the teaching, status, awareness, and use of Oklahoma Indian languages. The Society produces "Wordpath," a weekly 30-minute public access television show about Oklahoma Indian languages and the people who are teaching and preserving them. The show aims to…
Memetic algorithm for community detection in networks.
Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng
2011-11-01
Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method. PMID:22181467
ERIC Educational Resources Information Center
Kirkpatrick, Larry D.; Rugheimer, Mac
1979-01-01
Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
Effect of qubit losses on Grover's quantum search algorithm
NASA Astrophysics Data System (ADS)
Rao, D. D. Bhaktavatsala; Mølmer, Klaus
2012-10-01
We investigate the performance of Grover's quantum search algorithm on a register that is subject to a loss of particles that carry qubit information. Under the assumption that the basic steps of the algorithm are applied correctly on the correspondingly shrinking register, we show that the algorithm converges to mixed states with 50% overlap with the target state in the bit positions still present. As an alternative to error correction, we present a procedure that combines the outcome of different trials of the algorithm to determine the solution to the full search problem. The procedure may be relevant for experiments where the algorithm is adapted as the loss of particles is registered and for experiments with Rydberg blockade interactions among neutral atoms, where monitoring of atom losses is not even necessary.
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-01-01
Gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to be applied to the gravity matching navigation field. However, existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper which is based on an improved ABC algorithm using external speed information. At last, modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019
ERIC Educational Resources Information Center
Eccleston, Jeff
2007-01-01
Big things come in small packages. This saying came to the mind of the author after he created a simple math review activity for his fourth grade students. Though simple, it has proven to be extremely advantageous in reinforcing math concepts. He uses this activity, which he calls "Show What You Know," often. This activity provides the perfect…
ERIC Educational Resources Information Center
Mathieu, Aaron
2000-01-01
Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)
ERIC Educational Resources Information Center
Moore, Mitzi Ruth
1992-01-01
Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)
ERIC Educational Resources Information Center
Frasier, Debra
2008-01-01
In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…
ERIC Educational Resources Information Center
Cech, Scott J.
2008-01-01
Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. The Rhode Island's high school graduating class became…
An adaptive algorithm for low contrast infrared image enhancement
NASA Astrophysics Data System (ADS)
Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi
2013-08-01
An adaptive infrared image enhancement algorithm for low contrast is proposed in this paper, to deal with the problem that conventional image enhancement algorithm is not able to effective identify the interesting region when dynamic range is large in image. This algorithm begin with the human visual perception characteristics, take account of the global adaptive image enhancement and local feature boost, not only the contrast of image is raised, but also the texture of picture is more distinct. Firstly, the global image dynamic range is adjusted from the overall, the dynamic range of original image and display grayscale form corresponding relationship, the gray scale of bright object is raised and the the gray scale of dark target is reduced at the same time, to improve the overall image contrast. Secondly, the corresponding filtering algorithm is used on the current point and its neighborhood pixels to extract image texture information, to adjust the brightness of the current point in order to enhance the local contrast of the image. The algorithm overcomes the default that the outline is easy to vague in traditional edge detection algorithm, and ensure the distinctness of texture detail in image enhancement. Lastly, we normalize the global luminance adjustment image and the local brightness adjustment image, to ensure a smooth transition of image details. A lot of experiments is made to compare the algorithm proposed in this paper with other convention image enhancement algorithm, and two groups of vague IR image are taken in experiment. Experiments show that: the contrast ratio of the picture is boosted after handled by histogram equalization algorithm, but the detail of the picture is not clear, the detail of the picture can be distinguished after handled by the Retinex algorithm. The image after deal with by self-adaptive enhancement algorithm proposed in this paper becomes clear in details, and the image contrast is markedly improved in compared with Retinex
Algorithm for dynamic Speckle pattern processing
NASA Astrophysics Data System (ADS)
Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.
2016-07-01
In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes from other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method, however our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processes (like THSP and co-occurrence matrix), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offer qualitative and quantitative advantages over current methods.
Fast ordering algorithm for exact histogram specification.
Nikolova, Mila; Steidl, Gabriele
2014-12-01
This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on the ordering procedure based on the specialized variational approach. This variational method was shown to be superior to all other state-of-the art ordering algorithms in terms of faithful total strict ordering but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only few iteration steps to obtain images, whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881
An Artificial Immune Univariate Marginal Distribution Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping
Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of general artificial immune algorithm. Experimental results on deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.
Adaptive path planning: Algorithm and analysis
Chen, Pang C.
1995-03-01
To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions difficult tasks. From these solutions, an evolving sparse work of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved both cost-wise and capability-wise by a faster but less effective planner coupled with experience. We analyze algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.
Boden, Timothy W
2016-01-01
Many medical practices have cut back on education and staff development expenses, especially those costs associated with conventions and conferences. But there are hard-to-value returns on your investment in these live events--beyond the obvious benefits of acquired knowledge and skills. Major vendors still exhibit their services and wares at many events, and the exhibit hall is a treasure-house of information and resources for the savvy physician or administrator. Make and stick to a purposeful plan to exploit the trade show. You can compare products, gain new insights and ideas, and even negotiate better deals with representatives anxious to realize returns on their exhibition investments. PMID:27249887
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma
2016-04-01
In order to improve the reconstruction accuracy and reduce the workload, the algorithm of compressive sensing based on the iterative threshold is combined with the method of adaptive selection of the training sample, and a new algorithm of adaptive compressive sensing is put forward. The three kinds of training sample are used to reconstruct the spectral reflectance of the testing sample based on the compressive sensing algorithm and adaptive compressive sensing algorithm, and the color difference and error are compared. The experiment results show that spectral reconstruction precision based on the adaptive compressive sensing algorithm is better than that based on the algorithm of compressive sensing.
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
We use conventional adaptive optical system to compensate atmospheric turbulence in free space optical (FSO) communication system under strong scintillation circumstances, undesired wave-front measurements based on Shark-Hartman sensor (SH). Since wavefront sensor-less adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration from atmospheric interference in FSO and mainly discuss the algorithm principle, basic flows, and simulation result. The numerical simulation experiment and result analysis show that compared with SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration, and improve convergence rate of the algorithms and the coupling efficiency of receiver in large extent.
Walusinski, Olivier
2014-01-01
In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
NASA Astrophysics Data System (ADS)
2007-01-01
its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass-loss in the case of rapidly rotating stars. ESO PR Photo 06c/07 ESO PR Photo 06c/07 RS Ophiuchi in Outburst Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a star known as a Wolf-Rayet. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights in the modeling of Wolf-Rayet stars, but made it also possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide. The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that has been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension. AMBER has detected a high velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave
A new image encryption algorithm based on logistic chaotic map with varying parameter.
Liu, Lingfeng; Miao, Suoxia
2016-01-01
In this paper, we proposed a new image encryption algorithm based on parameter-varied logistic chaotic map and dynamical algorithm. The parameter-varied logistic map can cure the weaknesses of logistic map and resist the phase space reconstruction attack. We use the parameter-varied logistic map to shuffle the plain image, and then use a dynamical algorithm to encrypt the image. We carry out several experiments, including Histogram analysis, information entropy analysis, sensitivity analysis, key space analysis, correlation analysis and computational complexity to evaluate its performances. The experiment results show that this algorithm is with high security and can be competitive for image encryption. PMID:27066326
Multipartite entanglement in quantum algorithms
Bruss, D.; Macchiavello, C.
2011-05-15
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
A fast algorithm for attribute reduction based on Trie tree and rough set theory
NASA Astrophysics Data System (ADS)
Hu, Feng; Wang, Xiao-yan; Luo, Chuan-jiang
2013-03-01
Attribute reduction is an important issue in rough set theory. Many efficient algorithms have been proposed, however, few of them can process huge data sets quickly. In this paper, combining the Trie tree, the algorithms for computing positive region of decision table are proposed. After that, a new algorithm for attribute reduction based on Trie tree is developed, which can be used to process the attribute reduction of large data sets quickly. Experiment results show its high efficiency.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Rate control algorithm based on frame complexity estimation for MVC
NASA Astrophysics Data System (ADS)
Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang
2010-07-01
Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC by improving the quadratic rate-distortion (R-D) model, which reasonably allocate bit-rate among views based on correlation analysis. The proposed algorithm consists of four levels for rate bits control more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to coding parameters.
An algorithm for prescribed mean curvature using isogeometric methods
NASA Astrophysics Data System (ADS)
Chicco-Ruiz, Aníbal; Morin, Pedro; Pauletti, M. Sebastian
2016-07-01
We present a Newton type algorithm to find parametric surfaces of prescribed mean curvature with a fixed given boundary. In particular, it applies to the problem of minimal surfaces. The algorithm relies on some global regularity of the spaces where it is posed, which is naturally fitted for discretization with isogeometric type of spaces. We introduce a discretization of the continuous algorithm and present a simple implementation using the recently released isogeometric software library igatools. Finally, we show several numerical experiments which highlight the convergence properties of the scheme.
An ant colony algorithm on continuous searching space
NASA Astrophysics Data System (ADS)
Xie, Jing; Cai, Chao
2015-12-01
Ant colony algorithm is heuristic, bionic and parallel. Because of it is property of positive feedback, parallelism and simplicity to cooperate with other method, it is widely adopted in planning on discrete space. But it is still not good at planning on continuous space. After a basic introduction to the basic ant colony algorithm, we will propose an ant colony algorithm on continuous space. Our method makes use of the following three tricks. We search for the next nodes of the route according to fixed-step to guarantee the continuity of solution. When storing pheromone, it discretizes field of pheromone, clusters states and sums up the values of pheromone of these states. When updating pheromone, it makes good resolutions measured in relative score functions leave more pheromone, so that ant colony algorithm can find a sub-optimal solution in shorter time. The simulated experiment shows that our ant colony algorithm can find sub-optimal solution in relatively shorter time.
Adaptive image contrast enhancement algorithm for point-based rendering
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Liu, Xiaoping P.
2015-03-01
Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
Extended Relief-F Algorithm for Nominal Attribute Estimation in Small-Document Classification
NASA Astrophysics Data System (ADS)
Park, Heum; Kwon, Hyuk-Chul
This paper presents an extended Relief-F algorithm for nominal attribute estimation, for application to small-document classification. Relief algorithms are general and successful instance-based feature-filtering algorithms for data classification and regression. Many improved Relief algorithms have been introduced as solutions to problems of redundancy and irrelevant noisy features and to the limitations of the algorithms for multiclass datasets. However, these algorithms have only rarely been applied to text classification, because the numerous features in multiclass datasets lead to great time complexity. Therefore, in considering their application to text feature filtering and classification, we presented an extended Relief-F algorithm for numerical attribute estimation (E-Relief-F) in 2007. However, we found limitations and some problems with it. Therefore, in this paper, we introduce additional problems with Relief algorithms for text feature filtering, including the negative influence of computation similarities and weights caused by a small number of features in an instance, the absence of nearest hits and misses for some instances, and great time complexity. We then suggest a new extended Relief-F algorithm for nominal attribute estimation (E-Relief-Fd) to solve these problems, and we apply it to small text-document classification. We used the algorithm in experiments to estimate feature quality for various datasets, its application to classification, and its performance in comparison with existing Relief algorithms. The experimental results show that the new E-Relief-Fd algorithm offers better performance than previous Relief algorithms, including E-Relief-F.
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
In allusion to complexity of architecture environment and Real-time simulation of architecture acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in scene is solved with this method. And then the impulse response between sources and receivers at frequency segment, which are calculated with multi-process, are combined into whole frequency response. The numerical experiment shows that parallel arithmetic can improve the acoustic simulating efficiency of complex scene.
Blind Alley Aware ACO Routing Algorithm
NASA Astrophysics Data System (ADS)
Yoshikawa, Masaya; Otani, Kazuo
2010-10-01
The routing problem is applied to various engineering fields. Many researchers study this problem. In this paper, we propose a new routing algorithm which is based on Ant Colony Optimization. The proposed algorithm introduces the tabu search mechanism to escape the blind alley. Thus, the proposed algorithm enables to find the shortest route, even if the map data contains the blind alley. Experiments using map data prove the effectiveness in comparison with Dijkstra algorithm which is the most popular conventional routing algorithm.
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond square algorithm, one can control the shape of landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low lying area due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth), and simulate planetary landscapes. Hence, it can be used as a tool to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach in analyzing our world. The random algorithms used in terrain generation not only contribute to the generating the terrains themselves, but are also capable of simulating weather patterns.
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks. PMID:24852272
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization. PMID:24967425
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve the measurement precision, the internal parameters of the camera should be accurately calibrated. So one high-accuracy camera calibration algorithm is proposed based on the images of planar targets or tridimensional targets. By using the algorithm, the internal parameters of the camera are calibrated based on the existing planar target at the vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is obviously improved compared with the conventional linear algorithm, Tsai general algorithm, and Zhang Zhengyou calibration algorithm. The algorithm proposed by the article can satisfy the need of computer vision and provide reference for precise measurement of the relative position and attitude.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
NASA Astrophysics Data System (ADS)
Sellitto, P.; Del Frate, F.
2014-07-01
Atmospheric temperature profiles are inferred from passive satellite instruments, using thermal infrared or microwave observations. Here we investigate on the feasibility of the retrieval of height resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320-325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that a non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature profiles retrieval from space-borne instruments operating in the ultraviolet.
A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.
Ali, Ahmed F; Tawhid, Mohamed A
2016-01-01
Cuckoo search algorithm is a promising metaheuristic population based method. It has been applied to solve many real life problems. In this paper, we propose a new cuckoo search algorithm by combining the cuckoo search algorithm with the Nelder-Mead method in order to solve the integer and minimax optimization problems. We call the proposed algorithm by hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for number of iterations then the best obtained solution is passing to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm is balancing between the global exploration of the Cuckoo search algorithm and the deep exploitation of the Nelder-Mead method. We test HCSNM algorithm on seven integer programming problems and ten minimax problems and compare against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experiments results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time. PMID:27217988
Kernel MAD Algorithm for Relative Radiometric Normalization
NASA Astrophysics Data System (ADS)
Bai, Yang; Tang, Ping; Hu, Changmiao
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA) which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA version of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1(GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, this algorithm can well describe the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Efficient algorithms for survivable virtual network embedding
NASA Astrophysics Data System (ADS)
Sun, Gang; Yu, Hongfang; Li, Lemin; Anand, Vishal; di, Hao; Gao, Xiujiao
2010-12-01
Network Virtualization Technology is serving as an effective method for providing a flexible and highly adaptable shared substrate network to satisfy the diversity of demands. But the problem of efficiently embedding Virtual Network (VN) onto substrate network is intractable since it is NP-hard. How to guarantee survivability of the embedding efficiently is another great challenge. In this paper, we investigate the Survivable Virtual Network Embedding (SVNE) problem and propose two efficient algorithms for solving this problem efficiently. Firstly, we formulate the model with minimum-cost objective of survivable network virtualization problem by Mixed Integer Linear Programming (MILP). We then devise two efficient relaxation-based algorithms for solving survivable virtual network embedding problem: (1) Lagrangian Relaxation based algorithm, called LR-SVNE in this paper; (2) Decomposition based algorithm called DSVNE in this paper. The results of simulation experiments show that these two algorithms both have good performance on time efficiency but LR-SVNE can guarantee the solution converge to optimal one under small scale substrate network.
General lossless planar coupler design algorithms.
Vance, Rod
2015-08-01
This paper reviews and extends two classes of algorithms for the design of planar couplers with any unitary transfer matrix as design goals. Such couplers find use in optical sensing for fading free interferometry, coherent optical network demodulation, and also for quantum state preparation in quantum optical experiments and technology. The two classes are (1) "atomic coupler algorithms" decomposing a unitary transfer matrix into a planar network of 2×2 couplers, and (2) "Lie theoretic algorithms" concatenating unit cell devices with variable phase delay sets that form canonical coordinates for neighborhoods in the Lie group U(N), so that the concatenations realize any transfer matrix in U(N). As well as review, this paper gives (1) a Lie theoretic proof existence proof showing that both classes of algorithms work and (2) direct proofs of the efficacy of the "atomic coupler" algorithms. The Lie theoretic proof strengthens former results. 5×5 couplers designed by both methods are compared by Monte Carlo analysis, which would seem to imply atomic rather than Lie theoretic methods yield designs more resilient to manufacturing imperfections. PMID:26367295
A compilation of jet finding algorithms
Flaugher, B.; Meier, K.
1992-12-31
Technical descriptions of jet finding algorithms currently in use in p{anti p} collider experiments (CDF, UA1, UA2), e{sup +}e{sup {minus}} experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analysis. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E{sub T} and P{sub T} of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five various incarnations of this approach have been described.
Research on Chord Searching Algorithm Base on Cache Strategy
NASA Astrophysics Data System (ADS)
Jun, Guo; Chen, Chen
How to improve search efficiency is a core problem in P2P network, Chord is a successful searching algorithm, but its lookup efficiency is lower because finger table has redundant information proposed the recently visited table and improved to gain more useful information in Chord. The simulation experiments show that approach can availably improve the routing efficiently.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di
2015-01-01
In order to improve convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving the function optimization problems, a new improved cuckoo search algorithm based on the repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird's nests location. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, meanwhile, compare algorithms of this paper with two typical swarm intelligence algorithms particle swarm optimization (PSO) algorithm and artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di
2015-01-01
In order to improve convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving the function optimization problems, a new improved cuckoo search algorithm based on the repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird's nests location. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, meanwhile, compare algorithms of this paper with two typical swarm intelligence algorithms particle swarm optimization (PSO) algorithm and artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
On algorithmic rate-coded AER generation.
Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón
2006-05-01
This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event-representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for a) doing the frame-based to event-based representation in real time, and b) that the resulting event streams ressemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real time operations. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation is a good compromise, which is feasible for real time and yield reasonably well distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8 bit-pixel per frame. One of the proposed LFSR methods is implemented in real time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10(7) events per second. PMID:16722179
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
An Effective Intrusion Detection Algorithm Based on Improved Semi-supervised Fuzzy Clustering
NASA Astrophysics Data System (ADS)
Li, Xueyong; Zhang, Baojian; Sun, Jiaxia; Yan, Shitao
An algorithm for intrusion detection based on improved evolutionary semi- supervised fuzzy clustering is proposed which is suited for situation that gaining labeled data is more difficulty than unlabeled data in intrusion detection systems. The algorithm requires a small number of labeled data only and a large number of unlabeled data and class labels information provided by labeled data is used to guide the evolution process of each fuzzy partition on unlabeled data, which plays the role of chromosome. This algorithm can deal with fuzzy label, uneasily plunges locally optima and is suited to implement on parallel architecture. Experiments show that the algorithm can improve classification accuracy and has high detection efficiency.
Analysis and applications of a general boresight algorithm for the DSS-13 beam waveguide antenna
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1992-01-01
A general antenna beam boresight algorithm is presented. Equations for axial pointing error, peak received signal level, and antenna half-power beamwidth are given. A pointing error variance equation is derived that illustrates the dependence of the measurement estimation performance on the various algorithm inputs, including RF signal level uncertainty. Plots showing pointing error uncertainty as function of algorithm inputs are presented. Insight gained from the performance analysis is discussed in terms of its application to the areas of antenna controller and receiver interfacing, pointing error compensation, and antenna calibrations. Current and planned applications of the boresight algorithm, including its role in the upcoming Ka-band downlink experiment (KABLE), are highlighted.
Directional algorithms for the frequency isolation problem in undamped vibrational systems
NASA Astrophysics Data System (ADS)
Moro, Julio; Egaña, Juan C.
2016-06-01
A new algorithm is presented to solve the frequency isolation problem for vibrational systems with no damping: given an undamped mass-spring system with resonant eigenvalues, the system must be re-designed, finding some close-by non-resonant system at a reasonable cost. Our approach relies on modifying masses and stiffnesses along directions in parameter space which produce a maximal variation in the resonant eigenvalues, provided the non-resonant ones do not undergo large variations. The algorithm is derived from first principles, implemented, and numerically tested. The numerical experiments show that the new algorithms are considerably faster and more robust than previous algorithms solving the same problem.
Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan
2015-02-23
We propose an algorithm based on fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates point source algorithm and holographic stereogram based algorithm to reconstruct the three-dimensional (3D) scenes. Precise accommodation cue and occlusion effect can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance the image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram, the results show that our proposed algorithm can perform quality reconstructions of 3D scenes with arbitrary depth information. PMID:25836429
ETD: an extended time delay algorithm for ventricular fibrillation detection.
Kim, Jungyoon; Chu, Chao-Hsien
2014-01-01
Ventricular fibrillation (VF) is the most serious type of heart attack which requires quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has a high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures for up to 7.64% and in terms of aggregate accuracy, the ETD algorithm shows an improvement of 2.6% of the area under curve (AUC) compared to TD. PMID:25571480
Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.
Semnani, Samaneh Hosseini; Basir, Otman A
2015-01-01
The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organize implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by the two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with other two flocking-based algorithms once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area of coverage and the target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms. PMID:25014985
Designing experiments through compressed sensing.
Young, Joseph G.; Ridzal, Denis
2013-06-01
In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searched a region most approximate to the target template based on the correlation measure between two images. Because there is no need to segment the image, and the computation of this method is little. Image correlation matching is a basic method of target tracking. This paper mainly studies the image matching algorithm of gray scale image, which precision is at sub-pixel level. The matching algorithm used in this paper is SAD (Sum of Absolute Difference) method. This method excels in real-time systems because of its low computation complexity. The SAD method is introduced firstly and the most frequently used sub-pixel fitting algorithms are introduced at the meantime. These fitting algorithms can't be used in real-time systems because they are too complex. However, target tracking often requires high real-time performance, we put forward a fitting algorithm named paraboloidal fitting algorithm based on the consideration above, this algorithm is simple and realized easily in real-time system. The result of this algorithm is compared with that of surface fitting algorithm through image matching simulation. By comparison, the precision difference between these two algorithms is little, it's less than 0.01pixel. In order to research the influence of target rotation on precision of image matching, the experiment of camera rotation was carried on. The detector used in the camera is a CMOS detector. It is fixed to an arc pendulum table, take pictures when the camera rotated different angles. Choose a subarea in the original picture as the template, and search the best matching spot using image matching algorithm mentioned above. The result shows that the matching error is bigger when the target rotation angle is larger. It's an approximate linear relation. Finally, the influence of noise on matching precision was researched. Gaussian noise and pepper and salt noise were added in the image respectively, and the image
Compound algorithm for restoration of heavy turbulence-degraded image for space target
NASA Astrophysics Data System (ADS)
Wang, Liang-liang; Wang, Ru-jie; Li, Ming; Kang, Zi-qian; Xu, Xiao-qin; Gao, Xin
2012-11-01
Restoration of atmospheric turbulence degraded image is needed to be solved as soon as possible in the field of astronomical space technology. Owing to the fact that the point spread function of turbulence is unknown, changeable with time, hard to be described by mathematics models, withal, kinds of noises would be brought during the imaging processes (such as sensor noise), the image for space target is edge blurred and heavy noised, which making a single restoration algorithm to reach the requirement of restoration difficult. Focusing the fact that the image for space target which was fetched during observation by ground-based optical telescopes is heavy noisy turbulence degraded, this paper discusses the adjustment and reformation of various algorithm structures as well as the selection of various parameters, after the combination of the nonlinear filter algorithm based on noise spatial characteristics, restoration algorithm of heavy turbulence degrade image for space target based on regularization, and the statistics theory based EM restoration algorithm. In order to test the validity of the algorithm, a series of restoration experiments are performed on the heavy noisy turbulence-degraded images for space target. The experiment results show that the new compound algorithm can achieve noise restriction and detail preservation simultaneously, which is effective and practical. Withal, the definition measures and relative definition measures show that the new compound algorithm is better than the traditional algorithms.
An improved robust ADMM algorithm for quantum state tomography
NASA Astrophysics Data System (ADS)
Li, Kezhi; Zhang, Hui; Kuang, Sen; Meng, Fangfang; Cong, Shuang
2016-06-01
In this paper, an improved adaptive weights alternating direction method of multipliers algorithm is developed to implement the optimization scheme for recovering the quantum state in nearly pure states. The proposed approach is superior to many existing methods because it exploits the low-rank property of density matrices, and it can deal with unexpected sparse outliers as well. The numerical experiments are provided to verify our statements by comparing the results to three different optimization algorithms, using both adaptive and fixed weights in the algorithm, in the cases of with and without external noise, respectively. The results indicate that the improved algorithm has better performances in both estimation accuracy and robustness to external noise. The further simulation results show that the successful recovery rate increases when more qubits are estimated, which in fact satisfies the compressive sensing theory and makes the proposed approach more promising.
An improved HMM/SVM dynamic hand gesture recognition algorithm
NASA Astrophysics Data System (ADS)
Zhang, Yi; Yao, Yuanyuan; Luo, Yuan
2015-10-01
In order to improve the recognition rate and stability of dynamic hand gesture recognition, for the low accuracy rate of the classical HMM algorithm in train the B parameter, this paper proposed an improved HMM/SVM dynamic gesture recognition algorithm. In the calculation of the B parameter of HMM model, this paper introduced the SVM algorithm which has the strong ability of classification. Through the sigmoid function converted the state output of the SVM into the probability and treat this probability as the observation state transition probability of the HMM model. After this, it optimized the B parameter of HMM model and improved the recognition rate of the system. At the same time, it also enhanced the accuracy and the real-time performance of the human-computer interaction. Experiments show that this algorithm has a strong robustness under the complex background environment and the varying illumination environment. The average recognition rate increased from 86.4% to 97.55%.
A Cultural Algorithm for the Urban Public Transportation
NASA Astrophysics Data System (ADS)
Reyes, Laura Cruz; Zezzatti, Carlos Alberto Ochoa Ortíz; Santillán, Claudia Gómez; Hernández, Paula Hernández; Fuerte, Mercedes Villa
In the last years the population of Leon City, located in the state of Guanajuato in Mexico, has been considerably increasing, causing the inhabitants to waste most of their time with public transportation. As a consequence of the demographic growth and traffic bottleneck, users deal with the daily problem of optimizing their travel so that to get to their destination on time. To give a solution to this problem of obtaining an optimized route between two points in a public transportation, a method based on the cultural algorithms technique is proposed. Cultural algorithms are used in the generated knowledge in a set of time periods for a same population, using a belief space. These types of algorithms are a recent creation. The proposed method seeks a path that minimizes the time of traveling and the number of transfers. The results of the experiment show that the technique of the cultural algorithms is applicable to these kinds of multi-objective problems.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation to the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Algorithm Visualization in Teaching Practice
ERIC Educational Resources Information Center
Törley, Gábor
2014-01-01
This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) are playing a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator drivers compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations, and minimising human perception error between real and simulator driver. One of the main limitations of the classical washout filters is that it is attuned by the worst-case scenario tuning method. This is based on trial and error, and is effected by driving and programmers experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. Production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible for designers for this reason. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking vestibular sensation error into account between real and simulated cases, as well as main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms prove effectiveness when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the used of a Genetic Algorithm (GA) along with Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are use, which include: colon, leukemia, and lung. In addition, another three multi-class microarray datasets are used, which are: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique: mRMR when combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. PMID:25880524
Ozone Differential Absorption Lidar Algorithm Intercomparison
NASA Astrophysics Data System (ADS)
Godin, Sophie; Carswell, Allen I.; Donovan, David P.; Claude, Hans; Steinbrecht, Wolfgang; McDermid, I. Stuart; McGee, Thomas J.; Gross, Michael R.; Nakane, Hideaki; Swart, Daan P. J.; Bergwerff, Hans B.; Uchino, Osamu; von der Gathen, Peter; Neuber, Roland
1999-10-01
An intercomparison of ozone differential absorption lidar algorithms was performed in 1996 within the framework of the Network for the Detection of Stratospheric Changes (NDSC) lidar working group. The objective of this research was mainly to test the differentiating techniques used by the various lidar teams involved in the NDSC for the calculation of the ozone number density from the lidar signals. The exercise consisted of processing synthetic lidar signals computed from simple Rayleigh scattering and three initial ozone profiles. Two of these profiles contained perturbations in the low and the high stratosphere to test the vertical resolution of the various algorithms. For the unperturbed profiles the results of the simulations show the correct behavior of the lidar processing methods in the low and the middle stratosphere with biases of less than 1% with respect to the initial profile to as high as 30 km in most cases. In the upper stratosphere, significant biases reaching 10% at 45 km for most of the algorithms are obtained. This bias is due to the decrease in the signal-to-noise ratio with altitude, which makes it necessary to increase the number of points of the derivative low-pass filter used for data processing. As a consequence the response of the various retrieval algorithms to perturbations in the ozone profile is much better in the lower stratosphere than in the higher range. These results show the necessity of limiting the vertical smoothing in the ozone lidar retrieval algorithm and questions the ability of current lidar systems to detect long-term ozone trends above 40 km. Otherwise the simulations show in general a correct estimation of the ozone profile random error and, as shown by the tests involving the perturbed ozone profiles, some inconsistency in the estimation of the vertical resolution among the lidar teams involved in this experiment.
Predicting the performance of a spatial gamut mapping algorithm
NASA Astrophysics Data System (ADS)
Bakke, Arne M.; Farup, Ivar; Hardeberg, Jon Y.
2009-01-01
Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well as the amount of areas that contain details in out-of-gamut areas. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previously conducted psychophysical experiment.
Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor
Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo
2014-01-01
Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure self-interested robots' individual cooperative willingness in the problem of multirobot task allocation. Emotional cooperation factor is introduced into self-interested robot; it is updated based on emotional attenuation and external stimuli. Then a multirobot pursuit task allocation algorithm is proposed, which is based on emotional cooperation factor. Combined with the two-step auction algorithm recruiting team leaders and team collaborators, set up pursuit teams, and finally use certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, some comparing experiments have been done with the instantaneous greedy optimal auction algorithm; the results of experiments show that the total pursuit time and total team revenue can be optimized by using this algorithm. PMID:25152925
Dynamically Incremental K-means++ Clustering Algorithm Based on Fuzzy Rough Set Theory
NASA Astrophysics Data System (ADS)
Li, Wei; Wang, Rujing; Jia, Xiufang; Jiang, Qing
Being classic K-means++ clustering algorithm only for static data, dynamically incremental K-means++ clustering algorithm (DK-Means++) is presented based on fuzzy rough set theory in this paper. Firstly, in DK-Means++ clustering algorithm, the formula of similar degree is improved by weights computed by using of the important degree of attributes which are reduced on the basis of rough fuzzy set theory. Secondly, new data only need match granular which was clustered by K-means++ algorithm or seldom new data is clustered by classic K-means++ algorithm in global data. In this way, that all data is re-clustered each time in dynamic data set is avoided, so the efficiency of clustering is improved. Throughout our experiments showing, DK-Means++ algorithm can objectively and efficiently deal with clustering problem of dynamically incremental data.
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-01-01
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-01-01
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048
Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm
NASA Astrophysics Data System (ADS)
Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei
2007-12-01
Adaptive optical (AO) system provides a real-time compensation for atmospheric turbulence. However, the correction is often only partial, and a deconvolution is required for reaching the diffraction limit. The Richardson-Lucy (R-L) Algorithm is the technique most widely used for AO image deconvolution, but Standard R-L Algorithm (SRLA) is often puzzled by speckling phenomenon, wraparound artifact and noise problem. A Modified R-L Algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct sampling approach and incorporating noise statistics to Standard R-L Algorithm. The alternant iterative method is applied to estimate PSF and object in the novel algorithm. Comparing experiments for indoor data and AO image are done with SRLA and the MRLA in this paper. Experimental results show that this novel MRLA outperforms the SRLA.
[An improved fast algorithm for ray casting volume rendering of medical images].
Tao, Ling; Wang, Huina; Tian, Zhiliang
2006-10-01
Ray casting algorithm can obtain better quality images in volume rendering, however, it presents some problems such as powerful computing capacity and slow rendering velocity. Therefore, a new fast algorithm of ray casting volume rendering is proposed in this paper. This algorithm reduces matrix computation by the matrix transformation characteristics of re-sampling points in two coordinate system, so re-sampled computational process is accelerated. By extending the Bresenham algorithm to three dimension and utilizing boundary box technique, this algorithm avoids the sampling in empty voxel and greatly improves the efficiency of ray casting. The experiment results show that the improved acceleration algorithm can produce the required quality images, at the same time reduces the total operations remarkably, and speeds up the volume rendering. PMID:17121341
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
A nature inspired behavior metaheuristic techniques which provide derivative-free solutions to solve complex problems. One of the latest additions to the group of nature inspired optimization procedure is Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task since it is desired to find optimal weight set of a neural network in training process. Traditional training algorithms have some limitation such as getting trapped in local minima and slow convergence rate. This study proposed a new technique CSLM by combining the best features of two known algorithms back-propagation (BP) and Levenberg Marquardt algorithm (LM) for improving the convergence speed of ANN training and avoiding local minima problem by training this network. Some selected benchmark classification datasets are used for simulation. The experiment result show that the proposed cuckoo search with Levenberg Marquardt algorithm has better performance than other algorithm used in this study.
Fast, Parallel and Secure Cryptography Algorithm Using Lorenz's Attractor
NASA Astrophysics Data System (ADS)
Marco, Anderson Gonçalves; Martinez, Alexandre Souto; Bruno, Odemir Martinez
A novel cryptography method based on the Lorenz's attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, message and a chaotic system. It ensures that the algorithm yields a secure codification, even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow and the other, parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented, discussed and show the behavior of the method in terms of security and performance. The fast version of the algorithm has a performance comparable to AES, a popular cryptography program used commercially nowadays, but it is more secure, which makes it immediately suitable for general purpose cryptography applications. An internet page has been set up, which enables the readers to test the algorithm and also to try to break into the cipher.
Nicolini, G.; Clivio, A.; Vanetti, E.; Cozzi, L.; Fogliata, A.; Krauss, H.; Fenoglietto, P.
2013-11-15
Purpose: To demonstrate the feasibility of portal dosimetry with an amorphous silicon mega voltage imager for flattening filter free (FFF) photon beams by means of the GLAaS methodology and to validate it for pretreatment quality assurance of volumetric modulated arc therapy (RapidArc).Methods: The GLAaS algorithm, developed for flattened beams, was applied to FFF beams of nominal energy of 6 and 10 MV generated by a Varian TrueBeam (TB). The amorphous silicon electronic portal imager [named mega voltage imager (MVI) on TB] was used to generate integrated images that were converted into matrices of absorbed dose to water. To enable GLAaS use under the increased dose-per-pulse and dose-rate conditions of the FFF beams, new operational source-detector-distance (SDD) was identified to solve detector saturation issues. Empirical corrections were defined to account for the shape of the profiles of the FFF beams to expand the original methodology of beam profile and arm backscattering correction. GLAaS for FFF beams was validated on pretreatment verification of RapidArc plans for three different TB linacs. In addition, the first pretreatment results from clinical experience on 74 arcs were reported in terms of γ analysis.Results: MVI saturates at 100 cm SDD for FFF beams but this can be avoided if images are acquired at 150 cm for all nominal dose rates of FFF beams. Rotational stability of the gantry-imager system was tested and resulted in a minimal apparent imager displacement during rotation of 0.2 ± 0.2 mm at SDD = 150 cm. The accuracy of this approach was tested with three different Varian TrueBeam linacs from different institutes. Data were stratified per energy and machine and showed no dependence with beam quality and MLC model. The results from clinical pretreatment quality assurance, provided a gamma agreement index (GAI) in the field area for six and ten FFF beams of (99.8 ± 0.3)% and (99.5 ± 0.6)% with distance to agreement and dose difference criteria
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both, the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian egeinvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-01-01
Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm that is on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which are found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrians Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by the unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-01-01
Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm that is on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which are found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrians Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by the unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447
The Chopthin Algorithm for Resampling
NASA Astrophysics Data System (ADS)
Gandy, Axel; Lau, F. Din-Houn
2016-08-01
Resampling is a standard step in particle filters and more generally sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithms chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS ...
VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS 499-501, MUNOZ HOUSE (AZ-73-37) ON FAR RIGHT - Antonio Bustamente House, 485-489 South Meyer Avenue & 186 West Kennedy Street, Tucson, Pima County, AZ
15. Detail showing lower chord pinconnected to vertical member, showing ...
15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit NASA is showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI spectral compression and Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.
Comparison of neuron selection algorithms of wavelet-based neural network
NASA Astrophysics Data System (ADS)
Mei, Xiaodan; Sun, Sheng-He
2001-09-01
Wavelet networks have increasingly received considerable attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently people are interested in employing wavelet functions as activation functions and have obtained some satisfying results in approximating and localizing signals. However, the function estimation will become more and more complex with the growth of the input dimension. The hidden neurons contribute to minimize the approximation error, so it is important to study suitable algorithms for neuron selection. It is obvious that exhaustive search procedure is not satisfying when the number of neurons is large. The study in this paper focus on what type of selection algorithm has faster convergence speed and less error for signal approximation. Therefore, the Genetic algorithm and the Tabu Search algorithm are studied and compared by some experiments. This paper first presents the structure of the wavelet-based neural network, then introduces these two selection algorithms and discusses their properties and learning processes, and analyzes the experiments and results. We used two wavelet functions to test these two algorithms. The experiments show that the Tabu Search selection algorithm's performance is better than the Genetic selection algorithm, TSA has faster convergence rate than GA under the same stopping criterion.
Improved imaging algorithm for bridge crack detection
NASA Astrophysics Data System (ADS)
Lu, Jingxiao; Song, Pingli; Han, Kaihong
2012-04-01
This paper present an improved imaging algorithm for bridge crack detection, through optimizing the eight-direction Sobel edge detection operator, making the positioning of edge points more accurate than without the optimization, and effectively reducing the false edges information, so as to facilitate follow-up treatment. In calculating the crack geometry characteristics, we use the method of extracting skeleton on single crack length. In order to calculate crack area, we construct the template of area by making logical bitwise AND operation of the crack image. After experiment, the results show errors of the crack detection method and actual manual measurement are within an acceptable range, meet the needs of engineering applications. This algorithm is high-speed and effective for automated crack measurement, it can provide more valid data for proper planning and appropriate performance of the maintenance and rehabilitation processes of bridge.
Genetic Algorithms for Multiple-Choice Problems
NASA Astrophysics Data System (ADS)
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems.It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success.Two multiple-choice problems are considered.The first is constructing a feasible nurse roster that considers as many requests as possible.In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income.Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems.However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework.Hence, the main theme of this work is to balance feasibility and cost of solutions.In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches.The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis.To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions.As well as a theoretical discussion as to the underlying reasons for using each operator,extensive computational experiments are carried out on a variety of data.These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constrain involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constrain involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The hereditary algorithm is a kind of random searching and method of optimizing based on living beings natural selection and hereditary mechanism. In recent years, because of the potentiality in solving complicate problems and the successful application in the fields of industrial project, hereditary algorithm has been widely concerned by the domestic and international scholar. Routing Selection communication has been defined a standard communication model of IP version 6.This paper proposes a service model of Routing Selection communication, and designs and implements a new Routing Selection algorithm based on genetic algorithm.The experimental simulation results show that this algorithm can get more resolution at less time and more balanced network load, which enhances search ratio and the availability of network resource, and improves the quality of service.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self- referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This function gas'', or Turning gas'', is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.