Science.gov

Sample records for algorithm produces results

  1. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  2. Wake Vortex Algorithm Scoring Results

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.

  3. Simulation results for the Viterbi decoding algorithm

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.

    1972-01-01

    Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.

  4. MODIL cryocooler producibility demonstration project results

    SciTech Connect

    Cruz, G.E.; Franks, R.M.

    1993-06-24

    The production of large quantities of spacecraft needed by SDIO will require a cultural change in design and production practices. Low-rate production and the need for exceedingly high reliability have driven the industry to custom-designed, hand-crafted, and exhaustively tested satellites. These factors have militated against employing the design and manufacturing cost reduction methods commonly used in tactical missile production. Additional challenges to achieving production efficiencies are presented by SDI spacecraft mission requirements. IR sensor systems, for example, are comprised of subassemblies and components that require the design, manufacture, and maintenance of ultra-precision tolerances over challenging operational lifetimes. These IR sensors demand the use of reliable, closed-loop, cryogenic refrigerators or active cryocoolers to meet stringent system acquisition and pointing requirements. The authors summarize some spacecraft cryocooler requirements and discuss their observations regarding industry's current production capabilities for cryocoolers. The results of the Lawrence Livermore National Laboratory (LLNL) Spacecraft Fabrication and Test (SF and T) MODIL's Phase I producibility demonstration project are presented. The current project, which involves LLNL and industrial participants, is discussed.

  5. The Aquarius Salinity Retrieval Algorithm: Early Results

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

    2012-01-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last process and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparison with in situ measurements, an extensive calibration and validation

  6. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that several well-known iterative algorithms can be deduced from it. The most important results are the convergence proofs. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behaviour (they converge or diverge simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
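
    To make the iteration scheme concrete, here is a minimal Python sketch of the classic Jacobi iteration with a spectral-radius convergence check. It illustrates the standard method only, not the paper's unified backward iterative matrix, and the test system is an arbitrary diagonally dominant example.

      import numpy as np

      def jacobi(A, b, tol=1e-10, max_iter=500):
          D = np.diag(np.diag(A))
          R = A - D                                  # off-diagonal part
          M = -np.linalg.solve(D, R)                 # Jacobi iteration matrix
          rho = max(abs(np.linalg.eigvals(M)))       # spectral radius
          if rho >= 1.0:
              raise ValueError(f"Jacobi iteration diverges: spectral radius {rho:.3f} >= 1")
          x = np.zeros_like(b)
          for _ in range(max_iter):
              x_new = np.linalg.solve(D, b - R @ x)
              if np.linalg.norm(x_new - x, np.inf) < tol:
                  return x_new
              x = x_new
          return x

      # Arbitrary diagonally dominant test system.
      A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
      b = np.array([1.0, 2.0, 3.0])
      print(jacobi(A, b), "vs direct solve:", np.linalg.solve(A, b))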

  7. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model, which enables the estimation of a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated to the desired parameter. Initially a pre-processing algorithm uses a single spectral signature as input and transforms it according to the SPPA function. A k-step combination of SPPAs uses k pre-processing algorithms serially: the result of each SPPA is used as input to the next SPPA, and so on, until the k desired pre-processed signatures are reached. These signatures are then used as input to three different regression methods: the Normalized band Difference Regression (NDR), the Multiple Linear Regression (MLR) and the Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs) are used, one for each regression method, for the selection of the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models. The evaluation indicates not only the optimum SPPA combination but also the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory given its complexity. The MLR method showed severe drawbacks due to noise and collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance. The selected pre-processing algorithms were different for each regression method, since each regression method handles the explanatory variables in a different way.
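
    As a rough illustration of the chaining idea, the Python sketch below applies k = 2 spectral pre-processing steps in sequence and scores each candidate chain by the cross-validated RMS error of a PLS regression. The SPPA set, the synthetic spectra, and the exhaustive search (standing in for the simple genetic algorithm) are all illustrative assumptions, not the paper's setup.

      import itertools
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      # Candidate spectral pre-processing algorithms (SPPAs); purely illustrative.
      sppas = {
          "none": lambda s: s,
          "1st_deriv": lambda s: np.gradient(s, axis=1),
          "snv": lambda s: (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True),
          "log": lambda s: np.log1p(np.abs(s)),
      }

      rng = np.random.default_rng(0)
      spectra = rng.random((60, 120))                       # 60 samples x 120 bands
      som = 2.0 + 3.0 * spectra[:, 40] - 1.5 * spectra[:, 80] + 0.05 * rng.standard_normal(60)

      best = None
      for chain in itertools.product(sppas, repeat=2):      # k = 2 pre-processing steps
          x = spectra.copy()
          for name in chain:                                # apply the SPPAs serially
              x = sppas[name](x)
          pred = cross_val_predict(PLSRegression(n_components=5), x, som, cv=5)
          rmse = np.sqrt(np.mean((pred.ravel() - som) ** 2))
          if best is None or rmse < best[0]:
              best = (rmse, chain)
      print("best SPPA chain:", best[1], "cross-validated RMSE:", round(best[0], 3))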

  8. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  9. A simple algorithm for analyzing uncertainty of accident reconstruction results.

    PubMed

    Zou, Tiefang; Hu, Lin; Li, Pingfan; Wu, Hequan

    2015-12-01

    In order to analyze the uncertainty in accident reconstruction, the uncertainty analysis problem is turned into an extreme value problem, based on extreme value theory and convex model theory. To calculate the range of the dependent variable, the extreme values in the interior of the definition domain and on its boundary are calculated independently, and the upper and lower bounds of the dependent variable are then given by these extreme values. Based on this idea, and through the analysis of five numerical cases, a simple algorithm for calculating the range of an accident reconstruction result is given; the proposed algorithm produced appropriate results in these cases. Finally, for a real-world vehicle-motorcycle accident, the range of the reconstructed velocity of the vehicle was calculated by employing PC-Crash, the response surface methodology and the newly proposed algorithm; the resulting range was 66.1-67.3 km/h. This research provides another choice for uncertainty analysis in accident reconstruction. PMID:26386339
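
    A hedged sketch of the general interval idea follows: the range of a reconstruction result f(x) is bounded by evaluating candidate extreme points in the interior of the uncertainty box (a coarse grid here) and on its boundary (the box corners), and keeping the extreme values found. The toy skid-to-stop model and the interval bounds are illustrative assumptions, not the paper's algorithm or data.

      import itertools
      import numpy as np

      def result_range(f, bounds, grid=11):
          # Interior candidates: a coarse grid over the uncertainty box.
          axes = [np.linspace(lo, hi, grid) for lo, hi in bounds]
          interior = [f(p) for p in itertools.product(*axes)]
          # Boundary candidates: the corners of the box.
          corners = [f(p) for p in itertools.product(*[(lo, hi) for lo, hi in bounds])]
          values = interior + corners
          return min(values), max(values)

      # Toy skid-to-stop model: impact speed (km/h) from friction coefficient and skid length.
      speed_kmh = lambda p: 3.6 * np.sqrt(2.0 * 9.81 * p[0] * p[1])
      # Hypothetical intervals for (friction coefficient, skid distance in metres).
      print(result_range(speed_kmh, [(0.55, 0.65), (18.0, 22.0)]))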

  10. The Effect of Pansharpening Algorithms on the Resulting Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.

    2016-06-01

    This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation was the fact that, for automatically generated Digital Surface Models, an image correlation step is employed for extracting correspondences between the overlapping images. Their accuracy and reliability are therefore strictly related to image quality, while pansharpening may result in lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied in order to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor that of the resulting orthoimagery. Although some residuals in the orthoimages were observed, their magnitude does not adversely affect the accuracy of the final orthoimagery.

  11. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.

  12. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals, such as hard-over, null, or bias shift, were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  13. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, in which they judged the quality of the simulated mammograms obtained using the new algorithm against mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Calculated parameters on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The

  14. Upper cervical injuries: Clinical results using a new treatment algorithm

    PubMed Central

    Joaquim, Andrei F.; Ghizoni, Enrico; Tedeschi, Helder; Yacoub, Alexandre R. D.; Brodke, Darrel S.; Vaccaro, Alexander R.; Patel, Alpesh A.

    2015-01-01

    Introduction: Upper cervical injuries (UCI) have a wide range of radiological and clinical presentations due to the unique complex bony, ligamentous and vascular anatomy. We recently proposed a rational approach in an attempt to unify prior classification systems and guide treatment. In this paper, we evaluate the clinical results of our algorithm for UCI treatment. Materials and Methods: A prospective cohort study of patients with UCI was performed. The primary outcome was the AIS. Surgical treatment was proposed based on our protocol: ligamentous injuries (abnormal misalignment, perched or locked facets, increased atlanto-dens interval) were treated surgically. Bone fractures without ligamentous injuries were treated with a rigid cervical orthosis, with the exception of fractures in the dens base with risk factors for non-union. Results: Twenty-three patients treated initially conservatively had some follow-up (mean of 171 days, range from 60 to 436 days). All of them were neurologically intact. None of the patients developed a new neurological deficit. Fifteen patients were initially surgically treated (mean of 140 days of follow-up, ranging from 60 to 270 days). In the surgical group, preoperatively, 11 (73.3%) patients were AIS E, 2 (13.3%) AIS C and 2 (13.3%) AIS D. At the final follow-up, the American Spinal Injury Association (ASIA) score was: 13 (86.6%) AIS E and 2 (13.3%) AIS D. None of the patients had neurological worsening during the follow-up. Conclusions: This prospective cohort suggested that our UCI treatment algorithm can be safely used. Further prospective studies with longer follow-up are necessary to further establish its clinical validity and safety. PMID:25788816

  15. Syndromes resulting from ectopic hormone-producing tumors.

    PubMed

    Gomez-Uria, A; Pazianos, A G

    1975-03-01

    Among the malignant tumors of nonendocrine origin that are capable of producing polypeptide hormones and of manifesting as endocrine syndromes, those discussed here are the ectopic ACTH syndrome, SIADH, and ectopic gonadotropin-producing tumors. PMID:163945

  16. A Computational Algorithm to Produce Virtual X-ray and Electron Diffraction Patterns from Atomistic Simulations

    NASA Astrophysics Data System (ADS)

    Coleman, Shawn P.; Sichani, Mehrdad M.; Spearot, Douglas E.

    2014-03-01

    Electron and x-ray diffraction are well-established experimental methods used to explore the atomic scale structure of materials. In this work, a computational algorithm is developed to produce virtual electron and x-ray diffraction patterns directly from atomistic simulations. This algorithm advances beyond previous virtual diffraction methods by using a high-resolution mesh of reciprocal space that eliminates the need for a priori knowledge of the crystal structure being modeled or other assumptions concerning the diffraction conditions. At each point on the reciprocal space mesh, the diffraction intensity is computed via explicit computation of the structure factor equation. To construct virtual selected-area electron diffraction patterns, a hemispherical slice of the reciprocal lattice mesh lying on the surface of the Ewald sphere is isolated and viewed along a specified zone axis. X-ray diffraction line profiles are created by binning the intensity of each reciprocal lattice point by its associated scattering angle, effectively mimicking powder diffraction conditions. The virtual diffraction algorithm is sufficiently generic to be applied to atomistic simulations of any atomic species. In this article, the capability and versatility of the virtual diffraction algorithm are exhibited by presenting findings from atomistic simulations of <100> symmetric tilt Ni grain boundaries, nanocrystalline Cu models, and a heterogeneous interface formed between α-Al2O3 (0001) and γ-Al2O3 (111).
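
    The structure-factor step described above can be sketched in a few lines of Python; the example assumes a unit atomic scattering factor and a toy simple-cubic lattice, so it shows the shape of the computation rather than the published implementation.

      import numpy as np

      def diffraction_intensity(positions, k_points):
          # F(k) = sum_j exp(2*pi*i k . r_j); intensity is |F(k)|^2.
          phases = np.exp(2j * np.pi * k_points @ positions.T)   # (n_k, n_atoms)
          F = phases.sum(axis=1)
          return (F * F.conj()).real

      # Toy crystal: 4x4x4 simple-cubic lattice with unit spacing.
      grid = np.arange(4)
      positions = np.array([[x, y, z] for x in grid for y in grid for z in grid], float)
      # Coarse reciprocal-space mesh along one axis; Bragg peaks appear at integer k.
      k_points = np.stack([np.linspace(0.5, 2.0, 61), np.zeros(61), np.zeros(61)], axis=1)
      intensity = diffraction_intensity(positions, k_points)
      print("strongest reflection at k =", k_points[np.argmax(intensity), 0])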

  17. Veggie ISS Validation Test Results and Produce Consumption

    NASA Technical Reports Server (NTRS)

    Massa, Gioia; Hummerick, Mary; Spencer, LaShelle; Smith, Trent

    2015-01-01

    The Veggie vegetable production system flew to the International Space Station (ISS) in the spring of 2014. The first set of plants, Outredgeous red romaine lettuce, was grown, harvested, frozen, and returned to Earth in October. Ground control and flight plant tissue was sub-sectioned for microbial analysis, anthocyanin antioxidant phenolic analysis, and elemental analysis. Microbial analysis was also performed on samples swabbed on orbit from plants, Veggie bellows, and plant pillow surfaces, on water samples, and on samples of roots, media, and wick material from two returned plant pillows. Microbial levels of plants were comparable to ground controls, with some differences in community composition. The range in aerobic bacterial plate counts between individual plants was much greater in the ground controls than in flight plants. No pathogens were found. Anthocyanin concentrations were the same between ground and flight plants, while antioxidant and phenolic levels were slightly higher in flight plants. Elements varied, but key target elements for astronaut nutrition were similar between ground and flight plants. Aerobic plate counts of the flight plant pillow components were significantly higher than ground controls. Surface swab samples showed low microbial counts, with most below detection limits. Flight plant microbial levels were less than bacterial guidelines set for non-thermostabilized food and near or below those for fungi. These guidelines are not for fresh produce but are the closest approximate standards. Forward work includes the development of standards for space-grown produce. A produce consumption strategy for Veggie on ISS includes pre-flight assessments of all crops to down select candidates, wiping flight-grown plants with sanitizing food wipes, and regular Veggie hardware cleaning and microbial monitoring. Produce then could be consumed by astronauts; however, some plant material would be reserved and returned for analysis. Implementation of

  18. Interdisciplinary research produces results in understanding planetary dunes

    USGS Publications Warehouse

    Titus, Timothy N.; Hayward, Rosalyn K.; Dinwiddie, Cynthia L.

    2012-01-01

    Third International Planetary Dunes Workshop: Remote Sensing and Image Analysis of Planetary Dunes; Flagstaff, Arizona, 12–16 June 2012. This workshop, the third in a biennial series, was convened as a means of bringing together terrestrial and planetary researchers from diverse backgrounds with the goal of fostering collaborative interdisciplinary research. The small-group setting facilitated intensive discussions of many problems associated with aeolian processes on Earth, Mars, Venus, Titan, Triton, and Pluto. The workshop produced a list of key scientific questions about planetary dune fields.

  19. A two stage algorithm for target and suspect analysis of produced water via gas chromatography coupled with high resolution time of flight mass spectrometry.

    PubMed

    Samanipour, Saer; Langford, Katherine; Reid, Malcolm J; Thomas, Kevin V

    2016-09-01

    Gas chromatography coupled with high resolution time of flight mass spectrometry (GC-HR-TOFMS) has gained popularity for the target and suspect analysis of complex samples. However, confident detection of target/suspect analytes in complex samples, such as produced water, remains a challenging task. Here we report on the development and validation of a two-stage algorithm for the confident target and suspect analysis of produced water extracts. We performed both target and suspect analysis for 48 standards, which were a mixture of 28 aliphatic hydrocarbons and 20 alkylated phenols, in 3 produced water extracts. The two-stage algorithm produces a chemical standard database of spectra in the first stage, which is used for target and suspect analysis during the second stage. The first stage is carried out through five steps via an algorithm here referred to as the unique ion extractor (UIE). During the first step, the m/z values in the spectrum of a standard that do not belong to that standard are removed in order to produce a clean spectrum, and during the last step the cleaned spectrum is calibrated. The Dot-product algorithm, during the second stage, uses the cleaned and calibrated spectra of the standards for both target and suspect analysis. We performed the target analysis of the 48 standards in all 3 samples via conventional methods in order to validate the two-stage algorithm. The two-stage algorithm was demonstrated to be more robust, reliable, and less sensitive to the signal-to-noise ratio (S/N) than the conventional method. The Dot-product algorithm showed a lower potential for producing false positives than the conventional methods when dealing with complex samples. We also evaluated the effect of the mass accuracy on the performance of the Dot-product algorithm. Our results indicate the crucial importance of HR-MS data and mass accuracy for confident suspect analysis in complex samples. PMID:27524301
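
    As an illustration of the second-stage matching only, the Python sketch below bins spectra to nominal m/z and compares them by a normalised dot-product (cosine) score; the spectra, the bin width and the single-entry library are made-up assumptions, and the first-stage cleaning and calibration are taken as already done.

      import numpy as np

      def bin_spectrum(mz, intensity, max_mz=500):
          # Bin to nominal (unit) m/z and normalise to unit length.
          binned = np.zeros(max_mz + 1)
          np.add.at(binned, np.round(mz).astype(int), intensity)
          return binned / np.linalg.norm(binned)

      def dot_product_score(query, reference):
          return float(np.dot(query, reference))        # cosine score, both unit-normalised

      library = {"naphthalene": bin_spectrum(np.array([128.06, 127.05, 102.05]),
                                             np.array([100.0, 12.0, 8.0]))}
      sample = bin_spectrum(np.array([128.06, 127.05, 64.0]),
                            np.array([95.0, 10.0, 5.0]))
      scores = {name: round(dot_product_score(sample, ref), 3) for name, ref in library.items()}
      print(scores)                                     # a score near 1 indicates a hit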

  20. Massachusetts General Physicians Organization's quality incentive program produces encouraging results.

    PubMed

    Torchiana, David F; Colton, Deborah G; Rao, Sandhya K; Lenz, Sarah K; Meyer, Gregg S; Ferris, Timothy G

    2013-10-01

    Physicians are increasingly becoming salaried employees of hospitals or large physician groups. Yet few published reports have evaluated provider-driven quality incentive programs for salaried physicians. In 2006 the Massachusetts General Physicians Organization began a quality incentive program for its salaried physicians. Eligible physicians were given performance targets for three quality measures every six months. The incentive payments could be as much as 2 percent of a physician's annual income. Over thirteen six-month terms, the program used 130 different quality measures. Although quality-of-care improvements and cost reductions were difficult to calculate, anecdotal evidence points to multiple successes. For example, the program helped physicians meet many federal health information technology meaningful-use criteria and produced $15.5 million in incentive payments. The program also facilitated the adoption of an electronic health record, improved hand hygiene compliance, increased efficiency in radiology and the cancer center, and decreased emergency department use. The program demonstrated that even small incentives tied to carefully structured metrics, priority setting, and clear communication can help change salaried physicians' behavior in ways that improve the quality and safety of health care and ease the physicians' sense of administrative burden. PMID:24101064

  1. Results of drug screening from a producer's view.

    PubMed

    Adams, J B

    1994-07-01

    The dairy industry is faced with increasing governmental and public concern about the safety of the nation's milk supply. New regulations under the Grade A Pasteurized Milk Ordinance require that prescription drugs be properly labeled and that all tanker loads of milk be tested for beta-lactam antimicrobial residues. Concern over the use of animal drugs in an extralabel manner has prompted the National Milk Producers Federation and the American Veterinary Medical Association to develop a quality assurance program for on-farm residue prevention known as the Dairy Quality Assurance 10-Point Milk and Dairy Beef Residue Prevention Protocol. The program promotes the concept of Hazard Analysis Critical Control Points, applied to a pre-harvest farm environment. Screening limitations at the point of milk receipt necessitate widespread adoption of the Dairy Quality Assurance protocol to address controlled use of all animal medications under a valid relationship among veterinarian, client, and animals, thus minimizing the potential for violative residues in the milk and meat supply. PMID:7929955

  2. Process for producing a high emittance coating and resulting article

    NASA Technical Reports Server (NTRS)

    Le, Huong G. (Inventor); O'Brien, Dudley L. (Inventor)

    1993-01-01

    Process for anodizing aluminum or its alloys to obtain a surface having particularly high infrared emittance by anodizing an aluminum or aluminum alloy substrate surface in an aqueous sulfuric acid solution at elevated temperature and by a step-wise current density procedure, followed by sealing the resulting anodized surface. In a preferred embodiment the aluminum or aluminum alloy substrate is first alkaline cleaned and then chemically brightened in an acid bath. The resulting cleaned substrate is anodized in a 15% by weight sulfuric acid bath maintained at a temperature of 30 °C. Anodizing is carried out by a step-wise current density procedure at 19 amperes per square foot (ASF) for 20 minutes, 15 ASF for 20 minutes and 10 ASF for 20 minutes. After anodizing, the sample is sealed by immersion in water at 200 °F and then air dried. The resulting coating has a high infrared emissivity of about 0.92 and a solar absorptivity of about 0.2, for a 5657 aluminum alloy, and a relatively thick anodic coating of about 1 mil.

  3. Test results of LHC interaction regions quadrupoles produced by Fermilab

    SciTech Connect

    Bossert, R.; Carson, J.; Chichili, D.R.; Feher, S.; Kerby, J.; Lamm, M.J.; Nobrega, A.; Nicol, T.; Ogitsu, T.; Orris, D.; Page, T.; Peterson, T.; Rabehl, R.; Robotham, W.; Scanlan, R.; Schlabach, P.; Sylvester, C.; Strait, J.; Tartaglia, M.; Tompkins, J.C.; Velev, G.; /Fermilab

    2004-10-01

    The US-LHC Accelerator Project is responsible for the production of the Q2 optical elements of the final focus triplets in the LHC interaction regions. As part of this program Fermilab is in the process of manufacturing and testing cryostat assemblies (LQXB) containing two identical quadrupoles (MQXB) with a dipole corrector between them. The 5.5 m long Fermilab designed MQXB have a 70 mm aperture and operate in superfluid helium at 1.9 K with a peak field gradient of 215 T/m. This paper summarizes the test results of several production MQXB quadrupoles with emphasis on quench performance and alignment studies. Quench localization studies using quench antenna signals are also presented.

  4. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement for global observational data of ET can be satisfied neither by our sparse global in-situ networks, nor by the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman-Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal

  5. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives library and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an "Intelligent Robot" to "see" for path planning and obstacle avoidance.
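
    A toy Python sketch of the histogram-reduction idea follows (not the published XH-map code, which is written against the Intel IPP and OpenCV libraries): each image column of a disparity map is reduced to a histogram over disparity bins, and well-populated bins mark obstacles at that bearing and range.

      import numpy as np

      def disparity_to_obstacle_map(disparity, n_bins=64, min_count=50):
          rows, cols = disparity.shape
          counts = np.zeros((n_bins, cols), dtype=np.int32)
          bins = np.clip((disparity / 256.0 * n_bins).astype(int), 0, n_bins - 1)
          for c in range(cols):                      # histogram each image column
              counts[:, c] = np.bincount(bins[:, c], minlength=n_bins)
          counts[0, :] = 0                           # disparity 0 = no stereo return
          return counts > min_count                  # boolean "floor plan" map

      disparity = np.zeros((480, 640), dtype=np.float32)
      disparity[200:300, 300:340] = 180.0            # a near obstacle in mid-image
      obstacle_map = disparity_to_obstacle_map(disparity)
      print(obstacle_map.sum(), "occupied map cells")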

  6. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for the detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, to compute simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate for detecting fecal contamination on high-speed apple processing lines.
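
    The kind of simple band function the abstract refers to can be sketched as follows; the specific 680 nm / 720 nm ratio and the threshold used here are illustrative assumptions, not the published detection function.

      import numpy as np

      def detect_contamination(bands, threshold=1.4):
          # bands: dict of 2-D intensity arrays keyed by waveband in nm
          ratio = bands[680] / np.maximum(bands[720], 1e-6)
          return ratio > threshold                   # boolean detection mask

      h, w = 64, 64
      bands = {wb: np.full((h, w), 100.0) for wb in (680, 684, 720, 780)}
      bands[680][20:24, 30:34] = 220.0               # simulated contamination spot
      mask = detect_contamination(bands)
      print(mask.sum(), "pixels flagged")            # expect the 4x4 spot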

  7. Experimental Results in the Comparison of Search Algorithms Used with Room Temperature Detectors

    SciTech Connect

    Guss, P.; Yuan, D.; Cutler, M.; Beller, D.

    2010-11-01

    Analysis of time sequence data was run for several higher resolution scintillation detectors using a variety of search algorithms, and results were obtained in predicting the relative performance for these detectors, which included a slightly superior performance by CeBr3. Analysis of several search algorithms shows that inclusion of the RSPRT methodology can improve sensitivity.

  8. Quality analysis of the solution produced by dissection algorithms applied to the traveling salesman problem

    SciTech Connect

    Cesari, G.

    1994-12-31

    The aim of this paper is to analyze experimentally the quality of the solution obtained with dissection algorithms applied to the geometric Traveling Salesman Problem. Starting from Karp's results, we apply a divide-and-conquer strategy, first dividing the plane into subregions where we calculate optimal subtours and then merging these subtours to obtain the final tour. The analysis is restricted to problem instances where points are uniformly distributed in the unit square. For relatively small sets of cities we analyze the quality of the solution by calculating the length of the optimal tour and by comparing it with our approximate solution. When the problem instance is too large we perform an asymptotic analysis, estimating the length of the optimal tour. We apply the same dissection strategy also to classical heuristics, by calculating approximate subtours and by comparing the results with the average quality of the heuristic. Our main result is the estimate of the rate of convergence of the approximate solution to the optimal solution as a function of the number of dissection steps, of the criterion used for the plane division and of the quality of the subtours. We have implemented our programs on MUSIC (MUlti Signal processor system with Intelligent Communication), a Single-Program-Multiple-Data parallel computer with distributed memory developed at ETH Zurich.
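
    A simplified Python sketch of the dissection strategy is given below: the unit square is split into a grid of cells, a subtour is built in each cell, and the cells are visited in a snake-like order so adjacent subtours join cheaply. Nearest-neighbour routing and the boustrophedon merge stand in for the optimal subtours and the merge step analysed in the paper.

      import numpy as np

      def nearest_neighbour_path(points):
          unvisited = list(range(len(points)))
          path = [unvisited.pop(0)]
          while unvisited:
              last = points[path[-1]]
              nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
              unvisited.remove(nxt)
              path.append(nxt)
          return [points[i] for i in path]

      def dissection_tour(points, k=4):
          # Split the unit square into a k x k grid of cells, route each cell,
          # then visit the cells in a snake-like order so adjacent cells join cheaply.
          cells = [[[] for _ in range(k)] for _ in range(k)]
          for p in points:
              i, j = min(int(p[0] * k), k - 1), min(int(p[1] * k), k - 1)
              cells[i][j].append(p)
          tour = []
          for i in range(k):
              cols = range(k) if i % 2 == 0 else range(k - 1, -1, -1)
              for j in cols:
                  if cells[i][j]:
                      tour.extend(nearest_neighbour_path(np.array(cells[i][j])))
          return np.array(tour)

      rng = np.random.default_rng(0)
      pts = rng.random((400, 2))                    # uniform points in the unit square
      tour = dissection_tour(pts)
      length = np.linalg.norm(np.diff(np.vstack([tour, tour[:1]]), axis=0), axis=1).sum()
      print(f"approximate tour length over 400 points: {length:.2f}")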

  9. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
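
    The following Python sketch shows a plain global-best PSO minimising the RMS error between "measured" and modelled drain currents; a toy square-law MOSFET model replaces the surface-potential model, and all bounds and coefficients are illustrative assumptions rather than the paper's setup.

      import numpy as np

      def model_id(vgs, params):                     # toy square-law drain-current model
          k, vth = params
          return k * np.maximum(vgs - vth, 0.0) ** 2

      rng = np.random.default_rng(1)
      vgs = np.linspace(0.0, 1.8, 40)
      measured = model_id(vgs, (2.0e-4, 0.45)) * (1 + 0.01 * rng.standard_normal(40))

      def rms_error(params):
          return np.sqrt(np.mean((model_id(vgs, params) - measured) ** 2))

      # Plain global-best PSO over the 2-D parameter space (k, Vth).
      n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      lo, hi = np.array([1e-5, 0.0]), np.array([1e-3, 1.0])
      x = rng.uniform(lo, hi, (n, 2))
      v = np.zeros((n, 2))
      pbest, pbest_f = x.copy(), np.array([rms_error(p) for p in x])
      gbest = pbest[pbest_f.argmin()].copy()
      for _ in range(iters):
          r1, r2 = rng.random((n, 2)), rng.random((n, 2))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)
          f = np.array([rms_error(p) for p in x])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[pbest_f.argmin()].copy()
      print("extracted (k, Vth):", gbest, "RMS error:", rms_error(gbest))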

  10. MUlti-Dimensional Spline-Based Estimator (MUSE) for Motion Estimation: Algorithm Development and Initial Results

    PubMed Central

    Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.

    2008-01-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits a maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE
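
    A hedged one-dimensional sketch of the underlying spline idea (not the multidimensional MUSE estimator itself) is shown below: a cubic spline of the reference signal is compared with a delayed copy, and the sum of squared differences is minimised over a continuous, sub-sample delay.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import minimize_scalar

      fs = 50.0                                      # samples per unit time
      t = np.arange(0, 4, 1 / fs)
      true_delay = 0.137                             # a sub-sample shift (1/fs = 0.02)
      signal = lambda tt: np.sin(2 * np.pi * 1.3 * tt) * np.exp(-0.3 * tt)
      reference, delayed = signal(t), signal(t - true_delay)

      spline = CubicSpline(t, reference)             # continuous model of the reference

      def ssd(delay):                                # sum of squared differences on the overlap
          valid = (t - delay >= t[0]) & (t - delay <= t[-1])
          return np.sum((spline(t[valid] - delay) - delayed[valid]) ** 2)

      est = minimize_scalar(ssd, bounds=(0.0, 0.5), method="bounded")
      print(f"true delay {true_delay:.3f}, estimated {est.x:.4f}")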

  11. Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency

    NASA Technical Reports Server (NTRS)

    Gauntner, J. W.

    1980-01-01

    An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial-flow, air-cooled turbine, is implemented as a subroutine for use in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.

  12. Shuttle Entry Air Data System (SEADS) - Optimization of preflight algorithms based on flight results

    NASA Technical Reports Server (NTRS)

    Wolf, H.; Henry, M. W.; Siemers, Paul M., III

    1988-01-01

    The SEADS pressure model algorithm results were tested against other sources of air data, in particular the Shuttle Best Estimated Trajectory (BET). The algorithm basis was also tested through a comparison of flight-measured pressure distribution vs the wind tunnel database. It is concluded that the successful flight of SEADS and the subsequent analysis of the data show good agreement between BET and SEADS air data.

  13. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    NASA Astrophysics Data System (ADS)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.

  14. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms

    PubMed Central

    2014-01-01

    Background Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, each possessed its own merit and claimed to outperform the others. However, these claims were prone to subjectivity, because each algorithm was compared with only a few other algorithms and only a small set of performance indices was used for comparison. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on the proposed performance indices, to conduct a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs may be strong by one measure but weak by another. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to

  15. A comparison of direction finding results from an FFT peak identification technique with those from the music algorithm

    NASA Astrophysics Data System (ADS)

    Montbriand, L. E.

    1991-07-01

    A peak identification technique which uses the fast Fourier transform (FFT) algorithm is presented for unambiguously identifying up to three sources in signals received by the sampled aperture receiving array (SARA) of the Communications Research Center. The technique involves removing phase rotations resulting from the FFT and the data configuration and interpreting this result as the direction cosine distribution of the received signal. The locations and amplitudes of all peaks for one array arm are matched with those in a master list for a single source in order to identify actual sources. The identification of actual sources was found to be subject to the limitations of the FFT in that there was an inherent bias for the secondary and tertiary sources to appear at the side-lobe positions of the strongest source. There appears to be a limit in the ratio of the magnitude of a weaker source to that of the strongest source, below which it becomes too difficult to reliably identify true sources. For the SARA array this ratio is near -10 dB. Some of the data were also analyzed using the more complex MUSIC algorithm, which yields a narrower directional peak for the sources than the FFT. For the SARA array, using ungroomed data, the largest side and grating lobes that the MUSIC algorithm produces are some 10 dB below the largest side and grating lobes produced using the FFT algorithm. Consequently the source-separation problem is less than that encountered using the FFT algorithm, but is not eliminated.
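
    The FFT-based direction-finding idea can be illustrated for a uniform linear array with half-wavelength spacing (an assumption for this sketch, not the SARA geometry): a zero-padded FFT of one array snapshot gives a spectrum over direction cosine, and its peaks mark the source bearings.

      import numpy as np
      from scipy.signal import find_peaks

      n_elements, pad = 16, 512
      u_true, amp = [0.30, -0.55], [1.0, 0.5]        # direction cosines and amplitudes
      n = np.arange(n_elements)
      snapshot = sum(a * np.exp(1j * np.pi * n * u)  # d = lambda/2 gives phase pi*n*u
                     for a, u in zip(amp, u_true))
      rng = np.random.default_rng(2)
      snapshot = snapshot + 0.01 * (rng.standard_normal(n_elements)
                                    + 1j * rng.standard_normal(n_elements))

      spectrum = np.abs(np.fft.fftshift(np.fft.fft(snapshot, pad))) / n_elements
      u_axis = np.fft.fftshift(np.fft.fftfreq(pad)) * 2.0   # FFT bin -> direction cosine
      peaks, props = find_peaks(spectrum, height=0.3)
      top = peaks[np.argsort(props["peak_heights"])[-2:]]
      print("recovered direction cosines:", np.sort(u_axis[top]))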

  16. Using a hybrid Monte Carlo/Genetic Algorithm Slip Estimator to produce high resolution models of paleoearthquakes from geodetic data

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nalbant, S. S.; Simao, N.; Murphy, S.; NicBhloscaidh, M.; Steacy, S.

    2013-12-01

    Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on locked sections of an active fault is stored as potential slip. Where this potential slip remains unreleased during earthquakes, a slip deficit can be said to have accrued. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip and indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated instrumentally. To develop the idea of long-term slip-deficit modelling it is necessary to constrain the size and distribution of slip for pre-instrumental events dating back hundreds of years, covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of producing high resolution reconstructions of slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them, which allows them to act as long-term geodetic recorders. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Instead of producing one definite model satisfying the observed coral displacements, a Monte Carlo Slip Estimator based on a Genetic Algorithm (MCSE-GA) accelerating the rate of convergence is used to identify a suite of models consistent with the data. Successive iterations of the MCSE-GA sample different displacements at each coral location, from within the spread of associated uncertainties, producing a catalog of models from the full range of possibilities. The suite of best slip distributions are weighted according to their fitness and stacked to

  17. A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study.

    PubMed

    Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy

    2016-08-01

    Tumor volume estimation, as well as accurate and reproducible borders segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05) underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies. PMID:26847203
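
    The two kinds of comparison used in the study, spatial overlap between segmentations and volume bias against a known phantom volume, can be computed as in the short Python sketch below; the masks, voxel size, and phantom volume are made-up values for illustration.

      import numpy as np

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def volume_ml(mask, voxel_mm3):
          return mask.sum() * voxel_mm3 / 1000.0      # mm^3 -> ml

      shape, voxel_mm3 = (64, 64, 64), 0.7 * 0.7 * 2.5
      seg_a = np.zeros(shape, bool); seg_a[20:40, 20:40, 20:30] = True
      seg_b = np.zeros(shape, bool); seg_b[22:40, 20:42, 20:30] = True
      phantom_volume_ml = 5.0                         # hypothetical known phantom volume

      print(f"Dice overlap: {dice(seg_a, seg_b):.3f}")
      print(f"volume bias of A: {volume_ml(seg_a, voxel_mm3) - phantom_volume_ml:+.2f} ml")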

  18. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons sat in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383
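
    A hedged sketch of the basic measurement chain (chest-distance signal to dominant spectral component), not the published algorithm: the mean depth over a chest region of interest is tracked across frames and the respiratory rate is read off the strongest peak in the respiration band.

      import numpy as np

      def respiratory_rate_bpm(depth_frames, roi, fs):
          r0, r1, c0, c1 = roi
          signal = depth_frames[:, r0:r1, c0:c1].mean(axis=(1, 2))
          signal = signal - signal.mean()                  # remove the static chest distance
          spectrum = np.abs(np.fft.rfft(signal))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          band = (freqs >= 0.1) & (freqs <= 0.7)           # roughly 6-42 breaths/min
          return 60.0 * freqs[band][np.argmax(spectrum[band])]

      fs, seconds, rate_hz = 30.0, 32, 0.25                # simulate 15 breaths/min
      t = np.arange(int(fs * seconds)) / fs
      frames = np.full((len(t), 60, 80), 1500.0)           # depth in mm from the camera
      frames[:, 20:40, 30:50] += 6.0 * np.sin(2 * np.pi * rate_hz * t)[:, None, None]
      print(respiratory_rate_bpm(frames, (20, 40, 30, 50), fs), "breaths per minute")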

  19. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluation. PMID:24951685

  20. Algorithms for detecting antibodies to HIV-1: results from a rural Ugandan cohort.

    PubMed

    Nunn, A J; Biryahwaho, B; Downing, R G; van der Groen, G; Ojwiya, A; Mulder, D W

    1993-08-01

    Although the Western blot test is widely used to confirm HIV-1 serostatus, concerns over its additional cost have prompted review of the need for supplementary testing and the evaluation of alternative test algorithms. Serostatus tends to be confirmed with this additional test, especially when tested individuals will be informed of their serostatus or when results will be used for research purposes. The confirmation procedure has been adopted as a means of securing suitably high levels of specificity and sensitivity. With the goal of exploring potential alternatives to Western blot confirmation, the authors describe the use of parallel testing with a competitive and an indirect enzyme immunoassay with and without supplementary Western blots. Sera were obtained from 7895 people in the rural population survey and tested with an algorithm based on the Recombigen HIV-1 EIA and Wellcozyme HIV-1 Recombinant; alternative algorithms were assessed on negative or confirmed positive sera. None of the 227 sera classified as negative by the 2 assays were positive by Western blot. Of the 192 identified as positive by both assays, 4 were found to be seronegative with Western blot. The possibility of technical error does, however, exist for 3 of these latter cases. One of the alternative algorithms assessed classified all borderline or discordant assay results as negative with 100% specificity and 98.4% sensitivity. This particular algorithm costs only one-third the price of the conventional algorithm. These results therefore suggest that high specificity and sensitivity may be obtained without using Western blot and at a considerable reduction in cost. PMID:8397940
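
    The parallel two-assay decision rule described above can be sketched as a small classification function. The labels and the rule below are an illustration of the alternative algorithm discussed in the abstract, not the exact laboratory protocol.

```python
# Sketch of a parallel two-assay decision rule: both assays reactive -> positive,
# both non-reactive -> negative, and borderline or discordant combinations
# classified negative, as in the alternative algorithm the abstract describes.
def classify_serostatus(competitive_eia: str, indirect_eia: str) -> str:
    """Each argument is 'reactive', 'non-reactive' or 'borderline'."""
    results = {competitive_eia, indirect_eia}
    if results == {"reactive"}:
        return "positive"
    if results == {"non-reactive"}:
        return "negative"
    # Discordant or borderline results are classified negative in this scheme.
    return "negative"

print(classify_serostatus("reactive", "reactive"))      # positive
print(classify_serostatus("reactive", "borderline"))    # negative
```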

  1. Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers

    SciTech Connect

    Seifert, Carolyn E.; He, Zhong

    2005-10-01

    For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.
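
    A kinematic-consistency check of the kind used by simple two-interaction sequencing schemes can be sketched as follows. This is a generic illustration built on the Compton scattering formula under a full-absorption assumption, not one of the specific algorithms reviewed in the paper.

```python
# Sketch: for an assumed ordering (first deposit e1, second/final deposit e2,
# full absorption assumed), the Compton formula gives the cosine of the first
# scattering angle; orderings yielding |cos(theta)| > 1 are unphysical.
MEC2 = 511.0  # electron rest energy, keV

def cos_scatter_angle(e1_kev: float, e2_kev: float):
    """cos(theta) at the first site if e1 is deposited first and e2 second."""
    e0 = e1_kev + e2_kev            # incident energy (full-absorption assumption)
    return 1.0 - MEC2 * (1.0 / e2_kev - 1.0 / e0)

def plausible_orderings(e1_kev: float, e2_kev: float):
    """Return the orderings of two deposits that are kinematically allowed."""
    allowed = []
    for first, second in [(e1_kev, e2_kev), (e2_kev, e1_kev)]:
        if -1.0 <= cos_scatter_angle(first, second) <= 1.0:
            allowed.append((first, second))
    return allowed

print(plausible_orderings(150.0, 512.0))   # here only one ordering survives
```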

  2. The design and results of an algorithm for intelligent ground vehicles

    NASA Astrophysics Data System (ADS)

    Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.

    2010-01-01

    This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is comprised of undergraduate computer science, engineering technology, marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.

  3. Correlation between standard plate count and somatic cell count milk quality results for Wisconsin dairy producers.

    PubMed

    Borneman, Darand L; Ingham, Steve

    2014-05-01

    The objective of this study was to determine if a correlation exists between standard plate count (SPC) and somatic cell count (SCC) monthly reported results for Wisconsin dairy producers. Such a correlation may indicate that Wisconsin producers effectively controlling sanitation and milk temperature (reflected in low SPC) also have implemented good herd health management practices (reflected in low SCC). The SPC and SCC results for all grade A and B dairy producers who submitted results to the Wisconsin Department of Agriculture, Trade, and Consumer Protection, in each month of 2012 were analyzed. Grade A producer SPC results were less dispersed than grade B producer SPC results. Regression analysis showed a highly significant correlation between SPC and SCC, but the R(2) value was very small (0.02-0.03), suggesting that many other factors, besides SCC, influence SPC. Average SCC (across 12 mo) for grade A and B producers decreased with an increase in the number of monthly SPC results (out of 12) that were ≤ 25,000 cfu/mL. A chi-squared test of independence showed that the proportion of monthly SCC results >250,000 cells/mL varied significantly depending on whether the corresponding SPC result was ≤ 25,000 or >25,000 cfu/mL. This significant difference occurred in all months of 2012 for grade A and B producers. The results suggest that a generally consistent level of skill exists across dairy production practices affecting SPC and SCC. PMID:24630657
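
    The two statistical checks described, a regression of SPC on SCC and a chi-squared test of independence between the SPC and SCC quality categories, can be sketched as below. The column names and the tiny demo table are assumptions for illustration, not the Wisconsin data set.

```python
# Sketch: OLS regression of SPC on SCC, plus a chi-squared test of independence
# between "SPC <= 25,000 cfu/mL" and "SCC > 250,000 cells/mL" categories.
import pandas as pd
from scipy import stats

def monthly_quality_checks(df: pd.DataFrame):
    """df has numeric columns 'spc_cfu_ml' and 'scc_cells_ml' (one row per producer-month)."""
    res = stats.linregress(df["scc_cells_ml"], df["spc_cfu_ml"])
    low_spc = df["spc_cfu_ml"] <= 25_000
    high_scc = df["scc_cells_ml"] > 250_000
    table = pd.crosstab(low_spc, high_scc)          # 2x2 contingency table
    chi2, p, _, _ = stats.chi2_contingency(table)
    return {"r_squared": res.rvalue ** 2, "slope_p": res.pvalue,
            "chi2": chi2, "chi2_p": p}

demo = pd.DataFrame({"spc_cfu_ml": [8_000, 30_000, 12_000, 60_000],
                     "scc_cells_ml": [150_000, 400_000, 200_000, 500_000]})
print(monthly_quality_checks(demo))
```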

  4. Results from the New IGS Time Scale Algorithm (version 2.0)

    NASA Astrophysics Data System (ADS)

    Senior, K.; Ray, J.

    2009-12-01

    Since 2004 the IGS Rapid and Final clock products have been aligned to a highly stable time scale derived from a weighted ensemble of clocks in the IGS network. The time scale is driven mostly by Hydrogen Maser ground clocks though the GPS satellite clocks also carry non-negligible weight, resulting in a time scale having a one-day frequency stability of about 1E-15. However, because of the relatively simple weighting scheme used in the time scale algorithm and because the scale is aligned to UTC by steering it to GPS Time the resulting stability beyond several days suffers. The authors present results of a new 2.0 version of the IGS time scale highlighting the improvements to the algorithm, new modeling considerations, as well as improved time scale stability.

  5. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, as a new feature, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  6. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
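
    The general mixed-uncertainty pattern the report describes, an outer sweep over epistemic (interval-valued) inputs that brackets an inner aleatory statistic, can be sketched as below. The toy model, intervals and sample size are placeholders; the report's actual machinery uses stochastic expansions and interval optimization rather than brute-force sampling and grid search.

```python
# Sketch of nested mixed aleatory-epistemic UQ: the inner loop samples the
# aleatory input for fixed epistemic parameters; the outer loop brackets the
# inner statistic over the epistemic interval.
import numpy as np

def inner_aleatory_stat(model, epistemic, n_samples=2_000, rng=None):
    """Mean response over aleatory samples for fixed epistemic parameters."""
    rng = np.random.default_rng(rng)
    aleatory = rng.normal(loc=0.0, scale=1.0, size=n_samples)   # assumed N(0,1) input
    return np.mean([model(a, epistemic) for a in aleatory])

def outer_epistemic_bounds(model, interval, n_grid=41):
    """Bracket the inner statistic over an epistemic interval by grid search."""
    grid = np.linspace(interval[0], interval[1], n_grid)
    values = [inner_aleatory_stat(model, e, rng=0) for e in grid]
    return min(values), max(values)

# Toy model: response depends on one aleatory and one epistemic variable.
model = lambda a, e: (a + e) ** 2
print(outer_epistemic_bounds(model, interval=(-1.0, 1.0)))   # ~ (1.0, 2.0)
```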

  7. A treatment algorithm for patients with large skull bone defects and first results.

    PubMed

    Lethaus, Bernd; Ter Laak, Marielle Poort; Laeven, Paul; Beerens, Maikel; Koper, David; Poukens, Jules; Kessler, Peter

    2011-09-01

    Large skull bone defects resulting from craniotomies due to cerebral insults, trauma or tumours create functional and aesthetic disturbances to the patient. The reconstruction of large osseous defects is still challenging. A treatment algorithm is presented based on the close interaction of radiologists, computer engineers and cranio-maxillofacial surgeons. From 2004 until today, twelve consecutive patients have been operated on successfully according to this treatment plan. Titanium and polyetheretherketone (PEEK) were used to manufacture the implants. The treatment algorithm has proved to be reliable. No corrections had to be performed either to the skull bone or to the implant. Short operations and hospitalization periods are essential prerequisites for treatment success and justify the high expenses. PMID:21055960

  8. Knowledge-Aided Multichannel Adaptive SAR/GMTI Processing: Algorithm and Experimental Results

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhu, Daiyin; Zhu, Zhaoda

    2010-12-01

    The multichannel synthetic aperture radar ground moving target indication (SAR/GMTI) technique is a simplified implementation of space-time adaptive processing (STAP), which has proved feasible over the past decades. However, its detection performance will be degraded in heterogeneous environments due to the rapidly varying clutter characteristics. Knowledge-aided (KA) STAP provides an effective way to deal with the nonstationary problem in real-world clutter environments. Based on the KA STAP methods, this paper proposes a KA algorithm for adaptive SAR/GMTI processing in heterogeneous environments. It reduces the required sample support through its fast convergence properties and is robust to non-stationary clutter distributions relative to the traditional adaptive SAR/GMTI scheme. Experimental clutter suppression results are employed to verify the merit of this algorithm.
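
    One common knowledge-aided idea, blending an a priori clutter covariance with the sample covariance before forming the adaptive weight, is sketched below. The blending weight, the loading level and the toy geometry are illustrative assumptions; the paper's specific KA SAR/GMTI formulation is not reproduced here.

```python
# Sketch: knowledge-aided covariance blending followed by an MVDR-style weight.
import numpy as np

def ka_weight(snapshots, r_prior, steering, alpha=0.5, loading=1e-3):
    """snapshots: (N, K) complex data; r_prior: (N, N) a priori covariance."""
    n, k = snapshots.shape
    r_sample = snapshots @ snapshots.conj().T / k
    r_ka = alpha * r_prior + (1.0 - alpha) * r_sample
    r_ka += loading * np.trace(r_ka).real / n * np.eye(n)   # diagonal loading
    w = np.linalg.solve(r_ka, steering)
    return w / (steering.conj().T @ w)                       # MVDR normalization

# Tiny example: 4 channels, 8 snapshots, identity prior, boresight steering.
rng = np.random.default_rng(1)
x = (rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))) / np.sqrt(2)
print(ka_weight(x, np.eye(4), np.ones(4, dtype=complex)).shape)   # (4,)
```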

  9. Performance analysis results of a battery fuel gauge algorithm at multiple temperatures

    NASA Astrophysics Data System (ADS)

    Balasingam, B.; Avvari, G. V.; Pattipati, K. R.; Bar-Shalom, Y.

    2015-01-01

    Evaluating a battery fuel gauge (BFG) algorithm is a challenging problem because there are no reliable mathematical models to represent the complex features of a Li-ion battery, such as hysteresis and relaxation effects, temperature effects on parameters, aging, power fade (PF), and capacity fade (CF) with respect to the chemical composition of the battery. The existing literature is largely focused on developing different BFG strategies, and BFG validation has received little attention. In this paper, using hardware-in-the-loop (HIL) data collected from three Li-ion batteries at nine different temperatures ranging from -20 °C to 40 °C, we demonstrate detailed validation results of a BFG algorithm. The validation is based on three different BFG metrics; we provide implementation details of these metrics by proposing three BFG validation load profiles that satisfy varying levels of user requirements.

  10. Algorithms for personalized therapy of type 2 diabetes: results of a web-based international survey

    PubMed Central

    Gallo, Marco; Mannucci, Edoardo; De Cosmo, Salvatore; Gentile, Sandro; Candido, Riccardo; De Micheli, Alberto; Di Benedetto, Antonino; Esposito, Katherine; Genovese, Stefano; Medea, Gerardo; Ceriello, Antonio

    2015-01-01

    Objective In recent years, increasing interest in the issue of treatment personalization for type 2 diabetes (T2DM) has emerged. This international web-based survey aimed to evaluate opinions of physicians about tailored therapeutic algorithms developed by the Italian Association of Diabetologists (AMD) and available online, and to get suggestions for future developments. Another aim of this initiative was to assess whether the online advertising and the survey would increase the global visibility of the AMD algorithms. Research design and methods The web-based survey, which comprised five questions, has been available from the homepage of the web-version of the journal Diabetes Care throughout the month of December 2013, and on the AMD website between December 2013 and September 2014. Participation was totally free and responders were anonymous. Results Overall, 452 physicians (M=58.4%) participated in the survey. Diabetologists accounted for 76.8% of responders. The results of the survey show wide agreement (>90%) by participants on the utility of the algorithms proposed, even if they do not cover all possible needs of patients with T2DM for a personalized therapeutic approach. In the online survey period and in the months after its conclusion, a relevant and durable increase in the number of unique users who visited the websites was registered, compared to the period preceding the survey. Conclusions Patients with T2DM are heterogeneous, and there is interest in accessible and easy-to-use personalized therapeutic algorithms. Responders' opinions probably reflect the peculiar organization of diabetes care in each country. PMID:26301097

  11. First Results from the OMI Rotational Raman Scattering Cloud Pressure Algorithm

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Vasilkov, Alexander P.

    2006-01-01

    We have developed an algorithm to retrieve scattering cloud pressures and other cloud properties with the Aura Ozone Monitoring Instrument (OMI). The scattering cloud pressure is retrieved using the effects of rotational Raman scattering (RRS). It is defined as the pressure of a Lambertian surface that would produce the observed amount of RRS consistent with the derived reflectivity of that surface. The independent pixel approximation is used in conjunction with the Lambertian-equivalent reflectivity model to provide an effective radiative cloud fraction and scattering pressure in the presence of broken or thin cloud. The derived cloud pressures will enable accurate retrievals of trace gas mixing ratios, including ozone, in the troposphere within and above clouds. We describe details of the algorithm that will be used for the first release of these products. We compare our scattering cloud pressures with cloud-top pressures and other cloud properties from the Aqua Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument. OMI and MODIS are part of the so-called A-train satellites flying in formation within 30 min of each other. Differences between OMI and MODIS are expected because the MODIS observations in the thermal infrared are more sensitive to the cloud top whereas the backscattered photons in the ultraviolet can penetrate deeper into clouds. Radiative transfer calculations are consistent with the observed differences. The OMI cloud pressures are shown to be correlated with the cirrus reflectance. This relationship indicates that OMI can probe through thin or moderately thick cirrus to lower lying water clouds.
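
    The mixed Lambertian-equivalent reflectivity idea referred to above, modeling the measured scene as a mix of a clear-sky reflectivity and an assumed opaque-cloud reflectivity to obtain an effective radiative cloud fraction, can be sketched in a few lines. The 0.8 cloud reflectivity and the clear-sky value below are illustrative assumptions, not the operational OMI settings.

```python
# Sketch of the independent-pixel / mixed-LER model: R_meas = f*R_cloud + (1-f)*R_clear,
# solved for the effective radiative cloud fraction f.
def effective_cloud_fraction(r_measured, r_clear=0.05, r_cloud=0.8):
    """Effective radiative cloud fraction from the mixed-LER model."""
    f = (r_measured - r_clear) / (r_cloud - r_clear)
    return min(max(f, 0.0), 1.0)    # clip to the physical range

print(effective_cloud_fraction(0.35))   # 0.4 for these assumed reflectivities
```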

  12. Six clustering algorithms applied to the WAIS-R: the problem of dissimilar cluster results.

    PubMed

    Fraboni, M; Cooper, D

    1989-11-01

    Clusterings of the Wechsler Adult Intelligence Scale-Revised subtests were obtained from the application of six hierarchical clustering methods (N = 113). These sets of clusters were compared for similarities using the Rand index. The calculated indices suggested similarities of cluster group membership between the Complete Linkage and Centroid methods; Complete Linkage and Ward's methods; Centroid and Ward's methods; and Single Linkage and Average Linkage Between Groups methods. Cautious use of single clustering methods is implied, though the authors suggest some advantages of knowing specific similarities and differences. If between-method comparisons consistently reveal similar cluster membership, a choice could be made from those algorithms that tend to produce similar partitions, thereby enhancing cluster interpretation. PMID:2613904
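
    The Rand index used for these between-method comparisons is simply the fraction of object pairs on which two partitions agree. A minimal sketch, with made-up cluster labels, follows.

```python
# Sketch: Rand index between two clusterings = fraction of object pairs that are
# either grouped together in both partitions or separated in both.
from itertools import combinations

def rand_index(labels_a, labels_b):
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
        total += 1
    return agree / total

# Two clusterings of six subtests that differ on one object:
print(rand_index([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 1]))  # 0.8
```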

  13. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of SM Orion Main Engine (OME), Auxiliary (Aux) Jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  14. Fast and robust segmentation of solar EUV images: algorithm and results for solar cycle 23

    NASA Astrophysics Data System (ADS)

    Barra, V.; Delouille, V.; Kretzschmar, M.; Hochedez, J.-F.

    2009-10-01

    Context: The study of the variability of the solar corona and the monitoring of coronal holes, quiet sun and active regions are of great importance in astrophysics as well as for space weather and space climate applications. Aims: In a previous work, we presented the spatial possibilistic clustering algorithm (SPoCA). This is a multi-channel unsupervised spatially-constrained fuzzy clustering method that automatically segments solar extreme ultraviolet (EUV) images into regions of interest. The results we reported on SoHO-EIT images taken from February 1997 to May 2005 were consistent with previous knowledge in terms of both areas and intensity estimations. However, they presented some artifacts due to the method itself. Methods: Herein, we propose a new algorithm, based on SPoCA, that removes these artifacts. We focus on two points: the definition of an optimal clustering with respect to the regions of interest, and the accurate definition of the cluster edges. We moreover propose methodological extensions to this method, and we illustrate these extensions with the automatic tracking of active regions. Results: The much improved algorithm can decompose the whole set of EIT solar images over the 23rd solar cycle into regions that can clearly be identified as quiet sun, coronal hole and active region. The variations of the parameters resulting from the segmentation, i.e. the area, mean intensity, and relative contribution to the solar irradiance, are consistent with previous results and thus validate the decomposition. Furthermore, we find indications for a small variation of the mean intensity of each region in correlation with the solar cycle. Conclusions: The method is generic enough to allow the introduction of other channels or data. New applications are now expected, e.g. related to SDO-AIA data.
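
    At the heart of possibilistic/fuzzy segmentation schemes of this family is an alternating membership and centroid update. The sketch below is plain fuzzy c-means on pixel intensities; the spatial constraints and possibilistic memberships of the actual SPoCA algorithm are not included.

```python
# Sketch: fuzzy c-means, alternating membership and centroid updates.
import numpy as np

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """x: (n_pixels, n_channels) intensities. Returns (memberships, centers)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))          # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u.T ** m @ x) / (u.T ** m).sum(axis=1, keepdims=True)
    return u, centers

# Toy single-channel "image" with three intensity populations:
x = np.concatenate([np.random.normal(mu, 0.05, 200) for mu in (0.1, 0.5, 0.9)])
u, c = fuzzy_cmeans(x[:, None])
print(np.sort(c.ravel()))   # approximately [0.1, 0.5, 0.9]
```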

  15. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version 6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. The objective of this research is to prepare for the generation of long-term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  16. Active Learning in Large Classes: Can Small Interventions Produce Greater Results than Are Statistically Predictable?

    ERIC Educational Resources Information Center

    Adrian, Lynne M.

    2010-01-01

    Six online postings and six one-minute papers were added to an introductory first-year class, forming 5 percent of the final grade, but represented significant intervention in class functioning and amount of active learning. Active learning produced results in student performance beyond the percentage of the final grade it constituted. (Contains 1…

  17. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  18. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  19. A Formal Algorithm for Verifying the Validity of Clustering Results Based on Model Checking

    PubMed Central

    Huang, Shaobin; Cheng, Yuan; Lang, Dapeng; Chi, Ronghua; Liu, Guofeng

    2014-01-01

    The limitations in general methods to evaluate clustering will remain difficult to overcome if verifying the clustering validity continues to be based on clustering results and evaluation index values. This study focuses on a clustering process to analyze crisp clustering validity. First, we define the properties that must be satisfied by valid clustering processes and model clustering processes based on program graphs and transition systems. We then recast the analysis of clustering validity as the problem of verifying whether the model of clustering processes satisfies the specified properties with model checking. That is, we try to build a bridge between clustering and model checking. Experiments on several datasets indicate the effectiveness and suitability of our algorithms. Compared with traditional evaluation indices, our formal method can not only indicate whether the clustering results are valid but, in the case the results are invalid, can also detect the objects that have led to the invalidity. PMID:24608823

  20. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
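
    The ROC characterization mentioned above reduces, for a single detector, to sweeping a threshold over detection scores and plotting true-positive against false-positive rates. The sketch below uses synthetic scores and labels; the confidence intervals and sample-size analysis the paper emphasizes are not shown.

```python
# Sketch: ROC curve and area under the curve for a scalar detector score.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(200), np.zeros(2000)])        # mines vs. clutter
scores = np.concatenate([rng.normal(1.5, 1.0, 200),            # detector output
                         rng.normal(0.0, 1.0, 2000)])
fpr, tpr, thresholds = roc_curve(labels, scores)
print("area under ROC curve:", round(auc(fpr, tpr), 3))
```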

  1. Assessing the Public Health Risk of Shiga Toxin-Producing Escherichia coli by Use of a Rapid Diagnostic Screening Algorithm

    PubMed Central

    Ferdous, Mithila; Ott, Alewijn; Scheper, Henk R.; Wisselink, Guido J.; Heck, Max E.; Rossen, John W.; Kooistra-Smid, Anna M. D.

    2015-01-01

    Shiga toxin-producing Escherichia coli (STEC) is an enteropathogen of public health concern because of its ability to cause serious illness and outbreaks. In this prospective study, a diagnostic screening algorithm to categorize STEC infections into risk groups was evaluated. The algorithm consists of prescreening stool specimens with real-time PCR (qPCR) for the presence of stx genes. The qPCR-positive stool samples were cultured in enrichment broth and again screened for stx genes and additional virulence factors (escV, aggR, aat, bfpA) and O serogroups (O26, O103, O104, O111, O121, O145, O157). Also, PCR-guided culture was performed with sorbitol MacConkey agar (SMAC) and CHROMagar STEC medium. The presence of virulence factors and O serogroups was used for presumptive pathotype (PT) categorization in four PT groups. The potential risk for severe disease was categorized from high risk for PT group I to low risk for PT group III, whereas PT group IV consists of unconfirmed stx qPCR-positive samples. In total, 5,022 stool samples of patients with gastrointestinal symptoms were included. The qPCR detected stx genes in 1.8% of samples. Extensive screening for virulence factors and O serogroups was performed on 73 samples. After enrichment, the presence of stx genes was confirmed in 65 samples (89%). By culture on selective media, STEC was isolated in 36% (26/73 samples). Threshold cycle (CT) values for stx genes were significantly lower after enrichment compared to direct qPCR (P < 0.001). In total, 11 (15%), 19 (26%), 35 (48%), and 8 (11%) samples were categorized into PT groups I, II, III, and IV, respectively. Several virulence factors (stx2, stx2a, stx2f, toxB, eae, efa1, cif, espA, tccP, espP, nleA and/or nleB, tir cluster) were associated with PT groups I and II, while others (stx1, eaaA, mch cluster, ireA) were associated with PT group III. Furthermore, the number of virulence factors differed between PT groups (analysis of variance, P < 0.0001). In

  2. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
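
    The general idea behind pressure-based reconstruction, fitting a modeled surface-pressure distribution to the measured port pressures, can be sketched as a small nonlinear least-squares problem. The port geometry, the modified-Newtonian pressure law, the Cp maximum and the noise level below are illustrative assumptions; the flight algorithms are considerably more complete (sideslip, Mach number, Kalman filtering, etc.).

```python
# Sketch: estimate dynamic pressure and angle of attack from forebody port
# pressures using a modified-Newtonian pressure model and least squares.
import numpy as np
from scipy.optimize import least_squares

PORT_ANGLES = np.radians([-40.0, -20.0, 0.0, 20.0, 40.0])  # assumed port locations
CP_MAX = 1.83                                              # assumed Newtonian maximum

def model_pressures(qbar, alpha, p_inf=500.0):
    """Surface pressures (Pa) at the ports for given dynamic pressure and AoA."""
    cp = CP_MAX * np.cos(PORT_ANGLES - alpha) ** 2
    return p_inf + qbar * cp

def reconstruct(measured, p_inf=500.0):
    """Estimate (dynamic pressure, angle of attack) from measured port pressures."""
    resid = lambda x: model_pressures(x[0], x[1], p_inf) - measured
    sol = least_squares(resid, x0=[1000.0, 0.0])
    return sol.x  # [qbar_Pa, alpha_rad]

truth = model_pressures(4000.0, np.radians(5.0))
noisy = truth + np.random.normal(0.0, 5.0, truth.shape)
print(reconstruct(noisy))   # close to [4000, 0.087]
```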

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
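
    As a concrete illustration of a high-order explicit finite-difference step (not one of the paper's specific schemes), the sketch below advects a sine wave with a standard sixth-order central first-derivative stencil and a small explicit time step.

```python
# Sketch: one explicit step for the 1-D advection equation u_t + c u_x = 0
# using a 6th-order central difference in space with periodic boundaries.
import numpy as np

def advect_step(u, c, dx, dt):
    """One explicit Euler step; stencil weights (-1, 9, -45, 0, 45, -9, 1)/(60*dx)."""
    ux = (-np.roll(u, 3) + 9 * np.roll(u, 2) - 45 * np.roll(u, 1)
          + 45 * np.roll(u, -1) - 9 * np.roll(u, -2) + np.roll(u, -3)) / (60 * dx)
    return u - c * dt * ux

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)
for _ in range(100):
    u = advect_step(u, c=1.0, dx=x[1] - x[0], dt=1e-3)
print(np.max(np.abs(u - np.sin(x - 0.1))))   # small error after advecting by 0.1
```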

  4. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, University of Florida, Duke University and University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  5. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the 6 degree of freedom state vector of the instrument as a function of time was restored from super-resolution data. The results of comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.

  6. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the 6 degree of freedom state vector of the instrument as a function of time was restored from super-resolution data. The results of comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.

  7. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, J.

    2015-12-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. The Goddard DISC has generated AIRS/AMSU retrieval products, extending from September 2002 through real time, using the AIRS Science Team Version-6 retrieval algorithm. Level-3 gridded monthly mean values of these products, generated using AIRS Version-6, form a state of the art multi-year set of Climate Data Records (CDRs), which is expected to continue through 2022 and possibly beyond, as the AIRS instrument is extremely stable. The goal of this research is to develop and implement a CrIS/ATMS retrieval system to generate CDRs that are compatible with, and are of comparable quality to, those generated operationally using AIRS/AMSU data. The AIRS Science Team has made considerable improvements in AIRS Science Team retrieval methodology and is working on the development of an improved AIRS Science Team Version-7 retrieval methodology to be used to reprocess all AIRS data in the relatively near future. Research is underway by Dr. Susskind and co-workers at the NASA GSFC Sounder Research Team (SRT) towards the finalization of the AIRS Version-7 retrieval algorithm, the current version of which is called SRT AIRS Version-6.22. Dr. Susskind and co-workers have developed analogous retrieval methodology for analysis of CrIS/ATMS data, called SRT CrIS Version-6.22. Results will be presented that show that AIRS and CrIS products derived using a common further improved retrieval algorithm agree closely with each other and are both superior to AIRS Version 6. The goal of the AIRS Science Team is to continue to improve both AIRS and CrIS retrieval products and then use the improved retrieval methodology for the processing of past and

  8. One-year results of an algorithmic approach to managing failed back surgery syndrome

    PubMed Central

    Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia

    2014-01-01

    BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573

  9. Aircraft-Produced Ice Particles (APIPs): Additional Results and Further Insights.

    NASA Astrophysics Data System (ADS)

    Woodley, William L.; Gordon, Glenn; Henderson, Thomas J.; Vonnegut, Bernard; Rosenfeld, Daniel; Detwiler, Andrew

    2003-05-01

    This paper presents new results from studies of aircraft-produced ice particles (APIPs) in supercooled fog and clouds. Nine aircraft, including a Beech King Air 200T cloud physics aircraft, a Piper Aztec, a Cessna 421-C, two North American T-28s, an Aero Commander, a Piper Navajo, a Beech Turbo Baron, and a second four-bladed King Air were involved in the tests. The instrumented King Air served as the monitoring aircraft for trails of ice particles created, or not created, when the other aircraft were flown through clouds at various temperatures and served as both the test and monitoring aircraft when it itself was tested. In some cases sulfur hexafluoride (SF6) gas was released by the test aircraft during its test run and was detected by the King Air during its monitoring passes to confirm the location of the test aircraft wake. Ambient temperatures for the tests ranged between −5° and −12°C. The results confirm earlier published results and provide further insights into the APIPs phenomenon. The King Air at ambient temperatures less than −8°C can produce APIPs readily. The Piper Aztec and the Aero Commander also produced APIPs under the test conditions in which they were flown. The Cessna 421, Piper Navajo, and Beech Turbo Baron did not. The APIPs production potential of a T-28 is still indeterminate because a limited range of conditions was tested. Homogeneous nucleation in the adiabatically cooled regions where air is expanding around the rapidly rotating propeller tips is the cause of APIPs. An equation involving the propeller efficiency, engine thrust, and true airspeed of the aircraft is used along with the published thrust characteristics of the propellers to predict when the aircraft will produce APIPs. In most cases the predictions agree well with the field tests. Of all of the aircraft tested, the Piper Aztec, despite its small size and low horsepower, was predicted to be the most prolific producer of APIPs, and this was confirmed in field tests. The
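
    The paper's specific thrust-based prediction criterion is not reproduced here, but the basic thermodynamic ingredients it combines are the dry-adiabatic relation between a local pressure reduction near the blade tip and the resulting cooling, and the shaft power implied by thrust, true airspeed and propeller efficiency, the quantities the abstract names. A hedged illustration of those two relations only:

```latex
% Illustrative ingredients (not the paper's own criterion): adiabatic cooling of
% air expanding from ambient (p, T) to a reduced pressure p' near the blade tip,
% and the shaft power implied by thrust T_h, true airspeed V and efficiency eta.
T' = T\left(\frac{p'}{p}\right)^{R/c_p}, \qquad
\Delta T = T - T', \qquad
P_{\mathrm{shaft}} = \frac{T_h\,V}{\eta}.
```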

  10. Short Hairpin RNA Suppression of Thymidylate Synthase Produces DNA Mismatches and Results in Excellent Radiosensitization

    SciTech Connect

    Flanagan, Sheryl A.; Cooper, Kristin S.; Mannava, Sudha; Nikiforov, Mikhail A.; Shewach, Donna S.

    2012-12-01

    Purpose: To determine the effect of short hairpin ribonucleic acid (shRNA)-mediated suppression of thymidylate synthase (TS) on cytotoxicity and radiosensitization and the mechanism by which these events occur. Methods and Materials: shRNA suppression of TS was compared with 5-fluoro-2′-deoxyuridine (FdUrd) inactivation of TS with or without ionizing radiation in HCT116 and HT29 colon cancer cells. Cytotoxicity and radiosensitization were measured by clonogenic assay. Cell cycle effects were measured by flow cytometry. The effects of FdUrd or shRNA suppression of TS on dNTP (deoxynucleotide triphosphate) imbalances and consequent nucleotide misincorporations into deoxyribonucleic acid (DNA) were analyzed by high-pressure liquid chromatography and as pSP189 plasmid mutations, respectively. Results: TS shRNA produced profound (≥90%) and prolonged (≥8 days) suppression of TS in HCT116 and HT29 cells, whereas FdUrd increased TS expression. TS shRNA also produced more specific and prolonged effects on dNTPs compared with FdUrd. TS shRNA suppression allowed accumulation of cells in S-phase, although its effects were not as long-lasting as those of FdUrd. Both treatments resulted in phosphorylation of Chk1. TS shRNA alone was less cytotoxic than FdUrd but was equally effective as FdUrd in eliciting radiosensitization (radiation enhancement ratio: TS shRNA, 1.5-1.7; FdUrd, 1.4-1.6). TS shRNA and FdUrd produced a similar increase in the number and type of pSP189 mutations. Conclusions: TS shRNA produced less cytotoxicity than FdUrd but was equally effective at radiosensitizing tumor cells. Thus, the inhibitory effect of FdUrd on TS alone is sufficient to elicit radiosensitization with FdUrd, but it only partially explains FdUrd-mediated cytotoxicity and cell cycle inhibition. The increase in DNA mismatches after TS shRNA or FdUrd supports a causal and sufficient role for the depletion of dTTP (thymidine triphosphate) and consequent DNA

  11. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  12. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Astrophysics Data System (ADS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  13. Full-scale engine demonstration of an advanced sensor failure detection, isolation, and accommodation algorithm - Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  14. The Treatment Results of a Standard Algorithm for Choosing the Best Entry Vessel for Intravenous Port Implantation

    PubMed Central

    Wei, Wen-Cheng; Wu, Ching-Yang; Wu, Ching-Feng; Fu, Jui-Ying; Su, Ta-Wei; Yu, Sheng-Yueh; Kao, Tsung-Chi; Ko, Po-Jen

    2015-01-01

    Vascular cutdown and echo-guided puncture methods have their own limitations under certain conditions. There was no available algorithm for choosing the entry vessel. A standard algorithm was introduced to help choose the entry vessel location according to our clinical experience and review of the literature. The goal of this study is to analyze the treatment results of the standard algorithm used to choose the entry vessel for intravenous port implantation. During the period between March 2012 and March 2013, 507 patients who received intravenous port implantation for advanced chemotherapy were included in this study. Choice of entry vessel was made according to the standard algorithm. All clinical characteristic factors were collected, and complication rate and incidence were further analyzed. Compared with our clinical experience in 2006, the procedure-related complication rate declined from 1.09% to 0.4%, whereas the late complication rate decreased from 19.97% to 3.55%. No further pneumothorax, hematoma, catheter kinking, fracture, or pocket erosion was identified after adoption of the standard algorithm. In surviving oncology patients, 98% of implanted ports could serve as a functional vascular access fitting therapeutic needs. This standard algorithm for choosing the best entry vessel is a simple guideline that is easy to follow. The algorithm has excellent efficiency and can minimize complication rates and incidence. PMID:26287429

  15. Consequences of transfer of an in vitro-produced embryo for the dam and resultant calf.

    PubMed

    Bonilla, L; Block, J; Denicol, A C; Hansen, P J

    2014-01-01

    No reports exist on consequences of in vitro production (IVP) of embryos for the postnatal development of the calf or on postparturient function of the dam of the calf. Three hypotheses were evaluated: calves born as a result of transfer of an IVP embryo have reduced neonatal survival and altered postnatal growth, fertility, and milk yield compared with artificial insemination (AI) calves; cows giving birth to IVP calves have lower milk yield and fertility and higher incidence of postparturient disease than cows giving birth to AI calves; and the medium used for IVP affects the incidence of developmental abnormalities. In the first experiment, calves were produced by AI using conventional semen or by embryo transfer (ET) using a fresh or vitrified embryo produced in vitro with X-sorted semen. Gestation length was longer for cows receiving a vitrified embryo than for cows receiving a fresh embryo or AI. The percentage of dams experiencing calving difficulty was higher for ET than AI. We observed a tendency for incidence of retained placenta to be higher for ET than AI but found no significant effect of treatment on incidence of prolapse or metritis, pregnancy rate at first service, services per conception, or any measured characteristic of milk production in the subsequent lactation. Among Holstein heifers produced by AI or ET, treatment had no effect on birth weight but the variance tended to be greater in the ET groups. More Holstein heifer calves tended to be born dead, died, or were euthanized within the first 20d of life for the ET groups than for AI. Similarly, the proportion of Holstein heifer calves that either died or were culled for poor health after 20d of age was greater for the ET groups than for AI. We observed no effect of ET compared with AI on age at first service or on the percentage of heifers pregnant at first service, calf growth, or milk yield or composition in the first 120d in milk of the first lactation. In a second experiment, embryos were

  16. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport-type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.
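
    The parity-vector idea underlying redundant-IMU failure detection, projecting the measurements onto the null space of the sensor geometry so that only sensor errors remain, can be sketched as below. The four-gyro geometry, threshold and bias magnitude are illustrative; the multi-level structure and threshold compensation of the flight algorithm are not shown.

```python
# Sketch: parity-vector failure detection for a redundant (skewed) sensor array.
import numpy as np

# Assumed 4-gyro skewed array: measurement m = H @ omega + faults + noise
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.577, 0.577, 0.577]])

def parity_matrix(h):
    """Rows spanning the left null space of H (so V @ H = 0)."""
    u, s, _ = np.linalg.svd(h, full_matrices=True)
    return u[:, h.shape[1]:].T

V = parity_matrix(H)

def detect_failure(measurement, threshold=0.05):
    p = V @ measurement          # insensitive to the true body rates
    return np.linalg.norm(p) > threshold, p

omega = np.array([0.1, -0.2, 0.05])                   # true body rates (rad/s)
healthy = H @ omega + 1e-3 * np.random.randn(4)
failed = healthy + np.array([0.0, 0.0, 0.0, 0.5])     # bias failure on sensor 4
print(detect_failure(healthy)[0], detect_failure(failed)[0])   # False True
```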

  17. New Cirrus Retrieval Algorithms and Results from eMAS during SEAC4RS

    NASA Astrophysics Data System (ADS)

    Holz, R.; Platnick, S. E.; Meyer, K.; Wang, C.; Wind, G.; Arnold, T.; King, M. D.; Yorks, J. E.; McGill, M. J.

    2014-12-01

    The enhanced MODIS Airborne Simulator (eMAS) scanning imager was flown on the ER-2 during the SEAC4RS field campaign. The imager provides measurements in 38 spectral channels from the visible into the 13μm CO2 absorption bands at approximately 25 m nadir spatial resolution at cirrus altitudes, and with a swath width of about 18 km, provided substantial context and synergy for other ER-2 cirrus observations. The eMAS is an update to the original MAS scanner, having new midwave and IR spectrometers coupled with the previous VNIR/SWIR spectrometers. In addition to the standard MODIS-like cloud retrieval algorithm (MOD06/MYD06 for MODIS Terra/Aqua, respectively) that provides cirrus optical thickness (COT) and effective particle radius (CER) from several channel combinations, three new algorithms were developed to take advantage of unique aspects of eMAS and/or other ER-2 observations. The first uses a combination of two solar reflectance channels within the 1.88 μm water vapor absorption band, each with significantly different single scattering albedo, allowing for simultaneous COT and CER retrievals. The advantage of this algorithm is that the strong water vapor absorption can significantly reduce the sensitivity to lower level clouds and ocean/land surface properties thus better isolating cirrus properties. A second algorithm uses a suite of infrared channels in an optimal estimation algorithm to simultaneously retrieve COT, CER, and cloud-top pressure/temperature. Finally, a window IR algorithm is used to retrieve COT in synergy with the ER-2 Cloud Physics Lidar (CPL) cloud top/base boundary measurements. Using a variety of quantifiable error sources, uncertainties for all eMAS retrievals will be shown along with comparisons with CPL COT retrievals.

  18. An algorithm for automatic measurement of stimulation thresholds: clinical performance and preliminary results.

    PubMed

    Danilovic, D; Ohm, O J; Stroebel, J; Breivik, K; Hoff, P I; Markowitz, T

    1998-05-01

    We have developed an algorithmic method for automatic determination of stimulation thresholds in both cardiac chambers in patients with intact atrioventricular (AV) conduction. The algorithm utilizes ventricular sensing, may be used with any type of pacing leads, and may be downloaded via telemetry links into already implanted dual-chamber Thera pacemakers. Thresholds are determined with 0.5 V amplitude and 0.06 ms pulse-width resolution in unipolar, bipolar, or both lead configurations, with a programmable sampling interval from 2 minutes to 48 hours. Measured values are stored in the pacemaker memory for later retrieval and do not influence permanent output settings. The algorithm was intended to gather information on continuous behavior of stimulation thresholds, which is important in the formation of strategies for programming pacemaker outputs. Clinical performance of the algorithm was evaluated in eight patients who received bipolar tined steroid-eluting leads and were observed for a mean of 5.1 months. Patient safety was not compromised by the algorithm, except for the possibility of pacing during the physiologic refractory period. Methods for discrimination of incorrect data points were developed and incorrect values were discarded. Fine resolution threshold measurements collected during this study indicated that: (1) there were great differences in magnitude of threshold peaking in different patients; (2) the initial intensive threshold peaking was usually followed by another less intensive but longer-lasting wave of threshold peaking; (3) the pattern of tissue reaction in the atrium appeared different from that in the ventricle; and (4) threshold peaking in the bipolar lead configuration was greater than in the unipolar configuration. The algorithm proved to be useful in studying ambulatory thresholds. PMID:9604237
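
    The abstract does not describe the search strategy itself, so the sketch below is a hypothetical step-down amplitude search at the stated 0.5 V resolution, with a captures() callback standing in for the ventricular-sensing capture test; it is illustrative only and is not the downloaded pacemaker algorithm.

      def measure_threshold(captures, v_max=5.0, v_step=0.5):
          """Hypothetical step-down search: lower the pulse amplitude in 0.5 V
          steps until capture is lost; the threshold is reported as the last
          amplitude that still captured.  captures(amplitude) stands in for
          the device's ventricular-sensing capture test."""
          amplitude, last_captured = v_max, None
          while amplitude > 0:
              if captures(amplitude):
                  last_captured = amplitude
                  amplitude -= v_step
              else:
                  break
          return last_captured

      # Example with a simulated capture test whose true threshold is 1.7 V:
      print(measure_threshold(lambda v: v >= 1.7))   # -> 2.0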

  19. Photometric redshifts with the quasi Newton algorithm (MLPQNA): Results in the PHAT1 contest

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Brescia, M.; Longo, G.; Mercurio, A.

    2012-10-01

    Context. Since the advent of modern multiband digital sky surveys, photometric redshifts (photo-z's) have become relevant if not crucial to many fields of observational cosmology, such as the characterization of cosmic structures and weak and strong lensing. Aims: We describe an application to an astrophysical context, namely the evaluation of photometric redshifts, of MLPQNA, which is a machine-learning method based on the quasi Newton algorithm. Methods: Theoretical methods for photo-z evaluation are based on the interpolation of a priori knowledge (spectroscopic redshifts or SED templates), and they represent an ideal comparison ground for neural network-based methods. The MultiLayer Perceptron with quasi Newton learning rule (MLPQNA) described here is an effective computing implementation of neural networks exploited for the first time to solve regression problems in the astrophysical context. It is offered to the community through the DAMEWARE (DAta Mining & Exploration Web Application REsource) infrastructure. Results: The PHAT contest (Hildebrandt et al. 2010, A&A, 523, A31) provides a standard dataset for testing old and new methods of photometric redshift evaluation, together with a set of statistical indicators that allow a straightforward comparison among different methods. The MLPQNA model was applied to the whole PHAT1 dataset of 1984 objects after an optimization of the model performed with the 515 available spectroscopic redshifts as a training set. When applied to the PHAT1 dataset, MLPQNA obtains the best bias accuracy (0.0006) and very competitive accuracies in terms of scatter (0.056) and outlier percentage (16.3%), scoring as the second most effective empirical method among those that have so far participated in the contest. MLPQNA shows better generalization capabilities than most other empirical methods, especially in the presence of underpopulated regions of the knowledge base.
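
    MLPQNA itself is distributed through the DAMEWARE infrastructure rather than reproduced here. As a rough stand-in for the class of model, the sketch below trains a multilayer perceptron regressor with a quasi Newton (L-BFGS) solver in scikit-learn on invented synthetic photometry; the feature construction, network size, and all numbers are assumptions, not the PHAT1 setup.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Invented stand-in for multiband photometry (5 magnitudes) and spectroscopic z.
      mags = rng.uniform(18, 25, size=(2000, 5))
      spec_z = 0.1 * mags.mean(axis=1) - 1.5 + rng.normal(0, 0.02, 2000)

      X_train, X_test, y_train, y_test = train_test_split(mags, spec_z, random_state=0)

      # Quasi Newton learning rule: solver="lbfgs" uses a limited-memory BFGS optimizer.
      model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                           max_iter=2000, random_state=0)
      model.fit(X_train, y_train)

      residuals = model.predict(X_test) - y_test
      print("bias:", residuals.mean(), "scatter:", residuals.std())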

  20. Soil chemical changes resulting from irrigation with water co-produced with coalbed natural gas

    SciTech Connect

    Ganjegunte, G.K.; Vance, G.F.; King, L.A.

    2005-12-01

    Land application of coalbed natural gas (CBNG) co-produced water is a popular management option within the northwestern Powder River Basin (PRB) of Wyoming. This study evaluated the impacts of land application of CBNG waters on soil chemical properties at five sites. Soil samples were collected from different depths (0-5, 5-15, 15-30, 30-60, 60-90, and 90-120 cm) from sites that were irrigated with CBNG water for 2 to 3 yr and control sites. Chemical properties of CBNG water used for irrigation on the study sites indicate that electrical conductivity of CBNG water (ECw) and sodium adsorption ratio of CBNG water (SARw) values were greater than those recommended for irrigation use on the soils at the study sites. Soil chemical analyses indicated that electrical conductivity of soil saturated paste extracts (ECe) and sodium adsorption ratio of soil saturated paste extracts (SARe) values for irrigated sites were significantly greater (P < 0.05) than those for control plots in the upper 30-cm soil depths. Mass balance calculations suggested that there has been significant buildup of Na in irrigated soils due to CBNG irrigation water as well as Na mobilization within the soil profiles. Results indicate that irrigation with CBNG water significantly impacts certain soil properties, particularly if amendments are not properly utilized. This study provides information for better understanding changes in soil properties due to land application of CBNG water.
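
    For reference, the sodium adsorption ratio quoted above follows the standard definition SAR = Na / sqrt((Ca + Mg) / 2), with all concentrations expressed in meq/L; the small helper below simply encodes that definition, and the example values are invented.

      from math import sqrt

      def sodium_adsorption_ratio(na, ca, mg):
          """SAR = Na / sqrt((Ca + Mg) / 2), concentrations in meq/L."""
          return na / sqrt((ca + mg) / 2.0)

      # Illustrative (invented) values for a sodium-rich irrigation water:
      print(round(sodium_adsorption_ratio(na=30.0, ca=2.0, mg=1.0), 1))   # 24.5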

  1. First Results using the New γ-Ray Beam Produced at the Duke FEL Laboratory

    NASA Astrophysics Data System (ADS)

    Weller, Henry R.

    1999-11-01

    TUNL nuclear physicists and Duke Free Electron Laser physicists have developed a mono-energetic polarized high-intensity gamma-ray beam utilizing the facilities of the Duke Free Electron Laser Laboratory. This system currently includes the 280 MeV LINAC injector, the 1.2 GeV electron storage ring, and the OK-4 undulator. It is possible to tune the electron beam in a manner which allows the FEL photons produced by one electron bunch to backscatter from a second electron bunch, all within the ring. This leads to an intense beam of almost 100% linearly polarized γ-rays whose energy can be tuned from about 2 MeV to greater than 200 MeV with an energy spread of 1% or less. Beams having energies up to 55 MeV have been produced to date. Two prototype experiments, designed to demonstrate the viability of the facility for nuclear physics experiments, have been performed. The ^13C(γ,n)^12C reaction was studied from E_γ = 7.7 to 10.26 MeV. Analyzing powers were measured in 150 keV steps in this region, which contains several resonances. An analysis, designed to test the previous interpretation, is underway. The second experiment was a measurement of the analyzing power of the ^2H(γ,n)p reaction at E_γ = 3.58 MeV. The inverse n-p capture reaction is important at these energies, being a key reaction in the synthesis of nuclei in the early universe. The measured analyzing power will provide information on the M1/E1 ratio in this near threshold regime. No previous experimental data exist in this energy region which are sensitive to this ratio. These data are being analyzed, and preliminary results will be presented. A major component of the planned research program of the near future will consist of performing precision measurements of photo-pion production from polarized protons in the threshold region. This work will lead the development of present and emerging effective field theories. The ultimate goal of this program is to understand low energy QCD by examining the

  2. Near real-time expectation-maximization algorithm: computational performance and passive millimeter-wave imaging field test results

    NASA Astrophysics Data System (ADS)

    Reynolds, William R.; Talcott, Denise; Hilgers, John W.

    2002-07-01

    A new iterative algorithm (EMLS), derived via the expectation-maximization method, extrapolates a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence is attained for high-frequency object components, sensitivity to constraint parameters is reduced, and randomly missing data are accommodated. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical and experimental results using the field test imagery will be compared using an effective aperture measure of resolution increase. The effective aperture measure, based on examination of the edge-spread function, will be detailed.
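
    The EMLS iteration is not given in the abstract. As a hedged illustration of the same family of methods, the sketch below implements the classical EM (Richardson-Lucy) deconvolution of a non-negative object under a known blur; the point-spread function and the test object are invented.

      import numpy as np
      from scipy.signal import fftconvolve

      def em_deconvolve(image, psf, n_iter=50):
          """Classical EM (Richardson-Lucy) iteration: restores a non-negative
          object from an image blurred by a known point-spread function psf,
          assuming a Poisson-like noise model."""
          estimate = np.full(image.shape, image.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      # Tiny self-check: blur a point source, then verify the estimate re-sharpens it.
      psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
      obj = np.zeros((32, 32)); obj[16, 16] = 1.0
      blurred = fftconvolve(obj, psf, mode="same")
      restored = em_deconvolve(blurred, psf, n_iter=100)
      print(restored[16, 16] > blurred[16, 16])   # True: energy is re-concentrated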

  3. Hydroclast and Peperite generation: Experimental Results produced using the Silicate Melt Injection Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Downey, W. S.; Mastin, L. G.; Spieler, O.; Kunzmann, T.; Shaw, C. S.; Dingwell, D. B.

    2008-12-01

    The Silicate Melt Injection Laboratory Experiment (SMILE) allows for the effusive and explosive injection of molten glass into a variety of media - air, water, water spray, and wet sediments. Experiments have been performed using the SMILE apparatus to evaluate the mechanisms of "turbulent shedding" during shallow submarine volcanic eruptions and magma/wet-sediment interactions. In these experiments, approximately 0.5 kg of basaltic melt with 5 wt.% Spectromelt (dilithium tetraborate) is produced in an internally heated autoclave at 1150 °C and ambient pressure. The molten charge is ejected via the bursting of a rupture disc at 3.5 MPa into the reaction media, situated within the low pressure tank (atmospheric conditions). Preliminary experiments ejecting melt into a standing water column have yielded hydroclasts of basalt. SEM images of the clasts show ubiquitous discontinuous skins ("rinds") that are flaked, peeled, or smeared away in strips. Adhering to the clast surfaces are flakes, blocks, and blobs of detached material, up to 10 μm in size. The presence of partially detached rinds and rind debris likely reflects repeated bending, scraping, impact, and other disruption through turbulent velocity fluctuations. These textures are comparable to littoral explosive deposits at Kilauea Volcano, Hawaii, where lava tubes are torn apart by wave action and the lava is quenched and thrown back on the beach as loose fragments (hyaloclastite). Preliminary experiments injecting melt into wet sediments show evidence of sediment ingestion and fluidal textures. These results support the interpretation that peperite generation can be driven by hydrodynamic mixing of a fuel and a coolant.

  4. Results with an Algorithmic Approach to Hybrid Repair of the Aortic Arch

    PubMed Central

    Andersen, Nicholas D.; Williams, Judson B.; Hanna, Jennifer M.; Shah, Asad A.; McCann, Richard L.; Hughes, G. Chad

    2013-01-01

    Objective Hybrid repair of the transverse aortic arch may allow for aortic arch repair with reduced morbidity in patients who are suboptimal candidates for conventional open surgery. Here, we present our results with an algorithmic approach to hybrid arch repair, based upon the extent of aortic disease and patient comorbidities. Methods Between August 2005 and January 2012, 87 patients underwent hybrid arch repair by three principal procedures: zone 1 endograft coverage with extra-anatomic left carotid revascularization (zone 1, n=19), zone 0 endograft coverage with aortic arch debranching (zone 0, n=48), or total arch replacement with staged stented elephant trunk completion (stented elephant trunk, n=20). Results The mean patient age was 64 years and the mean expected in-hospital mortality rate was 16.3% as calculated by the EuroSCORE II. 22% (n=19) of operations were non-elective. Sternotomy, cardiopulmonary bypass, and deep hypothermic circulatory arrest were required in 78% (n=68), 45% (n=39), and 31% (n=27) of patients, respectively, to allow for total arch replacement, arch debranching, or other concomitant cardiac procedures, including ascending ± hemi-arch replacement in 17% (n=8) of patients undergoing zone 0 repair. All stented elephant trunk procedures (n=20) and 19% (n=9) of zone 0 procedures were staged, with 41% (n=12) of patients undergoing staged repair during a single hospitalization. The 30-day/in-hospital rates of stroke and permanent paraplegia/paraparesis were 4.6% (n=4) and 1.2% (n=1), respectively. Three of 27 (11.1%) patients with native ascending aorta zone 0 proximal landing zone experienced retrograde type A dissection following endograft placement. The overall in-hospital mortality rate was 5.7% (n=5); however, 30-day/in-hospital mortality increased to 14.9% (n=13) due to eight 30-day out-of-hospital deaths. Native ascending aorta zone 0 endograft placement was found to be the only univariate predictor of 30-day/in-hospital mortality

  5. Modeling transport and dilution of produced water and the resulting uptake and biomagnification in marine biota

    SciTech Connect

    Rye, H.; Reed, M.; Slagstad, D.

    1996-12-31

    The paper explains the numerical modelling efforts undertaken in order to study possible marine biological impacts caused by releases of produced water from the Haltenbanken area outside the western coast of Norway. Acute effects on marine life from releases of produced water appear to be relatively small and confined to areas rather close to the release site. Biomagnification may, however, be experienced for relatively low concentrations at larger distances from the release point. Such effects can be modeled by performing a step-wise approach which includes: the use of 3-D hydrodynamic models to determine the ocean current fields; the use of 3-D multi-source numerical models to determine the concentration fields from the produced water releases, given the current field; and the use of biologic models to simulate the behavior of larvae (passive marine biota) and fish (active marine biota) and their interaction with the concentration field. The paper explains the experiences gained by using this approach for the calculation of possible influences on marine life below the EC50 or LC50 concentration levels. The models are used for simulating concentration fields from 5 simultaneous sources at the Haltenbank area and simulation of magnification in some marine species from 2 simultaneous sources in the same area. Naphthalenes and phenols, which are both present in the produced water, were used as the chemical substances in the simulations.

  6. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

  7. Mesoscale Convective Systems Which Do and Do Not Produce Sprites: Results From STEPS 2000

    NASA Astrophysics Data System (ADS)

    Lyons, W. A.; Andersen, L. M.; Nelson, T. E.; Cummer, S. A.; Jauget, N. C.; Huffines, G. R.

    2005-12-01

    During the Severe Thunderstorm Electrification and Precipitation Study (STEPS) conducted during the summer of 2000 over the High Plains, we addressed two basic questions. First, what are the characteristics of those positive cloud-to-ground strokes (+CGs) which produce transient luminous events (TLEs), especially sprites, halos and elves? It was found that the vast majority of TLEs optically confirmed over High Plains storms were associated with large charge moment change (DMq) events, exceeding thresholds of several hundred C km, substantially larger than the DMq for "normal" lightning. This finding is entirely consistent with present theoretical models of sprite ignition at ~75 km due to conventional breakdown. The second question addressed concerned what types of storms produced these unusual CG discharges. Not all mesoscale convective systems (MCSs) produce TLEs, or if they do, they do so only during certain stages of their life cycle. Why? Meteorological analyses of TLE-producing systems had determined that the TLE parent +CGs were concentrated mostly in the stratiform region of their parent storms. Initial and updated analyses of Lightning Mapping Array data from the New Mexico Tech system suggested that the majority of the charge in the parent +CGs was removed from relatively low altitudes in the storm, typically 3 to 5 km AGL. After summarizing the characteristics of over 1500 TLEs and their parent MCSs, some clear criteria have become evident. First, the cloud top canopy must be larger than 20,000 sq. km at the -50 C level, and the coldest temperature must be at least -55 C. Second, the peak reflectivity somewhere in the parent storm must exceed 55 dBZ. This requirement for a very tall and also intense storm initially seems at odds with the known environment of TLE parent CGs (low in the stratiform region). Yet, as will be discussed, the emerging conceptual models of TLEs within trailing stratiform regions suggest the overall picture is indeed consistent with what is known
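
    The screening criteria quoted above can be stated compactly; the helper below merely encodes them for illustration and is not taken from the study.

      def likely_tle_producer(cold_canopy_area_km2, coldest_top_c, peak_reflectivity_dbz):
          """Encode the quoted criteria for TLE-producing MCSs: a cold cloud-top
          canopy larger than 20,000 sq. km, a coldest cloud top of at least
          -55 C, and peak radar reflectivity above 55 dBZ somewhere in the storm."""
          return (cold_canopy_area_km2 > 20_000
                  and coldest_top_c <= -55.0
                  and peak_reflectivity_dbz > 55.0)

      print(likely_tle_producer(35_000, -62.0, 58.0))   # True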

  8. Using a hybrid Monte Carlo/ Slip Estimator-Genetic Algorithm (MCSE-GA) to produce high resolution estimates of paleoearthquakes from geodetic data

    NASA Astrophysics Data System (ADS)

    Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic

    2014-05-01

    Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip. Areas of unreleased slip indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long-term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definitive model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a Genetic Algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement. The Genetic

  9. Gasbuggy, New Mexico, Natural Gas and Produced Water Sampling and Analysis Results for 2011

    SciTech Connect

    2011-09-01

    The U.S. Department of Energy (DOE) Office of Legacy Management conducted natural gas sampling for the Gasbuggy, New Mexico, site on June 7 and 8, 2011. Natural gas sampling consists of collecting both gas samples and samples of produced water from gas production wells. Water samples from gas production wells were analyzed for gamma-emitting radionuclides, gross alpha, gross beta, and tritium. Natural gas samples were analyzed for tritium and carbon-14. ALS Laboratory Group in Fort Collins, Colorado, analyzed water samples. Isotech Laboratories in Champaign, Illinois, analyzed natural gas samples.

  10. Gasbuggy, New Mexico, Natural Gas and Produced Water Sampling Results for 2012

    SciTech Connect

    2012-12-01

    The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual natural gas sampling for the Gasbuggy, New Mexico, Site on June 20 and 21, 2012. This long-term monitoring of natural gas includes samples of produced water from gas production wells that are located near the site. Water samples from gas production wells were analyzed for gamma-emitting radionuclides, gross alpha, gross beta, and tritium. Natural gas samples were analyzed for tritium and carbon-14. ALS Laboratory Group in Fort Collins, Colorado, analyzed water samples. Isotech Laboratories in Champaign, Illinois, analyzed natural gas samples.

  11. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network

    PubMed Central

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-01-01

    Background Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. Objective To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. Materials and methods The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. Results By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Conclusions Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing. PMID:23531748

  12. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
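
    As a hedged illustration of the hybrid idea, the sketch below couples a simple genetic algorithm with a hill-climbing local search on a toy minimization problem; the operators, population sizes, and test function are invented and are unrelated to the geometric model matching application.

      import random

      def local_search(x, f, step=0.1, iters=20):
          """Hill-climbing refinement applied to each offspring (the 'hybrid' part)."""
          for _ in range(iters):
              candidate = [xi + random.uniform(-step, step) for xi in x]
              if f(candidate) < f(x):
                  x = candidate
          return x

      def hybrid_ga(f, dim=5, pop_size=30, generations=40):
          pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=f)
              parents = pop[:pop_size // 2]              # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, dim)
                  child = a[:cut] + b[cut:]              # one-point crossover
                  child = [xi + random.gauss(0, 0.05) for xi in child]   # mutation
                  children.append(local_search(child, f))
              pop = parents + children
          return min(pop, key=f)

      sphere = lambda x: sum(xi * xi for xi in x)
      print(sphere(hybrid_ga(sphere)))   # a small value near the optimum at 0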

  13. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

    AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. Monthly mean August 2014 Version-6.22 AIRS and CrIS products agree reasonably well with OMPS, CERES, and with each other. JPL plans to process AIRS and CrIS for many months and compare interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. We are also working with JPL to develop a joint AIRS/CrIS level-1 to level-3 processing system using a still to be finalized Version-7 retrieval algorithm. The NASA Goddard DISC will eventually use this system to reprocess all AIRS and recalibrated CrIS/ATMS.

  14. French regional surveillance program of carbapenemase-producing Gram-negative bacilli: results from a 2-year period.

    PubMed

    Pantel, A; Boutet-Dubois, A; Jean-Pierre, H; Marchandin, H; Sotto, A; Lavigne, J-P

    2014-12-01

    In February 2011, the CARB-LR group was created as a sentinel laboratory-based surveillance network to control the emergence of carbapenem-resistant Gram-negative bacilli (CR GNB) in a French Southern Region. We report the epidemiological results of a 2-year study. All the Gram-negative bacilli isolates detected in the different labs (hospital and community settings) of a French Southern Region and with reduced susceptibility to ertapenem and/or imipenem were characterised with regard to antibiotic resistance, bla genes content, repetitive sequence-based polymerase chain reaction (rep-PCR) profiles and multilocus sequence typing (MLST). A total of 221 strains were analysed. Acinetobacter baumannii was the most prevalent carbapenemase-producing bacteria, with a majority of OXA-23 producers (n = 37). One isolate co-produced OXA-23 and OXA-58 enzymes. Klebsiella pneumoniae was the most frequent carbapenemase-producing Enterobacteriaceae (CPE) (OXA-48 producer: n = 29, KPC producer: n = 1), followed by Escherichia coli (OXA-48 producer: n = 8, KPC producer: n = 1) and Enterobacter cloacae (OXA-48 producer, n = 1). One isolate of Pseudomonas aeruginosa produced a VIM-1 carbapenemase. A clonal diversity of carbapenemase-producing K. pneumoniae and E. coli was noted with different MLSTs. On the other hand, almost all OXA-23-producing A. baumannii strains belonged to the widespread ST2/international clone II. The link between the detection of CR GNB and a foreign country was less obvious, suggesting the beginning of a local cross-transmission. The number of CR GNB cases in our French Southern Region has sharply increased very recently due to the diffusion of OXA-48 producers. PMID:25037867

  15. Results of Aging Tests of Vendor-Produced Blended Feed Simulant

    SciTech Connect

    Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.

    2009-04-21

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500-gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To make sure that the quality of the simulant is acceptable, the production method was scaled up starting from laboratory-prepared simulant through 15-gallon and 250-gallon vendor-prepared simulant before embarking on the production of the 3,500-gallon simulant batch by the vendor. The 3,500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored in an environmentally controlled warehouse at NOAH Technologies before blending or shipping. For the 15-gallon, 250-gallon, and 3,500-gallon batch 0, the simulant was shipped in ambient-temperature trucks with shipment requiring nominally 3 days. The 3,500-gallon batch 1 traveled in a 70-75°F temperature-controlled truck. Typically the simulant was uploaded into a PEP receiving tank within 24 hours of receipt. The first upload took longer, with the simulant stored outside in the interim. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify and determine any changes to the physical characteristics of the simulant when in storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: 1) stored outside in a 250-gallon tote, 2) stored inside in a gallon plastic bottle, 3) stored inside in a well-mixed 5-L tank, and 4) subject to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following

  16. Lightning-produced NOx during the Northern Australian monsoon; results from the ACTIVE campaign

    NASA Astrophysics Data System (ADS)

    Labrador, L.; Vaughan, G.; Heyes, W.; Waddicor, D.; Volz-Thomas, A.; Pätz, H.-W.; Höller, H.

    2009-10-01

    Measurements of nitrogen oxides onboard a high altitude aircraft were carried out for the first time during the Northern Australian monsoon in the framework of the Aerosol and Chemical Transport in Tropical Convection (ACTIVE) campaign, in the area around Darwin, Australia. During one flight on 22 January 2006, average NOx volume mixing ratios (vmr) of 984 and 723 parts per trillion (ppt) were recorded for in-cloud and out-of-cloud conditions, respectively. The in-cloud measurements were made in the convective outflow region of a storm 56 km south-west of Darwin, whereas those out of cloud were made due south of Darwin and upwind from the storm sampled. This storm produced a total of only 8 lightning strokes, as detected by an in-situ lightning detection network, ruling out significant lightning-NOx production. 5-day backward trajectories suggest that the sampled airmasses had travelled over convectively-active land in Northern Australia during that period. The low stroke count of the sampled storm, along with the high out-of-cloud NOx concentration, suggests that, in the absence of other major NOx sources during the monsoon season, a combination of processes including regional transport patterns, convective vertical transport and entrainment may lead to accumulation of lightning-produced NOx, a situation that contrasts with the pre-monsoon period in Northern Australia, where the high NOx values occur mainly in or in the vicinity of storms. These high NOx concentrations may help start ozone photochemistry and OH radical production in an otherwise NOx-limited environment.

  17. Lightning-produced NOx during the Northern Australian monsoon; results from the ACTIVE campaign

    NASA Astrophysics Data System (ADS)

    Labrador, L.; Vaughan, G.; Heyes, W.; Waddicor, D.; Volz-Thomas, A.; Pätz, H.-W.; Höller, H.

    2009-05-01

    Measurements of nitrogen oxides onboard a high altitude aircraft were carried out for the first time during the Northern Australian monsoon in the framework of the Aerosol and Chemical Transport in Tropical Convection (ACTIVE) campaign, in the area around Darwin, Australia. During one flight on 22 January 2006, average NOx mixing ratios (mrs) of 723 and 984 parts per trillion volume (pptv) were recorded for in-cloud and out-of-cloud conditions, respectively. The in-cloud measurements were made in the convective outflow region of a storm 56 km south-west of Darwin, whereas those out of cloud were made due south of Darwin and upwind from the storm sampled. This storm produced a total of only 8 lightning strokes, as detected by an in-situ lightning detection network, ruling out significant lightning-NOx production. 5-day backward trajectories suggest that the sampled airmasses had travelled over convectively-active land in Northern Australia during that period. The low stroke count of the sampled storm, along with the high out-of-cloud NOx concentration, suggests that, in the absence of other major NOx sources during the monsoon season, a combination of processes including regional transport patterns, convective vertical transport and entrainment may lead to accretion of lightning-produced NOx, a situation that contrasts with the pre-monsoon period in Northern Australia, where the high NOx values occur mainly in or in the vicinity of storms. These high NOx concentrations may help start ozone photochemistry and OH radical production in an otherwise NOx-limited environment.

  18. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
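
    For readers unfamiliar with the basic construction, the sketch below implements the standard single-channel Sudakov veto algorithm with an invertible overestimate g(t) >= f(t); the target density and overestimate are invented examples, and the competition and cutoff variants analyzed in the paper build on this core loop.

      import math, random

      def sudakov_veto(f, g, G, G_inv, t_start, t_cut):
          """Draw the next emission scale t < t_start distributed as
          f(t) * exp(-Integral_t^t_start f), using an overestimate g >= f
          whose primitive G and inverse G_inv are known in closed form."""
          t = t_start
          while True:
              # Propose from the g-Sudakov: G(t_new) = G(t) + log(R), R uniform in (0, 1].
              t = G_inv(G(t) + math.log(1.0 - random.random()))
              if t <= t_cut:
                  return None              # evolution ended with no emission
              if random.random() < f(t) / g(t):
                  return t                 # accepted emission scale

      # Example: target f(t) = (2/t) * (1 - t/T) with overestimate g(t) = 2/t.
      T, t_cut = 100.0, 1.0
      f = lambda t: (2.0 / t) * (1.0 - t / T)
      g = lambda t: 2.0 / t
      G = lambda t: 2.0 * math.log(t)
      G_inv = lambda y: math.exp(y / 2.0)

      print([sudakov_veto(f, g, G, G_inv, T, t_cut) for _ in range(5)])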

  19. Characterization results for 106-AN grout produced in a pilot-scale test

    SciTech Connect

    Lokken, R.O.; Bagaasen, L.M.; Martin, P.F.C.; Palmer, S.E.; Anderson, C.M.

    1993-06-01

    The Grout Treatment Facility (GTF) at Hanford, Washington, will process the low-level fraction of selected double-shell tank (DST) wastes into a cementitious waste form. This facility, which is operated by Westinghouse Hanford Company (WHC), mixes liquid waste with cementitious materials to produce a waste form that immobilizes hazardous constituents through chemical reactions and/or microencapsulation. Over one million gallons of phosphate/sulfate waste were solidified in the first production campaign with this facility. The next tank waste scheduled for treatment is 106-AN (the waste from Tank 241-AN-106). After laboratory studies were conducted to select the grout formulation, tests using the 1/4-scale pilot facilities at the Pacific Northwest Laboratory (PNL) were conducted as part of the formulation verification process. The major objectives of these pilot-scale tests were to determine if the proposed grout formulation could be processed in the pilot-scale equipment, to collect thermal information to help determine the best way to manage the grout hydration heat, and to characterize the solidified grout.

  20. Arctic Mixed-Phase Cloud Properties from AERI Lidar Observations: Algorithm and Results from SHEBA

    SciTech Connect

    Turner, David D.

    2005-04-01

    A new approach to retrieve microphysical properties from mixed-phase Arctic clouds is presented. This mixed-phase cloud property retrieval algorithm (MIXCRA) retrieves cloud optical depth, ice fraction, and the effective radius of the water and ice particles from ground-based, high-resolution infrared radiance and lidar cloud boundary observations. The theoretical basis for this technique is that the absorption coefficient of ice is greater than that of liquid water from 10 to 13 μm, whereas liquid water is more absorbing than ice from 16 to 25 μm. MIXCRA retrievals are only valid for optically thin (τvisible < 6) single-layer clouds when the precipitable water vapor is less than 1 cm. MIXCRA was applied to the Atmospheric Emitted Radiance Interferometer (AERI) data that were collected during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment from November 1997 to May 1998, where 63% of all of the cloudy scenes above the SHEBA site met this specification. The retrieval determined that approximately 48% of these clouds were mixed phase and that a significant number of clouds (during all 7 months) contained liquid water, even for cloud temperatures as low as 240 K. The retrieved distributions of effective radii for water and ice particles in single-phase clouds are shown to be different than the effective radii in mixed-phase clouds.

  1. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

    NASA Technical Reports Server (NTRS)

    Swartz, W. H.; Bucesla, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

    2012-01-01

    Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.

  2. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are described, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  3. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

    PubMed

    Pancheliuga, V A; Pancheliuga, M S

    2013-01-01

    In the present work a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed on the basis of short segments of a time series of fluctuations and the fractal dimension of those segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows more precise determination of the fractal dimension by using the "all possible combinations" method. The application of the method to noise-like time series analysis leads to results that could previously be obtained only by means of the histogram method based on human expert comparisons of histogram shapes. PMID:23755565

  4. Inductive learning of thyroid functional states using the ID3 algorithm. The effect of poor examples on the learning result.

    PubMed

    Forsström, J

    1992-01-01

    The ID3 algorithm for inductive learning was tested using preclassified material for patients suspected to have a thyroid illness. Classification followed a rule-based expert system for the diagnosis of thyroid function. Thus, the knowledge to be learned was limited to the rules existing in the knowledge base of that expert system. The learning capability of the ID3 algorithm was tested with an unselected learning material (with some inherent missing data) and with a selected learning material (no missing data). The selected learning material was a subgroup which formed a part of the unselected learning material. When the number of learning cases was increased, the accuracy of the program improved. When the learning material was large enough, an increase in the learning material did not improve the results further. A better learning result was achieved with the selected learning material, which did not include missing data, than with the unselected learning material. With this material we demonstrate a weakness in the ID3 algorithm: it cannot find available information from good example cases if poor examples are added to the data. PMID:1551737
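
    As a reminder of the core of ID3, rather than of the thyroid application itself, the sketch below computes the entropy-based information gain that ID3 uses to select the splitting attribute; the attribute names and labels are invented.

      import math
      from collections import Counter

      def entropy(labels):
          total = len(labels)
          return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

      def information_gain(rows, labels, attribute):
          """ID3 splits on the attribute with the largest reduction in entropy."""
          total, remainder = len(labels), 0.0
          for value in set(row[attribute] for row in rows):
              subset = [lab for row, lab in zip(rows, labels) if row[attribute] == value]
              remainder += len(subset) / total * entropy(subset)
          return entropy(labels) - remainder

      # Toy example with invented thyroid-style attributes:
      rows = [{"tsh": "high", "t4": "low"}, {"tsh": "high", "t4": "normal"},
              {"tsh": "normal", "t4": "normal"}, {"tsh": "normal", "t4": "normal"}]
      labels = ["hypo", "hypo", "healthy", "healthy"]
      print(information_gain(rows, labels, "tsh"))   # 1.0: 'tsh' separates the classes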

  5. Effectiveness of Ventricular Intrinsic Preference (VIP™) and Ventricular AutoCapture (VAC) algorithms in pacemaker patients: Results of the validate study

    PubMed Central

    Yadav, Rakesh; Jaswal, Aparna; Chennapragada, Sridevi; Kamath, Prakash; Hiremath, Shirish M.S.; Kahali, Dhiman; Anand, Sumit; Sood, Naresh K.; Mishra, Anil; Makkar, Jitendra S.; Kaul, Upendra

    2015-01-01

    Background Several past clinical studies have demonstrated that frequent and unnecessary right ventricular pacing in patients with sick sinus syndrome and compromised atrio-ventricular conduction (AVC) produces long-term adverse effects. The safety and efficacy of two pacemaker algorithms, Ventricular Intrinsic Preference™ (VIP) and Ventricular AutoCapture (VAC), were evaluated in a multi-center study in pacemaker patients. Methods We evaluated 80 patients across 10 centers in India. Patients were enrolled within 15 days of dual chamber pacemaker (DDDR) implantation, and within 45 days thereafter were classified into either a compromised AVC (cAVC) arm or an intact AVC (iAVC) arm based on intrinsic paced/sensed (AV/PV) delays. In each arm, patients were then randomized (1:1) into the following groups: VIP OFF and VAC OFF (Control group; CG), or VIP ON and VAC ON (Treatment Group; TG). Subsequently, the AV/PV delays were mandatorily programmed at 180/150 ms in the CG groups and at up to 350 ms in the TG groups. The percentage of right ventricular pacing (%RVp) evaluated at 12-month post-implantation follow-ups was compared between the two groups in each arm. Additionally, in-clinic time required for collecting device data was compared between patients programmed with the automated AutoCapture algorithm activated (VAC ON) vs. the manually programmed method (VAC OFF). Results Patients randomized to the TG with the VIP algorithm activated exhibited a significantly lower %RVp at 12 months than those in the CG in both the cAVC arm (39±41% vs. 97±3%; p=0.0004) and the iAVC arm (15±25% vs. 68±39%; p=0.0067). In-clinic time required to collect device data was less in patients with the VAC algorithm activated. No device-related adverse events were reported during the year-long study period. Conclusions In our study cohort, the use of the VIP algorithm significantly reduced the %RVp, while the VAC algorithm reduced in-clinic time needed to collect device data. PMID

  6. So now what?-- things to do if your IR program stops producing results

    NASA Astrophysics Data System (ADS)

    Lucier, Ronald D.

    1991-03-01

    The title of this paper may seem surprising and, to some, clearly a case of heresy. However, a look at the statistical representation of the classic 'bathtub reliability' curve suggests that, at some point, the number of 'findings' from an Infrared Thermographic (IR) program could diminish to zero for a period of time. Therefore, a facility may be left with an expensive piece of equipment, an extensive inspection program, trained thermographers, and few reportable results. This paper deals with some suggestions for preparing for this inevitable situation.

  7. Adding socioeconomic data to hospital readmissions calculations may produce more useful results.

    PubMed

    Nagasako, Elna M; Reidhead, Mat; Waterman, Brian; Dunagan, W Claiborne

    2014-05-01

    To better understand the degree to which risk-standardized thirty-day readmission rates may be influenced by social factors, we compared results for hospitals in Missouri under two types of models. The first type of model is currently used by the Centers for Medicare and Medicaid Services for public reporting of condition-specific hospital readmission rates of Medicare patients. The second type of model is an "enriched" version of the first type of model with census tract-level socioeconomic data, such as poverty rate, educational attainment, and housing vacancy rate. We found that the inclusion of these factors had a pronounced effect on calculated hospital readmission rates for patients admitted with acute myocardial infarction, heart failure, and pneumonia. Specifically, the models including socioeconomic data narrowed the range of observed variation in readmission rates for the above conditions, in percentage points, from 6.5 to 1.8, 14.0 to 7.4, and 7.4 to 3.7, respectively. Interestingly, the average readmission rates for the three conditions did not change significantly between the two types of models. The results of our exploratory analysis suggest that further work to characterize and report the effects of socioeconomic factors on standardized readmission measures may assist efforts to improve care quality and deliver more equitable care on the part of hospitals, payers, and other stakeholders. PMID:24799575

  8. Can a minimalist model of wind forced baroclinic Rossby waves produce reasonable results?

    NASA Astrophysics Data System (ADS)

    Watanabe, Wandrey B.; Polito, Paulo S.; da Silveira, Ilson C. A.

    2016-04-01

    The linear theory predicts that Rossby waves are the large-scale mechanism of adjustment to perturbations of the geophysical fluid. Satellite measurements of sea level anomaly (SLA) provided sturdy evidence of the existence of these waves. Recent studies suggest that the variability in the altimeter records is mostly due to mesoscale nonlinear eddies, which challenges the original interpretation of westward-propagating features as Rossby waves. The objective of this work is to test whether a classic linear dynamic model is a reasonable explanation for the observed SLA. A linear, reduced-gravity, non-dispersive Rossby wave model is used to estimate the SLA forced by direct and remote wind stress. Correlations between model results and observations are up to 0.88. The best agreement is in the tropical region of all ocean basins. These correlations decrease towards insignificance in mid-latitudes. The relative contributions of eastern boundary (remote) forcing and local wind forcing in the generation of Rossby waves are also estimated and suggest that the main wave-forming mechanism is the remote forcing. Results suggest that linear long baroclinic Rossby wave dynamics explain a significant part of the SLA annual variability, at least in the tropical oceans.

  9. The Doubly Labeled Water Method Produces Highly Reproducible Longitudinal Results in Nutrition Studies

    PubMed Central

    Wong, William W.; Roberts, Susan B.; Racette, Susan B.; Das, Sai Krupa; Redman, Leanne M.; Rochon, James; Bhapkar, Manjushri V.; Clarke, Lucinda L.; Kraus, William E.

    2014-01-01

    The doubly labeled water (DLW) method is considered the reference method for the measurement of energy expenditure under free-living conditions. However, the reproducibility of the DLW method in longitudinal studies is not well documented. This study was designed to evaluate the longitudinal reproducibility of the DLW method using 2 protocols developed and implemented in a multicenter clinical trial—the Comprehensive Assessment of Long-term Effects of Reducing Intake of Energy (CALERIE). To document the longitudinal reproducibility of the DLW method, 2 protocols, 1 based on repeated analysis of dose dilutions over the course of the clinical trial (dose-dilution protocol) and 1 based on repeated but blinded analysis of randomly selected DLW studies (test-retest protocol), were carried out. The dose-dilution protocol showed that the theoretical fractional turnover rates for 2H and 18O and the difference between the 2 fractional turnover rates were reproducible to within 1% and 5%, respectively, over 4.5 y. The Bland-Altman pair-wise comparisons of the results generated from 50 test-retest DLW studies showed that the fractional turnover rates and isotope dilution spaces for 2H and 18O, and total energy expenditure, were highly reproducible over 2.4 y. Our results show that the DLW method is reproducible in longitudinal studies and confirm the validity of this method to measure energy expenditure, define energy intake prescriptions, and monitor adherence and body composition changes over the period of 2.5–4.4 y. The 2 protocols can be adopted by other laboratories to document the longitudinal reproducibility of their measurements to ensure the long-term outcomes of interest are meaningful biologically. This trial was registered at clinicaltrials.gov as NCT00427193. PMID:24523488

  10. Different Techniques For Producing Precision Holes (>20 mm) In Hardened Steel—Comparative Results

    NASA Astrophysics Data System (ADS)

    Coelho, R. T.; Tanikawa, S. T.

    2009-11-01

    High speed machining (HSM), or high performance machining, has been one of the most recent technological advances. When applied to milling operations, using adequate machines, CAM programs, and tooling, it allows cutting hardened steels, which was not feasible just a couple of years ago. The use of very stiff, precise machines has created the possibility of machining holes in hardened steels, such as AISI H13 at 48-50 HRC, using helical interpolations, for example. Such a process is particularly useful for holes with diameters larger than those of commercially available solid carbide drills, around 20 mm or more. Such holes may require narrow tolerances and fine surface finishes, which can be obtained by end milling operations alone. The present work compares some of the strategies used to obtain such holes by end milling, as well as some techniques employed to finish them by milling, boring, and fine grinding on the same machine. Results indicate that it is possible to obtain holes with less than 0.36 μm circularity error, 7.41 μm cylindricity error, and 0.12 μm surface roughness Ra. Additionally, there is less possibility of producing heat-affected layers when using such a technique.

  11. Intra-operative ultrasound hand-held strain imaging for the visualization of ablations produced in the liver with a toroidal HIFU transducer: first in vivo results

    PubMed Central

    Chenot, Jérémy; Melodelima, David; N'Djin, William Apoutou; Souchon, Rémi; Rivoire, Michel; Chapelon, Jean-Yves

    2010-01-01

    The use of hand-held ultrasound strain imaging for intra-operative real-time visualization of HIFU ablations produced in the liver by a toroidal transducer was investigated. A linear 12 MHz ultrasound imaging probe was used to obtain radiofrequency signals. Using a fast cross-correlation algorithm, strain images were calculated and displayed at 60 frames/s, allowing the use of hand-held strain imaging intra-operatively. Fourteen HIFU lesions were produced in 4 pigs. Intra-operative strain imaging of HIFU ablations in the liver was feasible owing to the high frame rate. The correlations between dimensions measured on gross pathology and those measured on B-mode and strain images were R = 0.72 and R = 0.94, respectively. The contrast between ablated and non-ablated tissue was significantly higher (p<0.05) in the strain images (22 dB) than in the B-mode images (9 dB). Strain images allowed equivalent or improved definition of ablated regions when compared with B-mode images. Real-time intra-operative hand-held strain imaging seems to be a promising complement to conventional B-mode imaging for the guidance of HIFU ablations produced in the liver during an open procedure. These results suggest that hand-held strain imaging outperforms conventional B-mode ultrasound and could potentially be used for the assessment of thermal therapies. PMID:20479514
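
    The fast cross-correlation implementation used in the study is not detailed in the abstract. The sketch below is a basic windowed cross-correlation displacement estimator between pre- and post-compression RF lines, from which strain would follow by differentiating the displacements along depth; window sizes and the synthetic test are assumptions.

      import numpy as np

      def window_displacements(rf_pre, rf_post, win=64, step=32, search=16):
          """Estimate the axial shift (in samples) of each window of an RF line by
          locating the cross-correlation peak between pre- and post-compression
          signals; strain is then the axial gradient of these displacements."""
          shifts = []
          for start in range(search, len(rf_pre) - win - search, step):
              ref = rf_pre[start:start + win]
              best_lag, best_score = 0, -np.inf
              for lag in range(-search, search + 1):
                  score = float(np.dot(ref, rf_post[start + lag:start + lag + win]))
                  if score > best_score:
                      best_lag, best_score = lag, score
              shifts.append(best_lag)
          return np.array(shifts)

      # Synthetic check: a uniform 3-sample shift should be recovered everywhere.
      rng = np.random.default_rng(1)
      pre = rng.normal(size=2048)
      post = np.roll(pre, 3)
      print(window_displacements(pre, post)[:5])   # [3 3 3 3 3]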

  12. First results from the COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Mestre, Olivier

    2010-05-01

    As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach) a dataset was generated that serves as a benchmark for homogenisation algorithms. Members of the Action and third parties have been invited to homogenise this dataset. The results of this exercise are analysed by the HOME Working Groups (WG) on detection (WG2) and correction (WG3) algorithms to obtain recommendations for a standard homogenisation procedure for climate data. This talk will briefly describe this benchmark dataset and present first results comparing the quality of the roughly 25 contributions. Based upon a survey among homogenisation experts we chose to work with monthly values for temperature and precipitation. Temperature and precipitation were selected because most participants consider these elements the most relevant for their studies. Furthermore, they represent two important types of statistics (additive and multiplicative). The benchmark has three different types of datasets: real data, surrogate data and synthetic data. The real datasets allow comparing the different homogenisation methods with the most realistic type of data and inhomogeneities. Thus this part of the benchmark is important for a faithful comparison of algorithms with each other. However, as in this case the truth is not known, it is not possible to quantify the improvements due to homogenisation. Therefore, the benchmark also has two datasets with artificial data to which we inserted known inhomogeneities: surrogate and synthetic data. The aim of surrogate data is to reproduce the structure of measured data accurately enough that it can be used as a substitute for measurements. The surrogate climate networks have the spatial and temporal auto- and cross-correlation functions of real homogenised networks as well as the exact (non-Gaussian) distribution for each station. The idealised synthetic data is based on the surrogate networks. The change is that the difference

  13. Preliminary results from an airdata enhancement algorithm with application to high-angle-of-attack flight

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.; Whitmore, Stephen A.

    1991-01-01

    A technique was developed to improve the fidelity of airdata measurements during dynamic maneuvering. This technique is particularly useful for airdata measured during flight at high angular rates and high angles of attack. To support this research, flight tests using the F-18 high alpha research vehicle (HARV) were conducted at NASA Ames Research Center, Dryden Flight Research Facility. A Kalman filter was used to combine information from research airdata, linear accelerometers, angular rate gyros, and attitude gyros to determine better estimates of airdata quantities such as angle of attack, angle of sideslip, airspeed, and altitude. The state and observation equations used by the Kalman filter are briefly developed and it is shown how the state and measurement covariance matrices were determined from flight data. Flight data are used to show the results of the technique and these results are compared to an independent measurement source. This technique is applicable to both postflight and real-time processing of data.
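
    The flight system's state and observation equations are only summarized here, so the toy below is a generic one-dimensional Kalman filter that blends an integrated angular-rate signal with a noisy direct measurement, to illustrate the kind of complementary estimate described; the noise values and the scenario are invented.

      import numpy as np

      def fuse_angle(rates, measurements, dt=0.02, q=1e-4, r=4e-2):
          """Toy 1-D Kalman filter: propagate the angle with the rate gyro,
          then correct it with the noisy airdata measurement at each step."""
          x, p = measurements[0], 1.0          # state estimate and its variance
          estimates = []
          for rate, z in zip(rates, measurements):
              x += rate * dt                   # predict: integrate the angular rate
              p += q                           # inflate variance by the process noise
              k = p / (p + r)                  # Kalman gain
              x += k * (z - x)                 # update with the measurement z
              p *= 1.0 - k
              estimates.append(x)
          return np.array(estimates)

      # Synthetic check: a slow ramp in angle with 0.2-sigma measurement noise.
      t = np.arange(0, 10, 0.02)
      truth = 0.5 * t
      rates = np.full_like(t, 0.5)
      noisy = truth + np.random.default_rng(2).normal(0, 0.2, t.size)
      print(np.abs(fuse_angle(rates, noisy) - truth).mean())   # well below 0.2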

  14. Remote sensing of gases by hyperspectral imaging: algorithms and results of field measurements

    NASA Astrophysics Data System (ADS)

    Sabbah, Samer; Rusch, Peter; Eichmann, Jens; Gerhard, Jörn-Hinnrich; Harig, Roland

    2012-09-01

    Remote gas detection and visualization provides vital information in scenarios involving chemical accidents, terrorist attacks or gas leaks. Previous work showed how imaging infrared spectroscopy can be used to assess the location, the dimensions, and the dispersion of a potentially hazardous cloud. In this work the latest developments of an infrared hyperspectral imager based on a Michelson interferometer in combination with a focal plane array detector are presented. The performance of the system is evaluated by laboratory measurements. The system was deployed in field measurements to identify industrial gas emissions. Excellent results were obtained by successfully identifying released gases from relatively long distances.

  15. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  16. Deriving Arctic Cloud Microphysics at Barrow, Alaska. Algorithms, Results, and Radiative Closure

    SciTech Connect

    Shupe, Matthew D.; Turner, David D.; Zwink, Alexander; Thieman, Mandana M.; Mlawer, Eli J.; Shippert, Timothy

    2015-07-01

    Cloud phase and microphysical properties control the radiative effects of clouds in the climate system and are therefore crucial to characterize in a variety of conditions and locations. An Arctic-specific, ground-based, multi-sensor cloud retrieval system is described here and applied to two years of observations from Barrow, Alaska. Over these two years, clouds occurred 75% of the time, with cloud ice and liquid each occurring nearly 60% of the time. Liquid water occurred at least 25% of the time even in the winter, and existed up to heights of 8 km. The vertically integrated mass of liquid was typically larger than that of ice. While it is generally difficult to evaluate the overall uncertainty of a comprehensive cloud retrieval system of this type, radiative flux closure analyses were performed where flux calculations using the derived microphysical properties were compared to measurements at the surface and top-of-atmosphere. Radiative closure biases were generally smaller for cloudy scenes relative to clear skies, while the variability of flux closure results was only moderately larger than under clear skies. The best closure at the surface was obtained for liquid-containing clouds. Radiative closure results were compared to those based on a similar, yet simpler, cloud retrieval system. These comparisons demonstrated the importance of accurate cloud phase classification, and specifically the identification of liquid water, for determining radiative fluxes. Enhanced retrievals of liquid water path for thin clouds were also shown to improve radiative flux calculations.

  17. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is moderately superior to the McFarland compensator for short delay and significantly superior for long delay.

  18. Minimal Sign Representation of Boolean Functions: Algorithms and Exact Results for Low Dimensions.

    PubMed

    Sezener, Can Eren; Oztop, Erhan

    2015-08-01

    Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domain they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation to study this is the so-called polynomial sign representation where -1 and 1 are associated with true and false, respectively. The polynomial is treated as a real-valued function and evaluated at its parameters, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF and more generally aims to find the maximum over this quantity for all BFs in a given dimension. With this work, for the first time exact results for four- and five-variable BFs are obtained, and strong bounds for six-variable BFs are derived. In addition, some connections between the sign representation framework and bent functions are derived, which are generally studied for their desirable cryptographic properties. PMID:26079754
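
    As a toy illustration of the threshold-density question (not the exact algorithms of the letter), the sketch below brute-forces the minimum number of monomial terms whose sign realises a Boolean function on {-1, +1}^n, testing each candidate term set with a linear-programming feasibility check; SciPy is assumed to be available, and the approach is tractable only for very small n.

```python
# Brute-force sketch of minimum threshold density (illustrative only):
# find the fewest monomials whose sign reproduces f on {-1, +1}^n.
from itertools import combinations, product
import numpy as np
from scipy.optimize import linprog

def monomial_matrix(n, terms):
    """Rows: all inputs in {-1,+1}^n; columns: the chosen monomials."""
    inputs = np.array(list(product([-1, 1], repeat=n)))
    M = np.array([[np.prod(x[list(t)]) if t else 1.0 for t in terms]
                  for x in inputs])
    return inputs, M

def realisable(f_vals, M):
    """Does some weight vector w satisfy sign(M @ w) == f_vals?  Check the LP
    feasibility of f_i * (M w)_i >= 1 for all i (scaling makes the margin free)."""
    A_ub = -(f_vals[:, None] * M)
    b_ub = -np.ones(len(f_vals))
    res = linprog(np.zeros(M.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * M.shape[1], method="highs")
    return res.success

def min_threshold_density(f_vals, n):
    all_terms = [t for k in range(n + 1) for t in combinations(range(n), k)]
    for size in range(1, len(all_terms) + 1):
        for terms in combinations(all_terms, size):
            _, M = monomial_matrix(n, terms)
            if realisable(f_vals, M):
                return size, terms
    return None

# Example: 3-input majority, f(x) = sign(x1 + x2 + x3), needs 3 terms.
inputs = np.array(list(product([-1, 1], repeat=3)))
print(min_threshold_density(np.sign(inputs.sum(axis=1)), 3))
```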

  19. The equation of state for stellar envelopes. II - Algorithm and selected results

    NASA Technical Reports Server (NTRS)

    Mihalas, Dimitri; Dappen, Werner; Hummer, D. G.

    1988-01-01

    A free-energy-minimization method for computing the dissociation and ionization equilibrium of a multicomponent gas is discussed. The adopted free energy includes terms representing the translational free energy of atoms, ions, and molecules; the internal free energy of particles with excited states; the free energy of a partially degenerate electron gas; and the configurational free energy from shielded Coulomb interactions among charged particles. Internal partition functions are truncated using an occupation probability formalism that accounts for perturbations of bound states by both neutral and charged perturbers. The entire theory is analytical and differentiable to all orders, so it is possible to write explicit analytical formulas for all derivatives required in a Newton-Raphson iteration; these are presented to facilitate future work. Some representative results for both Saha and free-energy-minimization equilibria are presented for a hydrogen-helium plasma with N(He)/N(H) = 0.10. These illustrate nicely the phenomena of pressure dissociation and ionization, and also demonstrate vividly the importance of choosing a reliable cutoff procedure for internal partition functions.
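
    As a pared-down illustration of the kind of Newton-Raphson equilibrium solution referred to above (not the paper's free-energy formulation, which includes excited states, electron degeneracy, and Coulomb terms), the sketch below solves the Saha ionization balance for pure hydrogen; the constants are standard SI values and the example conditions are arbitrary.

```python
# Saha ionization balance for pure hydrogen, solved by Newton-Raphson on the
# ionized fraction x, where n*x^2/(1-x) = S(T). Illustrative sketch only.
import numpy as np

K_B, H, M_E, CHI_H = 1.380649e-23, 6.62607015e-34, 9.1093837e-31, 2.17872e-18

def saha_rhs(T):
    """Saha factor S(T) = n_e * n_p / n_H for hydrogen (SI units)."""
    return (2.0 * np.pi * M_E * K_B * T / H**2) ** 1.5 * np.exp(-CHI_H / (K_B * T))

def ionization_fraction(n_tot, T, tol=1e-12):
    """Newton-Raphson on f(x) = n*x^2/(1-x) - S(T)."""
    s = saha_rhs(T)
    x = 0.5
    for _ in range(100):
        f = n_tot * x * x / (1.0 - x) - s
        df = n_tot * x * (2.0 - x) / (1.0 - x) ** 2      # analytic derivative
        step = f / df
        x = min(max(x - step, 1e-12), 1.0 - 1e-12)       # keep x inside (0, 1)
        if abs(step) < tol:
            break
    return x

# Example conditions (arbitrary): mostly neutral vs. mostly ionized hydrogen.
print(ionization_fraction(n_tot=1e23, T=6000.0))
print(ionization_fraction(n_tot=1e23, T=20000.0))
```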

  20. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre and post processing to the reconstructed data sets.
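
    For readers unfamiliar with MART, the sketch below shows the generic multiplicative update on a toy weight-matrix system; it is illustrative only, and the relaxation factor and normalization are assumptions rather than the optimized MLOS or MART implementations analyzed above.

```python
# Generic MART (multiplicative algebraic reconstruction technique) sketch for a
# tiny linear tomography problem b = W @ x, with W holding the nonnegative
# weights of each voxel along each line of sight.
import numpy as np

def mart(W, b, n_iter=50, relax=0.5, eps=1e-12):
    x = np.ones(W.shape[1])                      # uniform nonnegative start
    for _ in range(n_iter):
        for i in range(W.shape[0]):              # one multiplicative update per ray
            proj = W[i] @ x
            if proj > eps and b[i] > eps:
                x *= (b[i] / proj) ** (relax * W[i] / W[i].max())
    return x

# Two-voxel "volume" seen by three weighted rays.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.3]])
x_true = np.array([2.0, 5.0])
print(mart(W, W @ x_true))                       # approaches [2, 5]
```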

  1. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time]

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.

  2. Reexamination of recent experimental results in surface-wave-produced argon plasmas at 2.45 GHz: Comparison with the diffusion-recombination model results

    NASA Astrophysics Data System (ADS)

    Sola, A.; Gamero, A.; Cotrino, J.; Colomer, V.

    1988-10-01

    In this paper we comment on recently reported experimental data about some characteristic magnitudes of plasma columns produced and maintained by surface microwaves. We then compare them with theoretical values obtained from the diffusion-recombination model of Mateev, Zhelyazkov, and Atanassov [J. Appl. Phys. 54, 3049 (1988)] and Zhelyazkov, Benova, and Atanassov [J. Appl. Phys. 59, 1466 (1986)] for the same magnitudes, in a wide range of operating conditions. Such a comparison allows us to make conclusions about the results of the model and its hypothesis.

  3. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Ulmer, W.; Pyyry, J.; Kaissl, W.

    2005-04-01

    Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using Monte Carlo codes GEANT-Fluka, versions 95, 98, 2002, and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab-geometry was used as a phantom model to test computed results by measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤ 5.5 cm² and densities ≤ 0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work has partially been presented at WC 2003, Sydney.

  4. Evaluation of producer and consumer benefits resulting from eradication of bovine viral diarrhoea (BVD) in Scotland, United Kingdom.

    PubMed

    Weldegebriel, Habtu T; Gunn, George J; Stott, Alistair W

    2009-01-01

    In this paper we evaluated the distributional effects on actors in the milk market of a hypothetical programme to eradicate bovine viral diarrhoea (BVD) from the Scottish dairy herd. With this in mind, we applied an economic welfare methodology which utilizes data on price, on output quantity, on elasticities of supply and demand and on simulated cost and yield effects of an eradication programme. Our analysis is based on Markov-chain Monte Carlo simulation of BVD spread in the dairy herd. We found that consequent upon the eradication of the disease milk yield per cow increased for all herd sizes in Scotland whereas milk price received by farmers fell. Consequently, milk consumers gained around £11 million in discounted economic surplus and producers with infected herds gained around £39 million whereas producers with un-infected herds lost around £2 million in discounted surplus. On balance, however, the eradication programme generated around £47 million in discounted economic gain for Scotland. We found that the results are sensitive to changes in yield gains made by owners of the infected herd. PMID:18937987

  5. Designing Summer Research Experiences for Teachers and Students That Promote Classroom Science Inquiry Projects and Produce Research Results

    NASA Astrophysics Data System (ADS)

    George, L. A.; Parra, J.; Rao, M.; Offerman, L.

    2007-12-01

    Research experiences for science teachers are an important mechanism for increasing classroom teachers' science content knowledge and facility with "real world" research processes. We have developed and implemented a summer scientific research and education workshop model for high school teachers and students which promotes classroom science inquiry projects and produces important research results supporting our overarching scientific agenda. The summer training includes development of a scientific research framework, design and implementation of preliminary studies, extensive field research and training in and access to instruments, measurement techniques and statistical tools. The development and writing of scientific papers is used to reinforce the scientific research process. Using these skills, participants collaborate with scientists to produce research quality data and analysis. Following the summer experience, teachers report increased incorporation of research inquiry in their classrooms and student participation in science fair projects. This workshop format was developed for an NSF Biocomplexity Research program focused on the interaction of urban climates, air quality and human response and can be easily adapted for other scientific research projects.

  6. Producing K indices by the interactive method based on the traditional hand-scaling methodology - preliminary results

    NASA Astrophysics Data System (ADS)

    Valach, Fridrich; Váczyová, Magdaléna; Revallo, Miloš

    2016-01-01

    This paper reports on an interactive computer method for producing K indices. The method is based on the traditional hand-scaling methodology that had been practised at Hurbanovo Geomagnetic Observatory until the end of 1997. Here, the performance of the method was tested on the data of the Kakioka Magnetic Observatory. We have found that in some ranges of the K-index values our method might be a beneficial supplement to the computer-based methods approved and endorsed by IAGA. This result was achieved for both very low (K=0) and high (K ≥ 5) levels of the geomagnetic activity. The method incorporated an interactive procedure of selecting quiet days by a human operator (observer). This introduces a certain amount of subjectivity, just as the traditional hand-scaling method does.
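
    The non-interactive core of K scaling can be sketched as follows: subtract a quiet-day curve from a 3-hour block of the horizontal component, take the range of the residual disturbance, and convert it through the station's quasi-logarithmic lower limits. The limits below are the classic table for a K9 = 500 nT station and the data are synthetic; real observatories use their own station-specific table, and the interactive, observer-guided quiet-day selection described in the paper is not reproduced here.

```python
# Minimal K-index sketch: range of the disturbance (observed H minus quiet-day
# curve) over a 3-hour block, mapped through quasi-logarithmic lower limits.
import numpy as np

K_LOWER_LIMITS_NT = [0, 5, 10, 20, 40, 70, 120, 200, 330, 500]   # K9 = 500 nT scale

def k_index(h_component, quiet_day_curve):
    """h_component, quiet_day_curve: 1-D arrays (nT) covering one 3-hour block."""
    disturbance = np.asarray(h_component) - np.asarray(quiet_day_curve)
    rng = disturbance.max() - disturbance.min()
    return int(np.searchsorted(K_LOWER_LIMITS_NT, rng, side="right") - 1)

# Example: a smooth quiet variation plus a ~90 nT disturbance gives K = 5.
t = np.linspace(0, 3 * 3600, 180)
quiet = 20 * np.sin(2 * np.pi * t / 86400.0)
disturbed = quiet + 90 * np.exp(-((t - 5000) / 1500.0) ** 2)
print(k_index(disturbed, quiet))
```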

  7. Can laboratory tholins mimic the chemistry producing Titan's aerosols? A review in light of ACP experimental results

    NASA Astrophysics Data System (ADS)

    Coll, P.; Navarro-González, R.; Szopa, C.; Poch, O.; Ramírez, S. I.; Coscia, D.; Raulin, F.; Cabane, M.; Buch, A.; Israël, G.

    2013-03-01

    The first results obtained by the ACP experiment onboard the Huygens probe revealed that the main products obtained after thermolysis of Titan's collected aerosols were ammonia (NH3) and hydrogen cyanide (HCN). Titan's aerosols, and their laboratory analogues named tholins, have been the subject of experimental or theoretical studies during the last four decades. These studies have been mainly devoted to understanding their origin and formation mechanisms, their physical, chemical and optical properties, and their role in the radiative equilibrium of the satellite. Before the arrival of the Cassini-Huygens mission, the dense layer of aerosols hid many aspects of the satellite's surface and precious information about its composition. If Titan's aerosols have been in the eye and mind of planetary scientists for such a long time, it is not surprising that a literature survey displays a large number of papers on aerosol analogues. By aerosol analogues we mean any material produced in a terrestrial laboratory under conditions that try to represent those of Titan's atmosphere. We present here a study aimed at understanding the particularities of aerosol analogues synthesized in different laboratories around the world in order to determine some of their most representative chemical fingerprints and in some cases, to perform a direct comparison of the volatiles produced after a thermal treatment done in conditions similar to the ones used by the ACP experiment. From the information collected, we propose a broad classification of aerosol analogues highlighting the materials that can be more representative of Titan's aerosols in terms of their content of organic volatiles. We identify the laboratory analogs that best suit the ACP results; such identification is of prime importance to correctly predict the optical properties of Titan's aerosol and to accurately estimate their contribution in radiative equilibrium models and/or to assess their role in chemical reactions of

  8. Do different decision-analytic modeling approaches produce different results? A systematic review of cross-validation studies.

    PubMed

    Tsoi, Bernice; Goeree, Ron; Jegathisawaran, Jathishinie; Tarride, Jean-Eric; Blackhouse, Gord; O'Reilly, Daria

    2015-06-01

    When choosing a modeling approach for health economic evaluation, certain criteria are often considered (e.g., population resolution, interactivity, time advancement mechanism, resource constraints). However, whether these criteria and their associated modeling approaches impact results remains poorly understood. A systematic review was conducted to identify cross-validation studies (i.e., modeling a problem using different approaches with the same body of evidence) to offer insight on this topic. With respect to population resolution, reviewed studies suggested that both aggregate- and individual-level models will generate comparable results, although a practical trade-off exists between validity and feasibility. In terms of interactivity, infectious-disease models consistently showed that, depending on the assumptions regarding probability of disease exposure, dynamic and static models may produce dissimilar results with opposing policy recommendations. Empirical evidence on the remaining criteria is limited. Greater discussion will therefore be necessary to promote a deeper understanding of the benefits and limits to each modeling approach. PMID:25728942

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. RESULTS OF MONITORING METALLO-BETA-LACTAMASE-PRODUCING STRAINS OF PSEUDOMONAS AERUGINOSA IN A MULTI-PROFILE HOSPITAL.

    PubMed

    Shamaeva, S K; Portnyagina, U S; Edelstein, M V; Kuzmina, A A; Maloguloval, S; Varfolomeeva, N A

    2015-01-01

    The authors present the results of long-term monitoring of metallo-beta-lactamase (MBL) producing strains of Pseudomonas aeruginosa in the Republican Hospital No 2 of Yakutsk, Russian Federation. Hospitals across Russia, as well as the rest of the world, face a rapid appearance and a virtually unchecked spread of multiresistant and panresistant nosocomial pathogens. Especially prevalent are multidrug-resistant isolates of P. aeruginosa, most often found among the patients of intensive care and intensive therapy units, as well as surgery departments. The aim of this study is to investigate the prevalence of metallo-beta-lactamase-producing strains of P. aeruginosa in a multi-profile hospital. 2,135 isolates of P. aeruginosa were studied, collected during a time span of seven years (2008-2014) from clinical specimens of hospitalised patients in acute surgery, purulent surgery, neurosurgery, otolaryngology, coloproctology departments, intensive care and intensive therapy, burn units, as well as intensive care unit for patients with acute cerebrovascular accidents and coronary care unit. Strains were identified and re-identified using established methods, NEFERMtest 24 (MICROLATEST) biochemical microtest and API (bioMerieux) test systems were used. For all carbapenem-resistant strains a phenotype screening for MBL was performed using the double-disks method with EDTA. In order to identify VIM-type and IMP-type MBL genes a real-time multiplex polymerase chain reaction was used. Among the investigated strains the largest number of P. aeruginosa - 35.6% (761 isolates) was found in patients at intensive care and intensive therapy units. Clonal expansion of extensively drug-resistant strain P. aeruginosa ST235 (VIM-2) was determined, the resistance mechanism of which is connected to MBL. Sensitivity determination of MBL-producing isolates of P. aeruginosa has shown that isolated strains have a high level of resistance (100%) to all tested antibacterial agents: piperacillin

  11. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed schedule with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  12. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed schedule with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  13. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  14. Plasmodium-specific molecular assays produce uninterpretable results and non-Plasmodium spp. sequences in field-collected Anopheles vectors.

    PubMed

    Harrison, Genelle F; Foley, Desmond H; Rueda, Leopoldo M; Melanson, Vanessa R; Wilkerson, Richard C; Long, Lewis S; Richardson, Jason H; Klein, Terry A; Kim, Heung-Chul; Lee, Won-Ja

    2013-12-01

    The Malaria Research and Reference Reagent Resource-recommended PLF/UNR/VIR polymerase chain reaction (PCR) was used to detect Plasmodium vivax in Anopheles spp. mosquitoes collected in South Korea. Samples that were amplified were sequenced and compared with known Plasmodium spp. by using the PlasmoDB.org Basic Local Alignment Search Tool/n and the National Center for Biotechnology Information Basic Local Alignment Search Tool/n tools. Results show that the primers PLF/UNR/VIR used in this PCR can produce uninterpretable results and non-specific sequences in field-collected mosquitoes. Three additional PCRs (PLU/VIV, specific for 18S small subunit ribosomal DNA; Pvr47, specific for a nuclear repeat; and GDCW/PLAS, specific for the mitochondrial marker, cytB) were then used to find a more accurate and interpretable assay. Samples that were amplified were again sequenced. The PLU/VIV and Pvr47 assays showed cross-reactivity with non-Plasmodium spp. and an arthropod fungus (Zoophthora lanceolata). The GDCW/PLAS assay amplified only Plasmodium spp. but also amplified the non-human specific parasite P. berghei from an Anopheles belenrae mosquito. Detection of P. berghei in South Korea is a new finding. PMID:24189365

  15. Difference in percentage of ventricular pacing between two algorithms for minimizing ventricular pacing: results of the IDEAL RVP (Identify the Best Algorithm for Reducing Unnecessary Right Ventricular Pacing) study

    PubMed Central

    Murakami, Yoshimasa; Tsuboi, Naoya; Inden, Yasuya; Yoshida, Yukihiko; Murohara, Toyoaki; Ihara, Zenichi; Takami, Mitsuaki

    2010-01-01

    Aims Managed ventricular pacing (MVP) and Search AV+ are representative dual-chamber pacing algorithms for minimizing ventricular pacing (VP). This randomized, crossover study aimed to examine the difference in ability to reduce percentage of VP (%VP) between these two algorithms. Methods and results Symptomatic bradyarrhythmia patients implanted with a pacemaker equipped with both algorithms (Adapta DR, Medtronic) were enrolled. The %VPs of the patients during two periods were compared: 1 month operation of either one of the two algorithms for each period. All patients were categorized into subgroups according to the atrioventricular block (AVB) status at baseline: no AVB (nAVB), first-degree AVB (1AVB), second-degree AVB (2AVB), episodic third-degree AVB (e3AVB), and persistent third-degree AVB (p3AVB). Data were available from 127 patients for the analysis. For all patient subgroups, except for p3AVB category, the median %VPs were lower during the MVP operation than those during the Search AV+ (nAVB: 0.2 vs. 0.8%, P < 0.0001; 1AVB: 2.3 vs. 27.4%, P = 0.001; 2AVB: 16.4% vs. 91.9%, P = 0.0052; e3AVB: 37.7% vs. 92.7%, P = 0.0003). Conclusion Managed ventricular pacing algorithm, when compared with Search AV+, offers further %VP reduction in patients implanted with a dual-chamber pacemaker, except for patients diagnosed with persistent loss of atrioventricular conduction. PMID:19762332

  16. Two Measurement Methods of Leaf Dry Matter Content Produce Similar Results in a Broad Range of Species

    PubMed Central

    Vaieretti, María Victoria; Díaz, Sandra; Vile, Denis; Garnier, Eric

    2007-01-01

    Background and Aims Leaf dry matter content (LDMC) is widely used as an indicator of plant resource use in plant functional trait databases. Two main methods have been proposed to measure LDMC, which basically differ in the rehydration procedure to which leaves are subjected after harvesting. These are the ‘complete rehydration’ protocol of Garnier et al. (2001, Functional Ecology 15: 688–695) and the ‘partial rehydration’ protocol of Vendramini et al. (2002, New Phytologist 154: 147–157). Methods To test differences in LDMC due to the use of different methods, LDMC was measured on 51 native and cultivated species representing a wide range of plant families and growth forms from central-western Argentina, following the complete rehydration and partial rehydration protocols. Key Results and Conclusions The LDMC values obtained by both methods were strongly and positively correlated, clearly showing that LDMC is highly conserved between the two procedures. These trends were not altered by the exclusion of plants with non-laminar leaves. Although the complete rehydration method is the safest to measure LDMC, the partial rehydration procedure produces similar results and is faster. It therefore appears as an acceptable option for those situations in which the complete rehydration method cannot be applied. Two notes of caution are given for cases in which different datasets are compared or combined: (1) the discrepancy between the two rehydration protocols is greatest in the case of high-LDMC (succulent or tender) leaves; (2) the results suggest that, when comparing many studies across unrelated datasets, differences in the measurement protocol may be less important than differences among seasons, years and the quality of local habitats. PMID:17353207

  17. Multimedia and Training: Practice and Skills of European Producers, (Part 1) Results of the European Project "START-UP."

    ERIC Educational Resources Information Center

    Gutierrez, Christine Gardiol; Boder, Andre

    1992-01-01

    Describes the START-UP project developed by the European Community to identify educational and training multimedia producers in European countries and to define the methodologies that these producers use in developing their products. Highlights include production stages, multimedia skills, teamwork, decision making, learning processes, learner…

  18. Native-sized recombinant spider silk protein produced in metabolically engineered Escherichia coli results in a strong fiber.

    PubMed

    Xia, Xiao-Xia; Qian, Zhi-Gang; Ki, Chang Seok; Park, Young Hwan; Kaplan, David L; Lee, Sang Yup

    2010-08-10

    Spider dragline silk is a remarkably strong fiber that makes it attractive for numerous applications. Much has thus been done to make similar fibers by biomimic spinning of recombinant dragline silk proteins. However, success is limited in part due to the inability to successfully express native-sized recombinant silk proteins (250-320 kDa). Here we show that a 284.9 kDa recombinant protein of the spider Nephila clavipes is produced and spun into a fiber displaying mechanical properties comparable to those of the native silk. The native-sized protein, predominantly rich in glycine (44.9%), was favorably expressed in metabolically engineered Escherichia coli within which the glycyl-tRNA pool was elevated. We also found that the recombinant proteins of lower molecular weight versions yielded inferior fiber properties. The results provide insight into evolution of silk protein size related to mechanical performance, and also clarify why spinning lower molecular weight proteins does not recapitulate the properties of native fibers. Furthermore, the silk expression, purification, and spinning platform established here should be useful for sustainable production of natural quality dragline silk, potentially enabling broader applications. PMID:20660779

  19. Profiling Wind and Greenhouse Gases by Infrared-laser Occultation: Algorithm and Results from Simulations in Windy Air

    NASA Astrophysics Data System (ADS)

    Plach, Andreas; Proschek, Veronika; Kirchengast, Gottfried

    2014-05-01

    We employ the Low Earth Orbit (LEO-LEO) microwave and infrared-laser occultation (LMIO) method to derive a full set of thermodynamic state variables from microwave signals and climate benchmark profiling of greenhouse gases (GHGs) and line-of-sight (l.o.s.) wind using infrared-laser signals. The focus lies on the upper troposphere/lower stratosphere region (UTLS - 5 km to 35 km). The GHG retrieval errors are generally smaller than 1% to 3% r.m.s., at a vertical resolution of about 1 km. In this study we focus on the infrared-laser part of LMIO, where we introduce a new, advanced wind retrieval algorithm to derive accurate l.o.s. wind profiles. The wind retrieval uses the reasonable assumption of the wind blowing along spherical shells (horizontal winds) and therefore the l.o.s. wind speed can be retrieved by using an Abel integral transform. A 'delta-differential transmission' principle is applied to two thoroughly selected infrared-laser signals placed at the wings of the highly symmetric C18OO absorption line (nominally ±0.004 cm⁻¹ from the line center near 4767 cm⁻¹) plus a related 'off-line' reference signal. The delta-differential transmission obtained by differencing these signals is free of atmospheric broadband effects and is proportional to the wind-induced Doppler shift; it serves as the integrand of the Abel transform. The Doppler frequency shift calculated along with the wind retrieval is in turn also used in the GHG retrieval to correct the frequency of GHG-sensitive infrared-laser signals for the wind-induced Doppler shift, which enables improved GHG estimation. This step therefore provides the capability to correct potential wind-induced residual errors of the GHG retrieval in case of strong winds. We performed end-to-end simulations to test the performance of the new retrieval in windy air. The simulations used realistic atmospheric conditions (thermodynamic state variables and wind profiles) from an analysis field of the European Centre for
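
    The geometric core of such a retrieval is an Abel-type inversion over spherical shells. The sketch below shows a generic onion-peeling inversion that recovers a layered profile from limb-path integrals; it is not the authors' algorithm, and the shell radii and profile values are synthetic.

```python
# Generic onion-peeling Abel inversion: recover a spherically layered profile
# f(r) from limb integrals y_i = sum_j L_ij * f_j, where L_ij is the chord
# length of ray i (tangent radius r_i) inside shell j.
import numpy as np

def onion_peel(shell_edges, y):
    """shell_edges: ascending shell boundaries (km); y: one integral per ray."""
    n = len(shell_edges) - 1
    L = np.zeros((n, n))
    for i in range(n):                           # ray tangent at shell_edges[i]
        for j in range(i, n):                    # shells the ray passes through
            r_lo, r_hi = shell_edges[j], shell_edges[j + 1]
            L[i, j] = 2.0 * (np.sqrt(r_hi**2 - shell_edges[i]**2)
                             - np.sqrt(r_lo**2 - shell_edges[i]**2))
    return np.linalg.solve(L, y)                 # triangular system, top-down

# Synthetic layered profile and its exact limb integrals.
r = np.linspace(6376.0, 6406.0, 31)              # shell edges, km
f_true = np.exp(-(np.arange(30) - 10.0) ** 2 / 40.0)
y = np.array([sum(2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                         - np.sqrt(r[j]**2 - r[i]**2)) * f_true[j]
                  for j in range(i, 30)) for i in range(30)])
print(np.allclose(onion_peel(r, y), f_true))     # True
```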

  20. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.

  1. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  2. Formation of trichloromethane in chlorinated water and fresh-cut produce and as a result of reacting with citric acid

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chlorine (sodium hypochlorite) is commonly used by the fresh produce industry to sanitize wash water, fresh and fresh-cut fruits and vegetables. However, possible formation of harmful chlorine by-products is a concern. The objectives of this study were to compare chlorine and chlorine dioxide in t...

  3. The Operational MODIS Cloud Optical and Microphysical Property Product: Overview of the Collection 6 Algorithm and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas

    2012-01-01

    Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 µm bands (instead of differences relative to the standard 2.1 µm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially-complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources including the MODIS L1B uncertainty index for assessing band and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low cloud non-unity emissivity temperature adjustments.

  4. New products of GOSAT/TANSO-FTS TIR CO2 and CH4 profiles: Algorithm and initial validation results

    NASA Astrophysics Data System (ADS)

    Saitoh, N.; Imasu, R.; Sugita, T.; Hayashida, S.; Shiomi, K.; Kawakami, S.; Machida, T.; Sawa, Y.; Matsueda, H.; Terao, Y.

    2013-12-01

    The Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) simultaneously observes column abundances and profiles of CO2 and CH4 in the same field of view, from the shortwave infrared (SWIR) and thermal infrared (TIR) bands, respectively. The latest version of the GOSAT Level 1B (L1B) radiance spectra, the version 160.160, is improved compared to the previous versions, but still has a bias judging from comparisons with spectral data of other coincident instruments. The bias is largest at around the 14-15 micron band that includes strong carbon dioxide absorption lines [Kataoka et al., 2013]; it probably causes a high bias in mid-tropospheric carbon dioxide concentration of the current released V00.01 TIR products. Besides, the relatively low signal-to-noise ratio (SNR) of less than 100 at around the 7-8 micron band makes CH4 retrieval unstable. We have improved an algorithm for retrieving CO2 and CH4 profiles in order to overcome the spectral bias and low SNR problems. In our new algorithm, we treated surface temperature and surface emissivity as correction parameters for radiance-independent and radiance-dependent spectral biases, respectively, and retrieved them simultaneously with gas retrieval. We used the 7-8 micron band (1140-1370 wavenumber) for methane retrieval and the 10 and 14-15 micron bands (930-990, 1040-1090, 690-750, and 790-795 wavenumber) for carbon dioxide retrieval. Temperature, water vapor, ozone, and nitrous oxide were retrieved simultaneously other than CO2 and CH4. CO2 profiles retrieved using our new algorithm have no clear bias in the mid-troposphere compared to the previous V00.01 CO2 product. New retrieved CH4 profiles show better agreement with aircraft CH4 profiles than the a priori profiles.

  5. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either taken care of using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected the GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS product and the GPT model were used instead of the IGS precise product, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.
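
    The ionosphere-free combination mentioned above is a standard dual-frequency construct; a minimal sketch with the GPS L1/L2 frequencies is given below. The rest of the PPP-RTK chain (IGS-RTS corrections, the GPT troposphere model, the Extended Kalman Filter) is not shown, and the example numbers are synthetic.

```python
# Dual-frequency ionosphere-free pseudorange combination:
#   P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)
# which cancels the first-order ionospheric delay (delay scales as 1/f^2).
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 carrier frequencies, Hz

def ionofree_pseudorange(p1, p2):
    """Ionosphere-free combination of L1/L2 pseudoranges (metres)."""
    g = (F1 / F2) ** 2
    return (g * p1 - p2) / (g - 1.0)

# Example: a 5 m ionospheric slant delay on L1 scales by (f1/f2)^2 on L2;
# the combination recovers the geometric range.
rho, iono_l1 = 22_000_000.0, 5.0
p1 = rho + iono_l1
p2 = rho + iono_l1 * (F1 / F2) ** 2
print(ionofree_pseudorange(p1, p2))  # ~22,000,000.0
```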

  6. Transfer of cattle embryos produced with sex-sorted semen results in impaired pregnancy rate and increased male calf mortality.

    PubMed

    Mikkola, M; Andersson, M; Taponen, J

    2015-10-15

    This study investigated the pregnancy rate and calf mortality after transfer of embryos produced using sex-sorted semen. Data for 12,438 embryo transfers performed on dairy farms were analyzed. Of these, 10,697 embryos were produced using conventional semen (CONV embryos) and 1741 using sex-sorted semen from 97 bulls (SEX embryos), predominantly of Ayrshire and Holstein breeds. Of the CONV embryos, 27.4% were transferred fresh, whereas of the SEX embryos, 55.7% were fresh. Recipient attributes (breed, parity, number of previous breeding attempts, and interval from calving to transfer) were comparable for both embryo types, heifers representing 57.8% of recipients in the CONV group and 54.8% in the SEX group. Recipients that were not artificially inseminated or did not undergo a new embryo transfer after the initial embryo transfer and had registered calving in fewer than 290 days after the transfer were considered pregnant. Pregnancy rate for recipients receiving CONV embryos was 44.1%, and for those receiving SEX embryos, it was 38.8%. The odds ratio for pregnancy in recipients receiving CONV embryos was 1.34 compared with SEX embryos (P < 0.001). The proportion of female calves was 49.6% and 92.3% in CONV and SEX groups, respectively. Overall, calf mortality was comparable in both groups. Mortality was similar in CONV and SEX groups (6.6% and 7.7%, respectively) for female calves. For male calves, mortality was 9.2% in the CONV group but significantly higher, 16.0% (P < 0.05), in the SEX group. This study showed that transfer of embryos produced with sex-sorted semen decreased the pregnancy rate by about 12% compared with embryos produced using conventional semen. Mortality of male calves born from SEX embryos was higher than for those born from CONV embryos. PMID:26174034

  7. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. Also, the study included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analyses of the experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests which will be conducted during the spring of 2003. Therefore only three pilots were used. Nevertheless some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  8. Cosmic ray exposure dating with in situ produced cosmogenic He-3 - Results from young Hawaiian lava flows

    NASA Technical Reports Server (NTRS)

    Kurz, Mark D.; Colodner, Debra; Trull, Thomas W.; Moore, Richard B.; O'Brien, Keran

    1990-01-01

    Cosmogenic helium contents in a suite of Hawaiian radiocarbon-dated lava flows were measured to study the use of the production rate of spallation-produced cosmogenic He-3 as a surface exposure chronometer. Basalt samples from the Mauna Loa and Hualalai volcanoes were analyzed, showing that exposure-age dating is feasible in the 600-13000 year age range. The data suggest a present-day sea-level production rate in olivine of 125 ± 30 atoms/g yr.

  9. The $11 Billion Mystery: New York More Than Doubled Its Spending on the Schools during the 1980s. Why Didn't All That Money Produce Better Results?

    ERIC Educational Resources Information Center

    Public Policy Inst., Albany, NY.

    Over the past 10 years, New York has more than doubled its spending on elementary and secondary education, in a fervent attempt to produce greater student achievement and prepare our young people for the fast changing world in which they will have to earn a living. Better results have not been produced as the education system has focused on more…

  10. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  11. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  12. Preliminary results from a subsonic high-angle-of-attack flush airdata sensing (HI-FADS) system - Design, calibration, algorithm development, and flight test evaluation

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Larson, Terry J.

    1990-01-01

    A nonintrusive high angle-of-attack flush airdata sensing (HI-FADS) system was installed and flight-tested on the F-18 high alpha research vehicle. This paper discusses the airdata algorithm development and composite results expressed as airdata parameter estimates and describes the HI-FADS system hardware, calibration techniques, and algorithm development. An independent empirical verification was performed over a large portion of the subsonic flight envelope. Test points were obtained for Mach numbers from 0.15 to 0.94 and angles of attack from -8.0 to 55.0 deg. Angles of sideslip ranged from -15.0 to 15.0 deg, and test altitudes ranged from 18,000 to 40,000 ft. The HI-FADS system gave excellent results over the entire subsonic Mach number range up to 55 deg angle of attack. The internal pneumatic frequency response of the system is accurate to beyond 10 Hz.

  13. Cosmic ray exposure dating with in situ produced cosmogenic 3He: results from young Hawaiian lava flows

    USGS Publications Warehouse

    Kurz, M.D.; Colodner, D.; Trull, T.W.; Moore, R.B.; O'Brien, K.

    1990-01-01

    In an effort to determine the in situ production rate of spallation-produced cosmogenic 3He, and evaluate its use as a surface exposure chronometer, we have measured cosmogenic helium contents in a suite of Hawaiian radiocarbon-dated lava flows. The lava flows, ranging in age from 600 to 13,000 years, were collected from Hualalai and Mauna Loa volcanoes on the island of Hawaii. Because cosmic ray surface-exposure dating requires the complete absence of erosion or soil cover, these lava flows were selected specifically for this purpose. The 3He production rate, measured within olivine phenocrysts, was found to vary significantly, ranging from 47 to 150 atoms g⁻¹ yr⁻¹ (normalized to sea level). Although there is considerable scatter in the data, the samples younger than 10,000 years are well-preserved and exposed, and the production rate variations are therefore not related to erosion or soil cover. Data averaged over the past 2000 years indicate a sea-level 3He production rate of 125 ± 30 atoms g⁻¹ yr⁻¹, which agrees well with previous estimates. The longer record suggests a minimum in sea level normalized 3He production rate between 2000 and 7000 years (55 ± 15 atoms g⁻¹ yr⁻¹), as compared to samples younger than 2000 years (125 ± 30 atoms g⁻¹ yr⁻¹), and those between 7000 and 10,000 years (127 ± 19 atoms g⁻¹ yr⁻¹). The minimum in production rate is similar in age to that which would be produced by variations in geomagnetic field strength, as indicated by archeomagnetic data. However, the production rate variations (a factor of 2.3 ± 0.8) are poorly determined due to the large uncertainties in the youngest samples and questions of surface preservation for the older samples. Calculations using the atmospheric production model of O'Brien (1979) [35], and the method of Lal and Peters (1967) [11], predict smaller production rate variations for similar variation in dipole moment (a factor of 1.15-1.65). Because the production rate variations, archeomagnetic data

  14. Lymphoid precursors are directed to produce dendritic cells as a result of TLR9 ligation during herpes infection.

    PubMed

    Welner, Robert S; Pelayo, Rosana; Nagai, Yoshinori; Garrett, Karla P; Wuest, Todd R; Carr, Daniel J; Borghesi, Lisa A; Farrar, Michael A; Kincade, Paul W

    2008-11-01

    Hematopoietic stem and progenitor cells were previously found to express Toll-like receptors (TLRs), suggesting that bacterial/viral products may influence blood cell formation. We now show that common lymphoid progenitors (CLPs) from mice with active HSV-1 infection are biased to dendritic cell (DC) differentiation, and the phenomenon is largely TLR9 dependent. Similarly, CLPs from mice treated with the TLR9 ligand CpG ODN had little ability to generate CD19+ B lineage cells and had augmented competence to generate DCs. TNFalpha mediates the depletion of late-stage lymphoid progenitors from bone marrow in many inflammatory conditions, but redirection of lymphopoiesis occurred in TNFalpha-/- mice treated with CpG ODN. Increased numbers of DCs with a lymphoid past were identified in Ig gene recombination substrate reporter mice treated with CpG ODN. TLR9 is highly expressed on lymphoid progenitors, and culture studies revealed that those receptors, rather than inflammatory cytokines, accounted for the production of several types of functional DCs. Common myeloid progenitors are normally a good source of DCs, but this potential was reduced by TLR9 ligation. Thus, alternate differentiation pathways may be used to produce innate effector cells in health and disease. PMID:18552210

  15. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes, results show that the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
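
    One common way to realise a third-order polynomial input scaling of the kind described above is sketched below: the cubic is chosen so that the largest expected cue maps onto the platform limit with zero slope. The coefficient choice and the numbers are illustrative assumptions, not the gains used in the NASA study.

```python
# Cubic (third-order polynomial) input scaling: p(x) = a1*x + a3*x^3 with
# p(x_max) = y_max and p'(x_max) = 0, so commands stay inside the motion
# envelope while small cues pass through with near-full gain.
import numpy as np

def cubic_gain(x, x_max, y_max):
    """Scale an aircraft cue x (e.g. specific force, m/s^2) into a platform command."""
    a1 = 3.0 * y_max / (2.0 * x_max)
    a3 = -y_max / (2.0 * x_max ** 3)
    x_c = np.clip(x, -x_max, x_max)          # saturate beyond the design point
    return a1 * x_c + a3 * x_c ** 3

# Example: aircraft cues up to 10 m/s^2 compressed into a +/-4 m/s^2 platform.
x = np.linspace(-12, 12, 7)
print(cubic_gain(x, x_max=10.0, y_max=4.0))
```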

  16. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least square regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
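    As a rough illustration of the fusion step (not the paper's exact protocol: the similarity scores below are synthetic, the component count is arbitrary, and the jackknife generalization test is omitted), scikit-learn's PLSRegression can learn a weighting of algorithm and human scores:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Rows: face pairs; columns: similarity scores from each algorithm (and, optionally,
    # human raters). Labels: 1 = same person, 0 = different people. Values are synthetic.
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(200, 8))            # 7 algorithm columns + 1 human column (made up)
    labels = (scores.mean(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(float)

    pls = PLSRegression(n_components=3)           # n_components is a free choice here
    pls.fit(scores[:150], labels[:150])           # learn the weighting on one split...
    fused = pls.predict(scores[150:]).ravel()     # ...fused score for held-out pairs
    decision = fused > 0.5                        # threshold the fused score
    ```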

  17. Preliminary results and economics of the New York University process: continuous acid hydrolysis of cellulose, producing glucose for fermentation

    SciTech Connect

    Rugg, B.; Armstrong, P.; Stanton, R.

    1981-01-01

    The title process for the continuous acid hydrolysis of cellulose to glucose was evaluated in both batch- and pilot plant-scales. The optimal temperature and reaction time for batch-scale dilute acid hydrolysis were 232 degrees and 10-20 s, respectively. Comparison of glucose yield from newspaper pulp (10% solids) with sawdust (95% solids) as feedstock indicated that 50-60% conversions of alpha-cellulose to glucose were possible on a pilot-plant scale. Acceptable recovery of glucose (greater than 90%) was best accomplished by centrifugation at glucose concentrations of less than 4% from a 30% solids cake. In general, favorable results with respect to sugar yield and energy consumption were obtained.

  18. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is strongly influenced by different kinds of clutter, and various algorithms have been developed to mitigate it. Estimating the performance of a new algorithm, however, is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. The model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. In addition, a hardware platform was set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse repetition rate and a 905 nm wavelength, and the analog-to-digital converter in the sampling circuit runs at 250 megasamples per second. Together, the simulation model and the hardware platform form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed that combines a matched filter with constant fraction discrimination (CFD): the laser echo pulse is first processed by the matched filter, and CFD is then applied to the filtered output. Finally, clutter from ground and trees is discriminated and the target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. Simulation results demonstrate that the new algorithm achieves the best target-imaging performance in mitigating clutter reflected by ground and trees.
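    A minimal sketch of the compound detection idea, matched filtering followed by a simplified (fraction-of-peak) constant fraction discriminator, on a synthetic echo; the pulse width and sampling rate mirror the abstract, everything else is made up:

    ```python
    import numpy as np

    def matched_filter(echo, template):
        # Correlate the received echo with the emitted pulse shape.
        return np.correlate(echo, template, mode="same")

    def cfd_arrival(signal, fraction=0.5):
        # Simplified constant-fraction discrimination: first sample where the
        # matched-filter output crosses a fixed fraction of its peak.
        threshold = fraction * signal.max()
        return int(np.argmax(signal >= threshold))

    # Synthetic example: 40 ns FWHM Gaussian pulse sampled at 250 MS/s (4 ns per sample).
    t = np.arange(0, 2000) * 4e-9
    template = np.exp(-0.5 * ((np.arange(-15, 16) * 4e-9) / (40e-9 / 2.355))**2)
    echo = np.zeros_like(t)
    echo[700:700 + template.size] += 0.4 * template              # target return
    echo += 0.1 * np.random.default_rng(1).normal(size=t.size)   # clutter/noise stand-in

    filtered = matched_filter(echo, template)
    print("estimated arrival sample:", cfd_arrival(filtered))
    ```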

  19. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  20. Epidemic Diffusion of OXA-23-Producing Acinetobacter baumannii Isolates in Italy: Results of the First Cross-Sectional Countrywide Survey

    PubMed Central

    Principe, Luigi; Piazza, Aurora; Giani, Tommaso; Bracco, Silvia; Caltagirone, Maria Sofia; Arena, Fabio; Nucleo, Elisabetta; Tammaro, Federica; Rossolini, Gian Maria; Pagani, Laura

    2014-01-01

    Carbapenem-resistant Acinetobacter baumannii (CRAb) is emerging worldwide as a public health problem in various settings. The aim of this study was to investigate the prevalence of CRAb isolates in Italy and to characterize their resistance mechanisms and genetic relatedness. A countrywide cross-sectional survey was carried out at 25 centers in mid-2011. CRAb isolates were reported from all participating centers, with overall proportions of 45.7% and 22.2% among consecutive nonreplicate clinical isolates of A. baumannii from inpatients (n = 508) and outpatients (n = 63), respectively. Most of them were resistant to multiple antibiotics, whereas all remained susceptible to colistin, with MIC50 and MIC90 values of ≤0.5 mg/liter. The genes coding for carbapenemase production were identified by PCR and sequencing. OXA-23 enzymes (found in all centers) were by far the most common carbapenemases (81.7%), followed by OXA-58 oxacillinases (4.5%), which were found in 7 of the 25 centers. In 6 cases, CRAb isolates carried both blaOXA-23-like and blaOXA-58-like genes. A repetitive extragenic palindromic (REP)-PCR technique, multiplex PCRs for group identification, and multilocus sequence typing (MLST) were used to determine the genetic relationships among representative isolates (n = 55). Two different clonal lineages were identified, including a dominant clone of sequence type 2 (ST2) related to the international clone II (sequence group 1 [SG1], SG4, and SG5) and a clone of ST78 (SG6) previously described in Italy. Overall, our results demonstrate that OXA-23 enzymes have become the most prevalent carbapenemases and are now endemic in Italy. In addition, molecular typing profiles showed the presence of international and national clonal lineages in Italy. PMID:24920776

  1. Preservation of deep Himalayan PT conditions that formed during multiple events in garnet cores: Mylonitization produces erroneous results for rims

    NASA Astrophysics Data System (ADS)

    Sapkota, J.; Sanislav, I. V.

    2013-03-01

    The Kathmandu Thrust Sheet, which overlies the Lesser Himalayas along the southern part of the Main Central Thrust (MCT) and forms the leading edge of the Higher Himalayan crystalline rocks, is folded at a regional scale by the Gorkha-Kathmandu fold couplet in Central Nepal. Garnet porphyroblasts lying close to the MCT within this thrust sheet preserve structural and metamorphic history that predates mylonitization during thrust emplacement. The succession of five FIA sets preserved within these porphyroblasts formed due to changes in the direction of India's motion relative to Asia after they collided. The intersection of Fe, Ca and Mn isopleths for garnet cores reveals that FIA sets 1, 2, 3, 4 and 5 nucleated respectively at 6.2 kbar and 515 °C, 6-7 kbar and 545-550 °C, 6.6 kbar and 530 °C, 5.6-6.2 kbar and 525-550 °C and 6.8-6.9 kbar and 520-560 °C. The average PT mode of THERMOCALC, which relies on equilibrium being achieved between the garnet rims and the matrix, gives pressures around 11 kbar that do not accord with the lengthy succession of lower core pressures. The many foliations in the matrix, which formed during top to the south thrusting plus subsequent deformations that eventually led to these rocks reaching the surface, truncate all foliations preserved within the porphyroblasts that are defined by inclusion trails. This has resulted in the garnet rims not being in equilibrium with the matrix and the anomalously high pressures. The garnet rims may have been affected by slow dissolution and solution transfer over the period of time that the matrix was deforming plastically at high strain rates as the rocks were uplifted. The assumption of equilibrium between garnet rims and surrounding silicates used by various rim geothermobarometric methods does not hold for these rocks.

  2. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  3. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGESBeta

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  4. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  5. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
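    For readers unfamiliar with the oriented-bounding-box approach mentioned above, a rough sketch (using shapely, with a made-up block polygon and target parcel width; the published algorithm includes further rules for lot depth and street access that this omits) is:

    ```python
    import math
    from shapely.geometry import Polygon, box
    from shapely import affinity

    def obb_subdivide(block, target_width):
        """Rough sketch of oriented-bounding-box subdivision: rotate the block so its
        minimum rotated rectangle is axis-aligned, cut it into strips of roughly
        target_width, and rotate the pieces back. (Illustrative only.)"""
        mrr = block.minimum_rotated_rectangle
        xs, ys = mrr.exterior.coords.xy
        # Angle of the rectangle's first edge gives the block orientation.
        angle = math.degrees(math.atan2(ys[1] - ys[0], xs[1] - xs[0]))
        rotated = affinity.rotate(block, -angle, origin=block.centroid)
        minx, miny, maxx, maxy = rotated.bounds
        n = max(1, round((maxx - minx) / target_width))
        parcels = []
        for i in range(n):
            strip = box(minx + i * (maxx - minx) / n, miny,
                        minx + (i + 1) * (maxx - minx) / n, maxy)
            piece = rotated.intersection(strip)
            if not piece.is_empty:
                parcels.append(affinity.rotate(piece, angle, origin=block.centroid))
        return parcels

    parcels = obb_subdivide(Polygon([(0, 0), (100, 20), (110, 60), (10, 40)]), target_width=25)
    ```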

  6. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  7. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.

  8. An SMP soft classification algorithm for remote sensing

    NASA Astrophysics Data System (ADS)

    Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.

    2014-07-01

    This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.

  9. A splitting algorithm for Vlasov simulation with filamentation filtration

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Farrell, W. M.

    1994-01-01

    A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.

  10. Sequence-matched probes produce increased cross-platform consistency and more reproducible biological results in microarray-based gene expression measurements

    PubMed Central

    Mecham, Brigham H.; Klus, Gregory T.; Strovel, Jeffrey; Augustus, Meena; Byrne, David; Bozso, Peter; Wetmore, Daniel Z.; Mariani, Thomas J.; Kohane, Isaac S.; Szallasi, Zoltan

    2004-01-01

    Cancer-derived microarray data sets are routinely produced by various platforms that are either commercially available or manufactured by academic groups. The fundamental difference in their probe selection strategies holds the promise that identical observations produced by more than one platform prove to be more robust when validated by biology. However, cross-platform comparison requires matching corresponding probe sets. We introduce here sequence-based matching of probes instead of gene identifier-based matching. We analyzed breast cancer cell line derived RNA aliquots using Agilent cDNA and Affymetrix oligonucleotide microarray platforms to assess the advantage of this method. We show that, at different levels of the analysis, including gene expression ratios and difference calls, cross-platform consistency is significantly improved by sequence-based matching. We also present evidence that sequence-based probe matching produces more consistent results when comparing similar biological data sets obtained by different microarray platforms. This strategy allowed a more efficient transfer of classification of breast cancer samples between data sets produced by cDNA microarray and Affymetrix gene-chip platforms. PMID:15161944

  11. Sequence-matched probes produce increased cross-platform consistency and more reproducible biological results in microarray-based gene expression measurements.

    PubMed

    Mecham, Brigham H; Klus, Gregory T; Strovel, Jeffrey; Augustus, Meena; Byrne, David; Bozso, Peter; Wetmore, Daniel Z; Mariani, Thomas J; Kohane, Isaac S; Szallasi, Zoltan

    2004-01-01

    Cancer-derived microarray data sets are routinely produced by various platforms that are either commercially available or manufactured by academic groups. The fundamental difference in their probe selection strategies holds the promise that identical observations produced by more than one platform prove to be more robust when validated by biology. However, cross-platform comparison requires matching corresponding probe sets. We introduce here sequence-based matching of probes instead of gene identifier-based matching. We analyzed breast cancer cell line derived RNA aliquots using Agilent cDNA and Affymetrix oligonucleotide microarray platforms to assess the advantage of this method. We show that, at different levels of the analysis, including gene expression ratios and difference calls, cross-platform consistency is significantly improved by sequence-based matching. We also present evidence that sequence-based probe matching produces more consistent results when comparing similar biological data sets obtained by different microarray platforms. This strategy allowed a more efficient transfer of classification of breast cancer samples between data sets produced by cDNA microarray and Affymetrix gene-chip platforms. PMID:15161944

  12. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process that requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts; automating the process with an algorithm, a well-defined rule for solving a problem, can act as a bridge between them. In this study a univariate time series forecasting algorithm was developed in Java and validated against SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (Moving Average, Decomposition, Exponential Smoothing, Time Series Regressions and ARIMA) and current forecasting practice (data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm tally with those of SPSS and Excel. The algorithm will benefit not only forecasters but also end users who lack in-depth knowledge of the forecasting process.
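    As a minimal illustration of the kind of one-step-ahead evaluation such an algorithm automates (the window length, data and error measure below are made up; this is not the study's Java implementation), a moving-average forecast and its MAPE can be cross-checked against SPSS or Excel:

    ```python
    import numpy as np

    def moving_average_forecast(y, window=3):
        """One-step-ahead moving-average forecasts: the forecast for period t is the
        mean of the previous `window` observations."""
        y = np.asarray(y, dtype=float)
        return np.array([y[t - window:t].mean() for t in range(window, y.size)])

    def mape(actual, forecast):
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    y = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]   # made-up series
    f = moving_average_forecast(y, window=3)
    print("MAPE (%):", round(mape(y[3:], f), 2))
    ```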

  13. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges, and image-data compression results based on it, alone and in combination with wavelets, are discussed.

  14. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
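    The note itself is not reproduced here, but one classic square-root-only iteration it may refer to is x <- sqrt(sqrt(n*x)), whose fixed point satisfies x^3 = n; a small sketch:

    ```python
    from math import sqrt

    def cube_root(n, iterations=30):
        """Approximate n**(1/3) on a calculator that has only multiply and square-root
        keys: iterate x <- sqrt(sqrt(n * x)). The fixed point satisfies x**4 = n * x,
        i.e. x**3 = n. (One plausible reading of the algorithm; the original note may
        differ in detail.)"""
        x = 1.0
        for _ in range(iterations):
            x = sqrt(sqrt(n * x))
        return x

    print(cube_root(27.0))   # converges to 3.0
    ```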

  15. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  16. Escape transition of a polymer chain from a nanotube: how to avoid spurious results by use of the force-biased pruned-enriched Rosenbluth algorithm.

    PubMed

    Hsu, Hsiao-Ping; Binder, Kurt; Klushin, Leonid I; Skvortsov, Alexander M

    2008-10-01

    A polymer chain containing N monomers confined in a finite cylindrical tube of diameter D grafted at a distance L from the open end of the tube may undergo a rather abrupt transition, where part of the chain escapes from the tube to form a "crownlike" coil outside of the tube. When this problem is studied by Monte Carlo simulation of self-avoiding walks on the simple cubic lattice applying a cylindrical confinement and using the standard pruned-enriched Rosenbluth method (PERM), one obtains spurious results: with increasing chain length the transition gets weaker and weaker, owing to insufficient sampling of the "escaped" states, as a detailed analysis shows. In order to solve this problem, a new variant of a biased sequential sampling algorithm with resampling is proposed, force-biased PERM: the difficulty of sampling both phases in the region of the first-order transition with the correct weights is treated by applying a force at the free end pulling it out of the tube. Different strengths of this force need to be used and reweighting techniques are applied. Using rather long chains (up to N=18,000) and wide tubes (up to D=29 lattice spacings), the free energy of the chain, its end-to-end distance, and the number of "imprisoned" monomers can be estimated, as well as the order parameter and its distribution. It is suggested that this algorithm should be useful for other problems involving state changes of polymers, where the different states belong to rather disjoint "valleys" in the phase space of the system. PMID:18999448

  17. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  18. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established benchmark.

  19. Retrieval of CH4, CO, and CO2 total column amounts from SCIAMACHY near-infrared nadir spectra: retrieval algorithm and first results

    NASA Astrophysics Data System (ADS)

    Buchwitz, Michael; Burrows, John P.

    2004-02-01

    SCIAMACHY is a UV/visible/near-infrared grating spectrometer on board the European environmental satellite ENVISAT that observes the atmosphere in nadir, limb, and solar and lunar occultation viewing geometries with moderate spectral resolution (0.2-1.5 nm). At the University of Bremen a modified DOAS algorithm (WFM-DOAS) is being developed primarily for the retrieval of CH4, CO, CO2, H2O, N2O, and O2 total columns from SCIAMACHY near-infrared and visible nadir spectra. A first version of this algorithm has been implemented based on a fast look-up table approach. The algorithm and the look-up table are described along with an initial error analysis. Weighting functions and averaging kernels indicate that the SCIAMACHY near-infrared nadir measurements are highly sensitive to trace gas concentration changes even in the lowest kilometer of the atmosphere. The results presented have been obtained by applying WFM-DOAS to small spectral fitting windows focusing on CH4, CO2, CO, and O2 column retrieval and on CH4 and CO2 to O2 column ratios (denoted XCH4 and XCO2, respectively). These types of data products are planned to be used within the EU research project EVERGREEN to constrain surface sources and sinks of CH4 and CO2 using inverse modeling techniques. This study discusses the first set of WFM-DOAS products generated for, and to be further improved within, EVERGREEN. Although no detailed validation has been performed yet, we found that the retrieved columns have the right order of magnitude and show (at least qualitatively) the expected correlation of the well-mixed gases CO2 and CH4 with O2 and surface topography. The standard deviation of the dry-air column-averaged mixing ratio XCO2 within 10° latitude bands is ±10 ppmv or 2.7% (XCH4: ±50 ppbv or ±2.8%) for measurements over land (over ocean the scatter is a factor of 2-4 larger). These values have been determined from ~25% of the ground pixels of one orbit which fulfill the following requirements: (nearly) cloud
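    The O2-normalization behind XCO2 and XCH4 can be illustrated with a toy calculation (illustrative column values; the operational WFM-DOAS processing is far more involved):

    ```python
    # Dry-air column-averaged mixing ratio from retrieved columns (illustrative numbers).
    O2_MIXING_RATIO = 0.2095          # mole fraction of O2 in dry air

    def column_averaged_mixing_ratio(gas_column, o2_column):
        """XCO2 (or XCH4) as gas column / O2 column, scaled by the known O2 mole
        fraction; normalizing by the simultaneously retrieved O2 column removes much
        of the common light-path and surface-pressure error."""
        return gas_column / o2_column * O2_MIXING_RATIO

    xco2 = column_averaged_mixing_ratio(7.9e21, 4.5e24)   # molecules/cm^2, made up
    print(f"XCO2 = {xco2 * 1e6:.0f} ppmv")
    ```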

  20. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization (PSO). Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in local minima during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and PSO.
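    The abstract names ADDC as the objective function but does not spell out its formula; a minimal sketch of one common form of the measure (mean member-to-centroid distance averaged over clusters, on toy vectors) is:

    ```python
    import numpy as np

    def addc(doc_vectors, assignments, centroids):
        """Average Distance of Documents to the Cluster Centroid (ADDC): the mean,
        over clusters, of the mean distance between each member document and its
        centroid. Lower values indicate more compact clusters. (One common form of
        the measure; the paper may normalize slightly differently.)"""
        total, k = 0.0, len(centroids)
        for c in range(k):
            members = doc_vectors[assignments == c]
            if len(members):
                total += np.linalg.norm(members - centroids[c], axis=1).mean()
        return total / k

    # Toy TF-IDF-like vectors for 6 documents in 2 clusters.
    docs = np.array([[1, 0, 0], [0.9, 0.1, 0], [0.8, 0, 0.1],
                     [0, 1, 0.1], [0.1, 0.9, 0], [0, 0.8, 0.2]], float)
    labels = np.array([0, 0, 0, 1, 1, 1])
    cents = np.array([docs[labels == c].mean(axis=0) for c in (0, 1)])
    print("ADDC:", round(addc(docs, labels, cents), 3))
    ```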

  1. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  2. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In recent decades the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a rapid densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs that include not only earthquakes but also other sources of seismic waves more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm must be robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
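    As a rough illustration of the classification stage described above (synthetic stand-in features and labels; the study's actual waveform attributes and tuning are not reproduced), scikit-learn's random forest can be cross-validated as follows:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Each row holds attributes computed from one seismic event (e.g. signal duration,
    # dominant frequency, spectral-energy ratios); labels distinguish source classes
    # such as volcano-tectonic events, rockfalls and long-period events.
    # Synthetic stand-in data: the real study extracts features from waveforms.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 20))
    y = rng.integers(0, 3, size=500)

    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```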

  3. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  4. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  5. Effect of vertebral surface extraction on registration accuracy: a comparison of registration results for iso-intensity algorithms applied to computed tomography images

    NASA Astrophysics Data System (ADS)

    Herring, Jeannette L.; Maurer, Calvin R., Jr.; Muratore, Diane M.; Galloway, Robert L., Jr.; Dawant, Benoit M.

    1999-05-01

    This paper presents a comparison of iso-intensity-based surface extraction algorithms applied to computed tomography (CT) images of the spine. The extracted vertebral surfaces are used in surface-based registration of CT images to physical space, where our ultimate goal is the development of a technique that can be used for image-guided spinal surgery. The surface extraction process has a direct effect on image-guided surgery in two ways: the extracted surface must provide an accurate representation of the actual surface so that a good registration can be achieved, and the number of polygons in the mesh representation of the extracted surface must be small enough to allow the registration to be performed quickly. To examine the effect of the surface extraction process on registration error and run time, we have performed a large number of experiments on two plastic spine phantoms. Using a marker-based system to assess accuracy, we have found that submillimetric registration accuracy can be achieved using a point-to-surface registration algorithm with simplified and unsimplified members of the general class of iso-intensity-based surface extraction algorithms. This research has practical implications, since it shows that several versions of the widely available class of intensity-based surface extraction algorithms can be used to provide sufficient accuracy for vertebral registration. Since intensity-based algorithms are completely deterministic and fully automatic, this finding simplifies the pre-processing required for image-guided back surgery.

  6. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
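    For reference, a generic simulated-annealing skeleton of the kind compared in such scheduling studies looks as follows; the schedule representation, neighbor move and fitness function are placeholders to be supplied by the application, not the authors' implementation:

    ```python
    import math, random

    def simulated_annealing(initial, neighbor, fitness, t0=1.0, cooling=0.995, steps=20000):
        """Generic simulated-annealing search: `initial` is a schedule, `neighbor`
        perturbs it (e.g. move one observation to a different satellite or time slot),
        and `fitness` scores it (higher is better). Parameters are illustrative."""
        current, best = initial, initial
        f_cur = f_best = fitness(initial)
        t = t0
        for _ in range(steps):
            cand = neighbor(current)
            f_cand = fitness(cand)
            # Always accept improvements; accept worse schedules with a probability
            # that shrinks as the temperature cools.
            if f_cand >= f_cur or random.random() < math.exp((f_cand - f_cur) / t):
                current, f_cur = cand, f_cand
                if f_cur > f_best:
                    best, f_best = current, f_cur
            t *= cooling
        return best, f_best
    ```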

  7. Damage evaluation on a multi-story framed structures: comparison of results retrieved from algorithms based on modal and non-modal parameters

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria

    2016-04-01

    Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructures and of evaluating the performance of safety interventions over time. In the case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage on a monitored structure using appropriate algorithms based on variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure, whose variations can be correlated with damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method evaluates the curvature variation (related to the fundamental mode of vibration) over time and compares the variations before, during and after the earthquake. The Interpolation Method detects localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using numerical data retrieved from nonlinear FE models as well as experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC
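    A minimal sketch of the interpolation-error damage feature described above (assumed sensor layout and a synthetic mode shape; the published method defines the error and its statistical test more carefully) is:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolation_error(positions, ods):
        """Interpolation-based damage feature (sketch): for each interior sensor
        location, interpolate the operational deformed shape (ODS) with a spline
        through the remaining locations and record the absolute error. Localized
        increases in this error between inspections point to a loss of smoothness,
        i.e. damage."""
        errors = np.zeros(len(positions))
        for i in range(1, len(positions) - 1):        # keep the end points as anchors
            keep = np.ones(len(positions), bool)
            keep[i] = False
            spline = CubicSpline(positions[keep], ods[keep])
            errors[i] = abs(spline(positions[i]) - ods[i])
        return errors

    x = np.linspace(0, 1, 9)                          # sensor positions along the height
    healthy = np.sin(np.pi * x / 2)                   # smooth first-mode-like ODS
    damaged = healthy.copy(); damaged[4] *= 0.93      # local stiffness loss (synthetic)
    print(interpolation_error(x, damaged) - interpolation_error(x, healthy))
    ```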

  8. Sprouting Healthy Kids Promotes Local Produce and Healthy Eating Behavior in Austin, Texas, Middle Schools: Promoting the Use of Local Produce and Healthy Eating Behavior in Austin City Schools. Program Results Report

    ERIC Educational Resources Information Center

    Feiden, Karyn

    2010-01-01

    The Sustainable Food Center, which promotes healthy food choices, partnered with six middle schools in Austin, Texas, to implement Sprouting Healthy Kids. The pilot project was designed to increase children's knowledge of the food system, their consumption of fruits and vegetables and their access to local farm produce. Most students at these…

  9. Automatic-control-algorithm effects on energy production

    SciTech Connect

    McNerney, G.M.

    1981-01-01

    Algorithm control strategy for unattended wind turbine operation is a potentially important aspect of wind energy production that has thus far escaped treatment in the literature. Early experience in automatic operation of the Sandia 17-m VAWT has demonstrated the need for a systematic study of control algorithms. To this end, a computer model has been developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model has been used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long-term energy production. An attempt has been made to generalize these results from local site and turbine characteristics to obtain general guidelines for control algorithm design.
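    A toy version of such a simulation, illustrating how the choice of start threshold changes accumulated energy (the wind data, power curve and thresholds below are all made up), might look like:

    ```python
    import numpy as np

    def simulate_energy(wind, power_curve, start_ws=5.0, stop_ws=4.0, dt_hours=1/6):
        """Step through a wind time series and accumulate energy, turning the turbine
        on above `start_ws` and off below `stop_ws` (hysteresis). Thresholds and the
        power curve are illustrative, not the Sandia 17-m VAWT model."""
        running, energy = False, 0.0
        for ws in wind:
            if not running and ws >= start_ws:
                running = True
            elif running and ws < stop_ws:
                running = False
            if running:
                energy += power_curve(ws) * dt_hours    # kWh
        return energy

    rng = np.random.default_rng(7)
    wind = np.clip(rng.normal(6.0, 2.5, size=6 * 24 * 30), 0, None)   # 30 days of 10-min data
    curve = lambda ws: min(100.0, max(0.0, 12.0 * (ws - 4.0)))        # kW, made up
    for start in (4.5, 5.0, 6.0, 7.0):
        print(start, round(simulate_energy(wind, curve, start_ws=start), 0), "kWh")
    ```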

  10. Coronary CTA using scout-based automated tube potential and current selection algorithm, with breast displacement results in lower radiation exposure in females compared to males

    PubMed Central

    Vadvala, Harshna; Kim, Phillip; Mayrhofer, Thomas; Pianykh, Oleg; Kalra, Mannudeep; Hoffmann, Udo

    2014-01-01

    Purpose: To evaluate the effect of automatic tube potential selection and automatic exposure control combined with female breast displacement during coronary computed tomography angiography (CCTA) on radiation exposure in women versus men of the same body size. Materials and methods: Consecutive clinical exams between January 2012 and July 2013 at an academic medical center were retrospectively analyzed. All examinations were performed using ECG-gating, automated tube potential, and tube current selection algorithm (APS-AEC) with breast displacement in females. Cohorts were stratified by sex and standard World Health Organization body mass index (BMI) ranges. CT dose index volume (CTDIvol), dose length product (DLP), median effective dose (ED), and size specific dose estimate (SSDE) were recorded. Univariable and multivariable regression analyses were performed to evaluate the effect of gender on radiation exposure per BMI. Results: A total of 726 exams were included; 343 (47%) were females; mean BMI was similar by gender (28.6±6.9 kg/m2 females vs. 29.2±6.3 kg/m2 males; P=0.168). Median ED was 2.3 mSv (1.4-5.2) for females and 3.6 (2.5-5.9) for males (P<0.001). Females were exposed to less radiation by a difference in median ED of –1.3 mSv, CTDIvol –4.1 mGy, and SSDE –6.8 mGy (all P<0.001). After adjusting for BMI, patient characteristics, and gating mode, females' exposure was lower by a median ED of –0.7 mSv, CTDIvol –2.3 mGy, and SSDE –3.15 mGy, respectively (all P<0.01). Conclusions: We observed a difference in radiation exposure to patients undergoing CCTA with the combined use of APS-AEC and breast displacement in female patients as compared to their BMI-matched male counterparts, with female patients receiving one third less exposure. PMID:25610804

  11. Severe sepsis and septic shock in pre-hospital emergency medicine: survey results of medical directors of emergency medical services concerning antibiotics, blood cultures and algorithms.

    PubMed

    Casu, Sebastian; Häske, David

    2016-06-01

    Delayed antibiotic treatment for patients in severe sepsis and septic shock decreases the probability of survival. In this survey, medical directors of different emergency medical services (EMS) in Germany were asked whether they were prepared for pre-hospital sepsis therapy with antibiotics or special algorithms, in order to evaluate how well the individual rescue areas are prepared to treat patients with this infectious disease. The objective of the survey was to obtain a general picture of the current status of the EMS with respect to rapid antibiotic treatment for sepsis. A total of 166 medical directors were invited to complete a short survey on behalf of the different rescue service districts in Germany via an electronic cover letter. Of the rescue districts, 25.6 % (n = 20) stated that they keep antibiotics on EMS vehicles. In addition, 2.6 % carry blood cultures on the vehicles. The most common antibiotic is ceftriaxone (a third-generation cephalosporin). In total, 8 (10.3 %) rescue districts use an algorithm for patients with sepsis, severe sepsis or septic shock. Although the German EMS is an emergency physician-based rescue system, antibiotics are largely absent from emergency physician vehicles. At the same time, only 10.3 % of the rescue districts use a special algorithm for sepsis therapy. Sepsis, severe sepsis and septic shock do not appear to be prioritized as highly as these deadly diseases should be in the pre-hospital setting. PMID:26719078

  12. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded by the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes from other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other post-processing steps (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
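    The paper's perturbative formulation is not reproduced here; as a stand-in, a simple activity index based on the mean absolute deviation of each frame from the static background can be sketched as follows:

    ```python
    import numpy as np

    def activity_index(frames, background):
        """Illustrative activity index: mean absolute deviation of each speckle frame
        from a static background pattern, normalized by the mean intensity (the paper
        treats the change as a small perturbation of the mean). This is a stand-in
        for the authors' formulation, not a reproduction of it."""
        background = background.astype(float)
        mean_intensity = background.mean()
        return np.array([np.abs(f.astype(float) - background).mean() / mean_intensity
                         for f in frames])

    rng = np.random.default_rng(3)
    bg = rng.integers(0, 255, size=(256, 256))
    frames = [np.clip(bg + rng.normal(0, s, bg.shape), 0, 255) for s in (2, 5, 10)]
    print(activity_index(np.array(frames), bg))       # increases with surface activity
    ```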

  13. Validation of a treatment algorithm for orthopaedic implant-related infections with device-retention-results from a prospective observational cohort study.

    PubMed

    Tschudin-Sutter, S; Frei, R; Dangel, M; Jakob, M; Balmelli, C; Schaefer, D J; Weisser, M; Elzi, L; Battegay, M; Widmer, A F

    2016-05-01

    Success rates for treatment regimens involving retention of an infected implant are conflicting and failure rates of up to 80% have been reported. We aimed to validate a proposed treatment algorithm, based on strict selection criteria, by assessing long-term outcome of treatment for orthopaedic device-related infection (ODRI) with retention. From January 1999 to December 2009, all patients diagnosed with ODRI at the University Hospital Basel, Switzerland were eligible for treatment with open surgical debridement, implant-retention and antibiotics, if duration of clinical symptoms was ≤3 weeks, the implant was stable, the soft-tissue had no abscess or sinus tract, and the causative pathogen was susceptible to antimicrobial agents with activity against surface-adhering microorganisms. Antimicrobial treatment was administered according to a predefined algorithm. The primary outcome was treatment failure after 2-year follow up. A total of 455 patients were diagnosed with an ODRI, of whom 233 (51.2%) patients were eligible for treatment involving implant-retention. Causative pathogens were mainly Staphylococcus aureus (41.6%) and coagulase-negative staphylococci (33.9%). Among patients with ODRIs related to prostheses, failure was documented in 10.8% (12/111), and in patients with ODRIs related to osteosyntheses, failure occurred in 9.8% (12/122) after 2 years of follow up. In all, 90% of ODRIs were successfully cured with surgical debridement and implant-retention in addition to long-term antimicrobial therapy according to a predefined treatment algorithm, provided patients fulfilled strict selection criteria and the pathogen was susceptible to rifampin (for Gram-positive pathogens) or ciprofloxacin (for Gram-negative pathogens). PMID:26806134
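    The selection criteria summarized above can be expressed as a simple check (a sketch based only on this abstract, not a clinical tool):

    ```python
    def eligible_for_retention(symptom_days, implant_stable, abscess_or_sinus,
                               pathogen_susceptible_to_biofilm_agent):
        """Selection criteria as summarized in the abstract: symptoms for three weeks
        or less, a stable implant, no abscess or sinus tract, and a pathogen
        susceptible to an agent active against surface-adhering (biofilm) organisms,
        e.g. rifampin for Gram-positives or ciprofloxacin for Gram-negatives.
        Clinical decisions need the full published algorithm, not this sketch."""
        return (symptom_days <= 21 and implant_stable
                and not abscess_or_sinus and pathogen_susceptible_to_biofilm_agent)

    print(eligible_for_retention(10, True, False, True))   # True -> debridement + retention
    ```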

  14. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  15. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
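
    A minimal sketch of one common form of adaptive contrast enhancement (per-tile stretching with a capped local gain) is given below; the paper's own perceptual-quality-driven parameter selection and its integration with the PBR pipeline are not reproduced, and the function and parameter names are illustrative only.

        import numpy as np

        def adaptive_contrast_enhance(image, tile=32, clip_gain=3.0):
            """Stretch contrast tile-by-tile; local gains are limited by clip_gain."""
            img = image.astype(np.float64)
            out = np.empty_like(img)
            h, w = img.shape
            for y in range(0, h, tile):
                for x in range(0, w, tile):
                    block = img[y:y + tile, x:x + tile]
                    lo, hi = block.min(), block.max()
                    span = max(hi - lo, 1e-6)
                    gain = min(255.0 / span, clip_gain)   # cap the local gain
                    out[y:y + tile, x:x + tile] = (block - lo) * gain
            return np.clip(out, 0, 255).astype(np.uint8)

        # Example: enhance a synthetic low-contrast rendering
        rendered = (np.random.rand(128, 128) * 60 + 90).astype(np.uint8)
        enhanced = adaptive_contrast_enhance(rendered)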

  16. SOLIDIFICATION OF THE HANFORD LAW WASTE STREAM PRODUCED AS A RESULT OF NEAR-TANK CONTINUOUS SLUDGE LEACHING AND SODIUM HYDROXIDE RECOVERY

    SciTech Connect

    Reigel, M.; Johnson, F.; Crawford, C.; Jantzen, C.

    2011-09-20

    The U.S. Department of Energy (DOE), Office of River Protection (ORP), is responsible for the remediation and stabilization of the Hanford Site tank farms, including 53 million gallons of highly radioactive mixed waste contained in 177 underground tanks. The plan calls for all waste retrieved from the tanks to be transferred to the Waste Treatment Plant (WTP). The WTP will consist of three primary facilities, including pretreatment facilities for Low Activity Waste (LAW) to remove aluminum, chromium and other solids and radioisotopes that are undesirable in the High Level Waste (HLW) stream. Removal of aluminum from HLW sludge can be accomplished through continuous sludge leaching of the aluminum from the HLW sludge as sodium aluminate; however, this process will introduce a significant amount of sodium hydroxide into the waste stream and consequently will increase the volume of waste to be dispositioned. A sodium recovery process is needed to remove the sodium hydroxide and recycle it back to the aluminum dissolution process. The resulting LAW waste stream has a high concentration of aluminum and sodium and will require alternative immobilization methods. Five waste forms were evaluated for immobilization of LAW at Hanford after the sodium recovery process. The waste forms considered for these two waste streams include low temperature processes (Saltstone/Cast stone and geopolymers), intermediate temperature processes (steam reforming and phosphate glasses) and high temperature processes (vitrification). These immobilization methods and the waste forms produced were evaluated for (1) compliance with the Performance Assessment (PA) requirements for disposal at the IDF, (2) waste form volume (waste loading), and (3) compatibility with the tank farms and systems. The iron phosphate glasses tested using the product consistency test had normalized release rates lower than the waste form requirements, although the CCC glasses had higher release rates than the

  17. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of general artificial immune algorithm. Experimental results on deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.

  18. The effect of sub-surface volume scattering on the accuracy of ice-sheet altimeter retracking algorithms

    NASA Technical Reports Server (NTRS)

    Davis, Curt H.

    1993-01-01

    The NASA and ESA retracking algorithms are compared with an algorithm based upon a combined surface and volume (S/V) scattering model. First, the S/V, NASA, and ESA algorithms were used to retrack over 400,000 altimeter return waveforms from the Greenland and Antarctic ice sheets. The surface elevations from the S/V algorithm were compared with the elevations produced by the NASA and ESA algorithms to determine the relative accuracy of these algorithms when subsurface volume-scattering occurs. The results show that the NASA algorithm produced surface elevations within 35 to 50 cm of the S/V algorithm, while the performance of the ESA algorithm was slightly worse. Next, by analyzing several thousand satellite crossover points from the Antarctic data set, we determined the retracking algorithm that produced the most repeatable surface elevations. The elevations derived from the S/V algorithm had the smallest RMS error for the region of the East Antarctic plateau examined here. The ESA algorithm produced erroneous estimates of elevation change when seasonal variations were present; it measured 0.7 to 1.6-m change in elevation over a 6-month period on the East Antarctic plateau where accumulation rates are only 10 cm/year.

  19. Speckle imaging algorithms for planetary imaging

    SciTech Connect

    Johansson, E.

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  20. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
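
    As an illustration of the kind of GA the preprocessor would configure, the sketch below shows a minimal genetic algorithm for box-bounded, unconstrained minimization. The population size and the crossover and mutation probabilities are exactly the parameters the described preprocessor would tune; the fixed values here are placeholders, not the ones the paper recommends.

        import random

        def genetic_algorithm(fitness, bounds, pop_size=40, generations=100,
                              crossover_p=0.8, mutation_p=0.1):
            """Minimize `fitness` over a box-bounded search space."""
            dim = len(bounds)
            pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=fitness)
                nxt = scored[:2]                       # elitism: keep the two best
                while len(nxt) < pop_size:
                    p1, p2 = random.sample(scored[:pop_size // 2], 2)  # pick fitter half
                    child = list(p1)
                    if random.random() < crossover_p:                  # uniform crossover
                        child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
                    if random.random() < mutation_p:                   # mutate one gene
                        i = random.randrange(dim)
                        lo, hi = bounds[i]
                        child[i] = random.uniform(lo, hi)
                    nxt.append(child)
                pop = nxt
            return min(pop, key=fitness)

        # Example: sphere function in 3 dimensions
        best = genetic_algorithm(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)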

  1. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
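
    The on-orbit system-identification step can be illustrated with a plain LMS update that adapts transversal filter weights until the filtered input tracks the error-sensor output. This is a generic sketch, not the flight implementation; the signal names, tap count and step size are assumptions.

        import numpy as np

        def lms_identify(x, d, n_taps=16, mu=0.01):
            """Adapt an FIR filter w so that w applied to x tracks the measured d.

            x : input (e.g., servo command), d : observed error-sensor output.
            Returns the adapted transversal filter weights.
            """
            w = np.zeros(n_taps)
            buf = np.zeros(n_taps)
            for n in range(len(x)):
                buf = np.roll(buf, 1)
                buf[0] = x[n]
                y = w @ buf            # filter output (estimated response)
                e = d[n] - y           # instantaneous error
                w += mu * e * buf      # LMS gradient-descent update
            return w

        # Example: identify a known 3-tap system from noisy measurements
        rng = np.random.default_rng(0)
        x = rng.standard_normal(5000)
        true_h = np.array([0.5, -0.3, 0.1])
        d = np.convolve(x, true_h, mode="full")[: len(x)] + 0.01 * rng.standard_normal(len(x))
        w_hat = lms_identify(x, d)     # the first taps approach true_h

    In the Filtered-X variant, the reference (disturbance-correlated) signal is first passed through this identified secondary-path model before it drives the weight update.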

  2. Identifying Risk Factors for Recent HIV Infection in Kenya Using a Recent Infection Testing Algorithm: Results from a Nationally Representative Population-Based Survey

    PubMed Central

    Kim, Andrea A.; Parekh, Bharat S.; Umuro, Mamo; Galgalo, Tura; Bunnell, Rebecca; Makokha, Ernest; Dobbs, Trudy; Murithi, Patrick; Muraguri, Nicholas; De Cock, Kevin M.; Mermin, Jonathan

    2016-01-01

    Introduction A recent infection testing algorithm (RITA) that can distinguish recent from long-standing HIV infection can be applied to nationally representative population-based surveys to characterize and identify risk factors for recent infection in a country. Materials and Methods We applied a RITA using the Limiting Antigen Avidity Enzyme Immunoassay (LAg) on stored HIV-positive samples from the 2007 Kenya AIDS Indicator Survey. The case definition for recent infection included testing recent on LAg and having no evidence of antiretroviral therapy use. Multivariate analysis was conducted to determine factors associated with recent and long-standing infection compared to HIV-uninfected persons. All estimates were weighted to adjust for sampling probability and nonresponse. Results Of 1,025 HIV-antibody-positive specimens, 64 (6.2%) met the case definition for recent infection and 961 (93.8%) met the case definition for long-standing infection. Compared to HIV-uninfected individuals, factors associated with higher adjusted odds of recent infection were living in Nairobi (adjusted odds ratio [AOR] 11.37; confidence interval [CI] 2.64–48.87) and Nyanza (AOR 4.55; CI 1.39–14.89) provinces compared to Western province; being widowed (AOR 8.04; CI 1.42–45.50) or currently married (AOR 6.42; CI 1.55–26.58) compared to being never married; having had ≥ 2 sexual partners in the last year (AOR 2.86; CI 1.51–5.41); not using a condom at last sex in the past year (AOR 1.61; CI 1.34–1.93); reporting a sexually transmitted infection (STI) diagnosis or symptoms of STI in the past year (AOR 1.97; CI 1.05–8.37); and being aged <30 years with: 1) HSV-2 infection (AOR 8.84; CI 2.62–29.85), 2) male genital ulcer disease (AOR 8.70; CI 2.36–32.08), or 3) lack of male circumcision (AOR 17.83; CI 2.19–144.90). Compared to HIV-uninfected persons, factors associated with higher adjusted odds of long-standing infection included living in Coast (AOR 1.55; CI 1.04–2

  3. New Attitude Sensor Alignment Calibration Algorithms

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.; Sedlak, Joseph E.; Harman, Richard (Technical Monitor)

    2002-01-01

    Accurate spacecraft attitudes may only be obtained if the primary attitude sensors are well calibrated. Launch shock, relaxation of gravitational stresses and similar effects often produce large enough alignment shifts so that on-orbit alignment calibration is necessary if attitude accuracy requirements are to be met. A variety of attitude sensor alignment algorithms have been developed to meet the need for on-orbit calibration. Two new algorithms are presented here: ALICAL and ALIQUEST. Each of these has advantages in particular circumstances. ALICAL is an attitude independent algorithm that uses near simultaneous measurements from two or more sensors to produce accurate sensor alignments. For each set of simultaneous observations the attitude is overdetermined. The information content of the extra degrees of freedom can be combined over numerous sets to provide the sensor alignments. ALIQUEST is an attitude dependent algorithm that combines sensor and attitude data into a loss function that has the same mathematical form as the Wahba problem. Alignments can then be determined using any of the algorithms (such as the QUEST quaternion estimator) that have been developed to solve the Wahba problem for attitude. Results from the use of these methods on active missions are presented.

  4. Brief Report: HIV-1 Infection Results in Increased Frequency of Active and Inflammatory SlanDCs that Produce High Level of IL-1β.

    PubMed

    Tufa, Dejene M; Ahmad, Fareed; Chatterjee, Debanjana; Ahrenstorf, Gerrit; Schmidt, Reinhold E; Jacobs, Roland

    2016-09-01

    HIV infection is marked by phenotypic and functional alterations of immune cells. Different studies have shown both numerical and functional deterioration of dendritic cells in HIV-1-infected patients. In this study, we report an increase of inflammatory 6-sulfo LacNAc dendritic cells (slanDCs) that are more activated and produce higher amounts of interleukin (IL)-1β during HIV-1 infection as compared with healthy controls. IL-1β plays a regulatory role in chronic inflammatory disorders. Therefore, our findings might reveal a compensatory regulatory function of slanDCs during HIV-1 infection. PMID:27243902

  5. Long-term results of adrenalectomy in patients with aldosterone-producing adenomas: multivariate analysis of factors affecting unresolved hypertension and review of the literature.

    PubMed

    Lumachi, Franco; Ermani, Mario; Basso, Stefano M M; Armanini, Decio; Iacobone, Maurizio; Favia, Gennaro

    2005-10-01

    The long-term surgical cure rate of patients with primary aldosteronism varies widely, and the causes of persistent hypertension are not completely established. We retrospectively reviewed charts from 98 patients (range, 19-70 years old) with aldosterone-producing adenomas who underwent unilateral adrenalectomy. At a median follow-up of 81 months (range, 18-186 months), the mean blood pressure values improved in 95 out of 98 (96.9%) patients, although hypertension was cured only in 71 out of 98 (72.4%) patients. Multivariate analysis using a logistic regression model adjusted for duration of follow-up showed that only the age of the patients and the duration of the disease independently correlated with unresolved hypertension. The cumulative odds ratio (OR), obtained using the logistic regression function, was 5.38 (95% CI 1.78-16.22), and the ORs of the single variables were 1.32 (95% CI 0.36-19.83) and 4.56 (95% CI 1.41-14.78), respectively. By using discriminant analysis to derive a classification function for the prediction of unresolved hypertension, a maximum predictive power of 75 per cent was achieved. In conclusion, in patients with an aldosterone-producing adenoma undergoing surgery, the combination of age and duration of hypertension gave the best predictive power of a linear classification function and represented the main independent risk factors affecting the hypertension cure rate. PMID:16468537

  6. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential process that separates an image into regions with similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold; such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, while other values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is made by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm in terms of execution time.
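
    The raster-scan clustering that Fast Scanning performs can be sketched as below. This version uses a fixed gray-level threshold for merging; the paper's contribution is to replace that constant with an adaptive function of the local gray values and variance, which is not reproduced here.

        import numpy as np

        def fast_scanning_segment(gray, threshold=10):
            """Raster-scan clustering: merge each pixel with its upper/left
            neighbor when the gray-level difference is within `threshold`."""
            h, w = gray.shape
            labels = np.zeros((h, w), dtype=int)
            next_label = 1
            for y in range(h):
                for x in range(w):
                    v = int(gray[y, x])
                    up = labels[y - 1, x] if y > 0 else 0
                    left = labels[y, x - 1] if x > 0 else 0
                    up_ok = up and abs(v - int(gray[y - 1, x])) <= threshold
                    left_ok = left and abs(v - int(gray[y, x - 1])) <= threshold
                    if up_ok and left_ok:
                        labels[y, x] = min(up, left)
                        labels[labels == max(up, left)] = min(up, left)  # merge clusters
                    elif up_ok:
                        labels[y, x] = up
                    elif left_ok:
                        labels[y, x] = left
                    else:
                        labels[y, x] = next_label   # start a new cluster
                        next_label += 1
            return labels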

  7. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  8. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
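
    A greatly simplified sketch of time allocation across anytime algorithms is shown below: time slices go to whichever task's (assumed known, nondecreasing) performance profile promises the largest expected improvement. The paper's architecture computes optimal schedules from such expectations; this greedy allocator and its profile functions are illustrative assumptions only.

        import math

        def schedule_anytime(performance_profiles, total_time, slice_len=1):
            """Greedily allocate time slices to the task whose performance profile
            promises the largest improvement for the next slice.

            performance_profiles: dict task -> function mapping allotted time to
            expected answer quality (assumed nondecreasing).
            Returns dict task -> total time allotted.
            """
            alloc = {task: 0 for task in performance_profiles}
            t = 0
            while t + slice_len <= total_time:
                # expected marginal gain of one more slice for each task
                gains = {
                    task: profile(alloc[task] + slice_len) - profile(alloc[task])
                    for task, profile in performance_profiles.items()
                }
                best = max(gains, key=gains.get)
                alloc[best] += slice_len
                t += slice_len
            return alloc

        # Example with diminishing-returns profiles
        profiles = {"planner": lambda s: 1 - math.exp(-0.30 * s),
                    "vision":  lambda s: 1 - math.exp(-0.15 * s)}
        print(schedule_anytime(profiles, total_time=20))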

  9. First experimental results of a cryogenic stopping cell with short-lived, heavy uranium fragments produced at 1000 MeV/u

    NASA Astrophysics Data System (ADS)

    Purushothaman, S.; Reiter, M. P.; Haettner, E.; Dendooven, P.; Dickel, T.; Geissel, H.; Ebert, J.; Jesch, C.; Plass, W. R.; Ranjan, M.; Weick, H.; Amjad, F.; Ayet, S.; Diwisch, M.; Estrade, A.; Farinon, F.; Greiner, F.; Kalantar-Nayestanaki, N.; Knöbel, R.; Kurcewicz, J.; Lang, J.; Moore, I. D.; Mukha, I.; Nociforo, C.; Petrick, M.; Pfützner, M.; Pietri, S.; Prochazka, A.; Rink, A.-K.; Rinta-Antila, S.; Scheidenberger, C.; Takechi, M.; Tanaka, Y. K.; Winfield, J. S.; Yavor, M. I.

    2013-11-01

    A cryogenic stopping cell (CSC) has been commissioned with 238U projectile fragments produced at 1000 MeV/u. The spatial isotopic separation in flight was performed with the FRS applying a monoenergetic degrader. For the first time, a stopping cell was operated with exotic nuclei at cryogenic temperatures (70 to 100 K). A helium stopping gas density of up to 0.05 mg/cm^3 was used, about two times higher than reached before for a stopping cell with RF ion-repelling structures. An overall efficiency of up to 15%, a combined ion survival and extraction efficiency of about 50%, and extraction times of 24 ms were achieved for heavy α-decaying uranium fragments. Mass spectrometry with a multiple-reflection time-of-flight mass spectrometer has demonstrated the excellent cleanliness of the CSC. This setup has opened a new field for the spectroscopy of short-lived nuclei.

  10. Produced water extracts from North Sea oil production platforms result in cellular oxidative stress in a rainbow trout in vitro bioassay.

    PubMed

    Farmen, E; Harman, C; Hylland, K; Tollefsen, K-E

    2010-07-01

    Produced water (PW) discharged from the offshore oil industry contains chemicals known to contribute to different mechanisms of toxicity. The present study aimed to investigate oxidative stress and cytotoxicity in rainbow trout primary hepatocytes exposed to the water-soluble and particulate organic fractions of PW from 10 different North Sea oil production platforms. The PW fractions caused a concentration-dependent increase in reactive oxygen species (ROS) after 1 h exposure, as well as changes in levels of total glutathione (tGSH) and cytotoxicity after 96 h. Interestingly, the water-soluble organic compounds of PW were major contributors to oxidative stress and cytotoxicity, and the effects were not correlated with the total oil content of the PW. Bioassay effects were only observed at high PW concentrations (3-fold concentrated), indicating that bioaccumulation needs to occur to cause similar short-term toxic effects in wild fish. PMID:20144836

  11. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms to (a) perform the frame-based to event-based conversion in real time, and (b) produce event streams that resemble as closely as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time operation but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real time operation. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible for real time and yield reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
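
    A toy version of LFSR-driven rate coding is sketched below: a 16-bit Fibonacci LFSR supplies the pseudorandom numbers, and pixel addresses are emitted with probability proportional to intensity. It is only meant to illustrate the idea; it is not one of the specific algorithms evaluated in the paper, and the frame size and event budget are arbitrary assumptions.

        def lfsr16(seed=0xACE1):
            """16-bit Fibonacci LFSR (taps 16, 14, 13, 11); yields pseudorandom words."""
            state = seed
            while True:
                bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                yield state

        def frame_to_events(frame, events_per_frame=4096):
            """Rate-coded AER: emit (row, col) addresses with probability
            proportional to the 8-bit pixel intensity."""
            rng = lfsr16()
            h, w = len(frame), len(frame[0])
            events = []
            for _ in range(events_per_frame):
                r = next(rng)
                y, x = (r >> 8) % h, r % w            # candidate address from LFSR bits
                if (next(rng) & 0xFF) < frame[y][x]:  # accept with p = intensity / 256
                    events.append((y, x))
            return events

        # 64x64 test frame with a bright square in the centre
        frame = [[200 if 24 <= y < 40 and 24 <= x < 40 else 20 for x in range(64)]
                 for y in range(64)]
        spikes = frame_to_events(frame)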

  12. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  13. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced at some expense to the power requirements.

  14. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  15. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  16. Detecting Danger: The Dendritic Cell Algorithm

    NASA Astrophysics Data System (ADS)

    Greensmith, Julie; Aickelin, Uwe; Cayzer, Steve

    The "Dendritic Cell Algorithm" (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of dendritic cell (DC) behavior is developed and subsequently used to form an algorithm—the DCA. The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an "artificial DC". Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter, the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.

  17. Anomalous oxygen isotope enrichment in CO{sub 2} produced from O+CO: Estimates based on experimental results and model predictions

    SciTech Connect

    Pandey, Antra; Bhattacharya, S.K.

    2006-06-21

    The oxygen isotope fractionation associated with the O + CO → CO₂ reaction was investigated experimentally, where the oxygen atom was derived from ozone or oxygen photolysis. The isotopic composition of the product CO₂ was analyzed by mass spectrometry. A kinetic model was used to calculate the expected CO₂ composition based on available reaction rates and their modifications for isotopic variants of the participating molecules. A comparison of the two (experimental data and model predictions) shows that the product CO₂ is endowed with an anomalous enrichment of heavy oxygen isotopes. The enrichment is similar to that observed earlier in the case of O₃ produced by the O + O₂ reaction and varies from 70‰ to 136‰ for ¹⁸O and 41‰ to 83‰ for ¹⁷O. A cross plot of δ¹⁷O and δ¹⁸O of CO₂ shows a linear relation with a slope of ≈0.90 for different experimental configurations. The enrichment observed in CO₂ does not depend on the isotopic composition of the O atom or the sources from which it is produced. A plot of δ(δ¹⁷O) versus δ(δ¹⁸O) (the two enrichments) shows a linear correlation with a best-fit line having a slope of ≈0.8. As in the case of ozone, this anomalous enrichment can be explained by invoking the concept of differential randomization/stabilization time scales for the two types of intermediate transition complex, which form a symmetric (¹⁶O¹²C¹⁶O) molecule in one case and asymmetric (¹⁶O¹²C¹⁸O and ¹⁶O¹²C¹⁷O) molecules in the other. The δ¹³C value of CO₂ is also found to be different from that of the initial CO due to the mass-dependent fractionation processes that occur in the O + CO → CO₂ reaction. Negative values of δ(δ¹³C) (≈12.1‰) occur due to the preference for ¹²C in CO₂* formation

  18. A New Approximate Chimera Donor Cell Search Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop a chimera-based full potential methodology which is compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver and to develop a fast donor cell search algorithm that is compatible with the chimera full potential approach. Results of this work included presenting a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple, producing donor cells at rates as high as 60,000 per second.

  19. Evaluation of feedback-reduction algorithms for hearing aids.

    PubMed

    Greenberg, J E; Zurek, P M; Brantley, M

    2000-11-01

    Three adaptive feedback-reduction algorithms were implemented in a laboratory-based digital hearing aid system and evaluated with dynamic feedback paths and hearing-impaired subjects. The evaluation included measurements of maximum stable gain and subjective quality ratings. The continuously adapting CNN algorithm (Closed-loop processing with No probe Noise) provided the best performance: 8.5 dB of added stable gain (ASG) relative to a reference algorithm averaged over all subjects, ears, and vent conditions. Two intermittently adapting algorithms, ONO (Open-loop with Noise when Oscillation detected) and ONQ (Open-loop with Noise when Quiet detected), provided an average of 5 dB of ASG. Subjects with more severe hearing losses received greater benefits: 13 dB average ASG for the CNN algorithm and 7-8 dB average ASG for the ONO and ONQ algorithms. These values are conservative estimates of ASG because the fitting procedure produced a frequency-gain characteristic that already included precautions against feedback. Speech quality ratings showed no substantial algorithm effect on pleasantness or intelligibility, although subjects informally expressed strong objections to the probe noise used by the ONO and ONQ algorithms. This objection was not reflected in the speech quality ratings because of limitations of the experimental procedure. The results clearly indicate that the CNN algorithm is the most promising choice for adaptive feedback reduction in hearing aids. PMID:11108377

  20. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol quasi-random sequences to generate random samples. The Sobol sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
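
    A small sketch of Sobol-based quasi-Monte Carlo integration is given below using SciPy's qmc module (available in SciPy 1.7 and later); the parallel MPI/OpenMP decomposition discussed in the report is not shown, and the test integrand is illustrative.

        import numpy as np
        from scipy.stats import qmc   # Sobol generator, SciPy >= 1.7

        def qmc_integrate(f, dim, n_log2=14, seed=0):
            """Quasi-Monte Carlo estimate of the integral of f over [0, 1]^dim
            using a low-discrepancy Sobol sequence."""
            sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
            points = sampler.random_base2(m=n_log2)   # 2**n_log2 samples
            return float(np.mean(f(points)))

        # Example: integral of sum(x_i^2) over the 6-dimensional unit cube (exact = 2.0)
        estimate = qmc_integrate(lambda x: np.sum(x**2, axis=1), dim=6)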

  1. ON THE VERIFICATION AND VALIDATION OF GEOSPATIAL IMAGE ANALYSIS ALGORITHMS

    SciTech Connect

    Roberts, Randy S.; Trucano, Timothy G.; Pope, Paul A.; Aragon, Cecilia R.; Jiang , Ming; Wei, Thomas; Chilton, Lawrence; Bakel, A. J.

    2010-07-25

    Verification and validation (V&V) of geospatial image analysis algorithms is a difficult task and is becoming increasingly important. While there are many types of image analysis algorithms, we focus on developing V&V methodologies for algorithms designed to provide textual descriptions of geospatial imagery. In this paper, we present a novel methodological basis for V&V that employs a domain-specific ontology, which provides a naming convention for a domain-bounded set of objects and a set of named relationships between these objects. We describe a validation process that proceeds by objectively comparing benchmark imagery, produced using the ontology, with algorithm results. As an example, we describe how the proposed V&V methodology would be applied to algorithms designed to provide textual descriptions of facilities

  2. An efficient parallel algorithm for mesh smoothing

    SciTech Connect

    Freitag, L.; Plassmann, P.; Jones, M.

    1995-12-31

    Automatic mesh generation and adaptive refinement methods have proven to be very successful tools for the efficient solution of complex finite element applications. A problem with these methods is that they can produce poorly shaped elements; such elements are undesirable because they introduce numerical difficulties in the solution process. However, the shape of the elements can be improved through the determination of new geometric locations for mesh vertices by using a mesh smoothing algorithm. In this paper the authors present a new parallel algorithm for mesh smoothing that has a fast parallel runtime both in theory and in practice. The authors present an efficient implementation of the algorithm that uses non-smooth optimization techniques to find the new location of each vertex. Finally, they present experimental results obtained on the IBM SP system demonstrating the efficiency of this approach.
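
    For orientation, the sketch below shows the simplest vertex-update loop used in mesh smoothing: plain Laplacian averaging of the free vertices. It is a baseline illustration only; the paper's algorithm instead solves a non-smooth local optimization for each vertex and parallelizes the sweep, neither of which is reproduced here, and all names are illustrative.

        import numpy as np

        def laplacian_smooth(vertices, adjacency, fixed, iterations=10):
            """Move each free vertex toward the centroid of its neighbours.
            (Baseline only; the paper's method optimizes element quality per vertex.)"""
            verts = np.array(vertices, dtype=float)
            for _ in range(iterations):
                new = verts.copy()
                for v, nbrs in adjacency.items():
                    if v in fixed:
                        continue                       # boundary vertices stay put
                    new[v] = verts[list(nbrs)].mean(axis=0)
                verts = new
            return verts

        # Tiny example: one interior vertex (index 4) inside a unit square
        vertices = [(0, 0), (1, 0), (1, 1), (0, 1), (0.9, 0.2)]
        adjacency = {4: {0, 1, 2, 3}, 0: {1, 3, 4}, 1: {0, 2, 4}, 2: {1, 3, 4}, 3: {0, 2, 4}}
        fixed = {0, 1, 2, 3}
        smoothed = laplacian_smooth(vertices, adjacency, fixed)   # vertex 4 -> (0.5, 0.5)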

  3. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
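
    The sketch below shows the standard firefly movement rule with a logistic chaotic map modulating the randomization term, one simple way of injecting chaos of the kind the paper studies across 12 maps. The parameter values and the choice of map are illustrative assumptions, not the paper's tuned settings.

        import numpy as np

        def chaotic_firefly(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
            """Firefly algorithm in which the randomization scale is tuned by
            a logistic chaotic map (one of several maps studied in the paper)."""
            rng = np.random.default_rng(1)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, size=(n, len(lo)))
            chaos = 0.7                                  # logistic-map state
            for _ in range(iters):
                chaos = 4.0 * chaos * (1.0 - chaos)      # x_{k+1} = 4 x_k (1 - x_k)
                fit = np.apply_along_axis(f, 1, x)
                for i in range(n):
                    for j in range(n):
                        if fit[j] < fit[i]:              # j is brighter (minimization)
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha * chaos * (rng.random(len(lo)) - 0.5)
                x = np.clip(x, lo, hi)
            return x[np.argmin(np.apply_along_axis(f, 1, x))]

        best = chaotic_firefly(lambda v: np.sum(v ** 2), [(-5, 5)] * 4)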

  4. Risk factors for bloodstream infections due to colistin-resistant KPC-producing Klebsiella pneumoniae: results from a multicenter case-control-control study.

    PubMed

    Giacobbe, D R; Del Bono, V; Trecarichi, E M; De Rosa, F G; Giannella, M; Bassetti, M; Bartoloni, A; Losito, A R; Corcione, S; Bartoletti, M; Mantengoli, E; Saffioti, C; Pagani, N; Tedeschi, S; Spanu, T; Rossolini, G M; Marchese, A; Ambretti, S; Cauda, R; Viale, P; Viscoli, C; Tumbarello, M

    2015-12-01

    The increasing prevalence of colistin-resistant (ColR) Klebsiella pneumoniae carbapenemase (KPC)-producing K. pneumoniae (Kp) is a matter of concern because of its unfavourable impact on mortality of KPC-Kp bloodstream infections (BSI) and the shortage of alternative therapeutic options. A matched case-control-control analysis was conducted. The primary study end point was to assess risk factors for ColR KPC-Kp BSI. The secondary end point was to describe mortality and clinical characteristics of these infections. To assess risk factors for ColR, 142 patients with ColR KPC-Kp BSI were compared to two control groups: 284 controls without infections caused by KPC-Kp (control group A) and 284 controls with colistin-susceptible (ColS) KPC-Kp BSI (control group B). In the first multivariate analysis (cases vs. group A), previous colistin therapy, previous KPC-Kp colonization, ≥3 previous hospitalizations, Charlson score ≥3 and neutropenia were found to be associated with the development of ColR KPC-Kp BSI. In the second multivariate analysis (cases vs. group B), only previous colistin therapy, previous KPC-Kp colonization and Charlson score ≥3 were associated with ColR. Overall, ColR among KPC-Kp blood isolates increased more than threefold during the 4.5-year study period, and the 30-day mortality of ColR KPC-Kp BSI was as high as 51%. Strict rules for the use of colistin are mandatory to staunch the dissemination of ColR in KPC-Kp-endemic hospitals. PMID:26278669

  5. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
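
    The core proportional redistribution can be sketched independently of the PACE card format: the baseline per-period resource profile is stretched or compressed onto the new schedule while its total is preserved. The function and variable names below are illustrative assumptions; the handling of work breakdown structure blocks and elements of cost is not shown.

        import numpy as np

        def reallocate(baseline, new_periods):
            """Linearly expand or contract a baseline per-period resource
            distribution onto `new_periods`, preserving the total."""
            old = np.asarray(baseline, dtype=float)
            # cumulative spend sampled on a normalized schedule axis
            cum = np.concatenate([[0.0], np.cumsum(old)])
            old_axis = np.linspace(0.0, 1.0, len(cum))
            new_axis = np.linspace(0.0, 1.0, new_periods + 1)
            new_cum = np.interp(new_axis, old_axis, cum)
            return np.diff(new_cum)     # per-period amounts on the new schedule

        # Example: stretch a 6-month profile to 9 months
        baseline = [10, 20, 30, 30, 20, 10]      # total = 120
        stretched = reallocate(baseline, 9)      # still sums to 120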

  6. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  7. On Learning Algorithms for Nash Equilibria

    NASA Astrophysics Data System (ADS)

    Daskalakis, Constantinos; Frongillo, Rafael; Papadimitriou, Christos H.; Pierrakos, George; Valiant, Gregory

    Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or non-convergence properties of such dynamics may inform our understanding of the applicability of Nash equilibria as a plausible solution concept in some settings. A second reason for asking this question is in the hope of being able to prove an impossibility result, not dependent on complexity assumptions, for computing Nash equilibria via a restricted class of reasonable algorithms. In this work, we begin to answer this question by considering the dynamics of the standard multiplicative weights update learning algorithms (which are known to converge to a Nash equilibrium for zero-sum games). We revisit a 3×3 game defined by Shapley [10] in the 1950s in order to establish that fictitious play does not converge in general games. For this simple game, we show via a potential function argument that in a variety of settings the multiplicative updates algorithm impressively fails to find the unique Nash equilibrium, in that the cumulative distributions of players produced by learning dynamics actually drift away from the equilibrium.
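
    For reference, a minimal implementation of the multiplicative weights dynamics for a two-player bimatrix game is sketched below. The 3×3 payoff matrix in the example is a placeholder cycle game, not the (modified) Shapley game analyzed in the paper; in the zero-sum case shown, the time-averaged strategies do converge toward equilibrium, which is precisely the property that fails in the general-sum setting the authors study.

        import numpy as np

        def multiplicative_weights(A, B, steps=2000, eta=0.05):
            """Run multiplicative-weights updates for both players of a bimatrix
            game (row payoffs A, column payoffs B); return time-averaged strategies."""
            n, m = A.shape
            p, q = np.ones(n) / n, np.ones(m) / m
            p_avg, q_avg = np.zeros(n), np.zeros(m)
            for _ in range(steps):
                p_avg += p
                q_avg += q
                new_p = p * np.exp(eta * (A @ q))      # reweight by expected payoff
                new_q = q * np.exp(eta * (B.T @ p))    # simultaneous update
                p, q = new_p / new_p.sum(), new_q / new_q.sum()
            return p_avg / steps, q_avg / steps

        # Placeholder 3x3 cycle game (not Shapley's game), zero-sum variant
        A = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]], dtype=float)
        p_bar, q_bar = multiplicative_weights(A, -A)   # averages approach (1/3, 1/3, 1/3)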

  8. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms which are inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and its two extensions for improving its performance are presented. The BSO is a population-based optimization technique which is inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces different approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to maintain efficiently the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms which are based on the intelligent behavior of honey bees, on a set of well known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated here.

  9. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  10. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  11. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions near the optimum for linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  12. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
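
    The monotone flavor of approximate answers can be illustrated with a toy selection query that maintains a certain set and a still-possible set as data segments are processed; the more segments read before the deadline, the tighter the pair becomes. This is only a sketch of the semantics, not the relational-algebra implementation of APPROXIMATE, and all names are illustrative.

        def approximate_select(segments, predicate, deadline_segments):
            """Monotone approximation of a selection query: process data
            segment-by-segment and keep two bounds on the exact answer.

            Returns (certain, possible): tuples known to satisfy the predicate,
            and tuples not yet examined that might still qualify."""
            certain, unseen = [], []
            for i, segment in enumerate(segments):
                if i < deadline_segments:                 # data we had time to read
                    certain.extend(t for t in segment if predicate(t))
                else:
                    unseen.extend(segment)                # still only "possible"
            return certain, unseen

        # Example: only 2 of 3 partitions processed before the deadline
        parts = [[("a", 5), ("b", 12)], [("c", 9)], [("d", 20)]]
        certain, possible = approximate_select(parts, lambda t: t[1] > 8, deadline_segments=2)
        # certain = [("b", 12), ("c", 9)]; ("d", 20) remains only possible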

  13. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  14. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open line of research for further application of the algorithm to real-world problems. PMID:25147860

  15. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open line of research for further application of the algorithm to real-world problems. PMID:25147860

  16. A digitally reconstructed radiograph algorithm calculated from first principles

    SciTech Connect

    Staub, David; Murphy, Martin J.

    2013-01-15

    Purpose: To develop an algorithm for computing realistic digitally reconstructed radiographs (DRRs) that match real cone-beam CT (CBCT) projections with no artificial adjustments. Methods: The authors used measured attenuation data from cone-beam CT projection radiographs of different materials to obtain a function to convert CT number to linear attenuation coefficient (LAC). The effects of scatter, beam hardening, and veiling glare were first removed from the attenuation data. Using this conversion function the authors calculated the line integral of LAC through a CT along rays connecting the radiation source and detector pixels with a ray-tracing algorithm, producing raw DRRs. The effects of scatter, beam hardening, and veiling glare were then included in the DRRs through postprocessing. Results: The authors compared actual CBCT projections to DRRs produced with all corrections (scatter, beam hardening, and veiling glare) and to uncorrected DRRs. Algorithm accuracy was assessed through visual comparison of projections and DRRs, pixel intensity comparisons, intensity histogram comparisons, and correlation plots of DRR-to-projection pixel intensities. In general, the fully corrected algorithm provided a small but nontrivial improvement in accuracy over the uncorrected algorithm. The authors also investigated both measurement- and computation-based methods for determining the beam hardening correction, and found the computation-based method to be superior, as it accounted for nonuniform bowtie filter thickness. The authors benchmarked the algorithm for speed and found that it produced DRRs in about 0.35 s for full detector and CT resolution at a ray step-size of 0.5 mm. Conclusions: The authors have demonstrated a DRR algorithm calculated from first principles that accounts for scatter, beam hardening, and veiling glare in order to produce accurate DRRs. The algorithm is computationally efficient, making it a good candidate for iterative CT reconstruction techniques
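
    The uncorrected core of such a DRR computation, converting CT numbers to linear attenuation coefficients and integrating along rays, can be sketched as below for an idealized parallel-beam geometry. The linear HU-to-LAC mapping, voxel size and water attenuation value are illustrative assumptions; the paper derives the conversion from measured CBCT attenuation data, traces rays for the cone-beam geometry, and adds scatter, beam-hardening and veiling-glare effects in postprocessing.

        import numpy as np

        def hu_to_mu(hu, mu_water_cm=0.2):
            """Illustrative CT-number-to-LAC conversion (linear in HU, units 1/cm);
            the paper instead derives this mapping from measured attenuation data."""
            return np.clip(mu_water_cm * (1.0 + hu / 1000.0), 0.0, None)

        def raw_drr(ct_hu, voxel_mm=1.0, axis=0):
            """Raw DRR for an idealized parallel-beam geometry: line integral of the
            linear attenuation coefficient along `axis`, then exponential attenuation."""
            mu = hu_to_mu(ct_hu)                                     # 1/cm per voxel
            path_integral = mu.sum(axis=axis) * (voxel_mm / 10.0)    # mm -> cm
            return np.exp(-path_integral)                            # transmitted fraction

        # Example: a 60 mm water cube (0 HU) in air (-1000 HU), 1 mm voxels
        ct = np.full((100, 100, 100), -1000.0)
        ct[20:80, 20:80, 20:80] = 0.0
        drr = raw_drr(ct)       # ~exp(-0.2 * 6) ≈ 0.30 behind the cube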

  17. Enhanced Memetic Algorithm for Task Scheduling

    NASA Astrophysics Data System (ADS)

    Padmavathi, S.; Shalinie, S. Mercy; Someshwar, B. C.; Sasikumar, T.

    Scheduling tasks onto the processors of a parallel system is a crucial part of program parallelization. Due to the NP-hardness of the task scheduling problem, scheduling algorithms are based on heuristics that try to produce good rather than optimal schedules. This paper proposes a Memetic algorithm with Tabu search and Simulated Annealing as local search for solving the task scheduling problem considering communication contention. The problem consists of finding a schedule for a general task graph to be executed on a cluster of workstations such that the schedule length is minimized. Our approach combines local search (by self experience) and global search (by neighboring experience), giving it high search efficiency. The proposed approach is compared with existing list scheduling heuristics. The numerical results clearly indicate that our proposed approach produces solutions which are closer to optimality and/or of better quality than the existing list scheduling heuristics.

  18. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism of bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization. PMID:24967425

  19. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  20. Depletion of mucin in mucin-producing human gastrointestinal carcinoma: Results from in vitro and in vivo studies with bromelain and N-acetylcysteine

    PubMed Central

    Amini, Afshin; Masoumi-Moghaddam, Samar; Ehteda, Anahid; Liauw, Winston; Morris, David L.

    2015-01-01

    Aberrant expression of membrane-associated and secreted mucins, as evident in epithelial tumors, is known to facilitate tumor growth, progression and metastasis, and to provide protection against adverse growth conditions, chemotherapy and immune surveillance. Emerging evidence provides support for the oncogenic role of MUC1 in gastrointestinal carcinomas and relates its expression to an invasive phenotype. Similarly, mucinous differentiation of gastrointestinal tumors, in particular increased or de novo expression of MUC2 and/or MUC5AC, is widely believed to imply an adverse clinicopathological feature. Through formation of viscous gels, too, MUC2 and MUC5AC significantly contribute to the biology and pathogenesis of mucin-secreting gastrointestinal tumors. Here, we investigated the mucin-depleting effects of bromelain (BR) and N-acetylcysteine (NAC), in nine different regimens as single or combination therapy, in in vitro (MKN45, KATOIII and LS174T cell lines) and in vivo (female nude mice bearing intraperitoneal MKN45 and LS174T) settings. The inhibitory effects of the treatment on cancer cell growth and proliferation were also evaluated in vivo. Our results suggest that a combination of BR and NAC with dual effects on growth and mucin products of mucin-expressing tumor cells is a promising candidate towards the development of novel approaches to gastrointestinal malignancies with the involvement of mucin pathology. This capability supports the use of this combination formulation in locoregional approaches for reducing the adverse effects of the aberrantly secreted gel-forming mucins, as in pseudomyxoma peritonei and similar pathologies with ectopic production of mucin. PMID:26436698

  1. Depletion of mucin in mucin-producing human gastrointestinal carcinoma: Results from in vitro and in vivo studies with bromelain and N-acetylcysteine.

    PubMed

    Amini, Afshin; Masoumi-Moghaddam, Samar; Ehteda, Anahid; Liauw, Winston; Morris, David L

    2015-10-20

    Aberrant expression of membrane-associated and secreted mucins, as evident in epithelial tumors, is known to facilitate tumor growth, progression and metastasis, and to provide protection against adverse growth conditions, chemotherapy and immune surveillance. Emerging evidence provides support for the oncogenic role of MUC1 in gastrointestinal carcinomas and relates its expression to an invasive phenotype. Similarly, mucinous differentiation of gastrointestinal tumors, in particular increased or de novo expression of MUC2 and/or MUC5AC, is widely believed to imply an adverse clinicopathological feature. Through formation of viscous gels, too, MUC2 and MUC5AC significantly contribute to the biology and pathogenesis of mucin-secreting gastrointestinal tumors. Here, we investigated the mucin-depleting effects of bromelain (BR) and N-acetylcysteine (NAC), in nine different regimens as single or combination therapy, in in vitro (MKN45, KATOIII and LS174T cell lines) and in vivo (female nude mice bearing intraperitoneal MKN45 and LS174T) settings. The inhibitory effects of the treatment on cancer cell growth and proliferation were also evaluated in vivo. Our results suggest that a combination of BR and NAC with dual effects on growth and mucin products of mucin-expressing tumor cells is a promising candidate towards the development of novel approaches to gastrointestinal malignancies with the involvement of mucin pathology. This capability supports the use of this combination formulation in locoregional approaches for reducing the adverse effects of the aberrantly secreted gel-forming mucins, as in pseudomyxoma peritonei and similar pathologies with ectopic production of mucin. PMID:26436698

  2. Three hypothesis algorithm with occlusion reasoning for multiple people tracking

    NASA Astrophysics Data System (ADS)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Medina-Carnicer, Rafael

    2015-01-01

    This work proposes a detection-based tracking algorithm able to locate and keep the identity of multiple people, who may be occluded, in uncontrolled stationary environments. Our algorithm builds a tracking graph that models spatio-temporal relationships among attributes of interacting people to predict and resolve partial and total occlusions. When a total occlusion occurs, the algorithm generates various hypotheses about the location of the occluded person considering three cases: (a) the person keeps the same direction and speed, (b) the person follows the direction and speed of the occluder, and (c) the person remains motionless during occlusion. By analyzing the graph, our algorithm can detect trajectories produced by false alarms and estimate the location of missing or occluded people. Our algorithm performs acceptably under complex conditions, such as partial visibility of individuals getting inside or outside the scene, continuous interactions and occlusions among people, wrong or missing information on the detection of persons, as well as variation of the person's appearance due to illumination changes and background-clutter distracters. Our algorithm was evaluated on test sequences in the field of intelligent surveillance achieving an overall precision of 93%. Results show that our tracking algorithm outperforms even trajectory-based state-of-the-art algorithms.
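
    A small illustration of the three occlusion hypotheses listed above; the graph-based spatio-temporal reasoning of the paper is not reproduced, and the positions and velocities are placeholder values.

```python
# Three candidate locations for a person occluded for k frames.
def occlusion_hypotheses(person_pos, person_vel, occluder_vel, k):
    (px, py), (vx, vy) = person_pos, person_vel
    (ovx, ovy) = occluder_vel
    h_same_motion = (px + k * vx, py + k * vy)    # (a) keeps own direction and speed
    h_follow      = (px + k * ovx, py + k * ovy)  # (b) follows the occluder's motion
    h_static      = (px, py)                      # (c) remains motionless during occlusion
    return [h_same_motion, h_follow, h_static]

print(occlusion_hypotheses((10.0, 5.0), (1.0, 0.0), (0.5, 0.5), k=4))
```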

  3. Project Produce

    ERIC Educational Resources Information Center

    Wolfinger, Donna M.

    2005-01-01

    The grocery store produce section used to be a familiar but rather dull place. There were bananas next to the oranges next to the limes. Broccoli was next to corn and lettuce. Apples and pears, radishes and onions, eggplants and zucchinis all lay in their appropriate bins. Those days are over. Now, broccoli may be next to bok choy, potatoes beside…

  4. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  5. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking technique is more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.

  6. A fast neural-network algorithm for VLSI cell placement.

    PubMed

    Aykanat, Cevdet; Bultan, Tevfik; Haritaoğlu, Ismail

    1998-12-01

    Cell placement is an important phase of current VLSI circuit design styles such as standard cell, gate array, and Field Programmable Gate Array (FPGA). Although nondeterministic algorithms such as Simulated Annealing (SA) were successful in solving this problem, they are known to be slow. In this paper, a neural network algorithm is proposed that produces solutions as good as SA in substantially less time. This algorithm is based on the Mean Field Annealing (MFA) technique, which has been successfully applied to various combinatorial optimization problems. An MFA formulation for the cell placement problem is derived that can easily be applied to all VLSI design styles. To demonstrate that the proposed algorithm is applicable in practice, a detailed formulation for the FPGA design style is derived, and the layouts of several benchmark circuits are generated. The performance of the proposed cell placement algorithm is evaluated in comparison with the commercial automated circuit design software Xilinx Automatic Place and Route (APR), which uses the SA technique. Performance evaluation is conducted using ACM/SIGDA Design Automation benchmark circuits. Experimental results indicate that the proposed MFA algorithm produces results comparable with APR. However, MFA is almost 20 times faster than APR on average. PMID:12662737

  7. GOES-R Space Environment In-Situ Suite: instruments overview, calibration results, and data processing algorithms, and expected on-orbit performance

    NASA Astrophysics Data System (ADS)

    Galica, G. E.; Dichter, B. K.; Tsui, S.; Golightly, M. J.; Lopate, C.; Connell, J. J.

    2016-05-01

    The space weather instruments (Space Environment In-Situ Suite - SEISS) on the soon-to-be-launched NOAA GOES-R series spacecraft offer significant space weather measurement performance advances over the previous GOES N-P series instruments. The specifications require that the instruments ensure proper operation under the most stressful high flux conditions corresponding to the largest solar particle event expected during the program, while maintaining high sensitivity at low flux levels. Since the performance of remote sensing instruments is sensitive to local space weather conditions, the SEISS data will be of use to a broad community of users. The SEISS suite comprises five individual sensors and a data processing unit: Magnetospheric Particle Sensor-Low (0.03-30 keV electrons and ions), Magnetospheric Particle Sensor-High (0.05-4 MeV electrons, 0.08-12 MeV protons), two Solar And Galactic Proton Sensors (1 to >500 MeV protons), and the Energetic Heavy Ion Sensor (10-200 MeV for H, H to Fe with single element resolution). We present comparisons between the enhanced GOES-R instruments and the current GOES space weather measurement capabilities. We provide an overview of the sensor configurations and performance. Results of extensive sensor modeling with GEANT, FLUKA and SIMION are compared with calibration data measured over nearly the entire energy range of the instruments. A combination of the calibration results and models is used to calculate the geometric factors of the various energy channels. The calibrated geometric factors and typical and extreme space weather environments are used to calculate the expected on-orbit performance.

  8. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and, more generally, sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods, the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weights and thins out particles with low weights, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
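
    The published implementations mentioned above should be preferred in practice; the snippet below is only a simplified illustration of the chop/thin idea (heavy particles are split, light particles are stochastically dropped with weight compensation), with a hypothetical bound `eta` on the weight ratio, and is not the reference algorithm.

```python
# Simplified chop/thin-style resampling sketch (illustrative, not the CRAN chopthin).
import random

def chop_thin(particles, weights, eta=4.0):
    mean_w = sum(weights) / len(weights)
    hi, lo = eta * mean_w, mean_w / eta
    out_p, out_w = [], []
    for x, w in zip(particles, weights):
        if w > hi:                       # chop: split heavy particles into k copies
            k = int(w // hi) + 1
            out_p.extend([x] * k)
            out_w.extend([w / k] * k)
        elif w < lo:                     # thin: keep light particles with prob w / lo
            if random.random() < w / lo:
                out_p.append(x)
                out_w.append(lo)         # expected weight lo * (w / lo) = w, so unbiased
        else:
            out_p.append(x)
            out_w.append(w)
    return out_p, out_w

random.seed(1)
print(chop_thin(list(range(6)), [0.01, 0.02, 0.3, 1.5, 0.1, 0.07]))
```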

  9. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
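
    A toy two-stage detector in the spirit of the spatial-search-plus-temporal-correlation split described above; the detection field, thresholds, and linking radius are hypothetical and do not reproduce either the grid point search or the Stride Search sectoring.

```python
# Two-stage storm detection sketch: spatial thresholding, then temporal linking.
import math

def spatial_search(points, threshold):
    """points: list of (lat, lon, value); keep candidates above threshold."""
    return [(lat, lon, v) for lat, lon, v in points if v >= threshold]

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def temporal_correlation(detections_per_step, max_km=500.0):
    """Link detections in consecutive time steps into candidate storm tracks."""
    tracks = [[d] for d in detections_per_step[0]]
    for step in detections_per_step[1:]:
        for track in tracks:
            near = [d for d in step if haversine_km(track[-1], d) <= max_km]
            if near:
                track.append(max(near, key=lambda d: d[2]))   # strongest nearby detection
    return tracks

raw = [[(15.0, -40.0, 9.0e-5), (30.0, 10.0, 2.0e-5)],
       [(15.5, -41.0, 1.1e-4)],
       [(16.2, -42.5, 1.3e-4)]]
steps = [spatial_search(pts, 8.0e-5) for pts in raw]
print(temporal_correlation(steps))
```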

  10. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  11. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGESBeta

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  12. A 3D analysis algorithm to improve interpretation of heat pulse sensor results for the determination of small-scale flow directions and velocities in the hyporheic zone

    NASA Astrophysics Data System (ADS)

    Angermann, Lisa; Lewandowski, Jörg; Fleckenstein, Jan H.; Nützmann, Gunnar

    2012-12-01

    The hyporheic zone is strongly influenced by the adjacent surface water and groundwater systems. It is subject to hydraulic head and pressure fluctuations at different space and time scales, causing dynamic and heterogeneous flow patterns. These patterns are crucial for many biogeochemical processes in the shallow sediment and need to be considered in investigations of this hydraulically dynamic and biogeochemically active interface. For this purpose, a device employing heat as an artificial tracer and a data analysis routine were developed. The method aims at measuring hyporheic flow direction and velocity in three dimensions at a scale of a few centimeters. A short heat pulse is injected into the sediment by a point source and its propagation is detected by up to 24 temperature sensors arranged cylindrically around the heater. The resulting breakthrough curves are analyzed using an analytical solution of the heat transport equation. The device was tested in two laboratory flow-through tanks with defined flow velocities and directions. Using different flow situations and sensor arrays, the sensitivity of the method was evaluated. After operational reliability was demonstrated in the laboratory, its applicability in the field was tested in the hyporheic zone of a low-gradient stream with a sandy streambed in NE Germany. Median and maximum flow velocities in the hyporheic zone at the site were determined as 0.9 × 10⁻⁴ and 2.1 × 10⁻⁴ m s⁻¹, respectively. Horizontal flow components were found to be spatially very heterogeneous, while the vertical flow component appears to be predominantly driven by the streambed morphology.

  13. A fast algorithm for sparse matrix computations related to inversion

    NASA Astrophysics Data System (ADS)

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  14. Impact of an integrated treatment algorithm based on platelet function testing and clinical risk assessment: results of the TRIAGE Patients Undergoing Percutaneous Coronary Interventions To Improve Clinical Outcomes Through Optimal Platelet Inhibition study.

    PubMed

    Chandrasekhar, Jaya; Baber, Usman; Mehran, Roxana; Aquino, Melissa; Sartori, Samantha; Yu, Jennifer; Kini, Annapoorna; Sharma, Samin; Skurk, Carsten; Shlofmitz, Richard A; Witzenbichler, Bernhard; Dangas, George

    2016-08-01

    Assessment of platelet reactivity alone for thienopyridine selection with percutaneous coronary intervention (PCI) has not been associated with improved outcomes. In TRIAGE, a prospective multicenter observational pilot study, we sought to evaluate the benefit of an integrated algorithm combining clinical risk and platelet function testing to select the type of thienopyridine in patients undergoing PCI. Patients on chronic clopidogrel therapy underwent platelet function testing prior to PCI using the VerifyNow assay to determine high on-treatment platelet reactivity (HTPR, ≥230 P2Y12 reactivity units or PRU). Based on both PRU and clinical (ischemic and bleeding) risks, patients were switched to prasugrel or continued on clopidogrel per the study algorithm. The primary endpoints were (i) 1-year major adverse cardiovascular events (MACE), a composite of death, non-fatal myocardial infarction, or definite or probable stent thrombosis; and (ii) major bleeding, Bleeding Academic Research Consortium type 2, 3 or 5. Out of 318 clopidogrel-treated patients with a mean age of 65.9 ± 9.8 years, HTPR was noted in 33.3 %. Ninety (28.0 %) patients overall were switched to prasugrel and 228 (72.0 %) continued clopidogrel. The prasugrel group had fewer smokers and more patients with heart failure. At 1 year, MACE occurred in 4.4 % of the majority-HTPR patients on prasugrel versus 3.5 % of the primarily non-HTPR patients on clopidogrel (p = 0.7). Major bleeding (5.6 vs 7.9 %, p = 0.47) was numerically higher with clopidogrel compared with prasugrel. Use of the study clinical risk algorithm for choice and intensity of thienopyridine prescription following PCI resulted in similar ischemic outcomes in HTPR patients receiving prasugrel and primarily non-HTPR patients on clopidogrel, without an untoward increase in bleeding with prasugrel. However, the study was prematurely terminated and these findings are therefore hypothesis-generating. PMID:27100112

  15. Advisory Algorithm for Scheduling Open Sectors, Operating Positions, and Workstations

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Drew, Michael; Lai, Chok Fung; Bilimoria, Karl D.

    2012-01-01

    Air traffic controller supervisors configure available sector, operating position, and work-station resources to safely and efficiently control air traffic in a region of airspace. In this paper, an algorithm for assisting supervisors with this task is described and demonstrated on two sample problem instances. The algorithm produces configuration schedule advisories that minimize a cost. The cost is a weighted sum of two competing costs: one penalizing mismatches between configurations and predicted air traffic demand and another penalizing the effort associated with changing configurations. The problem considered by the algorithm is a shortest path problem that is solved with a dynamic programming value iteration algorithm. The cost function contains numerous parameters. Default values for most of these are suggested based on descriptions of air traffic control procedures and subject-matter expert feedback. The parameter determining the relative importance of the two competing costs is tuned by comparing historical configurations with corresponding algorithm advisories. Two sample problem instances for which appropriate configuration advisories are obvious were designed to illustrate characteristics of the algorithm. Results demonstrate how the algorithm suggests advisories that appropriately utilize changes in airspace configurations and changes in the number of operating positions allocated to each open sector. The results also demonstrate how the advisories suggest appropriate times for configuration changes.
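
    A toy dynamic-programming sketch of the trade-off described above, treating the schedule as a shortest path over time steps and configurations; the mismatch cost, reconfiguration cost, and capacities are placeholder values, not the operational cost function or its parameters.

```python
# Configuration scheduling as a shortest path: mismatch cost + switching cost.
def schedule_configs(demand, capacities, switch_cost):
    """demand: predicted traffic per time step; capacities: capacity of each configuration."""
    n_cfg, T = len(capacities), len(demand)
    INF = float("inf")
    cost = [[INF] * n_cfg for _ in range(T)]
    prev = [[None] * n_cfg for _ in range(T)]
    for c in range(n_cfg):
        cost[0][c] = abs(capacities[c] - demand[0])
    for t in range(1, T):
        for c in range(n_cfg):
            mismatch = abs(capacities[c] - demand[t])
            for p in range(n_cfg):
                step = cost[t - 1][p] + mismatch + (switch_cost if p != c else 0.0)
                if step < cost[t][c]:
                    cost[t][c], prev[t][c] = step, p
    # Backtrack the cheapest configuration sequence.
    c = min(range(n_cfg), key=lambda i: cost[T - 1][i])
    path = [c]
    for t in range(T - 1, 0, -1):
        c = prev[t][c]
        path.append(c)
    return list(reversed(path)), min(cost[T - 1])

print(schedule_configs(demand=[10, 12, 25, 27, 11], capacities=[12, 26], switch_cost=5.0))
```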

  16. Estimation of Contextual Effects through Nonlinear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Yang, Ji Seung; Cai, Li

    2014-01-01

    The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of a nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…

  17. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
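
    A generic multiplicative-weights-update sketch to illustrate the MWUA rule referenced above; the payoff matrix, opponent model, and step size are arbitrary toy choices, and the population-genetics correspondence derived in the paper is not reproduced.

```python
# Multiplicative weights update (MWUA) against a best-responding opponent.
def mwua(payoffs, rounds=100, eps=0.1):
    """payoffs[i][j]: payoff to our strategy i when the opponent plays column j."""
    n = len(payoffs)
    w = [1.0] * n
    for _ in range(rounds):
        total = sum(w)
        probs = [x / total for x in w]
        # Opponent picks the column minimizing our expected payoff.
        j = min(range(len(payoffs[0])),
                key=lambda col: sum(probs[i] * payoffs[i][col] for i in range(n)))
        # Multiplicative update: strategies that did well gain weight.
        w = [w[i] * (1.0 + eps * payoffs[i][j]) for i in range(n)]
    total = sum(w)
    return [x / total for x in w]

# Matching-pennies-like game: the weights converge toward a mixed strategy.
print(mwua([[1, -1], [-1, 1]]))
```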

  18. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ₀ + εℋ₁ + … of a small parameter ε, normalization constructs a map which converts the principal part ℋ₀ into an integral of the transformed system — relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  19. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation image encryption algorithms, commonly used for image encryption and investigated in this work, are unfortunately frail under known-text attack. In view of this weakness, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; next, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown. PMID:15495308

  20. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  1. Research of Electronic Image Stabilization Algorithm Based on Orbital Character

    NASA Astrophysics Data System (ADS)

    Xian, Xiaodong; Hou, Peipei; Liang, Shan; Gan, Ping

    Monocular vision is a key technology for the locomotive anti-collision warning system, and its ranging precision strongly influences system performance. To address the loss of ranging accuracy caused by video jitter, this paper proposes a new electronic image stabilization (EIS) algorithm based on orbital (track) characteristics, in which the global motion vector is obtained by extracting and matching a partial feature template. Because the algorithm processes the partial feature template instead of the whole image, the speed of the system is improved noticeably. Simulation results indicate that the algorithm can effectively eliminate the image shift produced by video jitter, correct the resulting deviation in ranging precision, and satisfy the real-time requirements of the system.

  2. [An improved fast algorithm for ray casting volume rendering of medical images].

    PubMed

    Tao, Ling; Wang, Huina; Tian, Zhiliang

    2006-10-01

    The ray casting algorithm can produce high-quality images in volume rendering; however, it demands substantial computing power and renders slowly. Therefore, a new fast ray casting volume rendering algorithm is proposed in this paper. The algorithm reduces matrix computation by exploiting the matrix transformation characteristics of re-sampling points in two coordinate systems, thereby accelerating the re-sampling computation. By extending the Bresenham algorithm to three dimensions and utilizing a bounding box technique, the algorithm avoids sampling empty voxels and greatly improves the efficiency of ray casting. The experimental results show that the improved acceleration algorithm produces images of the required quality while markedly reducing the total number of operations and speeding up volume rendering. PMID:17121341

  3. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
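
    A hedged numerical sketch of the mixing and temperature-correction steps outlined above; the emissivity constants and brightness temperatures are illustrative placeholders, not calibrated SMMR or AMSR coefficients.

```python
# Emissivity mixing at 6 GHz, surface temperature retrieval, and channel conversion.
def effective_emissivity_6ghz(ice_conc, e_ice=0.92, e_water=0.55):
    # Linear mixing of ice and open-water emissivities weighted by ice concentration.
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def surface_temperature(tb_6ghz, ice_conc):
    # T_B = emissivity * T_surface, so T_surface = T_B / emissivity.
    return tb_6ghz / effective_emissivity_6ghz(ice_conc)

def channel_emissivity(tb, t_surface):
    # Convert an 18 or 37 GHz brightness temperature to an emissivity.
    return tb / t_surface

t_s = surface_temperature(tb_6ghz=235.0, ice_conc=0.8)
print(t_s, channel_emissivity(tb=245.0, t_surface=t_s))
```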

  4. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to the higher-cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. A comparison of results with the binary case is provided. PMID:10021767

  5. Study and application on accelerated algorithm of ray-casting

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoping; Wu, Jian; Cui, Zhiming; Ma, Jianlin

    2007-12-01

    Medical image 3D reconstruction is an important application field for volume rendering; because of its specialized use, it requires fast interactive speed and high image quality. The ray casting algorithm (RCA) is a widely used basic volume rendering algorithm. It can produce high-quality images but renders very slowly because of its heavy computational demands. To address these shortcomings, an accelerated ray casting algorithm is presented in this paper to improve rendering speed and apply it to medical image 3D reconstruction. First, acceleration techniques for ray casting are studied and compared. Second, an improved tri-linear interpolation technique is selected and extended to continuous ray casting in order to reduce matrix computation, exploiting the matrix transformation characteristics of re-sampling points. Ray interval casting is then used to reduce the number of rays, and a volume data set cropping technique that improves on the bounding box technique avoids sampling empty voxels. Finally, the combined acceleration algorithm is proposed. The results show that, compared with the standard ray casting algorithm, the accelerated algorithm not only improves rendering speed but also produces images of the required quality.
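
    The re-sampling step mentioned above relies on tri-linear interpolation; below is a small self-contained sketch of that interpolation. The matrix-transformation, ray-interval, and cropping optimizations of the paper are not included, and the volume layout is a hypothetical nested-list representation.

```python
# Tri-linear interpolation of a scalar volume at a fractional sample position.
def trilinear(volume, x, y, z):
    """volume: 3D nested list indexed [z][y][x]; (x, y, z): sample position."""
    x0, y0, z0 = int(x), int(y), int(z)
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = x - x0, y - y0, z - z0
    c = lambda xi, yi, zi: volume[zi][yi][xi]
    c00 = c(x0, y0, z0) * (1 - fx) + c(x1, y0, z0) * fx
    c10 = c(x0, y1, z0) * (1 - fx) + c(x1, y1, z0) * fx
    c01 = c(x0, y0, z1) * (1 - fx) + c(x1, y0, z1) * fx
    c11 = c(x0, y1, z1) * (1 - fx) + c(x1, y1, z1) * fx
    return ((c00 * (1 - fy) + c10 * fy) * (1 - fz)
            + (c01 * (1 - fy) + c11 * fy) * fz)

vol = [[[float(x + y + z) for x in range(2)] for y in range(2)] for z in range(2)]
print(trilinear(vol, 0.5, 0.5, 0.5))  # 1.5 for this linear ramp
```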

  6. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  7. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  8. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  9. A new image enhancement algorithm with applications to forestry stand mapping

    NASA Technical Reports Server (NTRS)

    Kan, E. P. F. (Principal Investigator); Lo, J. K.

    1975-01-01

    The author has identified the following significant results. The new algorithm produced cleaner classification maps in which holes of small, predesignated sizes were eliminated and significant boundary information was preserved. These cleaner post-processed maps better resemble real timber stand maps and are thus more usable products than the unprocessed ones. Compared to an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.

  10. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
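
    A simplified, textbook-style sketch of the normal-force relationship exploited above: an increment in normal acceleration is mapped to an approximate vertical gust velocity through the lift-curve slope. The formula and numbers are illustrative assumptions, not the paper's operational implementation or aircraft data.

```python
# Vertical gust estimate from a normal-acceleration increment (small-angle lift model).
def vertical_gust_estimate(delta_az, mass, rho, airspeed, wing_area, cl_alpha):
    """delta_az [m/s^2], mass [kg], rho [kg/m^3], airspeed [m/s],
    wing_area [m^2], cl_alpha [1/rad] -> approximate vertical gust velocity [m/s]."""
    # delta_az = 0.5 * rho * V^2 * S * CL_alpha * (w_g / V) / m, solved for w_g.
    return 2.0 * mass * delta_az / (rho * airspeed * wing_area * cl_alpha)

# Roughly Boeing-757-like numbers at cruise, purely for illustration.
print(vertical_gust_estimate(delta_az=3.0, mass=100000.0, rho=0.6,
                             airspeed=230.0, wing_area=185.0, cl_alpha=5.0))
```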

  11. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    SciTech Connect

    Schmidtlein, CR; Beattie, B; Humm, J; Li, S; Wu, Z; Xu, Y; Zhang, J; Shen, L; Vogelsang, L; Feiglin, D; Krol, A

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ₁-norm total-variation (TV) sum of the 1st through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st to 4th order gradients, to reduce artificial piece-wise constant regions ("staircase" artifacts typical for TV) seen in PAPA images penalized with only the 1st order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution recovery projectors. GPAPA images were compared to PAPA and RMSE-optimally filtered OSEM (fully converged) in simulations and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ₁-norm total-variation sum of the 1st through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved

  12. Raytracing Based upon the Symplectic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C.

    2014-12-01

    Raytracing is a basic problem in seismic imaging: the reliability of the imaging depends on the accuracy of both the spatial trajectory and the traveltime of the ray, and raytracing is used broadly in seismology. A seismic ray traveling through inhomogeneous media follows the eikonal equation, a first-order differential equation for traveltime that satisfies a Hamiltonian system. In a Cartesian coordinate system, we use a separable Hamiltonian function. In this paper, the symplectic algorithm method, combined with a bi-cubic convolution algorithm, is used to solve the Hamiltonian system for the raytracing problem. Compared with the Fast Marching Method (FMM), the results show that the symplectic algorithm method (SAM) can keep the solution of the eikonal equation stable. Owing to the use of the symplectic algorithm, the method can produce a reliable seismic wavefront with an accurate ray trajectory (Fig. 1). Meanwhile, the numerical modeling shows that SAM not only keeps the Hamiltonian system stable with fast computation but also improves the accuracy of the seismic ray tracing (Fig. 2).
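
    A minimal symplectic (leapfrog) integration sketch for a separable ray Hamiltonian of the form H = |p|²/2 − 1/(2 v(x)²); the velocity model, units, and step sizes are toy values, and the bi-cubic convolution interpolation used in the paper is not included.

```python
# Symplectic (kick-drift-kick) ray tracing in a toy 2D velocity model.
import math

def velocity(x, z):
    # Toy linear-gradient velocity model [km/s]; x, z in km.
    return 2.0 + 0.5 * z

def grad_potential(x, z, h=0.01):
    # V(x, z) = -1 / (2 v^2); central differences for dV/dx and dV/dz.
    V = lambda a, b: -1.0 / (2.0 * velocity(a, b) ** 2)
    return ((V(x + h, z) - V(x - h, z)) / (2 * h),
            (V(x, z + h) - V(x, z - h)) / (2 * h))

def trace_ray(x, z, take_off_deg, dtau=0.01, steps=1000):
    # Initial slowness vector: |p| = 1 / v at the source.
    v0 = velocity(x, z)
    p = [math.sin(math.radians(take_off_deg)) / v0,
         math.cos(math.radians(take_off_deg)) / v0]
    path = [(x, z)]
    for _ in range(steps):
        gx, gz = grad_potential(x, z)
        p = [p[0] - 0.5 * dtau * gx, p[1] - 0.5 * dtau * gz]   # half kick
        x, z = x + dtau * p[0], z + dtau * p[1]                # drift
        gx, gz = grad_potential(x, z)
        p = [p[0] - 0.5 * dtau * gx, p[1] - 0.5 * dtau * gz]   # half kick
        path.append((x, z))
    return path

print(trace_ray(0.0, 0.0, take_off_deg=30.0)[-1])
```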

  13. A graph spectrum based geometric biclustering algorithm.

    PubMed

    Wang, Doris Z; Yan, Hong

    2013-01-21

    Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285

  14. Backtracking algorithm for lepton reconstruction with HADES

    NASA Astrophysics Data System (ADS)

    Sellheim, P.; HADES Collaboration

    2015-04-01

    The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April-May 2012, HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u, with the highest multiplicities measured so far. Track reconstruction and particle identification in this high track density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector. Its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. In order to improve lepton identification, a backtracking algorithm was developed. In this contribution we show the results of the algorithm compared to the currently applied method for e+/- identification. The efficiency and purity of a reconstructed e+/- sample are discussed as well.

  15. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  16. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were ever above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We are using PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.

  17. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  18. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing fields of logistics and supply chain management, one of the key problems for a decision support system is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be interpreted as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem that cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), the genetic algorithm (GA) and the hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and have a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  19. Level 1 Radiance Scaling and Conditioning Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Diner, D.; Korechoff, R.; Lee, M.

    2000-01-01

    The Algorithm Theoretical Basis (ATB) document describes the algorithms used to produce the Multi-angle Imaging SpectroRadiometer (MISR) Level 1B1 Radiometric Product, and certain parameters of the Level 1A Reformatted Annotated Product.

  20. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
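
    A compact sketch of the classical matching-pursuit loop described above; the dictionary is a toy example, and none of the MPD++ enhancements (correlation thresholds, coarse-fine grids, multiple atom extraction) are implemented here.

```python
# Classical matching pursuit: repeatedly pick the best-correlated atom and subtract it.
def matching_pursuit(signal, dictionary, max_iters=10, tol=1e-6):
    """dictionary: list of unit-norm atoms, each the same length as `signal`."""
    residual = list(signal)
    decomposition = []
    for _ in range(max_iters):
        # Atom with the largest absolute cross-correlation with the residual.
        corrs = [sum(r * a for r, a in zip(residual, atom)) for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(corrs[i]))
        if abs(corrs[k]) < tol:
            break
        decomposition.append((k, corrs[k]))
        residual = [r - corrs[k] * a for r, a in zip(residual, dictionary[k])]
    return decomposition, residual

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.7071, 0.7071]]
print(matching_pursuit([2.0, 1.0, 1.0], atoms))
```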

  1. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before the optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm for the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  2. Evolutionary Algorithm for Optimal Vaccination Scheme

    NASA Astrophysics Data System (ADS)

    Parousis-Orthodoxou, K. J.; Vlachos, D. S.

    2014-03-01

    The following work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy for a user-specified network. The algorithm uses a basic genetic algorithm with crossover and mutation to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spread of the disease.
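
    A hedged sketch of the GA-plus-SIR idea summarized above, using a toy graph, toy epidemic parameters, and a deliberately simple GA; it is not the authors' implementation, and the fitness function (average simulated outbreak size) is an assumption.

```python
# GA searching for a small set of nodes whose immunization best contains an SIR outbreak.
import random

def sir_outbreak_size(adj, immunized, beta=0.3, steps=50):
    """Discrete-time SIR on adjacency dict `adj`; immunized nodes start recovered."""
    state = {v: "R" if v in immunized else "S" for v in adj}
    susceptible = [v for v in adj if state[v] == "S"]
    if not susceptible:
        return 0
    state[random.choice(susceptible)] = "I"             # random index case
    infected_ever = 1
    for _ in range(steps):
        newly = [v for v in adj if state[v] == "S" and
                 any(state[u] == "I" and random.random() < beta for u in adj[v])]
        for v in adj:
            if state[v] == "I":
                state[v] = "R"                           # recover after one step (toy choice)
        for v in newly:
            state[v] = "I"
        infected_ever += len(newly)
    return infected_ever

def ga_immunize(adj, budget, pop_size=12, gens=20, runs=5):
    nodes = list(adj)
    fitness = lambda scheme: sum(sir_outbreak_size(adj, set(scheme)) for _ in range(runs))
    pop = [random.sample(nodes, budget) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                            # smaller outbreaks are fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            pool = list(dict.fromkeys(a + b))            # crossover: merge parent genes
            random.shuffle(pool)
            child = pool[:budget]
            if random.random() < 0.3:                    # mutation: swap in a random node
                child[random.randrange(budget)] = random.choice(nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Star-plus-chain toy network: immunizing the hub (node 0) contains the epidemic.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
random.seed(4)
print(ga_immunize(adj, budget=1))
```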

  3. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic (hereditary) algorithm is a stochastic search and optimization method based on natural selection and the genetic mechanisms of living beings. In recent years, because of its potential for solving complicated problems and its successful application in industrial projects, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on the genetic algorithm. The experimental simulation results show that this algorithm finds more solutions in less time and produces a more balanced network load, which improves the search ratio and the availability of network resources and improves the quality of service.

  4. Modal parameters estimation using ant colony optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Sitarz, Piotr; Powałka, Bartosz

    2016-08-01

    The paper puts forward a new estimation method of modal parameters for dynamical systems. The problem of parameter estimation has been simplified to optimisation which is carried out using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints of the solution space with modified ant colony system algorithms produced excellent results for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, conducted with the proposed method and the PolyMAX algorithm.

  5. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  6. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.

  7. Steady-state MSE analysis of the multimodulus blind equalization algorithm in QAM communication systems

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    The constant modulus algorithm (CMA) for blind equalization requires a separate carrier-recovery system for phase recovery. A modified CMA, called the multimodulus algorithm (MMA), may perform joint blind equalization and carrier recovery without the need for a separate carrier-recovery system for quadrature amplitude modulation (QAM) signal constellations. This letter mathematically analyzes the steady-state mean square error (MSE) of MMA. The analysis results indicate that MMA produces 50% lower steady-state MSE than CMA.
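
    For readers unfamiliar with MMA, the sketch below shows a stochastic-gradient equalizer in which the real and imaginary parts of the output are driven toward separate dispersion constants, in contrast to CMA's single modulus; the letter's steady-state analysis itself is not reproduced. The tap length, step size, channel, and 16-QAM constellation are illustrative assumptions.

        import numpy as np

        def mma_equalizer(x, n_taps=11, mu=1e-4):
            """Multimodulus algorithm (MMA) blind equalizer, as a sketch."""
            levels = np.array([-3.0, -1.0, 1.0, 3.0])
            constellation = (levels[:, None] + 1j * levels[None, :]).ravel()  # 16-QAM
            # Separate dispersion constants for the real and imaginary axes.
            r_r = np.mean(constellation.real ** 4) / np.mean(constellation.real ** 2)
            r_i = np.mean(constellation.imag ** 4) / np.mean(constellation.imag ** 2)
            w = np.zeros(n_taps, dtype=complex)
            w[n_taps // 2] = 1.0                      # center-spike initialization
            y_out = np.zeros(len(x), dtype=complex)
            for n in range(n_taps, len(x)):
                u = x[n - n_taps:n][::-1]             # regressor, most recent sample first
                y = w @ u
                y_out[n] = y
                # Each axis is pushed toward its own dispersion constant.
                e = y.real * (y.real ** 2 - r_r) + 1j * y.imag * (y.imag ** 2 - r_i)
                w -= mu * e * np.conj(u)              # stochastic-gradient tap update
            return y_out, w

        # Toy usage: 16-QAM symbols through a mild FIR channel, then blind equalization.
        rng = np.random.default_rng(0)
        levels = np.array([-3.0, -1.0, 1.0, 3.0])
        syms = rng.choice(levels, 5000) + 1j * rng.choice(levels, 5000)
        rx = np.convolve(syms, [1.0, 0.25 + 0.1j], mode="same")
        y_eq, taps = mma_equalizer(rx)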

  8. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm that minimizes an energy function, calculating the gradient projection with a step size determined by the Barzilai-Borwein formulation, and can therefore estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used a test object consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with the large cone angle, the two-pass algorithm reduced the cone beam artifacts, leaving small residual errors caused by inaccuracy in the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  9. Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas; Jiang, Wei

    2008-03-31

    This document provides the algorithms for CHP system performance monitoring and commissioning verification (CxV). It starts by presenting system-level and component-level performance metrics, followed by descriptions of algorithms for performance monitoring and commissioning verification that use the metrics presented earlier. Verification of commissioning is accomplished essentially by comparing actual measured performance to benchmarks for performance provided by the system integrator and/or component manufacturers. The results of these comparisons are then automatically interpreted to provide conclusions regarding whether the CHP system and its components have been properly commissioned and, where problems are found, to provide guidance for corrections. A discussion of uncertainty handling is then provided, followed by a description of how simulation models can be used to generate data for testing the algorithms. A model is described for simulating a CHP system consisting of a micro-turbine, an exhaust-gas heat recovery unit that produces hot water, an absorption chiller, and a cooling tower. The process for using this model to generate data for testing the algorithms for a selected set of faults is described. The next section applies the algorithms to CHP laboratory and field data to illustrate their use. The report concludes with a discussion of the need for laboratory testing of the algorithms on physical CHP systems and identification of the recommended next steps.

  10. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that, during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independent of the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of the defragmentation algorithms which are employed for efficiently reorganizing classical information on conventional hard disks.

  11. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  12. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied, and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other, together with global flips, are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero, so the cluster algorithm is proven to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted with standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown not to be complex, and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.

  13. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are therefore difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results of this paper show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
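
    As a point of reference for the algorithm's simplicity, here is a minimal CPU-side NumPy sketch of the linear (linearized) Bregman iteration for sparse recovery; only a matrix-vector product and a soft-thresholding step appear in the loop, which is what makes a GPU port straightforward. The step size, threshold, and test problem are illustrative assumptions, not the paper's CUDA implementation.

        import numpy as np

        def soft_threshold(v, mu):
            """Componentwise shrinkage (thresholding) operator."""
            return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

        def linear_bregman(A, b, mu=5.0, delta=None, n_iter=3000):
            """Linear Bregman iteration for recovering a sparse u with A u = b (sketch)."""
            if delta is None:
                delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2 / ||A||^2
            v = np.zeros(A.shape[1])
            u = np.zeros(A.shape[1])
            for _ in range(n_iter):
                v += A.T @ (b - A @ u)                    # gradient step on the residual
                u = delta * soft_threshold(v, mu)         # shrinkage step
            return u

        # Toy usage: recover a 5-sparse vector from 60 random measurements.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[rng.choice(200, 5, replace=False)] = 3.0 * rng.standard_normal(5)
        x_hat = linear_bregman(A, A @ x_true)
        print(np.linalg.norm(x_hat - x_true))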

  14. Stochastic simulation for imaging spatial uncertainty: Comparison and evaluation of available algorithms

    SciTech Connect

    Gotway, C.A.; Rutherford, B.M.

    1993-09-01

    Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
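
    To make the workflow being compared concrete (simulate random fields, push each realization through a transfer function, accumulate an uncertainty distribution), here is a minimal sketch using unconditional Gaussian simulation by Cholesky factorization of an exponential covariance model. The covariance model, transfer function, and grid are illustrative assumptions; the paper evaluates several different, more scalable simulation algorithms rather than this one.

        import numpy as np

        def gaussian_field_realizations(coords, sill=1.0, corr_len=10.0, n_real=200, seed=0):
            """Unconditional Gaussian simulation via Cholesky factorization (sketch)."""
            rng = np.random.default_rng(seed)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            cov = sill * np.exp(-d / corr_len)                  # exponential covariance
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
            return L @ rng.standard_normal((len(coords), n_real))

        def response_distribution(realizations, transfer_fn):
            """Evaluate a nonlinear transfer function over each realization."""
            return np.sort([transfer_fn(realizations[:, i])
                            for i in range(realizations.shape[1])])

        # Toy usage: a hypothetical nonlinear response on a 1D transect of 50 nodes.
        x = np.linspace(0.0, 100.0, 50)[:, None]
        fields = gaussian_field_realizations(x)
        responses = response_distribution(fields, lambda z: np.sum(np.exp(-z)))
        cdf = np.arange(1, len(responses) + 1) / len(responses)
        print(responses[np.searchsorted(cdf, 0.95)])            # e.g., 95th percentile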

  15. An efficient clustering algorithm for partitioning Y-short tandem repeats data

    PubMed Central

    2012-01-01

    Background Y-Short Tandem Repeats (Y-STR) data consist of many similar and almost similar objects. This characteristic of Y-STR data causes two problems with partitioning: non-unique centroids and local minima problems. As a result, the existing partitioning algorithms produce poor clustering results. Results Our new algorithm, called k-Approximate Modal Haplotypes (k-AMH), obtains the highest clustering accuracy scores for five out of six datasets, and produces an equal performance for the remaining dataset. Furthermore, clustering accuracy scores of 100% are achieved for two of the datasets. The k-AMH algorithm records the highest mean accuracy score of 0.93 overall, compared to that of other algorithms: k-Population (0.91), k-Modes-RVF (0.81), New Fuzzy k-Modes (0.80), k-Modes (0.76), k-Modes-Hybrid 1 (0.76), k-Modes-Hybrid 2 (0.75), Fuzzy k-Modes (0.74), and k-Modes-UAVM (0.70). Conclusions The partitioning performance of the k-AMH algorithm for Y-STR data is superior to that of other algorithms, owing to its ability to solve the non-unique centroids and local minima problems. Our algorithm is also efficient in terms of time complexity, which is recorded as O(km(n-k)) and considered to be linear. PMID:23039132

  16. Reduction Algorithm

    2010-12-31

    Conventional methods used for modeling a transmission network have resulted in a high degree of error and instability. This methodology condenses the network for analysis purposes without a loss of precision.

  17. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available

  18. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  19. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320

  20. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.

  1. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have the advantage of guiding iterative computation to search for global optima while avoiding local optima. These algorithms help speed up the clustering process by converging on a global optimum early, with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including the Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior that allows them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics, as well as pros and cons, in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics for clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied to image segmentation as a case study of an application scenario. PMID:25202730
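
    As one concrete example of the kind of hybrid evaluated here, the sketch below wraps the K-means objective (within-cluster sum of squared errors) inside a simple particle-swarm-style optimizer so that the centroids are not left to a single random initialization. A generic particle swarm stands in for the Ant, Bat, Cuckoo, Firefly, and Wolf variants discussed in the paper; the swarm size, inertia, and acceleration coefficients are illustrative assumptions.

        import numpy as np

        def sse(centroids, X):
            """Within-cluster sum of squared errors, the K-means objective."""
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
            return np.sum(np.min(d, axis=1) ** 2)

        def swarm_kmeans(X, k, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Optimize K-means centroids with a simple particle swarm (sketch)."""
            rng = np.random.default_rng(seed)
            # Each particle encodes a full set of k candidate centroids.
            pos = X[rng.choice(len(X), (n_particles, k))].astype(float)
            vel = np.zeros_like(pos)
            pbest, pbest_val = pos.copy(), np.array([sse(p, X) for p in pos])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, 1, 1))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos += vel
                vals = np.array([sse(p, X) for p in pos])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest        # centroids; a final Lloyd refinement could follow

        # Toy usage on three Gaussian blobs.
        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in ((0, 0), (3, 3), (0, 4))])
        print(swarm_kmeans(X, 3))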

  2. Optimization of Electrical Energy Production by using Modified Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Glotic, Arnel

    The dissertation addresses the optimization of electrical energy production from hydro power plants and thermal power plants. It refers to short-term optimization and presents a complex optimization problem. The complexity of the problem arises from an extensive number of co-dependent variables and power plant constraints. Given the complexity of the problem, the differential evolution algorithm, known as a successful and robust optimization algorithm, was selected as an appropriate algorithm for the optimization. The performance of the differential evolution algorithm is closely connected with its set of control parameters, and its capabilities are, among other things, improved by the algorithm's parallelization. The ability to reach a globally optimal solution within the optimization of electrical energy production is improved by the proposed modified differential evolution algorithm with a new parallelization mode. The algorithm's performance is also improved by the proposed dynamic population size used throughout the optimization process. In addition to achieving better optimization results in comparison with the classic differential evolution algorithm, the proposed dynamic population size reduces convergence time. The improvements to the algorithm presented in the dissertation were tested not only on the power plant models commonly used in scientific publications but also on power plant models represented by real parameters. The optimization of electrical energy from hydro and thermal power plants is governed by certain criteria: satisfying system demand, minimizing the quantity of water used per unit of electrical energy produced, minimizing or eliminating water spillage, satisfying the final reservoir states of the hydro power plants, and minimizing the fuel costs and emissions of the thermal power plants.

  3. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have the advantage of guiding iterative computation to search for global optima while avoiding local optima. These algorithms help speed up the clustering process by converging on a global optimum early, with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including the Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior that allows them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics, as well as pros and cons, in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics for clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied to image segmentation as a case study of an application scenario. PMID:25202730

  4. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
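
    A minimal sketch of the simplest linear-depth scheme discussed here, a centralized counter barrier, is shown below using Python threading primitives; the tree-structured and self-scheduled variants from the paper are not reproduced, and the class name and generation counter are illustrative choices.

        import threading

        class CentralCounterBarrier:
            """Linear (centralized counter) barrier: the last arrival releases the rest."""

            def __init__(self, n_threads):
                self.n = n_threads
                self.count = 0
                self.generation = 0                  # distinguishes successive barrier episodes
                self.cond = threading.Condition()

            def wait(self):
                with self.cond:
                    gen = self.generation
                    self.count += 1
                    if self.count == self.n:         # last arrival resets and releases everyone
                        self.count = 0
                        self.generation += 1
                        self.cond.notify_all()
                    else:
                        while gen == self.generation:    # guards against spurious wakeups
                            self.cond.wait()

        # Toy usage: four workers synchronize between two phases of work.
        barrier = CentralCounterBarrier(4)

        def worker(i):
            phase_one = i * i                        # ... phase 1 work ...
            barrier.wait()
            phase_two = phase_one + 1                # phase 2 starts only after all arrive

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()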

  5. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.

  6. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. The algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar load ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets is also presented. 5 refs., 3 tabs.

  7. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  8. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.

  9. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous X-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of the observed objects. Recently, a new PDE algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented by the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for X-ray and multiple-wavelength images. What distinguishes Tschumperle's algorithm from the current algorithms used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable to, those of the current denoising/smoothing algorithms. From our early stages of testing, the results of the new algorithm will provide insight into its initial capabilities on multiple-wavelength astronomy data sets.

  10. Leaf Sequencing Algorithm Based on MLC Shape Constraint

    NASA Astrophysics Data System (ADS)

    Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui

    2012-06-01

    Intensity modulated radiation therapy (IMRT) requires the determination of the appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to attempt to regulate the shape between adjacent multileaf collimator apertures using a leaf sequencing algorithm. To qualify and validate this algorithm, the integral test for the multileaf collimator segments of ARTS was performed with clinical intensity map experiments. Comparisons and analyses of the total number of monitor units and the number of segments against benchmark results show that the proposed algorithm performed well, while the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.

  11. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
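
    To make the idea of parameter continuation concrete, here is a minimal sketch of natural-parameter continuation wrapped around a Newton corrector; LOCA's actual C++ interfaces, bifurcation-tracking capabilities, and stability drivers are not represented. The finite-difference Jacobian, step count, and toy problem are illustrative assumptions.

        import numpy as np

        def jacobian_fd(f, x, p, eps=1e-7):
            """Finite-difference Jacobian of f(x, p) with respect to x."""
            n = len(x)
            J = np.zeros((n, n))
            f0 = f(x, p)
            for j in range(n):
                xj = x.copy()
                xj[j] += eps
                J[:, j] = (f(xj, p) - f0) / eps
            return J

        def newton(f, x0, p, tol=1e-10, max_iter=25):
            """Newton corrector for f(x, p) = 0 at a fixed parameter value."""
            x = x0.copy()
            for _ in range(max_iter):
                dx = np.linalg.solve(jacobian_fd(f, x, p), -f(x, p))
                x += dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        def natural_continuation(f, x0, p_start, p_end, n_steps=50):
            """Trace a solution branch by stepping the parameter and re-solving."""
            branch, x = [], x0.copy()
            for p in np.linspace(p_start, p_end, n_steps):
                x = newton(f, x, p)          # previous solution serves as the predictor
                branch.append((p, x.copy()))
            return branch

        # Toy usage: trace the branch x**2 - p = 0 from p = 1 to p = 4.
        f = lambda x, p: np.array([x[0] ** 2 - p])
        print(natural_continuation(f, np.array([1.0]), 1.0, 4.0, 7)[-1])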

  12. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  13. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input

  14. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, based on the existing SAR imaging algorithms. The basic idea of SAR imaging algorithms in image processing is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes are focused using consistent imaging parameters. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446

  15. Algorithmic complexity of a protein

    NASA Astrophysics Data System (ADS)

    Dewey, T. Gregory

    1996-07-01

    The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.

  16. Economic Dispatch Using Genetic Algorithm Based Hybrid Approach

    SciTech Connect

    Tahir Nadeem Malik; Aftab Ahmad; Shahab Khushnood

    2006-07-01

    Power economic dispatch (ED) is a vital and essential daily optimization procedure in system operation. Present-day large power generating units with multi-valve steam turbines exhibit a large variation in their input-output characteristic functions, so non-convexity appears in the characteristic curves. Various mathematical and optimization techniques have been developed and applied to solve the economic dispatch (ED) problem. Most of these are calculus-based optimization algorithms that rely on successive linearization and use the first- and second-order derivatives of the objective function and its constraint equations as the search direction. They usually require the heat input-power output characteristics of generators to be monotonically increasing or piecewise linear. These simplifying assumptions result in an inaccurate dispatch. Genetic algorithms have been used to solve the economic dispatch problem both independently and in conjunction with other AI tools and mathematical programming approaches. Genetic algorithms have an inherent ability to reach the global minimum region of the search space in a short time, but then take a longer time to converge to the solution. GA-based hybrid approaches get around this problem and produce encouraging results. This paper presents a brief survey of hybrid approaches for economic dispatch, an architecture of an extensible computational framework as a common environment for conventional, genetic-algorithm, and hybrid solutions to power economic dispatch, and the implementation of three algorithms in the developed framework. The framework was tested on standard test systems for performance evaluation. (authors)
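
    As an illustration of the genetic-algorithm component of such hybrids, the sketch below evolves generator outputs for a three-unit dispatch with quadratic fuel-cost curves and a demand-balance penalty. The cost coefficients, unit limits, demand, and GA settings are illustrative assumptions, not data from the paper's test systems, and the framework's hybrid refinements are not reproduced.

        import numpy as np

        # Hypothetical three-unit system: cost_i(P) = a_i + b_i*P + c_i*P^2, with output limits.
        a = np.array([500.0, 400.0, 200.0])
        b = np.array([5.3, 5.5, 5.8])
        c = np.array([0.004, 0.006, 0.009])
        p_min = np.array([100.0, 100.0, 50.0])
        p_max = np.array([450.0, 350.0, 225.0])
        demand = 800.0

        def fitness(P):
            """Total fuel cost plus a penalty for violating the demand balance."""
            return np.sum(a + b * P + c * P ** 2) + 1e4 * abs(np.sum(P) - demand)

        def ga_dispatch(pop_size=60, n_gen=300, mut_rate=0.1, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(p_min, p_max, (pop_size, 3))
            for _ in range(n_gen):
                order = np.argsort([fitness(ind) for ind in pop])
                parents = pop[order[: pop_size // 2]]               # truncation selection
                idx = rng.integers(0, len(parents), (pop_size, 2))
                alpha = rng.random((pop_size, 1))
                children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
                mutate = rng.random(children.shape) < mut_rate      # Gaussian mutation mask
                children += mutate * rng.normal(0.0, 10.0, children.shape)
                pop = np.clip(children, p_min, p_max)
                pop[0] = parents[0]                                 # elitism
            best = min(pop, key=fitness)
            return best, fitness(best)

        print(ga_dispatch())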

  17. Generating folded protein structures with a lattice chain growth algorithm

    NASA Astrophysics Data System (ADS)

    Gan, Hin Hark; Tropsha, Alexander; Schlick, Tamar

    2000-10-01

    We present a new application of the chain growth algorithm to lattice generation of protein structure and thermodynamics. Given the difficulty of ab initio protein structure prediction, this approach provides an alternative to current folding algorithms. The chain growth algorithm, unlike Metropolis folding algorithms, generates independent protein structures to achieve rapid and efficient exploration of configurational space. It is a modified version of the Rosenbluth algorithm where the chain growth transition probability is a normalized Boltzmann factor; it was previously applied only to simple polymers and protein models with two residue types. The independent protein configurations, generated segment-by-segment on a refined cubic lattice, are based on a single interaction site for each amino acid and a statistical interaction energy derived by Miyazawa and Jernigan. We examine for several proteins the algorithm's ability to produce nativelike folds and its effectiveness for calculating protein thermodynamics. Thermal transition profiles associated with the internal energy, entropy, and radius of gyration show characteristic folding/unfolding transitions and provide evidence for unfolding via partially unfolded (molten-globule) states. From the configurational ensembles, the protein structures with the lowest distance root-mean-square deviations (dRMSD) vary between 2.2 to 3.8 Å, a range comparable to results of an exhaustive enumeration search. Though the ensemble-averaged dRMSD values are about 1.5 to 2 Å larger, the lowest dRMSD structures have similar overall folds to the native proteins. These results demonstrate that the chain growth algorithm is a viable alternative to protein simulations using the whole chain.
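
    The chain-growth idea (place one residue at a time, choosing each new lattice site with a normalized Boltzmann probability and accumulating a Rosenbluth weight) can be sketched as follows on a simple cubic lattice with a single generic contact energy. The uniform contact energy, temperature, and lattice are illustrative assumptions; the paper uses residue-specific Miyazawa-Jernigan interaction energies and a refined lattice.

        import numpy as np

        MOVES = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                          [0, -1, 0], [0, 0, 1], [0, 0, -1]])

        def grow_chain(n_res, beta=1.0, eps=-1.0, seed=0):
            """Rosenbluth-style chain growth on a cubic lattice (sketch).

            Returns the chain coordinates and its Rosenbluth weight, or (None, 0.0)
            if the growth becomes trapped. eps is a uniform non-bonded contact energy.
            """
            rng = np.random.default_rng(seed)
            chain = [np.zeros(3, dtype=int)]
            occupied = {(0, 0, 0)}
            weight = 1.0
            for _ in range(n_res - 1):
                candidates, boltz = [], []
                for m in MOVES:
                    site = chain[-1] + m
                    if tuple(site) in occupied:
                        continue                       # enforce self-avoidance
                    # Non-bonded contacts with placed residues (exclude the bonded neighbor).
                    contacts = sum(tuple(site + mm) in occupied for mm in MOVES) - 1
                    candidates.append(site)
                    boltz.append(np.exp(-beta * eps * contacts))
                if not candidates:
                    return None, 0.0                   # dead end: chain is trapped
                boltz = np.array(boltz)
                weight *= boltz.sum()                  # accumulate the Rosenbluth weight
                pick = rng.choice(len(candidates), p=boltz / boltz.sum())
                chain.append(candidates[pick])
                occupied.add(tuple(candidates[pick]))
            return np.array(chain), weight

        # Toy usage: ensemble properties are weight-averaged over independent chains.
        chains = [grow_chain(20, seed=s) for s in range(100)]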

  18. Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.

  19. Calibration of FRESIM for Singapore expressway using genetic algorithm

    SciTech Connect

    Cheu, R.L.; Jin, X.; Srinivasa, D.; Ng, K.C.; Ng, Y.L.

    1998-11-01

    FRESIM is a microscopic time-stepping simulation model for freeway corridor traffic operations. To enable FRESIM to realistically simulate expressway traffic flow in Singapore, parameters that govern the movement of vehicles needed to be recalibrated for local traffic conditions. This paper presents the application of a genetic algorithm as an optimization method for finding a suitable combination of FRESIM parameter values. The calibration is based on field data collected on weekdays over a 5.8 km segment of the Ayer Rajar Expressway. Independent calibrations have been made for evening peak and midday off-peak traffic. The results show that the genetic algorithm is able to search for two sets of parameter values that enable FRESIM to produce 30-s loop-detector volume and speed (averaged across all lanes) closely matching the field data under two different traffic conditions. The two sets of parameter values are found to produce a consistently good match for data collected in different days.

  20. Evaluation of five non-rigid image registration algorithms using the NIREP framework

    NASA Astrophysics Data System (ADS)

    Wei, Ying; Christensen, Gary E.; Song, Joo Hyun; Rudrauf, David; Bruss, Joel; Kuhl, Jon G.; Grabowski, Thomas J.

    2010-03-01

    Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project (NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized databases of well-characterized images and standard evaluation statistics (methods) which are implemented in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency error and transitivity error) were used to evaluate and compare image registration performance. The results indicate that the Demons registration algorithm produced the best registration results with respect to the relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst for another illustrates the need to use multiple evaluation statistics to fully assess performance.

  1. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is a combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that, for all three benchmark problems used in this paper, the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm and as easy to use as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658

  2. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give a more connected view of a remote sensing image that contains broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data volumes have become very large. This leads to a decrease in algorithm running speed, or to the inability to obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
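
    A minimal sketch of the chunking idea behind such a parallelization is shown below using Python's multiprocessing module and SciPy's binary dilation: the image is split into horizontal strips with a one-pixel halo so that each worker's result matches the serial output. The strip count and 3x3 structuring element are illustrative assumptions; the paper's MPI/MP implementation is not reproduced here.

        import numpy as np
        from multiprocessing import Pool
        from scipy.ndimage import binary_dilation

        def _dilate_strip(args):
            """Dilate one padded strip, then drop its halo rows."""
            strip, top_halo, bottom_halo = args
            out = binary_dilation(strip, structure=np.ones((3, 3), bool))
            return out[top_halo:out.shape[0] - bottom_halo]

        def parallel_dilate(image, n_strips=4):
            """Binary dilation computed strip-by-strip in a process pool (sketch)."""
            bounds = np.linspace(0, image.shape[0], n_strips + 1, dtype=int)
            tasks = []
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                top = 1 if lo > 0 else 0                 # one-row halo suffices for 3x3
                bot = 1 if hi < image.shape[0] else 0
                tasks.append((image[lo - top:hi + bot], top, bot))
            with Pool(n_strips) as pool:
                strips = pool.map(_dilate_strip, tasks)
            return np.vstack(strips)

        if __name__ == "__main__":
            img = np.zeros((1000, 1000), bool)
            img[::50, :] = True                          # broken-line-like test pattern
            assert np.array_equal(parallel_dilate(img),
                                  binary_dilation(img, np.ones((3, 3), bool)))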

  3. Intelligent optimization algorithm for a large-scale structural design (Algorithme intelligent d'optimisation d'un design structurel de grande envergure)

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor's disc. These results are compared to results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method, and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on a problem produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor's disc problem. One drawback of GATE is its lower efficiency for highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.

  4. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The minimax problem is represented as: minimize t subject to f_i(x) - t <= 0 for all i. An active set strategy is designed with three classes of functions: active, semi-active, and non-active. This technique helps prevent the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used in which, at each iteration, there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
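
    The epigraph representation above can be handed directly to a general nonlinear programming routine, as in the hedged sketch below, which minimizes the auxiliary variable t subject to f_i(x) <= t using SciPy's SLSQP solver; the paper's own active-set and trust-region algorithm is not reproduced. The example functions and starting point are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Minimax example: minimize max_i f_i(x) for three smooth functions on R^2.
        fs = [
            lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
            lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2,
            lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2,
        ]

        # Augmented variables z = (x1, x2, t): minimize t subject to t - f_i(x) >= 0.
        objective = lambda z: z[-1]
        constraints = [{"type": "ineq", "fun": (lambda z, f=f: z[-1] - f(z[:-1]))}
                       for f in fs]

        z0 = np.array([0.0, 0.0, 10.0])                  # start with a generous t
        res = minimize(objective, z0, constraints=constraints, method="SLSQP")
        x_opt, t_opt = res.x[:-1], res.x[-1]
        print(x_opt, t_opt, max(f(x_opt) for f in fs))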

  5. A fast algorithm for sparse matrix computations related to inversion

    SciTech Connect

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices (up to twelve-fold in simulation). The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round

  6. Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance

    PubMed Central

    Murphy, Sean Patrick; Burkom, Howard

    2008-01-01

    Objective Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
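
    The two-stage decomposition described here, a data forecaster followed by an anomaly measure, is easy to prototype. The sketch below pairs a 7-day moving-average forecaster (Stage 1) with a Z-score anomaly measure (Stage 2) in the spirit of the recombinations evaluated, though the paper's exact weighting schemes (e.g., WEWD, HW) are not reproduced; the window length, baseline length, injected signal, and alert threshold are illustrative assumptions.

        import numpy as np

        def moving_average_forecast(series, window=7):
            """Stage 1: predict each day as the mean of the previous `window` days."""
            series = np.asarray(series, dtype=float)
            preds = np.full(len(series), np.nan)
            for t in range(window, len(series)):
                preds[t] = series[t - window:t].mean()
            return preds

        def zscore_anomaly(series, preds, baseline=28):
            """Stage 2: standardize forecast residuals against a sliding baseline."""
            resid = np.asarray(series, dtype=float) - preds
            scores = np.full(len(series), np.nan)
            for t in range(baseline, len(series)):
                hist = resid[t - baseline:t]
                hist = hist[~np.isnan(hist)]
                if len(hist) > 1 and hist.std(ddof=1) > 0:
                    scores[t] = (resid[t] - hist.mean()) / hist.std(ddof=1)
            return scores

        # Toy usage: flag days whose anomaly score exceeds a fixed threshold.
        rng = np.random.default_rng(0)
        counts = rng.poisson(20, 120).astype(float)
        counts[100:105] += np.array([5.0, 10.0, 15.0, 10.0, 5.0])   # injected outbreak
        scores = zscore_anomaly(counts, moving_average_forecast(counts))
        print(np.flatnonzero(scores > 3))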

  7. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using a systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, a systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues of composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
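
    For readers unfamiliar with the underlying decoder, the sketch below is a plain sequential software implementation of hard-decision Viterbi decoding for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal). It illustrates the add-compare-select recursion that a systolic array would parallelize, but none of the array architecture, strongly connected trellis decoding, or systolic priority queue machinery discussed in the paper.

    ```python
    def conv_encode(bits, K=3, gens=(0b111, 0b101)):
        """Rate-1/2 convolutional encoder; the state holds the K-1 most recent input bits."""
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state
            out.extend(bin(reg & g).count("1") % 2 for g in gens)
            state = reg >> 1
        return out

    def viterbi_decode(received, K=3, gens=(0b111, 0b101)):
        """Hard-decision Viterbi decoding with Hamming branch metrics (add-compare-select)."""
        n_states, n_out = 1 << (K - 1), len(gens)
        INF = float("inf")
        metrics = [0.0] + [INF] * (n_states - 1)        # encoder starts in the all-zero state
        paths = [[] for _ in range(n_states)]
        for t in range(0, len(received), n_out):
            block = received[t:t + n_out]
            new_metrics = [INF] * n_states
            new_paths = [None] * n_states
            for state in range(n_states):
                if metrics[state] == INF:
                    continue
                for b in (0, 1):                        # extend each survivor by one input bit
                    reg = (b << (K - 1)) | state
                    expected = [bin(reg & g).count("1") % 2 for g in gens]
                    metric = metrics[state] + sum(e != r for e, r in zip(expected, block))
                    nxt = reg >> 1
                    if metric < new_metrics[nxt]:       # compare-select: keep the best survivor
                        new_metrics[nxt] = metric
                        new_paths[nxt] = paths[state] + [b]
            metrics, paths = new_metrics, new_paths
        return paths[min(range(n_states), key=lambda s: metrics[s])]

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = conv_encode(message)
    coded[5] ^= 1                                       # flip one channel bit
    print(viterbi_decode(coded) == message)             # True: the single error is corrected
    ```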

  8. In vivo optic nerve head biomechanics: performance testing of a three-dimensional tracking algorithm

    PubMed Central

    Girard, Michaël J. A.; Strouthidis, Nicholas G.; Desjardins, Adrien; Mari, Jean Martial; Ethier, C. Ross

    2013-01-01

    Measurement of optic nerve head (ONH) deformations could be useful in the clinical management of glaucoma. Here, we propose a novel three-dimensional tissue-tracking algorithm designed to be used in vivo. We carry out preliminary verification of the algorithm by testing its accuracy and its robustness. An algorithm based on digital volume correlation was developed to extract ONH tissue displacements from two optical coherence tomography (OCT) volumes of the ONH (undeformed and deformed). The algorithm was tested by applying artificial deformations to a baseline OCT scan while manipulating speckle noise, illumination and contrast enhancement. Tissue deformations determined by our algorithm were compared with the known (imposed) values. Errors in displacement magnitude, orientation and strain decreased with signal averaging and were 0.15 µm, 0.15° and 0.0019, respectively (for optimized algorithm parameters). Previous computational work suggests that these errors are acceptable to provide in vivo characterization of ONH biomechanics. Our algorithm is robust to OCT speckle noise as well as to changes in illumination conditions, and increasing signal averaging can produce better results. This algorithm has the potential to be used to quantify ONH three-dimensional strains in vivo, which would be of benefit in the diagnosis and identification of risk factors in glaucoma. PMID:23883953
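
    As a toy illustration of the volume-correlation idea underlying such tracking (not the authors' implementation, which adds subvoxel interpolation, local windows and strain computation), the sketch below estimates an integer-voxel displacement between two volumes from the peak of an FFT-based cross-correlation.

    ```python
    import numpy as np

    def estimate_shift(moved, reference):
        """Return the integer (dz, dy, dx) such that `moved` is approximately `reference` rolled by it."""
        cross = np.fft.ifftn(np.fft.fftn(moved) * np.conj(np.fft.fftn(reference))).real
        peak = np.unravel_index(np.argmax(cross), cross.shape)
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, cross.shape))

    rng = np.random.default_rng(1)
    undeformed = rng.normal(size=(32, 32, 32))                  # synthetic speckle-like volume
    deformed = np.roll(undeformed, shift=(2, -3, 1), axis=(0, 1, 2))
    print(estimate_shift(deformed, undeformed))                 # (2, -3, 1)
    ```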

  9. Basis for a neuronal version of Grover's quantum algorithm.

    PubMed

    Clark, Kevin B

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church-Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical "subroutines" involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N (1/2))) needed to find some "target" solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca(2+) response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca(2+)-induced Ca(2+) release and the search (or signaling) velocity of Ca(2+) wave propagation. As chemical processes, such as the duration of Ca(2+) mobilization, become rate-limiting over interstore distances, Ca(2+) waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca(2+) diffusion coefficient, D (1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca(2+) signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional
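
    For reference, the sketch below is a plain state-vector simulation of the Grover search that the article builds its analogy on: a phase-flip oracle followed by inversion about the mean, iterated roughly (pi/4)*sqrt(N) times. It contains none of the Ca(2+)-signaling model itself; the problem size and target index are arbitrary.

    ```python
    import numpy as np

    def grover_search(N, target, iterations=None):
        if iterations is None:
            iterations = int(round(np.pi / 4 * np.sqrt(N)))
        amps = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition over N states
        for _ in range(iterations):
            amps[target] *= -1.0                      # oracle: phase shift on the target state
            amps = 2.0 * amps.mean() - amps           # diffusion: inversion about the mean
        return int(np.argmax(amps ** 2)), float(amps[target] ** 2)

    state, prob = grover_search(N=64, target=37)
    print(state, f"{prob:.3f}")   # finds state 37 with probability close to 1 after ~6 iterations
    ```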

  10. Basis for a neuronal version of Grover's quantum algorithm

    PubMed Central

    Clark, Kevin B.

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church–Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical “subroutines” involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N1/2)) needed to find some “target” solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca2+ response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca2+-induced Ca2+ release and the search (or signaling) velocity of Ca2+ wave propagation. As chemical processes, such as the duration of Ca2+ mobilization, become rate-limiting over interstore distances, Ca2+ waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca2+ diffusion coefficient, D1/2, matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca2+ signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional response

  11. The Effect of Algorithms on Copy Number Variant Detection

    PubMed Central

    Ely, Benjamin; Chi, Peter; Wang, Kenneth; Raskind, Wendy H.; Kim, Sulgi; Brkanac, Zoran; Yu, Chang-En

    2010-01-01

    Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed
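
    The two overlap definitions used in the study can be made concrete with a small sketch; the interval coordinates below are hypothetical.

    ```python
    # "Any overlap" versus "overlap of at least 40% of the smaller CNV".
    def overlap_bp(cnv_a, cnv_b):
        """CNVs are (start, end) intervals on the same chromosome; returns overlapping bases."""
        return max(0, min(cnv_a[1], cnv_b[1]) - max(cnv_a[0], cnv_b[0]))

    def any_overlap(cnv_a, cnv_b):
        return overlap_bp(cnv_a, cnv_b) > 0

    def overlap_40pct_smaller(cnv_a, cnv_b):
        smaller = min(cnv_a[1] - cnv_a[0], cnv_b[1] - cnv_b[0])
        return overlap_bp(cnv_a, cnv_b) >= 0.4 * smaller

    a, b = (1_000_000, 1_050_000), (1_040_000, 1_200_000)
    print(any_overlap(a, b), overlap_40pct_smaller(a, b))   # True, False (10 kb < 40% of 50 kb)
    ```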

  12. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.

  13. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  14. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  15. Genetic algorithms - A new technique for solving the neutron spectrum unfolding problem

    NASA Astrophysics Data System (ADS)

    Freeman, David W.; Ray Edwards, D.; Bolon, Albert E.

    1999-04-01

    A new technique utilizing genetic algorithms has been applied to the Bonner sphere neutron spectrum unfolding problem. Genetic algorithms are part of a relatively new field of "evolutionary" solution techniques that mimic living systems with computer-simulated "chromosome" solutions. Solutions mate and mutate to create better solutions. Several benchmark problems, considered representative of radiation protection environments, have been evaluated using the newly developed UMRGA code which implements the genetic algorithm unfolding technique. The results are compared with results from other well-established unfolding codes. The genetic algorithm technique works remarkably well and produces solutions with relatively high spectral qualities. UMRGA appears to be a superior technique in the absence of a priori data - it does not rely on "lucky" guesses of input spectra. Calculated personnel doses associated with the unfolded spectra match benchmark values within a few percent.

  16. Improvements on particle swarm optimization algorithm for velocity calibration in microseismic monitoring

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Wen, Jian; Chen, Xiaofei

    2015-07-01

    In this paper, we apply particle swarm optimization (PSO), an artificial intelligence technique, to velocity calibration in microseismic monitoring. We ran simulations with four 1-D layered velocity models and three different initial model ranges. The results using the basic PSO algorithm were reliable and accurate for simple models, but unsuccessful for complex models. We propose the staged shrinkage strategy (SSS) for the PSO algorithm. The SSS-PSO algorithm produced robust inversion results and had a fast convergence rate. We investigated the effects of PSO's velocity clamping factor in terms of the algorithm reliability and computational efficiency. The velocity clamping factor had little impact on the reliability and efficiency of basic PSO, whereas it had a large effect on the efficiency of SSS-PSO. Reassuringly, SSS-PSO exhibits marginal reliability fluctuations, which suggests that it can be confidently implemented.
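
    For context, the sketch below is a basic PSO loop with the velocity clamping factor discussed above, applied to a toy misfit function. The staged shrinkage strategy (SSS) and the layered velocity-model parameterization are not reproduced, and the parameter values are conventional defaults rather than the paper's settings.

    ```python
    import numpy as np

    def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, clamp=0.5):
        rng = np.random.default_rng(0)
        lo, hi = bounds
        vmax = clamp * (hi - lo)                       # velocity clamping factor
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([f(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -vmax, vmax)                # clamp particle velocities
            x = np.clip(x + v, lo, hi)
            vals = np.array([f(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, float(pbest_val.min())

    # Toy stand-in for a velocity-model misfit: a quadratic with its minimum at (2.0, 3.5)
    misfit = lambda m: (m[0] - 2.0) ** 2 + (m[1] - 3.5) ** 2
    print(pso_minimize(misfit, (np.array([0.0, 0.0]), np.array([6.0, 6.0]))))
    ```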

  17. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks - the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool in calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  18. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithm on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are looked at on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  19. A Synthesized Heuristic Task Scheduling Algorithm

    PubMed Central

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are chosen for scheduling. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms through randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance. PMID:25254244

  20. Search properties of some sequential decoding algorithms.

    NASA Technical Reports Server (NTRS)

    Geist, J. M.

    1973-01-01

    Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered, and their properties are compared. It is shown that the stack algorithm introduced by Zigangirov (1966) and by Jelinek (1969) is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old. However, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.

  1. Short read DNA fragment anchoring algorithm

    PubMed Central

    Wang, Wendi; Zhang, Peiheng; Liu, Xinchun

    2009-01-01

    Background The emerging next-generation sequencing method based on PCR technology boosts genome sequencing speed considerably, while the expense has also decreased. It has been utilized to address a broad range of bioinformatics problems. Limited by the reliable output sequence length of next-generation sequencing technologies, we are generally confined to studying gene fragments of 30~50 bps, which is relatively shorter than traditional gene fragment lengths. Anchoring gene fragments in a long reference sequence is an essential and prerequisite step for further assembly and analysis work. Due to the sheer number of fragments produced by next-generation sequencing technologies and the huge size of reference sequences, anchoring rapidly becomes a computational bottleneck. Results and discussion We compared algorithm efficiency on BLAT, SOAP and EMBF. The efficiency is defined as the count of total output results divided by the time consumed to retrieve them. The data show that our algorithm EMBF has a 3~4 times efficiency advantage over SOAP, and at least 150 times over BLAT. Moreover, when the reference sequence size is increased, the efficiency of SOAP degrades by as much as 30%, while EMBF shows a preferable increasing tendency. Conclusion In conclusion, we deem that EMBF is more suitable for the short fragment anchoring problem, where result completeness and accuracy are predominant and the reference sequences are relatively large. PMID:19208116

  2. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI and the SU algorithm developed by Swansea University) - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS) - 1 for POLDER over ocean (LOA) - 1 for synergetic retrieval (SYNAER by DLR) - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI) - 1 for GOMOS stratospheric extinction profile retrieval (BIRA) The first seven algorithms aim at the retrieval of the AOD. However, each of the algorithms differs in its approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm, several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  3. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  4. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2003-12-01

    Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver at the cost of poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  5. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2004-01-01

    Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver at the cost of poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  6. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line. The algorithm requires several parameters to operate correctly. From our own experiences we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUI's) to the VERI algorithms. We developed GUI's to control the VERI algorithm in a single pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single pass interface. Both the single pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce

  7. Ordered subsets algorithms for transmission tomography.

    PubMed

    Erdogan, H; Fessler, J A

    1999-11-01

    The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested. PMID:10588288

  8. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering, are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.

  9. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  10. SMOS first results over land

    NASA Astrophysics Data System (ADS)

    Kerr, Yann; Waldteufel, Philippe; Cabot, François; Richaume, Philippe; Jacquette, Elsa; Bitar, Ahmad Al; Mamhoodi, Ali; Delwart, Steven; Wigneron, Jean-Pierre

    2010-05-01

    The Soil Moisture and Ocean Salinity (SMOS) mission is ESA's (European Space Agency ) second Earth Explorer Opportunity mission, launched in November 2009. It is a joint programme between ESA CNES (Centre National d'Etudes Spatiales) and CDTI (Centro para el Desarrollo Tecnologico Industrial). SMOS carries a single payload, an L-band 2D interferometric radiometer in the 1400-1427 MHz protected band. This wavelength penetrates well through the atmosphere and hence the instrument probes the Earth surface emissivity. Surface emissivity can then be related to the moisture content in the first few centimeters of soil, and, after some surface roughness and temperature corrections, to the sea surface salinity over ocean. In order to prepare the data use and dissemination, the ground segment will produce level 1 and 2 data. Level 1 consists mainly of angular brightness temperatures while level 2 consists of geophysical products. In this context, a group of institutes prepared the soil moisture and ocean salinity Algorithm Theoretical Basis documents (ATBD) to be used to produce the operational algorithm. The principle of the soil moisture retrieval algorithm is based on an iterative approach which aims at minimizing a cost function given by the sum of the squared weighted differences between measured and modelled brightness temperature (TB) data, for a variety of incidence angles. This is achieved by finding the best suited set of the parameters which drive the direct TB model, e.g. soil moisture (SM) and vegetation characteristics. Despite the simplicity of this principle, the main reason for the complexity of the algorithm is that SMOS "pixels" can correspond to rather large, inhomogeneous surface areas whose contribution to the radiometric signal is difficult to model. Moreover, the exact description of pixels, given by a weighting function which expresses the directional pattern of the SMOS interferometric radiometer, depends on the incidence angle. The goal is to
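
    The retrieval principle stated above (iteratively minimizing a cost function given by the sum of squared weighted differences between measured and modelled multi-angle brightness temperatures) can be sketched with a deliberately simplified forward model. The model below is a toy stand-in, not the SMOS L2 radiative-transfer model, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def tb_model(params, theta_deg):
        """Toy multi-angle brightness-temperature model; params = (soil moisture, optical depth)."""
        sm, tau = params
        emissivity = 0.95 - 0.45 * sm                              # wetter soil -> lower emissivity (toy)
        attenuation = np.exp(-tau / np.cos(np.radians(theta_deg))) # vegetation attenuation vs. angle
        return 290.0 * (emissivity * attenuation + 0.9 * (1.0 - attenuation))

    theta = np.array([10.0, 20.0, 30.0, 40.0, 50.0])               # incidence angles (degrees)
    tb_meas = tb_model((0.25, 0.15), theta)                        # synthetic "measured" TB
    sigma = np.full(theta.size, 1.0)                               # per-angle radiometric uncertainty

    residuals = lambda p: (tb_meas - tb_model(p, theta)) / sigma   # weighted differences
    fit = least_squares(residuals, x0=[0.10, 0.05], bounds=([0.0, 0.0], [0.6, 1.0]))
    print(fit.x)                                                   # should recover values close to (0.25, 0.15)
    ```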

  11. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into the star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it able to reduce the star identification time. The logarithmic values of the plane distances between the navigation and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to make it able to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness by the proposed algorithm are better than those by the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant. No CXY1350(4)).

  12. An improved Richardson-Lucy algorithm based on local prior

    NASA Astrophysics Data System (ADS)

    Yongpan, Wang; Huajun, Feng; Zhihai, Xu; Qi, Li; Chaoyue, Dai

    2010-07-01

    Ringing is one of the most common disturbing artifacts in image deconvolution. With a totally known kernel, the standard Richardson-Lucy (RL) algorithm succeeds in many motion deblurring processes, but the resulting images still contain visible ringing. When the estimated kernel differs from the real one, the result of the standard RL iterative algorithm will be worse. To suppress the ringing artifacts caused by failures in the blur kernel estimation, this paper improves the RL algorithm based on the local prior. Firstly, the standard deviation of pixels in the local window is computed to find the smooth region, and the image gradient in the region is constrained to make its distribution consistent with the deblurred image gradient. Secondly, in order to suppress the ringing near the edge of a rigid body in the image, a new mask is obtained by computing the sharp edges of the image produced in the first step. If the kernel is large-scale, where the foreground is rigid and the background is smooth, this step can produce a significant inhibitory effect on ringing artifacts. Thirdly, the boundary constraint is strengthened if the boundary is relatively smooth. As a result of the steps above, high-quality deblurred images can be obtained even when the estimated kernels are not perfectly accurate. On the basis of blurred images and the related kernel information taken by the additional hardware, our approach proved to be effective.
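
    The standard Richardson-Lucy update that the paper builds on can be sketched as follows; the local-prior constraint, edge mask and boundary handling described above are not included, and the blur kernel and test image are illustrative.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
        """Standard RL iteration: estimate <- estimate * ((blurred / (estimate (*) psf)) (*) flipped psf)."""
        estimate = np.full_like(blurred, blurred.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(reblurred, eps)
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    rng = np.random.default_rng(3)
    image = np.zeros((64, 64))
    image[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0   # sparse bright points
    psf = np.full((1, 9), 1.0 / 9.0)                                # 9-pixel horizontal motion blur
    blurred = np.clip(fftconvolve(image, psf, mode="same"), 0.0, None)
    restored = richardson_lucy(blurred, psf)
    ```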

  13. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules since they are so often order sensitive. This is relevant to tasks like those of the Deep Space Network in that it allows the knowledge base to be incrementally developed and automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to optimally order the rules to minimize inference cycles. An algorithm was developed that optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, thus resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
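
    The producer/consumer ordering idea can be sketched with a hypothetical rule format (the SHINE data structures are not described in this abstract): each rule lists the facts its antecedents consume and its consequents produce, and rules are topologically ordered so that producers precede consumers, which is what reduces extra inference cycles.

    ```python
    from graphlib import TopologicalSorter

    # Hypothetical rules: each lists the facts it consumes (antecedents) and produces (consequents).
    rules = {
        "r_alarm":  {"consumes": {"temp_high", "pump_on"}, "produces": {"alarm"}},
        "r_temp":   {"consumes": {"sensor_raw"},           "produces": {"temp_high"}},
        "r_pump":   {"consumes": {"sensor_raw"},           "produces": {"pump_on"}},
        "r_notify": {"consumes": {"alarm"},                "produces": {"page_operator"}},
    }

    producers = {}
    for name, rule in rules.items():
        for fact in rule["produces"]:
            producers.setdefault(fact, set()).add(name)

    # Dependency graph: a rule depends on every rule that produces one of its antecedents.
    graph = {name: set().union(*(producers.get(f, set()) for f in rule["consumes"]))
             for name, rule in rules.items()}
    print(list(TopologicalSorter(graph).static_order()))
    # e.g. ['r_temp', 'r_pump', 'r_alarm', 'r_notify']
    ```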

  14. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: we can classify mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and is simpler. PMID:24688389

  15. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: we can classify mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and is simpler. PMID:24688389

  16. Producing Uniform Lesion Pattern in HIFU Ablation

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng; Kargl, Steven G.; Hwang, Joo Ha

    2009-04-01

    High intensity focused ultrasound (HIFU) is emerging as a modality for treatment of solid tumors. The temperature at the focus can reach over 65 °C, denaturing cellular proteins and resulting in coagulative necrosis. Typically, HIFU parameters are the same for each treated spot in most HIFU control systems. Because of thermal diffusion from nearby spots, the size of lesions will gradually become larger as the HIFU therapy progresses, which may cause insufficient treatment of initial spots and over-treatment of later ones. It is found that the produced lesion pattern also depends on the scanning pathway. From the viewpoint of the physician, creating uniform lesions and minimizing energy exposure are preferred in tumor ablation. An algorithm has been developed to adaptively determine the treatment parameters for every spot in a theoretical model in order to maintain similar lesion size throughout the HIFU therapy. In addition, the exposure energy needed using traditional raster scanning is compared with that of two other scanning pathways: spiral scanning from the center to the outside and from the outside to the center. The theoretical prediction and proposed algorithm were further evaluated using transparent gel phantoms as a target. Digital images of the lesions were obtained, quantified, and then compared with each other. Altogether, dynamically changing treatment parameters can improve the efficacy and safety of HIFU ablation.

  17. Fast decoding algorithms for coded aperture systems

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    2014-08-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.

  18. Geochemical effects of CO2 injection on produced water chemistry at an enhanced oil recovery site in the Permian Basin of northwest Texas, USA: Preliminary geochemical and Li isotope results

    NASA Astrophysics Data System (ADS)

    Pfister, S.; Gardiner, J.; Phan, T. T.; Macpherson, G. L.; Diehl, J. R.; Lopano, C. L.; Stewart, B. W.; Capo, R. C.

    2014-12-01

    Injection of supercritical CO2 for enhanced oil recovery (EOR) presents an opportunity to evaluate the effects of CO2 on reservoir properties and formation waters during geologic carbon sequestration. Produced water from oil wells tapping a carbonate-hosted reservoir at an active EOR site in the Permian Basin of Texas was sampled both before and after injection to evaluate geochemical and isotopic changes associated with water-rock-CO2 interaction. Produced waters from the carbonate reservoir rock are Na-Cl brines with TDS levels of 16.5-34 g/L and detectable H2S. These brines are potentially diluted with shallow groundwater from earlier EOR water flooding. Initial lithium isotope data (δ7Li) from pre-injection produced water in the EOR field fall within the range of Gulf of Mexico Coastal sedimentary basin and Appalachian basin values (Macpherson et al., 2014, Geofluids, doi: 10.1111/gfl.12084). Pre-injection produced water 87Sr/86Sr ratios (0.70788-0.70795) are consistent with mid-late Permian seawater/carbonate. CO2 injection took place in October 2013, and four of the wells sampled in May 2014 showed CO2 breakthrough. Preliminary comparison of pre- and post-injection produced waters indicates no significant changes in the major inorganic constituents following breakthrough, other than a possible drop in K concentration. Trace element and isotope data from pre- and post-breakthrough wells are currently being evaluated and will be presented.

  19. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
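
    The defining property quoted above (search regions of constant physical size, independent of latitude and grid resolution) can be sketched by generating region centers whose longitude stride widens toward the poles. The storm criteria and the temporal-correlation stage are omitted, and the region radius and latitude cap below are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    EARTH_RADIUS_KM = 6371.0

    def stride_search_centers(region_radius_km, lat_limit_deg=80.0):
        """Region centers with roughly constant physical spacing on the sphere."""
        dlat = np.degrees(region_radius_km / EARTH_RADIUS_KM)      # constant latitude stride
        centers = []
        for lat in np.arange(-lat_limit_deg, lat_limit_deg + dlat, dlat):
            dlon = dlat / max(np.cos(np.radians(lat)), 1e-6)       # widen stride toward the poles
            centers.extend((float(lat), float(lon)) for lon in np.arange(0.0, 360.0, dlon))
        return centers

    centers = stride_search_centers(region_radius_km=500.0)
    print(len(centers), centers[:3])
    ```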

  20. Harmony Search Algorithm for Word Sense Disambiguation.

    PubMed

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, while the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we apply the same methodology without the parser, but with a window of words. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368
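
    A generic Harmony Search skeleton over discrete sense choices is sketched below; the toy similarity tensor stands in for the jcn and adapted-Lesk scores of the paper's fitness function, and the HMCR/PAR/memory-size values are conventional assumptions rather than the authors' settings.

    ```python
    import numpy as np

    def harmony_search(n_words, n_senses, fitness, hms=10, hmcr=0.9, par=0.3, iters=500, seed=0):
        rng = np.random.default_rng(seed)
        memory = [rng.integers(0, n_senses, n_words) for _ in range(hms)]   # harmony memory
        scores = [fitness(h) for h in memory]
        for _ in range(iters):
            new = np.empty(n_words, dtype=int)
            for i in range(n_words):
                if rng.random() < hmcr:                       # memory consideration
                    new[i] = memory[rng.integers(hms)][i]
                    if rng.random() < par:                    # pitch adjustment: neighboring sense
                        new[i] = (new[i] + rng.choice([-1, 1])) % n_senses
                else:                                         # random consideration
                    new[i] = rng.integers(n_senses)
            s = fitness(new)
            worst = int(np.argmin(scores))
            if s > scores[worst]:                             # replace the worst harmony
                memory[worst], scores[worst] = new, s
        best = int(np.argmax(scores))
        return memory[best], scores[best]

    # Toy fitness: sum of pairwise "semantic similarity" between the chosen senses.
    rng = np.random.default_rng(1)
    sim = rng.random((5, 3, 5, 3))                            # sim[word_i, sense_i, word_j, sense_j]
    sim = (sim + sim.transpose(2, 3, 0, 1)) / 2               # make it symmetric
    fitness = lambda h: sum(sim[i, h[i], j, h[j]] for i in range(5) for j in range(i + 1, 5))
    print(harmony_search(n_words=5, n_senses=3, fitness=fitness))
    ```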

  1. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, while the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we apply the same methodology without the parser, but with a window of words. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368

  2. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  3. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  4. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
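
    For illustration, the two-point (ambient/hot blackbody) calibration step described above can be sketched as follows. This is a minimal sketch that assumes phase-corrected complex spectra and known blackbody temperatures; the function and variable names are illustrative and are not taken from the GIFTS processing software.

      import numpy as np

      # Physical constants (SI units)
      H = 6.62607015e-34   # Planck constant, J*s
      C = 2.99792458e8     # speed of light, m/s
      KB = 1.380649e-23    # Boltzmann constant, J/K

      def planck_radiance(wavenumber_cm, temp_k):
          """Planck blackbody radiance on a wavenumber grid (cm^-1), expressed per cm^-1."""
          nu = wavenumber_cm * 100.0   # convert cm^-1 to m^-1
          rad = 2.0 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * temp_k)) - 1.0)
          return rad * 100.0           # express per cm^-1 instead of per m^-1

      def two_point_calibration(c_scene, c_abb, c_hbb, wn, t_abb, t_hbb):
          """Calibrate a scene spectrum from ambient (ABB) and hot (HBB) blackbody views.

          c_scene, c_abb, c_hbb : phase-corrected complex spectra
          wn                    : wavenumber grid in cm^-1
          t_abb, t_hbb          : blackbody temperatures in kelvin
          """
          b_abb = planck_radiance(wn, t_abb)
          b_hbb = planck_radiance(wn, t_hbb)
          responsivity = (c_hbb - c_abb) / (b_hbb - b_abb)      # spectral responsivity
          radiance = ((c_scene - c_abb) / responsivity).real + b_abb
          return radiance, responsivity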

  5. Kalman plus weights: a time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
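
    A minimal sketch of the basic time scale equation with weights inversely proportional to the white-FM variances is given below. The clock predictions would come from the Kalman filter's frequency estimates; here they are simply an input array, and all names are illustrative rather than taken from the KPW implementation.

      import numpy as np

      def btse_correction(x_measured, x_predicted, white_fm_var):
          """One epoch of a BTSE-style weighted average.

          x_measured   : clock offsets versus a common reference at this epoch
          x_predicted  : predicted offsets (e.g., from Kalman frequency estimates)
          white_fm_var : white-FM variance of each clock
          """
          w = 1.0 / np.asarray(white_fm_var, dtype=float)
          w /= w.sum()     # weights inversely proportional to white-FM variance
          return np.dot(w, np.asarray(x_measured) - np.asarray(x_predicted))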

  6. IUS guidance algorithm gamma guide assessment

    NASA Technical Reports Server (NTRS)

    Bray, R. E.; Dauro, V. A.

    1980-01-01

    The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission, and suggested improvements are included.

  7. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Linden, Derek; Hornby, Greg; Lohn, Jason; Globus, Al; Krishunkumor, K.

    2006-01-01

    constrain the evolutionary design to a monopole wire antenna. The results of the runs produced requirements-compliant antennas that were subsequently fabricated and tested. The evolved antenna has a number of advantages with regard to power consumption, fabrication time and complexity, and performance. Lower power requirements result from achieving high gain across a wider range of elevation angles, thus allowing a broader range of angles over which maximum data throughput can be achieved. Since the evolved antenna does not require a phasing circuit, less design and fabrication work is required. In terms of overall work, the evolved antenna required approximately three person-months to design and fabricate whereas the conventional antenna required about five. Furthermore, when the mission was modified and new orbital parameters selected, a redesign of the antenna to new requirements was required. The evolutionary system was rapidly modified and a new antenna evolved in a few weeks. The evolved antenna was shown to be compliant to the ST5 mission requirements. It has an unusual organic looking structure, one that expert antenna designers would not likely produce. This antenna has been tested, baselined and is scheduled to fly this year. In addition to the ST5 antenna, our laboratory has evolved an S-band phased array antenna element design that meets the requirements for NASA's TDRS-C communications satellite scheduled for launch early next decade. A combination of fairly broad bandwidth, high efficiency and circular polarization at high gain made for another challenging design problem. We chose to constrain the evolutionary design to a crossed-element Yagi antenna. The specification called for two types of elements, one for receive only and one for transmit/receive. We were able to evolve a single element design that meets both specifications thereby simplifying the antenna and reducing testing and integration costs. The highest performance antenna found using a getic

  8. A convergent hybrid decomposition algorithm model for SVM training.

    PubMed

    Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco

    2009-06-01

    Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be very large and the Hessian matrix cannot be stored. To take this issue into account, a common strategy consists of using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potential of the approach. PMID:19435679

  9. An automated intensity-weighted brachytherapy seed localization algorithm

    SciTech Connect

    Whitehead, Gregory; Chang Zheng; Ji, Jim

    2008-03-15

    Brachytherapy has proven to be an effective treatment for various forms of cancer, whereby radioactive material is inserted directly into the body to maximize dosage to malignant tumors while preserving healthy tissue. In order to validate the preoperative or intraoperative dosimetric model, a postimplant evaluation procedure is needed to ensure that the locations of the implanted seeds are consistent with the planning stage. Moreover, development of an automated algorithm for seed detection and localization is necessary to expedite the postimplant evaluation process and reduce human error. Most previously reported algorithms have performed binary transforms on images before attempting to localize seeds. Furthermore, traditional approaches based upon three-dimensional seed shape parameterization and matching require high resolution imaging. The authors propose a new computationally efficient algorithm for automatic seed localization for full three-dimensional, low-resolution data sets that directly applies voxel intensity to the estimation of both seed centroid location and angular seed orientation. Computer simulations, phantom studies, and in vivo computed tomography prostate seed imaging results show that the proposed algorithm can produce reliable results even for low-resolution images.
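
    The intensity-weighted estimation of a seed's centroid and angular orientation can be illustrated for a single candidate voxel cluster as in the sketch below, which uses an intensity-weighted mean and principal axis. This is an illustration of the general idea, not the authors' implementation, and the array names are assumptions.

      import numpy as np

      def seed_centroid_and_axis(volume, mask):
          """Estimate centroid and long-axis direction of one seed from voxel intensities.

          volume : 3-D array of CT intensities
          mask   : boolean array selecting the voxels of one candidate seed cluster
          """
          coords = np.argwhere(mask).astype(float)     # (n, 3) voxel indices
          weights = volume[mask].astype(float)
          weights /= weights.sum()
          centroid = weights @ coords                  # intensity-weighted centroid
          centered = coords - centroid
          cov = (centered * weights[:, None]).T @ centered
          eigvals, eigvecs = np.linalg.eigh(cov)       # leading eigenvector ~ seed orientation
          axis = eigvecs[:, np.argmax(eigvals)]
          return centroid, axis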

  10. Masseter segmentation using an improved watershed algorithm with unsupervised classification.

    PubMed

    Ng, H P; Ong, S H; Foong, K W C; Goh, P S; Nowinski, W L

    2008-02-01

    The watershed algorithm always produces a complete division of the image. However, it is susceptible to over-segmentation and sensitive to false edges. In medical images this leads to unfavorable representations of the anatomy. We address these drawbacks by introducing automated thresholding and post-segmentation merging. The automated thresholding step is based on the histogram of the gradient magnitude map while post-segmentation merging is based on a criterion which measures the similarity in intensity values between two neighboring partitions. Our improved watershed algorithm is able to merge more than 90% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced. To further improve the segmentation results, we make use of K-means clustering to provide an initial coarse segmentation of the highly textured image before the improved watershed algorithm is applied to it. When applied to the segmentation of the masseter from 60 magnetic resonance images of 10 subjects, the proposed algorithm achieved an overlap index (kappa) of 90.6%, and was able to merge 98% of the initial partitions on average. The segmentation results are comparable to those obtained using the gradient vector flow snake. PMID:17950265
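
    A rough sketch of the overall pipeline (K-means coarse classification, gradient-based watershed, and merging of neighboring partitions with similar mean intensities) is given below using scikit-learn and scikit-image (version 0.20 or later for skimage.graph). The percentile and merging thresholds, and the use of a region adjacency graph for the merging step, are illustrative assumptions rather than the authors' exact criterion.

      import numpy as np
      from sklearn.cluster import KMeans
      from skimage import filters, segmentation, graph

      def kmeans_watershed_segment(image, n_clusters=3, grad_percentile=60.0, merge_thresh=0.05):
          """Segment a grayscale image (scaled to [0, 1]) and merge similar regions."""
          # 1. Coarse unsupervised classification of the intensity values
          km_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
              image.reshape(-1, 1)).reshape(image.shape)

          # 2. Watershed on the gradient magnitude, seeded from low-gradient pixels
          gradient = filters.sobel(image)
          markers = np.zeros(image.shape, dtype=int)
          low_grad = gradient < np.percentile(gradient, grad_percentile)
          markers[low_grad] = km_labels[low_grad] + 1
          ws_labels = segmentation.watershed(gradient, markers)

          # 3. Merge neighboring partitions whose mean intensities are similar
          rgb = np.dstack([image] * 3)                 # the RAG helper expects a color image
          rag = graph.rag_mean_color(rgb, ws_labels)
          return graph.cut_threshold(ws_labels, rag, merge_thresh)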

  11. Presence of Shiga toxin-producing Escherichia coli O-groups in small and very-small beef-processing plants and resulting ground beef detected by a multiplex polymerase chain reaction assay.

    PubMed

    Svoboda, Amanda L; Dudley, Edward G; Debroy, Chitrita; Mills, Edward W; Cutter, Catherine N

    2013-09-01

    Shiga toxin-producing Escherichia coli (STEC) are associated with foodborne illnesses, including hemolytic uremic syndrome in humans. Cattle and consequently, beef products are considered a major source of STEC. E. coli O157:H7 has been regulated as an adulterant in ground beef since 1996. The United States Department of Agriculture Food Safety and Inspection Service began regulating six additional STEC (O145, O121, O111, O103, O45, and O26) as adulterants in beef trim and raw ground beef in June 2012. Little is known about the presence of STEC in small and very-small beef-processing plants. Therefore, we propose to determine whether small and very-small beef-processing plants are a potential source of non-O157:H7 STEC. Environmental swabs, carcass swabs, hide swabs, and ground beef from eight small and very-small beef-processing plants were obtained from October 2010 to December 2011. A multiplex polymerase chain reaction assay was used to determine the presence of STEC O-groups: O157, O145, O121, O113, O111, O103, O45, and O26 in the samples. Results demonstrated that 56.6% (154/272) of the environmental samples, 35.0% (71/203) of the carcass samples, 85.2% (23/27) of the hide samples, and 17.0% (20/118) of the ground beef samples tested positive for one or more of the serogroups. However, only 7.4% (20/272) of the environmental samples, 4.4% (9/203) of the carcass samples, and 0% (0/118) ground beef samples tested positive for both the serogroup and Shiga toxin genes. Based on this survey, small and very-small beef processors may be a source of non-O157:H7 STEC. The information from this study may be of interest to regulatory officials, researchers, public health personnel, and the beef industry that are interested in the presence of these pathogens in the beef supply. PMID:23742295

  12. A Parallel Newton-Krylov-Schur Algorithm for the Reynolds-Averaged Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Osusky, Michal

    Aerodynamic shape optimization and multidisciplinary optimization algorithms have the potential not only to improve conventional aircraft, but also to enable the design of novel configurations. By their very nature, these algorithms generate and analyze a large number of unique shapes, resulting in high computational costs. In order to improve their efficiency and enable their use in the early stages of the design process, a fast and robust flow solution algorithm is necessary. This thesis presents an efficient parallel Newton-Krylov-Schur flow solution algorithm for the three-dimensional Navier-Stokes equations coupled with the Spalart-Allmaras one-equation turbulence model. The algorithm employs second-order summation-by-parts (SBP) operators on multi-block structured grids with simultaneous approximation terms (SATs) to enforce block interface coupling and boundary conditions. The discrete equations are solved iteratively with an inexact-Newton method, while the linear system at each Newton iteration is solved using the flexible Krylov subspace iterative method GMRES with an approximate-Schur parallel preconditioner. The algorithm is thoroughly verified and validated, highlighting the correspondence of the current algorithm with several established flow solvers. The solution for a transonic flow over a wing on a mesh of medium density (15 million nodes) shows good agreement with experimental results. Using 128 processors, deep convergence is obtained in under 90 minutes. The solution of transonic flow over the Common Research Model wing-body geometry with grids with up to 150 million nodes exhibits the expected grid convergence behavior. This case was completed as part of the Fifth AIAA Drag Prediction Workshop, with the algorithm producing solutions that compare favourably with several widely used flow solvers. The algorithm is shown to scale well on over 6000 processors. The results demonstrate the effectiveness of the SBP-SAT spatial discretization, which can

  13. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  14. A sustainable genetic algorithm for satellite resource allocation

    NASA Technical Reports Server (NTRS)

    Abbott, R. J.; Campbell, M. L.; Krenz, W. C.

    1995-01-01

    A hybrid genetic algorithm is used to schedule tasks for 8 satellites, which can be modelled as a robot whose task is to retrieve objects from a two dimensional field. The objective is to find a schedule that maximizes the value of objects retrieved. Typical of the real-world tasks to which this corresponds is the scheduling of ground contacts for a communications satellite. An important feature of our application is that the amount of time available for running the scheduler is not necessarily known in advance. This requires that the scheduler produce reasonably good results after a short period but that it also continue to improve its results if allowed to run for a longer period. We satisfy this requirement by developing what we call a sustainable genetic algorithm.

  15. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; the comparison shows that it produces a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs on the order of N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
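
    The square-root-of-N scaling quoted above is the standard Grover iteration count; a short illustration (not the paper's implementation) is:

      import math

      def grover_iterations(n_items):
          """Optimal number of Grover iterations to find one marked item among n_items."""
          return math.floor(math.pi / 4.0 * math.sqrt(n_items))

      # Searching 10,000 templates takes about 78 Grover iterations instead of ~10,000 lookups.
      print(grover_iterations(10_000))   # 78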

  16. Sequential and Parallel Algorithms for Spherical Interpolation

    NASA Astrophysics Data System (ADS)

    De Rossi, Alessandra

    2007-09-01

    Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
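
    As an illustration of the second of the two local methods, inverse distance weighting on the sphere with great-circle distances can be sketched as follows; a spherical radial basis function interpolant would replace the inverse-distance weights with a radial kernel. The function names are illustrative.

      import numpy as np

      def great_circle_distance(lon1, lat1, lon2, lat2):
          """Angular (great-circle) distance in radians between points given in radians."""
          cos_d = (np.sin(lat1) * np.sin(lat2) +
                   np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
          return np.arccos(np.clip(cos_d, -1.0, 1.0))

      def idw_on_sphere(lon_pts, lat_pts, values, lon_q, lat_q, power=2.0, eps=1e-12):
          """Inverse distance weighted interpolation at one query point on the sphere."""
          values = np.asarray(values, dtype=float)
          d = great_circle_distance(lon_pts, lat_pts, lon_q, lat_q)
          if np.any(d < eps):                  # the query coincides with a data point
              return values[np.argmin(d)]
          w = 1.0 / d**power
          return np.sum(w * values) / np.sum(w)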

  17. An Upperbound to the Performance of Ranked-Output Searching: Optimal Weighting of Query Terms Using A Genetic Algorithm.

    ERIC Educational Resources Information Center

    Robertson, Alexander M.; Willett, Peter

    1996-01-01

    Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…

  18. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.

  19. An Improved Neutron Transport Algorithm for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.

    2010-01-01

    Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.

  20. Programming environment for parallel vision algorithms. Annual report, February 1986-February 1987

    SciTech Connect

    Brown, C.

    1987-02-01

    During the second year of the award period, the Computer Science Department of the University of Rochester continued work in: 1) systems support algorithms, 2) the Butterfly programming environment, and 3) vision applications. This research produced several internal and external reports as well as much exportable code. The University of Rochester also employed DARPA Parallel Architecture Benchmark problems to test different algorithms using four different Butterfly programming environments. These tests produced several interesting results and demonstrated that the Butterfly architecture is a flexible general-purpose architecture that can be effectively programmed by non-experts, using tools developed at BBN and Rochester. The University of Rochester is continuing to study the issues and concerns surrounding the effective implementation of parallel algorithms.

  1. Development of the theory and algorithms for synthesis of reflector antenna systems

    NASA Astrophysics Data System (ADS)

    Oliker, Vladimir

    1995-01-01

    The main objective of this work was research and development of the theory and constructive computational algorithms for synthesis of single and dual reflector antenna systems in the geometrical optics approximation. During the contracting period a variety of new analytic techniques and computational algorithms were developed. In particular, for single and dual reflector antenna systems, conditions for solvability of the synthesis equations have been established. Numerical algorithms for computing surface data of the reflectors have been developed and successfully tested. In addition, efficient techniques have been developed for computing radiation patterns produced by reflections/refractions off surfaces with arbitrary geometry. These techniques can be used for geometrical optics analysis of complex geometric structures such as aircraft. They can also be applied to determine effectively the aperture excitations required to produce specified fields at given observation points. The results have a variety of applications in military, civilian, and commercial sectors.

  2. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  3. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2015-01-01

    This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
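
    A generic sketch of the nonlinear weighted least-squares step is given below as a Gauss-Newton iteration with a numerical Jacobian. The surface pressure model, the state vector (e.g., density and two wind components), and the weights are placeholders; this is not the flight algorithm and it omits the prior information supplied by the propagated atmosphere model.

      import numpy as np

      def gauss_newton_wls(residual_fn, x0, weights, n_iter=10):
          """Solve a nonlinear weighted least-squares problem by Gauss-Newton iteration.

          residual_fn : maps a state vector x to residuals (measured minus modeled pressures)
          x0          : initial state estimate, e.g., [density, wind_north, wind_east]
          weights     : measurement weights (inverse variances of the pressure ports)
          """
          x = np.asarray(x0, dtype=float)
          W = np.diag(np.asarray(weights, dtype=float))
          for _ in range(n_iter):
              r = residual_fn(x)
              J = np.empty((r.size, x.size))           # numerical Jacobian of the residuals
              for j in range(x.size):
                  dx = np.zeros_like(x)
                  dx[j] = 1e-6 * max(1.0, abs(x[j]))
                  J[:, j] = (residual_fn(x + dx) - r) / dx[j]
              # Weighted normal equations: (J^T W J) delta = -J^T W r
              delta = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
              x = x + delta
          return x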

  4. Evaluation of Various Radar Data Quality Control Algorithms Based on Accumulated Radar Rainfall Statistics

    NASA Technical Reports Server (NTRS)

    Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation from spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.

  5. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.

  6. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406

  7. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  8. Bayesian fusion of algorithms for the robust estimation of respiratory rate from the photoplethysmogram.

    PubMed

    Zhu, Tingting; Pimentel, Marco A F; Clifford, Gari D; Clifton, David A

    2015-08-01

    Respiratory rate (RR) is a key vital sign that is monitored to assess the health of patients. With the increasing availability of wearable devices, it is important that RR is extracted in a robust and noninvasive manner from the photoplethysmogram (PPG) acquired from pulse oximeters and similar devices. However, existing methods of noninvasive RR estimation suffer from a lack of robustness, with the result that they are not used in clinical practice. We propose a Bayesian approach to fusing the outputs of many RR estimation algorithms to improve the overall robustness of the resulting estimates. Our method estimates the accuracy of each algorithm and jointly infers the fused RR estimate in an unsupervised manner, with the aim of producing a fused estimate that is more accurate than any of the algorithms taken individually. This approach is novel in the literature, which has so far concentrated on attempting to produce single algorithms for RR estimation, without resulting in systems that have penetrated into clinical practice. A publicly-available dataset, Capnobase, was used to validate the performance of our proposed model. Our proposed methodology was compared to the best-performing individual algorithm from the literature, as well as to the results of using common fusing methodologies such as averaging, median, and maximum likelihood (ML). Our proposed methodology resulted in a mean-absolute-error (MAE) of 1.98 breaths per minute (bpm), outperforming other fusion strategies (mean fusion: 2.95 bpm; median fusion: 2.33 bpm; ML: 2.30 bpm). It also outperformed the best single algorithm (2.39 bpm) and the benchmark algorithm proposed for use with Capnobase (2.22 bpm). We conclude that the proposed fusion methodology can be used to combine RR estimates from multiple sources derived from the PPG, to infer a reliable and robust estimation of the respiratory rate in an unsupervised manner. PMID:26737693
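
    The simple fusion baselines cited above (mean and median over the per-algorithm estimates) can be written in a few lines; the Bayesian fusion model itself is more involved and is not reproduced here. The numbers in the example are made up.

      import numpy as np

      def fuse_rr_estimates(estimates):
          """Fuse respiratory-rate estimates (breaths/min) from several algorithms.

          estimates : per-algorithm RR estimates for one analysis window
          Returns (mean_fusion, median_fusion).
          """
          estimates = np.asarray(estimates, dtype=float)
          return estimates.mean(), np.median(estimates)

      # Three algorithms disagree; the median is robust to the outlier.
      print(fuse_rr_estimates([14.2, 15.1, 22.8]))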

  9. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  10. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
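
    A minimal sketch of the majority-voting ensemble is shown below, with each RL algorithm represented only by its state-action value table; the tie-breaking rule and the names are illustrative assumptions.

      import numpy as np

      def majority_vote_action(q_tables, state, rng=None):
          """Choose the action selected by the most RL algorithms (majority voting).

          q_tables : list of arrays of shape (n_states, n_actions), one per RL
                     algorithm (e.g., Q-learning, Sarsa, actor-critic, ...)
          """
          rng = rng or np.random.default_rng()
          n_actions = q_tables[0].shape[1]
          votes = np.zeros(n_actions)
          for q in q_tables:
              votes[np.argmax(q[state])] += 1.0    # each algorithm votes for its greedy action
          best = np.flatnonzero(votes == votes.max())
          return int(rng.choice(best))             # break ties at random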

  11. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053

  12. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  13. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
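
    One common way to compute relative position and attitude from matched 3-D feature points, such as those a flash LIDAR can provide, is the SVD-based rigid registration solution sketched below. This is an illustrative approach, not necessarily either of the two POSE algorithms tested in the paper.

      import numpy as np

      def pose_from_points(model_pts, measured_pts):
          """Least-squares rigid transform (R, t) mapping model_pts onto measured_pts.

          model_pts, measured_pts : arrays of shape (n, 3) of matched target features
          Returns the rotation matrix R (attitude) and translation t (relative position).
          """
          mc = model_pts.mean(axis=0)
          sc = measured_pts.mean(axis=0)
          H = (model_pts - mc).T @ (measured_pts - sc)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection solution
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = sc - R @ mc
          return R, t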

  14. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  15. A discrete artificial bee colony algorithm for detecting transcription factor binding sites in DNA sequences.

    PubMed

    Karaboga, D; Aslan, S

    2016-01-01

    The great majority of biological sequences share significant similarity with other sequences as a result of evolutionary processes, and identifying these sequence similarities is one of the most challenging problems in bioinformatics. In this paper, we present a discrete artificial bee colony (ABC) algorithm, which is inspired by the intelligent foraging behavior of real honey bees, for the detection of highly conserved residue patterns or motifs within sequences. Experimental studies on three different data sets showed that the proposed discrete model, by adhering to the fundamental scheme of the ABC algorithm, produced competitive or better results than other metaheuristic motif discovery techniques. PMID:27173272

  16. Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Irhamah

    2010-11-01

    This paper presents the problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers, and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world problem (a case) is used in this research. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these produced the best possible results. The problem data are inspired by a real case of VRPSD in waste collection. Results from the experiment show the advantages of the proposed algorithms, namely their robustness and better solution quality.

  17. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem based on the theories of AFSA. Experimental results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization algorithm, it achieves good performance and has broad prospects for application.

  18. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
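
    A minimal sketch of such generic operators, here taking axis-aligned bounding boxes as the basic geometric element, might look like the following; the element type and function names are illustrative.

      def boxes_overlap(a, b):
          """True if two boxes (xmin, ymin, xmax, ymax) share any area."""
          return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

      def boxes_intersection(a, b):
          """Intersection box of two overlapping boxes, or None if they are separate."""
          if not boxes_overlap(a, b):
              return None
          return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

      def boxes_separation(a, b):
          """Minimum gap between two boxes (0.0 if they overlap or touch)."""
          dx = max(b[0] - a[2], a[0] - b[2], 0.0)
          dy = max(b[1] - a[3], a[1] - b[3], 0.0)
          return (dx * dx + dy * dy) ** 0.5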

  19. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.

  20. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al. which obtains state-of-the-art denoising results. The strength of this algorithm is to exploit the repetitive character of the image in order to denoise the image unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous amount of weight computations, the original algorithm has a high computational cost. An improvement of image quality towards the original algorithm is to ignore the contributions from dissimilar windows. Even though their weights are very small at first sight, the new estimated pixel value can be severely biased due to the many small contributions. This bad influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using the preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we will derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR values for images containing a lot of repetitive structures such as textures: the denoised images are much sharper and contain less artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures such as intra-frame super-resolution or detection of digital image forgery.

  1. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique widely used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and the classical lossless algorithms give poor efficiencies in their compression. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results obtained show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm, preserving a near lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible (75%), within this algorithm, can be achieved with good precision. PMID:10850759

  2. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not need complete projection data. They can therefore be applied to the inspection of BGA-based solder joints, which is usually performed with x-ray laminography and produces a worse reconstructed image than the former, but their convergence speed is low. This paper explores a projection-classification-based method that separates the object into three parts, i.e. solute, solution, and air, and assumes that the reconstruction speed decreases linearly from the solution to the other two parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method. The fewer the projection images, the greater the observed advantage.

  3. Algorithm for identifying and separating beats from arterial pulse records

    PubMed Central

    Treo, Ernesto F; Herrera, Myriam C; Valentinuzzi, Max E

    2005-01-01

    positive/negative criteria. Synchronization ability was measured through the coefficient of variation and the median value of correlation for each patient. These parameters were assessed by means of Friedman's ANOVA and Kendall Concordance test. Results Sensitivity was 97% and 91% for the two operators, respectively, while accuracy was zero for both of them. The synchronism variability analysis was significant (p < 0.01) for the two statistics, showing that the algorithm produced the best result. Conclusion The proposed algorithm showed good performance as expressed by its high sensitivity. The correlation analysis demonstrated that, from the synchronism point of view, the algorithm performed the best detection. Patients with marked arrhythmic processes are not good candidates for this kind of analysis. At most, they would be singled out by the algorithm and, thereafter, checked by an operator. PMID:16095532

  4. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  5. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.

  6. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  7. Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization

    NASA Astrophysics Data System (ADS)

    Sang, Hongyan; Gao, Liang; Pan, Quanke

    2012-09-01

    Unlike a traditional flowshop problem where a job is assumed to be indivisible, in the lot-streaming flowshop problem, a job is allowed to overlap its operations between successive machines by splitting it into a number of smaller sub-lots and moving the completed portion of the sub-lots to the downstream machine. In this way, the production is accelerated. This paper presents a discrete artificial bee colony (DABC) algorithm for a lot-streaming flowshop scheduling problem with total flowtime criterion. Unlike the basic ABC algorithm, the proposed DABC algorithm represents a solution as a discrete job permutation. An efficient initialization scheme based on the extended Nawaz-Enscore-Ham heuristic is utilized to produce an initial population with a certain level of quality and diversity. Employed and onlooker bees generate new solutions in their neighborhood, whereas scout bees generate new solutions by applying insert and swap operators to the best solution found so far. Moreover, a simple but effective local search is embedded in the algorithm to enhance local exploitation capability. A comparative experiment is carried out with the existing discrete particle swarm optimization, hybrid genetic algorithm, threshold accepting, simulated annealing and ant colony optimization algorithms based on a total of 160 randomly generated instances. The experimental results show that the proposed DABC algorithm is quite effective for the lot-streaming flowshop with total flowtime criterion in terms of searching quality, robustness and effectiveness. This research provides a reference for optimization research on the lot-streaming flowshop.

  8. Genetic algorithms and their use in Geophysical Problems

    SciTech Connect

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free
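
    A toy sketch of the parameter choices highlighted above: tournament selection, one-point crossover, and a mutation rate of roughly half the inverse of the population size. The real-valued encoding, population size, and fitness function are illustrative assumptions, not the inversions described in the record.

        import random

        def tournament(pop, fitness, k=2):
            # Tournament selection: return the fitter of k randomly drawn individuals.
            return max(random.sample(pop, k), key=fitness)

        def evolve(fitness, n_genes, pop_size=40, generations=100):
            pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
            p_mut = 0.5 / pop_size          # low mutation rate: about half the inverse population size
            for _ in range(generations):
                nxt = []
                while len(nxt) < pop_size:
                    a, b = tournament(pop, fitness), tournament(pop, fitness)
                    cut = random.randrange(1, n_genes)          # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [g if random.random() > p_mut else random.random() for g in child]
                    nxt.append(child)
                pop = nxt
            return max(pop, key=fitness)

        # Toy usage: maximize the negative squared distance to 0.5 in every gene.
        best = evolve(lambda ind: -sum((g - 0.5) ** 2 for g in ind), n_genes=8)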

  9. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.

  10. ENAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Yang, Zhen-wen; Shen, Tian-shuang; Chen, Bo

    2012-11-01

    Images of objects are inevitably degraded when space-based imaging systems work in an atmospheric turbulence environment, such as those used in astronomy, remote sensing, and so on. The observed images are seriously blurred, and restoration is required to reconstruct the turbulence-degraded images. In order to enhance the performance of image restoration, a novel enhanced nonnegativity and support constraints recursive inverse filtering (ENAS-RIF) algorithm was presented, which was based on a reliable support region and an enhanced cost function. Firstly, the curvelet denoising algorithm was used to weaken image noise. Secondly, reliable estimation of the object support region was used to accelerate algorithm convergence. Then, the average gray level was set as the gray level of the image background pixels. Finally, an object construction limit and the logarithm function were added to enhance algorithm stability. The experimental results prove that the convergence speed of the novel ENAS-RIF algorithm is faster than that of the NAS-RIF algorithm and that it performs better in image restoration.

  11. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
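
    For context, a minimal sketch of the standard OMP greedy loop that LPMP modifies (LPMP matches whole Lorentzian peaks rather than single dictionary atoms). The random Gaussian dictionary, sparsity level, and test vector below are illustrative assumptions.

        import numpy as np

        def omp(A, y, n_nonzero):
            # Plain orthogonal matching pursuit: greedily pick the column of A most
            # correlated with the residual, then re-fit the selected columns by least squares.
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(n_nonzero):
                idx = int(np.argmax(np.abs(A.T @ residual)))
                if idx not in support:
                    support.append(idx)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x[support] = coef
            return x

        # Toy usage: recover a 2-sparse vector from random measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256))
        x_true = np.zeros(256)
        x_true[[10, 50]] = [1.5, -2.0]
        x_hat = omp(A, A @ x_true, n_nonzero=2)
        print(np.flatnonzero(x_hat))   # expected: [10 50]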

  12. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  13. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  14. Algorithm to search for genomic rearrangements

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2013-10-01

    The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements have been described: the Smith-Waterman algorithm, as well as a new method of searching for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program in client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, and they are prepared to compare the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
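
    A compact sketch of the classic Knuth-Morris-Pratt search mentioned above, here used to locate a short marker string in a nucleotide sequence; the marker and sequence are made-up examples, not the cucumber data.

        def kmp_search(text, pattern):
            # Knuth-Morris-Pratt: return all start positions of pattern in text.
            # Failure table: length of the longest proper prefix that is also a suffix.
            fail = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k
            hits, k = [], 0
            for i, c in enumerate(text):
                while k and c != pattern[k]:
                    k = fail[k - 1]
                if c == pattern[k]:
                    k += 1
                if k == len(pattern):
                    hits.append(i - k + 1)
                    k = fail[k - 1]
            return hits

        # Toy usage: locate a short genetic marker in a nucleotide string.
        print(kmp_search("ACGTACGTGACGT", "ACGT"))   # [0, 4, 9]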

  15. Unifying parametrized VLSI Jacobi algorithms and architectures

    NASA Astrophysics Data System (ADS)

    Deprettere, Ed F. A.; Moonen, Marc

    1993-11-01

    Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implement several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a `universal' processing unit. This result is attributed to a gracious interplay between algorithmic and architectural engineering.

  16. Practical algorithmic probability: an image inpainting example

    NASA Astrophysics Data System (ADS)

    Potapov, Alexey; Scherbakov, Oleg; Zhdanov, Innokentii

    2013-12-01

    The possibility of practical application of algorithmic probability is analyzed on an example of the image inpainting problem, which precisely corresponds to the prediction problem. Such consideration is fruitful both for the theory of universal prediction and for practical image inpainting methods. Efficient application of algorithmic probability implies that its computation is essentially optimized for some specific data representation. In this paper, we considered one image representation, namely the spectral representation, for which an image inpainting algorithm is proposed based on the spectrum entropy criterion. This algorithm showed promising results in spite of the very simple representation. The same approach can be used for introducing an ALP-based criterion for more powerful image representations.

  17. Generation of attributes for learning algorithms

    SciTech Connect

    Hu, Yuh-Jyh; Kibler, D.

    1996-12-31

    Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step applied before standard machine learning algorithms. We present results which demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron, and backpropagation.

  18. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied by using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA), and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, and SOM networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  19. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to parameters, but its results were worse than those from the majority of parameter sets used with the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  20. Investigation of range extension with a genetic algorithm

    SciTech Connect

    Austin, A. S., LLNL

    1998-03-04

    Range optimization is one of the tasks associated with the development of cost-effective, stand-off, air-to-surface munitions systems. The search for the optimal input parameters that will result in the maximum achievable range often employs conventional Monte Carlo techniques. Monte Carlo approaches can be time-consuming, costly, and insensitive to mutually dependent parameters and epistatic parameter effects. An alternative search and optimization technique is available in genetic algorithms. In the experiments discussed in this report, a simplified platform motion simulator was the fitness function for a genetic algorithm. The parameters to be optimized were the inputs to this motion generator, and the simulator's output (terminal range) was the fitness measure. The parameters of interest were initial launch altitude, initial launch speed, wing angle-of-attack, and engine ignition time. The parameter values the GA produced were validated by Monte Carlo investigations employing a full-scale six-degree-of-freedom (6 DOF) simulation. The best results produced by Monte Carlo processes using values based on the GA-derived parameters were within 1% of the ranges generated by the simplified model using the evolved parameter values. This report has five sections. Section 2 discusses the motivation for the range extension investigation and reviews the surrogate flight model developed as a fitness function for the genetic algorithm tool. Section 3 details the representation and implementation of the task within the genetic algorithm framework. Section 4 discusses the results. Section 5 concludes the report with a summary and suggestions for further research.

  1. Performance study of a new time-delay estimation algorithm in ultrasonic echo signals and ultrasound elastography.

    PubMed

    Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan

    2016-07-01

    Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm based on the summation of the sign function to compute the time-delay estimate (Shaswary et al., 2015). We reported that the proposed algorithm performs similarly to the normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms, even though it is significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used for the generation of ultrasound radio frequency (RF) echo signals for the statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for the image quality analysis. The statistical analysis results confirmed that, overall, the proposed algorithm has similar performance compared to the NCC and SSD algorithms. The image quality analysis results indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios compared to the NCC and SSD algorithms. PMID:27010697
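
    For reference, a minimal sketch of the NCC baseline the paper compares against (not the proposed sign-summation estimator); the synthetic echo, lag range, and noise level are illustrative assumptions.

        import numpy as np

        def ncc_delay(ref, delayed, max_lag):
            # Estimate the integer-sample delay that maximizes normalized cross-correlation.
            best_lag, best_ncc = 0, -np.inf
            for lag in range(-max_lag, max_lag + 1):
                a = ref[max(0, -lag): len(ref) - max(0, lag)]
                b = delayed[max(0, lag): len(delayed) - max(0, -lag)]
                a, b = a - a.mean(), b - b.mean()
                ncc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
                if ncc > best_ncc:
                    best_lag, best_ncc = lag, ncc
            return best_lag

        # Toy usage: a noisy copy of the signal shifted by 7 samples.
        rng = np.random.default_rng(1)
        s = rng.standard_normal(500)
        echo = np.roll(s, 7) + 0.1 * rng.standard_normal(500)
        print(ncc_delay(s, echo, max_lag=20))   # expected: 7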

  2. Improved restoration algorithm for weakly blurred and strongly noisy image

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Xia, Guo; Zhou, Haiyang; Bai, Jian; Yu, Feihong

    2015-10-01

    In real applications, such as consumer digital imaging, it is very common to record weakly blurred and strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) has been proposed. By capturing local image structure, it can effectively combine denoising and sharpening. However, two problems remain in practice. On one hand, two hard thresholds have to be constantly adjusted for different images so as not to produce over-sharpening artifacts. On the other hand, the smoothing parameter must be set manually and precisely; otherwise, it will seriously magnify the noise. These parameters have to be set in advance and entirely empirically, which is difficult to achieve in a practical application, so the method is not easy to use and not smart enough. In an effort to improve the restoration effect in this situation, an improved GLAS (IGLAS) algorithm is proposed in this paper by introducing the local phase coherence sharpening index (LPCSI) metric. With the help of the LPCSI metric, the two hard thresholds can be fixed at constant values for all images. Compared to the original method, the thresholds in our new algorithm no longer need to change with different images. Based on the proposed IGLAS, an automatic version is also developed in order to compensate for the disadvantages of manual intervention. Simulated and real experimental results show that the proposed algorithm not only obtains better performance compared with the original method, but is also very easy to apply.

  3. SETI Pulse Detection Algorithm: Analysis of False-alarm Rates

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1983-01-01

    Some earlier work by the Search for Extraterrestrial Intelligence (SETI) Science Working Group (SWG) on the derivation of spectrum analyzer thresholds for a pulse detection algorithm, based on an analysis of false-alarm rates, is extended. The algorithm previously analyzed was intended to detect a finite sequence of i periodically spaced pulses that did not necessarily occupy the entire observation interval. This algorithm would recognize the presence of such a signal only if all i received pulse powers exceeded a threshold T(i); these thresholds were selected to achieve a desired false alarm rate, independent of i. To simplify the analysis, it was assumed that the pulses were synchronous with the spectrum sample times. This analysis extends the earlier effort to include infinite and/or asynchronous pulse trains. Furthermore, to decrease the possibility of missing an extraterrestrial intelligence signal, the algorithm was modified to detect a pulse train even if some of the received pulse powers fall below the threshold. The analysis employs geometrical arguments that make it conceptually easy to incorporate boundary conditions imposed on the derivation of the false alarm rates. While the exact results can be somewhat complex, simple closed-form approximations are derived that produce a negligible loss of accuracy.

  4. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
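
    A toy sketch of fitting a two-component Gaussian mixture by expectation-maximization, as used above to separate background from anomaly detection scores; the one-dimensional model, initialization, and synthetic scores are illustrative assumptions rather than the paper's exact formulation.

        import numpy as np

        def em_gmm_1d(x, iters=100):
            # EM for a two-component 1D Gaussian mixture (e.g. background vs. anomaly scores).
            mu = np.array([x.min(), x.max()], dtype=float)
            var = np.array([x.var(), x.var()], dtype=float)
            w = np.array([0.5, 0.5])
            for _ in range(iters):
                # E-step: responsibility of each component for every score.
                pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
                r = pdf / pdf.sum(axis=1, keepdims=True)
                # M-step: re-estimate weights, means, and variances.
                n = r.sum(axis=0)
                w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
                var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
            return w, mu, var

        # Toy usage: mostly background scores plus a small cluster of high anomaly scores.
        scores = np.concatenate([np.random.default_rng(6).normal(0, 1, 900),
                                 np.random.default_rng(7).normal(6, 1, 100)])
        print(em_gmm_1d(scores))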

  5. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed. PMID:17932542

  6. Evaluation of a cone beam CT artefact reduction algorithm

    PubMed Central

    Bechara, B; McMahan, CA; Geha, H; Noujeim, M

    2012-01-01

    Objectives An algorithm and software to reduce metal artefact has been developed recently and is available in the Picasso Master 3D® (VATECH, Hwaseong, Republic of Korea), which under visual assessment produces better quality images than were obtainable previously. The objective of this in vitro study was to investigate whether the metal artefact reduction (MAR) algorithm of the Picasso Master 3D machine reduced the incidence of metal artefacts and increased the contrast-to-noise ratio (CNR) while maintaining the same gray value when there was no metallic body present within the scanned volume. Methods 20 scans with a range of 50–90 kVp were acquired, of which 10 had a metallic bead inserted within a phantom. The images obtained were analysed using public domain software (ImageJ; NIH Image, Bethesda, MD). Area histograms were used to evaluate the mean gray level variation of the epoxy resin-based substitute (ERBS) block and a control area. The CNR was calculated. Results The MAR algorithm increased the CNR when the metallic bead was present; it enhanced the ERBS gray level independently of the presence of the metallic bead. The image quality also improved as peak tube potential was increased. Conclusion Improved quality of images and regaining of the control gray values of a phantom were achieved when the MAR algorithm was used in the presence of a metallic bead. PMID:22362221

  7. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  8. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First of all, the complex characteristics of check seals are analyzed. To eliminate the differences in producing conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing stage of check seal verification, such as color component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, and the closing and labeling operations of mathematical morphology. After the processes above, a good binary seal image can be obtained. On the basis of the traditional registration algorithm, a double-level registration method including rough and precise registration is proposed. The precise registration method resolves the deflection angle to 0.1°. This paper introduces the concepts of inside difference and outside difference and uses the percentage of inside and outside difference to judge whether a seal is real or fake. The experimental results on a large set of check seals are satisfactory. They show that the methods and algorithms presented are robust to noisy sealing conditions and have a satisfactory tolerance of within-class difference.

  9. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration, which then yields the method that is commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes
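
    A minimal numerical sketch of the iterative refinement idea described above: factor the matrix once in single precision, then correct the solution using residuals computed in double precision. It uses SciPy's lu_factor/lu_solve on a made-up well-conditioned test matrix and a fixed iteration count, not the authors' tuned implementation or stopping criterion.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def mixed_precision_solve(A, b, iters=5):
            # Cheap low-precision factorization PA = LU, reused for every correction step.
            lu, piv = lu_factor(A.astype(np.float32))
            x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
            for _ in range(iters):
                r = b - A @ x                                   # residual in working (double) precision
                dx = lu_solve((lu, piv), r.astype(np.float32))  # correction from the cached factors
                x += dx
            return x

        # Toy usage on a well-conditioned system.
        rng = np.random.default_rng(2)
        A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
        b = rng.standard_normal(200)
        x = mixed_precision_solve(A, b)
        print(np.linalg.norm(A @ x - b))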

  10. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel images. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
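
    A toy illustration of the frequency-domain template matching step (FFT Map Matching). For brevity it uses plain correlation with a zero-mean template rather than the normalized correlation of the actual algorithm, and the random "map" and patch are illustrative stand-ins for real imagery.

        import numpy as np

        def fft_match(map_img, template):
            # Correlate a zero-mean template against the full map in the frequency domain
            # and return the top-left offset of the best match.
            t = template - template.mean()
            pad = np.zeros_like(map_img, dtype=float)
            pad[:t.shape[0], :t.shape[1]] = t
            corr = np.real(np.fft.ifft2(np.fft.fft2(map_img) * np.conj(np.fft.fft2(pad))))
            return np.unravel_index(np.argmax(corr), corr.shape)

        # Toy usage: cut a patch out of a random "map" and find it again.
        rng = np.random.default_rng(3)
        surface = rng.standard_normal((128, 128))
        patch = surface[40:60, 70:90]
        print(fft_match(surface, patch))   # expected: (40, 70)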

  11. NASA, Navy, and AES/York sea ice concentration comparison of SSM/I algorithms with SAR derived values

    NASA Technical Reports Server (NTRS)

    Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James

    1991-01-01

    Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in both the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident to the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).

  12. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718

  13. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
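
    A much-simplified sketch of the shift-and-mask idea described above, using a fixed (non-rotating) mask and a brute-force search over shifts until every key maps to a distinct value; the key list and mask width are illustrative assumptions, not the report's subalgorithms.

        def find_shift_and_mask(keys, max_shift=32, mask_bits=8):
            # Look for a right shift under which the masked keys are all distinct,
            # so membership can later be tested with a single constant-time lookup.
            mask = (1 << mask_bits) - 1
            for shift in range(max_shift):
                mapped = {(k >> shift) & mask for k in keys}
                if len(mapped) == len(keys):      # unique => no secondary hash or search needed
                    return shift, mask
            return None

        # Toy usage on a small static key set.
        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
        print(find_shift_and_mask(keys))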

  14. Algorithm for genome contig assembly. Final report

    SciTech Connect

    1995-09-01

    An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.

  15. Pyroelectric sensors and classification algorithms for border / perimeter security

    NASA Astrophysics Data System (ADS)

    Jacobs, Eddie L.; Chari, Srikant; Halford, Carl; McClellan, Harry

    2009-09-01

    It has been shown that useful classifications can be made with a sensor that detects the shape of moving objects. This type of sensor has been referred to as a profiling sensor. In this research, two configurations of pyroelectric detectors are considered for use in a profiling sensor, a linear array and a circular array. The linear array produces crude images representing the shape of objects moving through the field of view. The circular array produces a temporal motion vector. A simulation of the output of each detector configuration is created and used to generate simulated profiles. The simulation is performed by convolving the pyroelectric detector response with images derived from calibrated thermal infrared video sequences. Profiles derived from these simulations are then used to train and test classification algorithms. Classification algorithms examined in this study include a naive Bayesian (NB) classifier and Linear discriminant analysis (LDA). Each classification algorithm assumes a three class problem where profiles are classified as either human, animal, or vehicle. Simulation results indicate that these systems can reliably classify outputs from these types of sensors. These types of sensors can be used in applications involving border or perimeter security.

  16. A Modified Algorithm For Scanning Tomographic Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Meyyappan, A.; Wade, G.

    1988-07-01

    Acoustic microscopy is an invaluable tool in non-destructive evaluation because of its ability to provide high-resolution images of microscopic structure in small objects. When such a microscope operates in the transmission mode, the micrograph produced is simply a shadowgraph of all the structures encountered by the acoustic wave passing through the object. Because of diffraction and overlapping, the resultant images are difficult to comprehend, especially in the case of objects of substantial thickness with complex structures. To overcome these problems, we have developed a scanning tomographic acoustic microscope (STAM) which is capable of producing unambiguous high-resolution tomograms. We have described in previously-published work how a scanning laser acoustic microscope can be employed to realize STAM. We use an algorithm based on "back-and-forth propagation" to reconstruct tomograms of the various layers to be imaged. When these layers are physically close to one another, we see ambiguities in the reconstructions. In this paper we describe a modified algorithm which removes these ambiguities. With the new algorithm, we can resolve layers that are only two wavelengths apart.

  17. Explicit Time-Scale Splitting Algorithm for Stiff Problems: Auto-ignition of Gaseous Mixtures behind a Steady Shock

    NASA Astrophysics Data System (ADS)

    Valorani, Mauro; Goussis, Dimitrios A.

    2001-05-01

    A new explicit algorithm based on the computational singular perturbation (CSP) method is presented. This algorithm is specifically designed to solve stiff problems, and its performance increases with stiffness. The key concept in its structure is the splitting of the fast from the slow time scales in the problem, realized by embedding CSP concepts into an explicit scheme. In simple terms, the algorithm marches in time with only the terms producing the slow time scales, while the contribution of the terms producing the fast time scales is taken into account at the end of each integration step as a correction. The new algorithm is designed for the integration of stiff systems of PDEs by means of explicit schemes. For simplicity in the presentation and discussion of the different features of the new algorithm, a simple test case is considered, involving the auto-ignition of a methane/air mixture behind a normal shock wave, which is described by a system of ODEs. The performance of the new algorithm (accuracy and computational efficiency) is then compared with the well-known LSODE package. Its merits when used for the solution of systems of PDEs are discussed. Although when dealing with a stiff system of ODEs the new algorithm is shown to provide equal accuracy with that delivered by LSODE at the cost of higher execution time, the results indicate that its performance could be superior when facing a stiff system of PDEs.

  18. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  19. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  20. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive-feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
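
    A generic sketch of the evaporation-plus-deposit pheromone update that these variants modify; the graph, path encoding, and quality measure (e.g. coverage of the generated test case) are illustrative assumptions rather than the paper's specific IPVACO/IGPACO rules.

        def update_pheromone(tau, paths, qualities, rho=0.1, q=1.0):
            # Evaporate all trails, then deposit an amount proportional to each path's quality.
            for edge in tau:
                tau[edge] *= (1.0 - rho)          # evaporation keeps old trails from dominating
            for path, quality in zip(paths, qualities):
                for edge in zip(path, path[1:]):
                    tau[edge] = tau.get(edge, 0.0) + q * quality
            return tau

        # Toy usage: two candidate paths through a small graph of program branches.
        tau = {}
        tau = update_pheromone(tau, paths=[[0, 1, 3], [0, 2, 3]], qualities=[0.8, 0.4])
        print(tau)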

  1. Reconstruction algorithms for optoacoustic imaging based on fiber optic detectors

    NASA Astrophysics Data System (ADS)

    Lamela, Horacio; Díaz-Tendero, Gonzalo; Gutiérrez, Rebeca; Gallego, Daniel

    2011-06-01

    Optoacoustic imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity, and excellent resolution to overcome limitations of the current clinical modalities for detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Some approximate time-domain filtered back-projection reconstruction algorithms have also been reported to solve this problem. A wavelet-transform filtering implementation can be used to sharpen object boundaries while simultaneously preserving the high contrast of the reconstructed objects. In this paper, several algorithms based on back-projection (BP) techniques are suggested to process OA images in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast for a wavelet-based filter.

  2. Revealing ecological networks using Bayesian network inference algorithms.

    PubMed

    Milns, Isobel; Beale, Colin M; Smith, V Anne

    2010-07-01

    Understanding functional relationships within ecological networks can help reveal keys to ecosystem stability or fragility. Revealing these relationships is complicated by the difficulties of isolating variables or performing experimental manipulations within a natural ecosystem, and thus inferences are often made by matching models to observational data. Such models, however, require assumptions-or detailed measurements-of parameters such as birth and death rate, encounter frequency, territorial exclusion, and predation success. Here, we evaluate the use of a Bayesian network inference algorithm, which can reveal ecological networks based upon species and habitat abundance alone. We test the algorithm's performance and applicability on observational data of avian communities and habitat in the Peak District National Park, United Kingdom. The resulting networks correctly reveal known relationships among habitat types and known interspecific relationships. In addition, the networks produced novel insights into ecosystem structure and identified key species with high connectivity. Thus, Bayesian networks show potential for becoming a valuable tool in ecosystem analysis. PMID:20715607

  3. Neural network implementations of data association algorithms for sensor fusion

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time varying set of entities in a fixed field when the entities are sensed at discrete time instances. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the location of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementation of the algorithms, along with simulation results comparing the approaches are provided.

  4. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  5. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it is an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
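
    A small data-parallel sketch of the distance-to-a-line alternative: every pixel independently tests its distance to the segment, mirroring the one-pixel-per-processor mapping described above. The NumPy vectorization, grid size, and thickness threshold are illustrative assumptions.

        import numpy as np

        def draw_line_parallel(shape, p0, p1, thickness=0.5):
            # Each pixel computes its distance to the infinite line through p0 and p1
            # and its projection parameter along the segment; no sequential stepping.
            ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
            (x0, y0), (x1, y1) = p0, p1
            dx, dy = x1 - x0, y1 - y0
            length = np.hypot(dx, dy)
            dist = np.abs(dy * (xs - x0) - dx * (ys - y0)) / length      # distance to the infinite line
            t = ((xs - x0) * dx + (ys - y0) * dy) / (length * length)    # position along the segment
            return (dist <= thickness) & (t >= 0) & (t <= 1)

        img = draw_line_parallel((32, 32), (2, 3), (29, 20))
        print(img.sum(), "pixels set")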

  6. Validation of a stochastic digital packing algorithm for porosity prediction in fluvial gravel deposits

    NASA Astrophysics Data System (ADS)

    Liang, Rui; Schruff, Tobias; Jia, Xiaodong; Schüttrumpf, Holger; Frings, Roy M.

    2015-11-01

    Porosity as one of the key properties of sediment mixtures is poorly understood. Most of the existing porosity predictors based upon grain size characteristics have been unable to produce satisfying results for fluvial sediment porosity, due to the lack of consideration of other porosity-controlling factors like grain shape and depositional condition. Considering this, a stochastic digital packing algorithm was applied in this work, which provides an innovative way to pack particles of arbitrary shapes and sizes based on digitization of both particles and packing space. The purpose was to test the applicability of this packing algorithm in predicting fluvial sediment porosity by comparing its predictions with outcomes obtained from laboratory measurements. Laboratory samples examined were two natural fluvial sediments from the Rhine River and Kall River (Germany), and commercial glass beads (spheres). All samples were artificially combined into seven grain size distributions: four unimodal distributions and three bimodal distributions. Our study demonstrates that apart from grain size, grain shape also has a clear impact on porosity. The stochastic digital packing algorithm successfully reproduced the measured variations in porosity for the three different particle sources. However, the packing algorithm systematically overpredicted the porosity measured in random dense packing conditions, mainly because the random motion of particles during settling introduced unwanted kinematic sorting and shape effects. The results suggest that the packing algorithm produces loose packing structures, and is useful for trend analysis of packing porosity.

  7. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  8. A modified Fuzzy C-Means (FCM) Clustering algorithm and its application on carbonate fluid identification

    NASA Astrophysics Data System (ADS)

    Liu, Lifeng; Sun, Sam Zandong; Yu, Hongyu; Yue, Xingtong; Zhang, Dong

    2016-06-01

    Considering the fact that the fluid distribution in carbonate reservoirs is very complicated and the existing fluid prediction methods are not able to produce satisfactory results, this paper proposes a new fluid identification method for carbonate reservoirs based on a modified Fuzzy C-Means (FCM) clustering algorithm. Both the initialization and the globally optimal cluster centers are produced by the Chaotic Quantum Particle Swarm Optimization (CQPSO) algorithm, which effectively avoids the traditional FCM clustering algorithm's sensitivity to initial values and its tendency to fall into local convergence. The modified algorithm is then applied to fluid identification in the carbonate X area in the Tarim Basin of China, and a mapping relation between fluid properties and pre-stack elastic parameters is built in a multi-dimensional space. It has been proven that the modified algorithm has good fuzzy clustering ability and that its total coincidence rate of fluid prediction reaches 97.10%. Besides, the memberships of different fluids can be accumulated to obtain their respective probabilities, which can be used to evaluate the uncertainty in the fluid identification result.
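
    A minimal sketch of the standard FCM alternating update that the paper modifies; here the cluster centers are initialized randomly, whereas the paper obtains them from CQPSO, and the two-cluster toy data are an illustrative assumption.

        import numpy as np

        def fcm(X, c, m=2.0, iters=100, seed=0):
            # Standard fuzzy c-means: alternate membership and center updates.
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), c, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                # u[n, k] = 1 / sum_j (d[n, k] / d[n, j]) ** (2 / (m - 1))
                u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
                # Centers are the membership-weighted means of the samples.
                centers = (u.T ** m @ X) / np.sum(u.T ** m, axis=1, keepdims=True)
            return u, centers

        # Toy usage: two well-separated 2D clusters.
        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
        u, centers = fcm(X, c=2)
        print(centers)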

  9. Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Pospisil, Lukas; Nowakova, Jana

    2016-06-01

    An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend toward aesthetically pleasing and crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. The disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements. This is caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until the local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
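
    A generic sketch of gradient descent with the Barzilai-Borwein step size, the Hessian-free substitution for Newton-Raphson described above; the quadratic test objective, iteration count, and initial step are illustrative assumptions, not the graph-layout energy itself.

        import numpy as np

        def bb_minimize(grad, x0, iters=50, alpha0=1e-3):
            # Barzilai-Borwein (BB1) step: alpha = s.s / s.y, computed from successive
            # iterates s = x_new - x and gradient differences y = g_new - g.
            x, g = x0.astype(float), grad(x0)
            alpha = alpha0
            for _ in range(iters):
                x_new = x - alpha * g
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                sy = float(s @ y)
                alpha = float(s @ s) / sy if sy != 0 else alpha0
                x, g = x_new, g_new
            return x

        # Toy usage: minimize 0.5 x^T A x - b^T x, whose gradient is A x - b.
        A = np.diag([1.0, 10.0, 100.0])
        b = np.array([1.0, 1.0, 1.0])
        x_star = bb_minimize(lambda x: A @ x - b, np.zeros(3))
        print(x_star)   # converges toward A^{-1} b = [1, 0.1, 0.01]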

  10. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834
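
    The rod-based registration itself cannot be reconstructed from the abstract, but the underlying idea of weighted least-squares rigid registration can be illustrated with a generic Kabsch/SVD sketch; all names and the weighting scheme below are assumptions, not the authors' method.

```python
import numpy as np

def weighted_rigid_registration(P, Q, w):
    """Weighted least-squares rigid registration of 3D point sets P -> Q
    (Kabsch/SVD sketch; rows are points, w are per-point weights)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.asarray(w, float)[:, None]
    p_bar = (w * P).sum(axis=0) / w.sum()
    q_bar = (w * Q).sum(axis=0) / w.sum()
    H = (w * (P - p_bar)).T @ (Q - q_bar)                 # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = 1.0 if np.linalg.det(Vt.T @ U.T) > 0 else -1.0    # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t                                           # Q is approximately (R @ P.T).T + t
```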

  11. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  12. The annealing robust backpropagation (ARBP) learning algorithm.

    PubMed

    Chuang, C C; Su, S F; Hsiao, C C

    2000-01-01

    Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the used training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always come up with acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from the initialization problem. In those robust learning algorithms, the so-called M-estimator is employed. For the M-estimation type of learning algorithms, the loss function is used to discriminate outliers from the majority by degrading the effects of those outliers in learning. However, the loss function used in those algorithms may not correctly discriminate against those outliers. In this paper, the annealing robust backpropagation learning algorithm (ARBP) that adopts the annealing concept into the robust learning algorithms is proposed to deal with the problem of modeling under the existence of outliers. The proposed algorithm has been employed in various examples. Those results all demonstrated the superiority over other robust learning algorithms independent of outliers. In the paper, not only is the annealing concept adopted into the robust learning algorithms but also the annealing schedule k/t was found experimentally to achieve the best performance among other annealing schedules, where k is a constant and t is the epoch number. PMID:18249835

  13. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.

  14. An MM-Based Algorithm for ℓ1-Regularized Least-Squares Estimation With an Application to Ground Penetrating Radar Image Reconstruction.

    PubMed

    Ndoye, Mandoye; Anderson, John M M; Greene, David J

    2016-05-01

    An estimation method known as least absolute shrinkage and selection operator (LASSO) or ℓ1-regularized LS estimation has been found to perform well in a number of applications. In this paper, we use the majorize-minimize method to develop an algorithm for minimizing the LASSO objective function, which is the sum of a linear LS objective function plus an ℓ1 penalty term. The proposed algorithm, which we call the LASSO estimation via majorization-minimization (LMM) algorithm, is straightforward to implement, parallelizable, and guaranteed to produce LASSO objective function values that monotonically decrease. In addition, we formulate an extension of the LMM algorithm for reconstructing ground penetrating radar (GPR) images that is much faster than the standard LMM algorithm and utilizes significantly less memory. Thus, the GPR-specific LMM (GPR-LMM) algorithm is able to accommodate the big data associated with GPR imaging. We compare our proposed algorithms to the state-of-the-art ℓ1-regularized LS algorithms using a time and space complexity analysis. The GPR-LMM greatly outperforms the competing algorithms in terms of the performance metrics we considered. In addition, the reconstruction results of the standard LMM and GPR-LMM algorithms are evaluated using both simulated and real GPR data. PMID:26800538
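
    A generic MM iteration for the LASSO objective is sketched below in the ISTA style, where the least-squares term is majorized by a separable quadratic so the objective decreases monotonically; this illustrates the general principle only and is not the authors' LMM or GPR-LMM code.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_mm(A, b, lam, n_iter=500):
    """MM (ISTA-style) iteration for 0.5*||Ax - b||^2 + lam*||x||_1: the LS term
    is majorized by a separable quadratic, so the objective decreases
    monotonically.  A sketch, not the authors' LMM/GPR-LMM implementation."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the LS gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x) / L, lam / L)
    return x
```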

  15. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, its practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial reaction and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement

  16. Enhanced Deep Blue aerosol retrieval algorithm: The second generation

    NASA Astrophysics Data System (ADS)

    Hsu, N. C.; Jeong, M.-J.; Bettenhausen, C.; Sayer, A. M.; Hansell, R.; Seftor, C. S.; Huang, J.; Tsay, S.-C.

    2013-08-01

    The aerosol products retrieved using the Moderate Resolution Imaging Spectroradiometer (MODIS) collection 5.1 Deep Blue algorithm have provided useful information about aerosol properties over bright-reflecting land surfaces, such as desert, semiarid, and urban regions. However, many components of the C5.1 retrieval algorithm needed to be improved; for example, the use of a static surface database to estimate surface reflectances. This is particularly important over regions of mixed vegetated and nonvegetated surfaces, which may undergo strong seasonal changes in land cover. In order to address this issue, we develop a hybrid approach, which takes advantage of the combination of precalculated surface reflectance database and normalized difference vegetation index in determining the surface reflectance for aerosol retrievals. As a result, the spatial coverage of aerosol data generated by the enhanced Deep Blue algorithm has been extended from the arid and semiarid regions to the entire land areas. In this paper, the changes made in the enhanced Deep Blue algorithm regarding the surface reflectance estimation, aerosol model selection, and cloud screening schemes for producing the MODIS collection 6 aerosol products are discussed. A similar approach has also been applied to the algorithm that generates the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue products. Based upon our preliminary results of comparing the enhanced Deep Blue aerosol products with the Aerosol Robotic Network (AERONET) measurements, the expected error of the Deep Blue aerosol optical thickness (AOT) is estimated to be better than 0.05 + 20%. Using 10 AERONET sites with long-term time series, 79% of the best quality Deep Blue AOT values are found to fall within this expected error.
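
    The normalized difference vegetation index used in the hybrid surface-reflectance scheme is a simple band ratio; a generic sketch (assuming calibrated near-infrared and red reflectances as inputs):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red
    reflectances (generic formula; inputs assumed already calibrated)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / np.maximum(nir + red, 1e-12)
```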

  17. Interpreting the flock algorithm from a statistical perspective.

    PubMed

    Anderson, Eric C; Barry, Patrick D

    2015-09-01

    We show that the algorithm in the program flock (Duchesne & Turgeon 2009) can be interpreted as an estimation procedure based on a model essentially identical to the structure (Pritchard et al. 2000) model with no admixture and without correlated allele frequency priors. Rather than using MCMC, the flock algorithm searches for the maximum a posteriori estimate of this structure model via a simulated annealing algorithm with a rapid cooling schedule (namely, the exponent on the objective function →∞). We demonstrate the similarities between the two programs in a two-step approach. First, to enable rapid batch processing of many simulated data sets, we modified the source code of structure to use the flock algorithm, producing the program flockture. With simulated data, we confirmed that results obtained with flock and flockture are very similar (though flockture is some 200 times faster). Second, we simulated multiple large data sets under varying levels of population differentiation for both microsatellite and SNP genotypes. We analysed them with flockture and structure and assessed each program on its ability to cluster individuals to their correct subpopulation. We show that flockture yields results similar to structure albeit with greater variability from run to run. flockture did perform better than structure when genotypes were composed of SNPs and differentiation was moderate (FST= 0.022-0.032). When differentiation was low, structure outperformed flockture for both marker types. On large data sets like those we simulated, it appears that flock's reliance on inference rules regarding its 'plateau record' is not helpful. Interpreting flock's algorithm as a special case of the model in structure should aid in understanding the program's output and behaviour. PMID:25913195

  18. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fischer discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
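
    A toy sketch of the filtering idea follows: a per-dimension left/right threshold box applied to candidate soundings, with a hypothetical fitness that trades correctly rejected runs against incorrectly rejected ones. The names and the weighting are assumptions, not the actual JPL criteria.

```python
import numpy as np

def apply_box_filter(X, lo, hi):
    """Keep only soundings whose every feature lies in [lo_d, hi_d]; lo and hi
    are the per-dimension 'left'/'right' thresholds a GA would evolve."""
    return np.all((X >= lo) & (X <= hi), axis=1)

def filter_fitness(keep, useful, penalty=10.0):
    """Hypothetical fitness: reward rejecting runs that would not yield useful
    results, penalize rejecting useful ones (the weighting is an assumption)."""
    rejected = ~keep
    return np.sum(rejected & ~useful) - penalty * np.sum(rejected & useful)
```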

  19. Human vision-based algorithm to hide defective pixels in LCDs

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert

    2006-02-01

    Producing displays without pixel defects or repairing defective pixels is technically not possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible for the user by using image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First of all the theoretical response of the human eye was analyzed on a series of images and this before and after applying the defective pixel compensation algorithm. These results show that indeed it is possible to mask a defective pixel. A second method was to perform a psycho-visual test where users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as pixel-dataprocessing algorithms inside the display in for instance an FPGA, a DSP or a microprocessor. The described techniques are also valid for both monochrome and color displays ranging from high-quality medical displays to consumer LCDTV applications.

  20. Decomposition of Large Scale Semantic Graphsvia an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    's decomposition algorithm, much more efficiently, leading to significantly reduced computation time. Test runs on a desktop computer have shown reductions of up to 89%. Our focus this year has been on the implementation of parallel graph clustering on one of LLNL's supercomputers. In order to achieve efficiency in parallel computing, we have exploited the fact that large semantic graphs tend to be sparse, comprising loosely connected dense node clusters. When implemented on distributed memory computers, our approach performed well on several large graphs with up to one billion nodes, as shown in Table 2. The rightmost column of Table 2 contains the associated Newman's modularity [1], a metric that is widely used to assess the quality of community structure. Existing algorithms produce results that merely approximate the optimal solution, i.e., maximum modularity. We have developed a verification tool for decomposition algorithms, based upon a novel integer linear programming (ILP) approach, that computes an exact solution. We have used this ILP methodology to find the maximum modularity and corresponding optimal community structure for several well-studied graphs in the literature (e.g., Figure 1) [3]. The above approaches assume that modularity is the best measure of quality for community structure. In an effort to enhance this quality metric, we have also generalized Newman's modularity based upon an insightful random walk interpretation that allows us to vary the scope of the metric. Generalized modularity has enabled us to develop new, more flexible versions of our algorithms. In developing these methodologies, we have made several contributions to both graph theoretic algorithms and software engineering. We have written two research papers for refereed publication [3-4] and are working on another one [5]. In addition, we have presented our research findings at three academic and professional conferences.
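
    Newman's modularity, the quality metric referenced throughout this summary, can be computed directly from an adjacency matrix and a community assignment; a small dense-matrix sketch (not the parallel LLNL implementation):

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q for an undirected graph with adjacency matrix A
    and one community label per node (dense-matrix sketch)."""
    A = np.asarray(A, float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                       # node degrees
    two_m = k.sum()                         # 2 * number of edges
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)
```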

  1. The theory of hybrid stochastic algorithms

    SciTech Connect

    Kennedy, A.D. (Supercomputer Computations Research Inst.)

    1989-11-21

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs.
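
    A generic leapfrog integrator, the Molecular Dynamics building block of the Hybrid Monte Carlo method discussed in the lectures, is sketched below (unit masses assumed; illustrative only, not code from the lectures):

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics with unit masses: the
    Molecular Dynamics step used inside Hybrid Monte Carlo."""
    q, p = np.array(q, float), np.array(p, float)
    p -= 0.5 * eps * grad_U(q)              # initial half-step in momentum
    for _ in range(n_steps - 1):
        q += eps * p                        # full position step
        p -= eps * grad_U(q)                # full momentum step
    q += eps * p
    p -= 0.5 * eps * grad_U(q)              # final half-step in momentum
    return q, p
```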

  2. Self-adaptive parameters in genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive parameters adjustment of a GA. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.

  3. A parallel algorithm for mesh smoothing

    SciTech Connect

    Freitag, L.; Jones, M.; Plassmann, P.

    1999-07-01

    Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two-and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.

  4. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    When a GA is used to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, the large search space and complex constraints make it easy to produce infeasible solutions, which affects the efficiency and quality of the algorithm. This paper presents an improved MKGA that includes three operators: preprocessing, bid insertion, and exchange recombination, together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than a standard GA in required population size and computation. Problems that the traditional branch-and-bound algorithm finds hard to solve can be solved by the improved MKGA with better results.

  5. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  6. A Family of Algorithms for Computing Consensus about Node State from Network Data

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes, from ranking websites to determining critical species in ecosystems, yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus, through breadth or depth, impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes “form opinions” about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which

  7. A family of algorithms for computing consensus about node state from network data.

    PubMed

    Brush, Eleanor R; Krakauer, David C; Flack, Jessica C

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes, from ranking websites to determining critical species in ecosystems, yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus, through breadth or depth, impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the
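
    A power-iteration sketch of Eigenvector Centrality, the 'depth' algorithm singled out in the abstract (generic implementation, not the authors' code):

```python
import numpy as np

def eigenvector_centrality(A, n_iter=1000, tol=1e-9):
    """Power-iteration sketch of Eigenvector Centrality for a non-negative
    adjacency matrix A (generic implementation)."""
    A = np.asarray(A, float)
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(n_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)      # renormalize each iteration
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```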

  8. A third order Runge-Kutta algorithm on a manifold

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Grossman, R. G.; Yan, Y.

    1992-01-01

    A third order Runge-Kutta type algorithm is described with the property that it preserves certain geometric structures. In particular, if the algorithm is initialized on a Lie group, then the resulting iterates remain on the Lie group.

  9. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and Other Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2012-01-01

    The lightning jump algorithm has a robust history in correlating upward trends in lightning to severe and hazardous weather occurrence. The algorithm uses the correlation between the physical principles that govern an updraft's ability to produce microphysical and kinematic conditions conducive for electrification and its role in the development of severe weather conditions. Recent work has demonstrated that the lightning jump algorithm concept holds significant promise in the operational realm, aiding in the identification of thunderstorms that have potential to produce severe or hazardous weather. However, a large amount of work still needs to be completed in spite of these positive results. The total lightning jump algorithm is not a stand-alone concept that can be used independent of other meteorological measurements, parameters, and techniques. For example, the algorithm is highly dependent upon thunderstorm tracking to build lightning histories on convective cells. Current tracking methods show that thunderstorm cell tracking is most reliable and cell histories are most accurate when radar information is incorporated with lightning data. In the absence of radar data, the cell tracking is a bit less reliable but the value added by the lightning information is much greater. For optimal application, the algorithm should be integrated with other measurements that assess storm scale properties (e.g., satellite, radar). Therefore, the recent focus of this research effort has been assessing the lightning jump's relation to thunderstorm tracking, meteorological parameters, and its potential uses in operational meteorology. Furthermore, the algorithm must be tailored for the optically-based GOES-R Geostationary Lightning Mapper (GLM), as what has been observed using Very High Frequency Lightning Mapping Array (VHF LMA) measurements will not exactly translate to what will be observed by GLM due to resolution and other instrument differences. Herein, we present some of

  10. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  11. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(nlogn) cost parallel algorithms for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  12. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  13. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improvement of DNA computing is proposed. This new approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability of the proposed DNA computing algorithm to provide effective optimization, considerable convergence speed, and high accuracy. PMID:23935409

  14. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and the complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  15. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for protecting intellectual property rights, digital watermarking techniques have been widely studied and used. However, due to problems of data volume and color shift, watermarking techniques for color images have been studied less widely, even though color images are the principal content in multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and the watermark image, the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark the number of embedded bits is adaptively adjusted to the complexity of the host image. The watermark image is first preprocessed by decomposing it with a two-level wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and others are deleted so as to form the actual embedded data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.

  16. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  17. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.

  18. A genetic algorithm to reduce stream channel cross section data

    USGS Publications Warehouse

    Berenbrock, C.

    2006-01-01

    A genetic algorithm (GA) was used to reduce cross section data for a hypothetical example consisting of 41 data points and for 10 cross sections on the Kootenai River. The number of data points for the Kootenai River cross sections ranged from about 500 to more than 2,500. The GA was applied to reduce the number of data points to a manageable dataset because most models and other software require fewer than 100 data points for management, manipulation, and analysis. Results indicated that the program successfully reduced the data. Fitness values from the genetic algorithm were lower (better) than those in a previous study that used standard procedures of reducing the cross section data. On average, fitnesses were 29 percent lower, and several were about 50 percent lower. Results also showed that cross sections produced by the genetic algorithm were representative of the original section and that near-optimal results could be obtained in a single run, even for large problems. Other data also can be reduced in a method similar to that for cross section data.

  19. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm.

    PubMed

    Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J

    2013-01-01

    A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using PB and eMC algorithms with no smoothing and all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences of calculated MUs and dose distributions between eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between PB and eMC algorithms. Monitor unit calculations were also performed

  20. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
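
    For illustration only, the digest produced by any SHA-1 implementation can be checked against the well-known "abc" test vector using a standard library; the report itself describes a portable C implementation, not this call.

```python
import hashlib

# SHA-1 of the standard test vector "abc" (illustration only; the report
# describes a portable C implementation, not this library call).
print(hashlib.sha1(b"abc").hexdigest())
# a9993e364706816aba3e25717850c26c9cd0d89d
```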

  1. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    ERIC Educational Resources Information Center

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (closely tied to the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  2. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least amount of error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with the performance of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data.
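
    For reference, the standard Bouc-Wen hysteretic variable can be simulated by simple Euler integration of its differential equation given a displacement history; the parameter values below are placeholders for the ones the Modified Firefly Algorithm identifies.

```python
import numpy as np

def bouc_wen_z(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Euler integration of the standard Bouc-Wen hysteretic variable z for a
    sampled displacement history x; the parameter values are placeholders for
    the ones an identification algorithm would estimate."""
    z = np.zeros(len(x))
    for k in range(1, len(z)):
        dx = (x[k] - x[k - 1]) / dt
        dz = (A * dx - beta * abs(dx) * abs(z[k - 1]) ** (n - 1) * z[k - 1]
              - gamma * dx * abs(z[k - 1]) ** n)
        z[k] = z[k - 1] + dz * dt
    return z
```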

  3. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  4. Hyperspectral images lossless compression using the 3D binary EZW algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-02-01

    This paper presents a transform-based lossless compression for hyperspectral images which is inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform which includes an integer Karhunen-Loeve transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate the presence of correlations among the bands of the hyperspectral image. The integer 2D discrete wavelet transform (DWT) is applied to eliminate the correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by a proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms, and it is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and also has a higher compression ratio than the other algorithms.

  5. A seed-based plant propagation algorithm: the feeding station model.

    PubMed

    Sulaiman, Muhammad; Salhi, Abdellah

    2015-01-01

    The seasonal production of fruit and seeds is akin to opening a feeding station, such as a restaurant. Agents coming to feed on the fruit are like customers attending the restaurant; they arrive at a certain rate and get served at a certain rate following some appropriate processes. The same applies to birds and animals visiting and feeding on ripe fruit produced by plants such as the strawberry plant. This phenomenon underpins the seed dispersion of the plants. Modelling it as a queuing process results in a seed-based search/optimisation algorithm. This variant of the Plant Propagation Algorithm is described, analysed, tested on nontrivial problems, and compared with well established algorithms. The results are included. PMID:25821858

  6. A Seed-Based Plant Propagation Algorithm: The Feeding Station Model

    PubMed Central

    Salhi, Abdellah

    2015-01-01

    The seasonal production of fruit and seeds is akin to opening a feeding station, such as a restaurant. Agents coming to feed on the fruit are like customers attending the restaurant; they arrive at a certain rate and get served at a certain rate following some appropriate processes. The same applies to birds and animals visiting and feeding on ripe fruit produced by plants such as the strawberry plant. This phenomenon underpins the seed dispersion of the plants. Modelling it as a queuing process results in a seed-based search/optimisation algorithm. This variant of the Plant Propagation Algorithm is described, analysed, tested on nontrivial problems, and compared with well established algorithms. The results are included. PMID:25821858

  7. Generation of Referring Expressions: Assessing the Incremental Algorithm

    ERIC Educational Resources Information Center

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
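
    A minimal sketch of the Incremental Algorithm as it is usually described (the attribute preference order and the toy entities are assumptions of this example, not data from the paper):

```python
def incremental_algorithm(target, distractors, preferred_attrs):
    """Dale & Reiter's Incremental Algorithm, as usually described: walk a
    fixed preference order, keep any attribute that rules out at least one
    remaining distractor, stop when the target is uniquely identified."""
    description = {}
    remaining = list(distractors)
    for attr in preferred_attrs:
        value = target[attr]
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return description

# toy example (entities and the preference order are assumptions)
target = {"type": "chair", "colour": "red", "size": "large"}
others = [{"type": "chair", "colour": "blue", "size": "large"},
          {"type": "table", "colour": "red", "size": "small"}]
print(incremental_algorithm(target, others, ["type", "colour", "size"]))
# {'type': 'chair', 'colour': 'red'}
```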

  8. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in case of DNA-Crypt and the least significant bit in case of the image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming-code or the WDH-code. Mutations which can occur infrequently may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences similar to all steganographic algorithms. PMID:17535434

  9. A systematic comparison of genome-scale clustering algorithms

    PubMed Central

    2012-01-01

    Background A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance to algorithm selection, development and implementation. Most prior work on comparative clustering evaluation has focused on parametric methods. Graph theoretical methods are recent additions to the tool set for the global analysis and decomposition of microarray co-expression matrices that have not generally been included in earlier methodological comparisons. In the present study, a variety of parametric and graph theoretical clustering algorithms are compared using well-characterized transcriptomic data at a genome scale from Saccharomyces cerevisiae. Methods For each clustering method under study, a variety of parameters were tested. Jaccard similarity was used to measure each cluster's agreement with every GO and KEGG annotation set, and the highest Jaccard score was assigned to the cluster. Clusters were grouped into small, medium, and large bins, and the Jaccard score of the top five scoring clusters in each bin were averaged and reported as the best average top 5 (BAT5) score for the particular method. Results Clusters produced by each method were evaluated based upon the positive match to known pathways. This produces a readily interpretable ranking of the relative effectiveness of clustering on the genes. Methods were also tested to determine whether they were able to identify clusters consistent with those identified by other clustering methods. Conclusions Validation of clusters against known gene classifications demonstrates that for this data, graph-based techniques outperform conventional clustering approaches, suggesting that further
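
    The Jaccard score used to match clusters against GO and KEGG annotation sets is simply intersection over union of the two gene sets; a minimal sketch:

```python
def jaccard(cluster_genes, annotation_genes):
    """Jaccard similarity between a cluster and an annotation gene set, as used
    for the BAT5 scoring described in the abstract (minimal sketch)."""
    a, b = set(cluster_genes), set(annotation_genes)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```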

  10. Modified artificial bee colony algorithm for reactive power optimization

    NASA Astrophysics Data System (ADS)

    Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani

    2015-05-01

    Bio-inspired algorithms (BIAs) implemented to solve various optimization problems have shown promising results, which is important given the complexity of real-world problems. The Artificial Bee Colony (ABC) algorithm, a kind of BIA, has demonstrated tremendous results compared to other optimization algorithms. This paper presents a new modified ABC algorithm, referred to as JA-ABC3, with the aim of enhancing convergence speed and avoiding premature convergence. The proposed algorithm has been simulated on ten commonly used benchmark functions. Its performance has also been compared with that of other existing ABC variants. To justify its robust applicability, the proposed algorithm has been tested on the Reactive Power Optimization problem. The results show that the proposed algorithm has superior performance to other existing ABC variants, e.g. GABC, BABC1, BABC2, BsfABC and IABC, in terms of convergence speed. Furthermore, the proposed algorithm has also demonstrated excellent performance in solving the Reactive Power Optimization problem.
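
    The abstract does not detail the JA-ABC3 modifications, so the sketch below shows only the canonical ABC employed-bee step that such variants build on: perturb one dimension of a food source toward a randomly chosen neighbour and keep the move only if the objective improves (greedy selection). Function and parameter names are illustrative assumptions.

    ```python
    import random

    # Canonical ABC employed-bee update (greedy, one dimension at a time).
    # This is the standard ABC step, not the JA-ABC3 variant proposed in the paper.

    def employed_bee_step(foods, fitness, objective, bounds):
        """Perturb each food source toward a random neighbour; keep improvements."""
        dim = len(bounds)
        for i, x in enumerate(foods):
            k = random.choice([j for j in range(len(foods)) if j != i])  # neighbour
            d = random.randrange(dim)                                    # dimension
            phi = random.uniform(-1.0, 1.0)
            candidate = list(x)
            candidate[d] = x[d] + phi * (x[d] - foods[k][d])
            lo, hi = bounds[d]
            candidate[d] = min(max(candidate[d], lo), hi)                # clamp to bounds
            f = objective(candidate)
            if f < fitness[i]:                                           # greedy selection
                foods[i], fitness[i] = candidate, f

    if __name__ == "__main__":
        # Example run on the sphere benchmark function.
        bounds = [(-5.0, 5.0)] * 2
        foods = [[random.uniform(*b) for b in bounds] for _ in range(10)]
        sphere = lambda v: sum(x * x for x in v)
        fitness = [sphere(x) for x in foods]
        for _ in range(200):
            employed_bee_step(foods, fitness, sphere, bounds)
        print(min(fitness))
    ```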

  11. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  12. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  13. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  14. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
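
    The abstract refers to inverse problems based on shortest-path topological indices; the best-known index of this kind is the Wiener index, the sum of shortest-path distances over all atom pairs of the molecular graph. The sketch below computes it with a plain breadth-first search; the choice of index is an assumption for illustration, since the abstract does not name the exact indices OCOTILLO uses.

    ```python
    from collections import deque

    # Illustrative shortest-path topological index (Wiener index) on a molecular
    # graph; the exact indices used by OCOTILLO are not specified in the abstract.

    def bfs_distances(graph, start):
        """Unweighted shortest-path distances from `start` to every reachable vertex."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def wiener_index(graph):
        """Sum of shortest-path distances over all unordered vertex pairs."""
        total = 0
        for u in graph:
            total += sum(bfs_distances(graph, u).values())
        return total // 2          # each pair was counted twice

    if __name__ == "__main__":
        # n-butane carbon skeleton: a path on four atoms, Wiener index = 10
        butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(wiener_index(butane))
    ```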

  15. Conditional disruption of miR17-92 cluster in collagen type I-producing osteoblasts results in reduced periosteal bone formation and bone anabolic response to exercise

    PubMed Central

    Mohan, Subburaman; Wergedal, Jon E.; Das, Subhashri

    2014-01-01

    In this study, we evaluated the role of the microRNA (miR)17-92 cluster in osteoblast lineage cells using a Cre-loxP approach in which Cre expression is driven by the entire regulatory region of the type I collagen α2 gene. Conditional knockout (cKO) mice showed a 13–34% reduction in total body bone mineral content and area with little or no change in bone mineral density (BMD) by DXA at 2, 4, and 8 wk in both sexes. Micro-CT analyses of the femur revealed an 8% reduction in length and 25–27% reduction in total volume at the diaphyseal and metaphyseal sites. Neither cortical nor trabecular volumetric BMD was different in the cKO mice. Bone strength (maximum load) was reduced by 10% with no change in bone toughness. Quantitative histomorphometric analyses revealed a 28% reduction in the periosteal bone formation rate and in the mineral apposition rate but with no change in the resorbing surface. Expression levels of periostin, Elk3, and Runx2, genes that are targeted by miRs from the cluster, were decreased by 25–30% in the bones of cKO mice. To determine the contribution of the miR17-92 cluster to the mechanical strain effect on periosteal bone formation, we subjected cKO and control mice to 2 wk of mechanical loading by four-point bending. We found that the periosteal bone response to mechanical strain was significantly reduced in the cKO mice. We conclude that the miR17-92 cluster expressed in type I collagen-producing cells is a key regulator of periosteal bone formation in mice. PMID:25492928

  16. Conditional disruption of miR17-92 cluster in collagen type I-producing osteoblasts results in reduced periosteal bone formation and bone anabolic response to exercise.

    PubMed

    Mohan, Subburaman; Wergedal, Jon E; Das, Subhashri; Kesavan, Chandrasekhar

    2015-02-01

    In this study, we evaluated the role of the microRNA (miR)17-92 cluster in osteoblast lineage cells using a Cre-loxP approach in which Cre expression is driven by the entire regulatory region of the type I collagen α2 gene. Conditional knockout (cKO) mice showed a 13-34% reduction in total body bone mineral content and area with little or no change in bone mineral density (BMD) by DXA at 2, 4, and 8 wk in both sexes. Micro-CT analyses of the femur revealed an 8% reduction in length and 25-27% reduction in total volume at the diaphyseal and metaphyseal sites. Neither cortical nor trabecular volumetric BMD was different in the cKO mice. Bone strength (maximum load) was reduced by 10% with no change in bone toughness. Quantitative histomorphometric analyses revealed a 28% reduction in the periosteal bone formation rate and in the mineral apposition rate but with no change in the resorbing surface. Expression levels of periostin, Elk3, and Runx2, genes that are targeted by miRs from the cluster, were decreased by 25-30% in the bones of cKO mice. To determine the contribution of the miR17-92 cluster to the mechanical strain effect on periosteal bone formation, we subjected cKO and control mice to 2 wk of mechanical loading by four-point bending. We found that the periosteal bone response to mechanical strain was significantly reduced in the cKO mice. We conclude that the miR17-92 cluster expressed in type I collagen-producing cells is a key regulator of periosteal bone formation in mice. PMID:25492928

  17. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems: the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and that the algorithm is a good candidate for the emerging massively parallel machines.
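
    The PDD algorithm partitions the tridiagonal system across processors, solves each partition locally, and couples the pieces through a small reduced system. The sketch below shows only the serial local kernel (the standard Thomas algorithm), not the parallel coupling step or the reduced-PDD variant.

    ```python
    # Serial Thomas algorithm: the local kernel that a PDD-style solver applies
    # to each partition before the (not shown) reduced-system correction step.

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system with sub-, main-, and super-diagonals
        a, b, c and right-hand side d (a[0] and c[-1] are unused)."""
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):      # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    if __name__ == "__main__":
        # Symmetric Toeplitz test case: 2 on the diagonal, -1 off the diagonal.
        n = 5
        a = [-1.0] * n; b = [2.0] * n; c = [-1.0] * n
        d = [1.0] * n
        print(thomas_solve(a, b, c, d))
    ```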

  18. Exploring a new best information algorithm for Iliad.

    PubMed Central

    Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.

    1991-01-01

    Iliad is a diagnostic expert system for internal medicine. One important feature that Iliad offers is the ability to analyze a particular patient case and to determine the most cost-effective method for pursuing the work-up. Iliad's current "best information" algorithm has not been previously validated and compared to other potential algorithms. Therefore, this paper presents a comparison of four new algorithms to the current algorithm. The basis for this comparison was eighteen "vignette" cases derived from real patient cases from the University of Utah Medical Center. The results indicated that the current algorithm can be significantly improved. More promising algorithms are suggested for future investigation. PMID:1807677
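
    The abstract does not describe the competing algorithms, but a common way to formalize a "most cost-effective next step" in a diagnostic work-up is to rank candidate findings by expected reduction in diagnostic uncertainty per unit cost. The sketch below illustrates that generic heuristic only; it is not Iliad's actual best information algorithm, and the priors, likelihoods and costs are invented for the example.

    ```python
    import math

    # Generic "best next finding" ranking by expected entropy reduction per unit
    # cost. This is an illustrative heuristic, not Iliad's actual algorithm.

    def entropy(p):
        return -sum(x * math.log2(x) for x in p if x > 0)

    def expected_information_gain(prior, likelihoods):
        """prior: P(disease); likelihoods: P(finding positive | disease)."""
        p_pos = sum(p * l for p, l in zip(prior, likelihoods))
        p_neg = 1.0 - p_pos
        post_pos = [p * l / p_pos for p, l in zip(prior, likelihoods)] if p_pos else prior
        post_neg = [p * (1 - l) / p_neg for p, l in zip(prior, likelihoods)] if p_neg else prior
        expected_posterior_entropy = p_pos * entropy(post_pos) + p_neg * entropy(post_neg)
        return entropy(prior) - expected_posterior_entropy

    def rank_findings(prior, candidate_findings):
        """candidate_findings: name -> (likelihood vector, cost)."""
        return sorted(candidate_findings,
                      key=lambda f: expected_information_gain(prior, candidate_findings[f][0])
                      / candidate_findings[f][1],
                      reverse=True)

    if __name__ == "__main__":
        prior = [0.6, 0.3, 0.1]                       # three competing diagnoses
        findings = {"cheap test": ([0.9, 0.2, 0.1], 1.0),
                    "expensive test": ([0.99, 0.01, 0.5], 20.0)}
        print(rank_findings(prior, findings))         # most informative per unit cost first
    ```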

  19. Optimization Algorithm for Designing Diffractive Optical Elements

    NASA Astrophysics Data System (ADS)

    Agudelo, Viviana A.; Orozco, Ricardo Amézquita

    2008-04-01

    Diffractive Optical Elements (DOEs) are commonly used in many applications such as laser beam shaping, recording of micro reliefs, wave front analysis, metrology and many others where they can replace single or multiple conventional optical elements (diffractive or refractive). One of the most versatile ways to produce them is to use computer-assisted techniques for their design and optimization, as well as optical or electron beam micro-lithography techniques for the final fabrication. The fundamental figures of merit involved in the optimization of such devices are both the diffraction efficiency and the signal-to-noise ratio evaluated in the reconstructed wave front at the image plane. A design and optimization algorithm based on the error-reduction method of Gerchberg and Saxton is proposed to obtain binary discrete phase-only Fresnel DOEs that will be used to produce specific intensity patterns. Some experimental results were obtained using a spatial light modulator acting as a binary programmable diffractive phase element. Although the DOEs optimized here are discrete in phase, they present an acceptable signal-to-noise ratio and diffraction efficiency.
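
    The error-reduction loop referred to above alternates between the DOE plane and the image plane, enforcing the known illumination amplitude in one plane and the desired intensity pattern in the other while keeping the computed phase; for a binary phase-only element the DOE phase is additionally quantized to two levels. A minimal NumPy sketch of such a loop follows; the uniform illumination, Fourier-transform propagation and 0/π quantization are illustrative assumptions rather than the exact scheme used by the authors.

    ```python
    import numpy as np

    # Minimal Gerchberg-Saxton style error-reduction loop for a binary phase-only
    # DOE. Uniform illumination and 0/pi quantization are illustrative assumptions.

    def design_binary_phase_doe(target_intensity, iterations=100):
        target_amp = np.sqrt(target_intensity)
        illumination = np.ones_like(target_amp)          # uniform beam amplitude
        phase = 2 * np.pi * np.random.rand(*target_amp.shape)
        for _ in range(iterations):
            field_doe = illumination * np.exp(1j * phase)
            field_img = np.fft.fft2(field_doe)           # propagate to image plane
            field_img = target_amp * np.exp(1j * np.angle(field_img))  # impose target amplitude
            field_doe = np.fft.ifft2(field_img)          # propagate back to DOE plane
            phase = np.angle(field_doe)
            phase = np.where(np.cos(phase) >= 0, 0.0, np.pi)  # quantize to binary phase
        return phase

    if __name__ == "__main__":
        target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0   # square spot
        doe_phase = design_binary_phase_doe(target)
        recon = np.abs(np.fft.fft2(np.exp(1j * doe_phase))) ** 2
        print(recon.max())
    ```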

  20. An algorithm to estimate the object support in truncated images

    SciTech Connect

    Hsieh, Scott S.; Nett, Brian E.; Cao, Guangzhi; Pelc, Norbert J.

    2014-07-15

    Purpose: Truncation artifacts in CT occur if the object to be imaged extends past the scanner field of view (SFOV). These artifacts impede diagnosis and could possibly introduce errors in dose plans for radiation therapy. Several approaches exist for correcting truncation artifacts, but existing correction algorithms do not accurately recover the skin line (or support) of the patient, which is important in some dose planning methods. The purpose of this paper was to develop an iterative algorithm that recovers the support of the object. Methods: The authors assume that the truncated portion of the image is made up of soft tissue of uniform CT number and attempt to find a shape consistent with the measured data. Each known measurement in the sinogram is interpreted as an estimate of missing mass along a line. An initial estimate of the object support is generated by thresholding a reconstruction made using a previous truncation artifact correction algorithm (e.g., water cylinder extrapolation). This object support is iteratively deformed to reduce the inconsistency with the measured data. The missing data are estimated using this object support to complete the dataset. The method was tested on simulated and experimentally truncated CT data. Results: The proposed algorithm produces a better defined skin line than water cylinder extrapolation. On the experimental data, the RMS error of the skin line is reduced by about 60%. For moderately truncated images, some soft tissue contrast is retained near the SFOV. As the extent of truncation increases, the soft tissue contrast outside the SFOV becomes unusable although the skin line remains clearly defined, and in reformatted images it varies smoothly from slice to slice as expected. Conclusions: The support recovery algorithm provides a more accurate estimate of the patient outline than thresholded, basic water cylinder extrapolation, and may be preferred in some radiation therapy applications.
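
    The "missing mass along a line" interpretation used by the algorithm can be made concrete with a small sketch: for a ray whose measured line integral exceeds what the reconstruction inside the SFOV explains, the deficit must be supplied by soft tissue of assumed uniform attenuation, which fixes the extra path length the object support has to contribute along that ray. The attenuation coefficient and input values below are illustrative assumptions, not the paper's parameters.

    ```python
    # Illustration of the "missing mass along a line" step described above:
    # the attenuation not accounted for inside the SFOV is attributed to soft
    # tissue of assumed uniform attenuation, which determines how much extra
    # path length the estimated object support must provide along that ray.

    MU_SOFT_TISSUE = 0.019  # assumed uniform soft-tissue attenuation, 1/mm

    def missing_path_length(measured_line_integral, line_integral_inside_sfov,
                            mu=MU_SOFT_TISSUE):
        """Extra soft-tissue path length (mm) the support must supply on this ray."""
        missing_mass = max(0.0, measured_line_integral - line_integral_inside_sfov)
        return missing_mass / mu

    if __name__ == "__main__":
        # A ray whose measured attenuation exceeds what the truncated image explains:
        print(missing_path_length(measured_line_integral=9.5,
                                  line_integral_inside_sfov=7.6))  # about 100 mm of tissue
    ```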