Science.gov

Sample records for algorithm produces results

  1. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
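
    Where the abstract mentions a Pareto genetic algorithm, the central operation is the non-dominance test among candidate missions scored on several competing objectives (propellant mass and flight time are used here purely as hypothetical objectives). A minimal sketch of that test:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization of all objectives)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Hypothetical (propellant mass [kg], flight time [days]) scores for candidate trajectories.
candidates = [(420.0, 900.0), (390.0, 1100.0), (450.0, 870.0), (500.0, 1200.0)]
print(pareto_front(candidates))   # the last candidate is dominated and drops out
```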

  2. Wake Vortex Algorithm Scoring Results

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.

  3. MODIL cryocooler producibility demonstration project results

    SciTech Connect

    Cruz, G.E.; Franks, R.M.

    1993-06-24

    The production of large quantities of spacecraft needed by SDIO will require a cultural change in design and production practices. Low-rate production and the need for exceedingly high reliability have driven the industry to custom-designed, hand-crafted, and exhaustively tested satellites. These factors have militated against employing the design and manufacturing cost reduction methods commonly used in tactical missile production. Additional challenges to achieving production efficiencies are presented by SDI spacecraft mission requirements. IR sensor systems, for example, are composed of subassemblies and components that require the design, manufacture, and maintenance of ultra-precision tolerances over challenging operational lifetimes. These IR sensors demand the use of reliable, closed-loop, cryogenic refrigerators or active cryocoolers to meet stringent system acquisition and pointing requirements. The authors summarize some spacecraft cryocooler requirements and discuss their observations regarding industry's current production capabilities for cryocoolers. The results of the Lawrence Livermore National Laboratory (LLNL) Spacecraft Fabrication and Test (SF and T) MODIL's Phase I producibility demonstration project are presented. The current project, which involves LLNL and industrial participants, is discussed.

  4. The Aquarius Salinity Retrieval Algorithm: Early Results

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

    2012-01-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last step and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparing against in situ measurements, an extensive calibration and validation
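
    The abstract lays out the TA-to-TB correction chain in a fixed order. The sketch below expresses that ordering as composed steps on scalar antenna temperatures; every coefficient, matrix and auxiliary number in it is an illustrative placeholder of my own, not an Aquarius processing value.

```python
import numpy as np

def ta_to_tb(ta_v, ta_h, ta_3, aux):
    """Schematic TA -> TB chain following the order described in the abstract.
    All numbers below are illustrative placeholders, not Aquarius coefficients."""
    # 1. Subtract celestial contributions (solar/lunar/galactic intrusion).
    ta_v -= aux["galaxy_v"]
    ta_h -= aux["galaxy_h"]
    # 2. Antenna pattern correction (cross-polarization and spillover), as a toy 2x2 inverse.
    apc = np.linalg.inv(np.array([[0.97, 0.03], [0.03, 0.97]]))
    tb_v, tb_h = apc @ np.array([ta_v, ta_h])
    # 3. Undo ionospheric Faraday rotation using the 3rd Stokes parameter,
    #    assuming the pre-rotation 3rd Stokes component was zero.
    q = np.hypot(tb_v - tb_h, ta_3)
    tb_v, tb_h = (tb_v + tb_h + q) / 2, (tb_v + tb_h - q) / 2
    # 4. Remove the atmospheric (mostly O2) contribution predicted from NWP fields.
    tb_v -= aux["tb_atmos"]
    tb_h -= aux["tb_atmos"]
    # 5. Surface-roughness correction derived from the scatterometer backscatter.
    tb_v -= aux["roughness_v"]
    return tb_v, tb_h

aux = dict(galaxy_v=0.8, galaxy_h=0.8, tb_atmos=2.5, roughness_v=1.2)
print(ta_to_tb(96.0, 78.0, 0.4, aux))
```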

  5. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  6. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model, which enables the estimation of a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated to the desired parameter. Initially a pre-processing algorithm uses as input a single spectral signature and transforms it according to the SPPA function. A k-step combination of SPPAs uses k pre-processing algorithms serially. The result of each SPPA is used as input to the next SPPA, and so on until the k desired pre-processed signatures are reached. These signatures are then used as input to three different regression methods: the Normalized band Difference Regression (NDR), the Multiple Linear Regression (MLR) and the Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs) are used, one for each regression method, for the selection of the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models. The evaluation indicates not only the optimum SPPA combination but also the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory relative to its complexity. The MLR method showed severe drawbacks due to noise and collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance. The selected pre-processing algorithms differed for each regression method, since each regression method handles the explanatory variables in a different way.
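
    The quantity each SGA optimizes, the RMS error of a regression model built on spectra passed through a k-step SPPA chain, can be sketched as follows. The two pre-processing functions, the synthetic spectra and the SOM proxy are stand-ins of my own, and only PLSR of the three regression methods is shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# Two illustrative SPPAs (stand-ins for the paper's pre-processing library).
def first_derivative(spectra):
    return np.gradient(spectra, axis=1)          # spectral derivative

def snv(spectra):
    # standard normal variate scaling of each signature
    return (spectra - spectra.mean(1, keepdims=True)) / spectra.std(1, keepdims=True)

def apply_chain(spectra, chain):
    """Apply a k-step SPPA combination serially, as the abstract describes."""
    for sppa in chain:
        spectra = sppa(spectra)
    return spectra

def chain_fitness(chain, X, y):
    """GA fitness: RMS error of a PLSR model built on the pre-processed spectra."""
    Xp = apply_chain(X, chain)
    model = PLSRegression(n_components=5).fit(Xp, y)
    return float(np.sqrt(mean_squared_error(y, model.predict(Xp))))

rng = np.random.default_rng(0)
X = rng.random((60, 200))                        # synthetic "soil spectra"
y = X[:, 50] * 2.0 + rng.normal(0, 0.05, 60)     # synthetic SOM values
print(chain_fitness([first_derivative, snv], X, y))
```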

  7. The Effect of Pansharpening Algorithms on the Resulting Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.

    2016-06-01

    This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation was the fact that for automatically generated Digital Surface Models, an image correlation step is employed for extracting correspondences between the overlapping images. Thus their accuracy and reliability are strictly related to image quality, while pansharpening may result in lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied in order to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor the resulting orthoimagery. Although some residuals in the orthoimages were observed, their magnitude cannot adversely affect the accuracy of the final orthoimagery.

  8. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.

  9. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multi-level structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals, such as hard-over, null, or bias shift, were added to the sensor outputs as single or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  10. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, judging the quality of mammograms simulated with the new algorithm against mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Calculated parameters on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The

  11. Veggie ISS Validation Test Results and Produce Consumption

    NASA Technical Reports Server (NTRS)

    Massa, Gioia; Hummerick, Mary; Spencer, LaShelle; Smith, Trent

    2015-01-01

    The Veggie vegetable production system flew to the International Space Station (ISS) in the spring of 2014. The first set of plants, Outredgeous red romaine lettuce, was grown, harvested, frozen, and returned to Earth in October. Ground control and flight plant tissue was sub-sectioned for microbial analysis, anthocyanin antioxidant phenolic analysis, and elemental analysis. Microbial analysis was also performed on samples swabbed on orbit from plants, Veggie bellows, and plant pillow surfaces, on water samples, and on samples of roots, media, and wick material from two returned plant pillows. Microbial levels of plants were comparable to ground controls, with some differences in community composition. The range in aerobic bacterial plate counts between individual plants was much greater in the ground controls than in flight plants. No pathogens were found. Anthocyanin concentrations were the same between ground and flight plants, while antioxidant and phenolic levels were slightly higher in flight plants. Elements varied, but key target elements for astronaut nutrition were similar between ground and flight plants. Aerobic plate counts of the flight plant pillow components were significantly higher than ground controls. Surface swab samples showed low microbial counts, with most below detection limits. Flight plant microbial levels were less than bacterial guidelines set for non-thermostabilized food and near or below those for fungi. These guidelines are not for fresh produce but are the closest approximate standards. Forward work includes the development of standards for space-grown produce. A produce consumption strategy for Veggie on ISS includes pre-flight assessments of all crops to down-select candidates, wiping flight-grown plants with sanitizing food wipes, and regular Veggie hardware cleaning and microbial monitoring. Produce could then be consumed by astronauts; however, some plant material would be reserved and returned for analysis. Implementation of

  12. Interdisciplinary research produces results in understanding planetary dunes

    USGS Publications Warehouse

    Titus, Timothy N.; Hayward, Rosalyn K.; Dinwiddie, Cynthia L.

    2012-01-01

    Third International Planetary Dunes Workshop: Remote Sensing and Image Analysis of Planetary Dunes; Flagstaff, Arizona, 12–16 June 2012. This workshop, the third in a biennial series, was convened as a means of bringing together terrestrial and planetary researchers from diverse backgrounds with the goal of fostering collaborative interdisciplinary research. The small-group setting facilitated intensive discussions of many problems associated with aeolian processes on Earth, Mars, Venus, Titan, Triton, and Pluto. The workshop produced a list of key scientific questions about planetary dune fields.

  13. Massachusetts General Physicians Organization's quality incentive program produces encouraging results.

    PubMed

    Torchiana, David F; Colton, Deborah G; Rao, Sandhya K; Lenz, Sarah K; Meyer, Gregg S; Ferris, Timothy G

    2013-10-01

    Physicians are increasingly becoming salaried employees of hospitals or large physician groups. Yet few published reports have evaluated provider-driven quality incentive programs for salaried physicians. In 2006 the Massachusetts General Physicians Organization began a quality incentive program for its salaried physicians. Eligible physicians were given performance targets for three quality measures every six months. The incentive payments could be as much as 2 percent of a physician's annual income. Over thirteen six-month terms, the program used 130 different quality measures. Although quality-of-care improvements and cost reductions were difficult to calculate, anecdotal evidence points to multiple successes. For example, the program helped physicians meet many federal health information technology meaningful-use criteria and produced $15.5 million in incentive payments. The program also facilitated the adoption of an electronic health record, improved hand hygiene compliance, increased efficiency in radiology and the cancer center, and decreased emergency department use. The program demonstrated that even small incentives tied to carefully structured metrics, priority setting, and clear communication can help change salaried physicians' behavior in ways that improve the quality and safety of health care and ease the physicians' sense of administrative burden. PMID:24101064

  14. Agricultural produce grading and sorting system using color CCD and new color identification algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Dongsheng; Zou, Jizuo; Yang, Yunping; Dong, Jianhua; Zhang, Yuanxiang

    1996-10-01

    A high-speed automatic agricultural produce grading and sorting system using a color CCD and a new color identification algorithm has been developed. In a typical application, the system can sort almonds into two output grades according to their color. Almonds are rich in 18 kinds of amino acids and 13 kinds of microminerals and vitamins and can be made into almond drink. In order to ensure the drink quality, almonds must be sorted carefully before being made into a drink. Using this system, almonds can be sorted into two grades: up-to-grade almonds, and below-grade almonds or foreign materials. A color CCD inspects the almonds passing on a conveyor of rotating rollers, and a color identification algorithm grades the almonds and distinguishes foreign materials from them. Employing an elaborately designed mechanism, the below-grade almonds and foreign materials can be removed effectively from the raw almonds. This system can be easily adapted for inspecting and sorting other kinds of agricultural produce such as peanuts, beans, tomatoes, and so on.

  15. A two stage algorithm for target and suspect analysis of produced water via gas chromatography coupled with high resolution time of flight mass spectrometry.

    PubMed

    Samanipour, Saer; Langford, Katherine; Reid, Malcolm J; Thomas, Kevin V

    2016-09-01

    Gas chromatography coupled with high resolution time of flight mass spectrometry (GC-HR-TOFMS) has gained popularity for the target and suspect analysis of complex samples. However, confident detection of target/suspect analytes in complex samples, such as produced water, remains a challenging task. Here we report on the development and validation of a two stage algorithm for the confident target and suspect analysis of produced water extracts. We performed both target and suspect analysis for 48 standards, which were a mixture of 28 aliphatic hydrocarbons and 20 alkylated phenols, in 3 produced water extracts. The two stage algorithm produces a chemical standard database of spectra, in the first stage, which is used for target and suspect analysis during the second stage. The first stage is carried out through five steps via an algorithm here referred to as the unique ion extractor (UIE). During the first step the m/z values in the spectrum of a standard that do not belong to that standard are removed in order to produce a clean spectrum, and then during the last step the cleaned spectrum is calibrated. The Dot-product algorithm, during the second stage, uses the cleaned and calibrated spectra of the standards for both target and suspect analysis. We performed the target analysis of 48 standards in all 3 samples via conventional methods, in order to validate the two stage algorithm. The two stage algorithm was demonstrated to be more robust, reliable, and less sensitive to the signal-to-noise ratio (S/N) when compared to the conventional method. The Dot-product algorithm showed lower potential for producing false positives compared to the conventional methods when dealing with complex samples. We also evaluated the effect of the mass accuracy on the performance of the Dot-product algorithm. Our results indicated the crucial importance of HR-MS data and the mass accuracy for confident suspect analysis in complex samples. PMID:27524301
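
    The second-stage dot-product match between a cleaned library spectrum and an observed spectrum is essentially a normalized spectral similarity score. A minimal version over a shared m/z grid is sketched below; the synthetic spectra, the background-ion cleaning rule and the intensities are illustrative stand-ins, not the paper's actual UIE steps or thresholds.

```python
import numpy as np

def clean_spectrum(spectrum, background_mz):
    """Stage 1 (schematic): drop m/z values that do not belong to the standard,
    here modelled as ions also present in a blank/background run."""
    return {mz: i for mz, i in spectrum.items() if mz not in background_mz}

def dot_product_score(spec_a, spec_b):
    """Stage 2 (schematic): normalized dot product between two spectra on a common m/z set."""
    mz_axis = sorted(set(spec_a) | set(spec_b))
    a = np.array([spec_a.get(m, 0.0) for m in mz_axis])
    b = np.array([spec_b.get(m, 0.0) for m in mz_axis])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

library = clean_spectrum({57.07: 100, 71.09: 60, 85.10: 35, 44.00: 80}, background_mz={44.00})
observed = {57.07: 90, 71.09: 55, 85.10: 30, 91.05: 10}
print(dot_product_score(library, observed))   # close to 1 -> candidate detection
```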

  16. Surface reflectance retrieval from satellite and aircraft sensors - Results of sensors and algorithm comparisons during FIFE

    NASA Technical Reports Server (NTRS)

    Markham, B. L.; Halthore, R. N.; Goetz, S. J.

    1992-01-01

    Visible to shortwave infrared radiometric data collected by a number of remote sensing instruments on aircraft and satellite platforms were compared over common areas in the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site on August 4, 1989, to assess their radiometric consistency and the adequacy of atmospheric correction algorithms. The instruments in the study included the Landsat 5 Thematic Mapper (TM), the SPOT 1 high-resolution visible (HRV) 1 sensor, the NS001 Thematic Mapper simulator, and the modular multispectral radiometers (MMRs). Atmospheric correction routines analyzed were an algorithm developed for FIFE, LOWTRAN 7, and 5S. A comparison between corresponding bands of the SPOT 1 HRV 1 and the Landsat 5 TM sensors indicated that the two instruments were radiometrically consistent to within about 5 percent. Retrieved surface reflectance factors using the FIFE algorithm over one site under clear atmospheric conditions indicated a capability to determine near-nadir surface reflectance factors to within about 0.01 at a reflectance of 0.06 in the visible (0.4-0.7 microns) and about 0.30 in the near infrared (0.7-1.2 microns) for all but the NS001 sensor. All three atmospheric correction procedures produced absolute reflectances to within 0.005 in the visible and near infrared. In the shortwave infrared (1.2-2.5 microns) region the three algorithms differed in the retrieved surface reflectances primarily owing to differences in predicted gaseous absorption. Although uncertainties in the measured surface reflectance in the shortwave infrared precluded definitive results, the 5S code appeared to predict gaseous transmission marginally more accurately than LOWTRAN 7.
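
    For reference, a simplified single-scattering form of the surface-reflectance retrieval that such correction codes implement (ignoring adjacency effects and the spherical-albedo coupling term that LOWTRAN 7, 5S and the FIFE algorithm handle more completely) is:

```latex
\rho_s \;\approx\; \frac{\pi \left( L_{\mathrm{TOA}} - L_{\mathrm{path}} \right) d^{2}}
                        {T_g \, T_{\downarrow} \, T_{\uparrow} \, E_{0} \cos\theta_s}
```

    Here L_TOA is the at-sensor radiance, L_path the atmospheric path radiance, d the Earth-Sun distance in astronomical units, T_g the gaseous transmittance (the term in which the three codes differed most in the shortwave infrared), T_down and T_up the downward and upward scattering transmittances, E_0 the exoatmospheric solar irradiance, and theta_s the solar zenith angle.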

  17. Surface reflectance retrieval from satellite and aircraft sensors: Results of sensor and algorithm comparisons during FIFE

    NASA Astrophysics Data System (ADS)

    Markham, B. L.; Halthore, R. N.; Goetz, S. J.

    1992-11-01

    Visible to shortwave infrared radiometric data collected by a number of remote sensing instruments on aircraft and satellite platforms were compared over common areas in the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site on August 4, 1989, to assess their radiometric consistency and the adequacy of atmospheric correction algorithms. The instruments in the study included the Landsat 5 thematic mapper (TM), the SPOT 1 high-resolution visible (HRV) 1 sensor, the NS001 thematic mapper simulator, and the modular multispectral radiometers (MMRs). Atmospheric correction routines analyzed were an algorithm developed for FIFE, LOWTRAN 7, and 5S. A comparison between corresponding bands of the SPOT 1 HRV 1 and the Landsat 5 TM sensors indicated that the two instruments were radiometrically consistent to within about 5%. Retrieved surface reflectance factors using the FIFE algorithm over one site under clear atmospheric conditions indicated a capability to determine near-nadir surface reflectance factors to within about 0.01 at a reflectance of 0.06 in the visible (0.4-0.7 μm) and about 0.30 in the near infrared (0.7-1.2 μm) for all but the NS001 sensor. All three atmospheric correction procedures produced absolute reflectances to within 0.005 in the visible and near infrared. In the shortwave infrared (1.2-2.5 μm) region the three algorithms differed in the retrieved surface reflectances primarily owing to differences in predicted gaseous absorption. Although uncertainties in the measured surface reflectance in the shortwave infrared precluded definitive results, the 5S code appeared to predict gaseous transmission marginally more accurately than LOWTRAN 7.

  18. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement for global observational data of ET can be satisfied neither by our sparse global in situ networks nor by the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, despite the efforts of different initiatives such as GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman-Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal

  19. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives library and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an "Intelligent Robot" to "see" for path planning and obstacle avoidance.
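
    The histogram-reduction idea, collapsing a disparity image column by column into a top-down map indexed by (image column, disparity), can be sketched in a few lines of NumPy. The array sizes, vote threshold and synthetic disparity values below are illustrative, and the real algorithm adds the floor-calibration step described above.

```python
import numpy as np

def disparity_to_obstacle_map(disparity, max_disp=256, min_votes=25):
    """Collapse an HxW disparity image into a (max_disp x W) 'floor plan':
    cell (d, u) counts pixels in image column u observed at disparity d (closer = larger d)."""
    h, w = disparity.shape
    omap = np.zeros((max_disp, w), dtype=np.int32)
    for u in range(w):
        counts = np.bincount(disparity[:, u].astype(np.int64), minlength=max_disp)
        omap[:, u] = counts[:max_disp]
    return omap >= min_votes                         # boolean obstacle map

rng = np.random.default_rng(1)
disp = rng.integers(0, 8, size=(480, 640))           # synthetic far-background disparities
disp[200:300, 300:340] = 120                         # a synthetic nearby obstacle
obstacles = disparity_to_obstacle_map(disp)
print(obstacles[120, 300:340].all())                 # the obstacle appears at disparity 120
```

    In this toy map the low-disparity background rows also light up; that is exactly the floor-ground effect the authors remove with their one-time semi-automatic calibration.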

  1. Experimental Results in the Comparison of Search Algorithms Used with Room Temperature Detectors

    SciTech Connect

    Guss, P.; Yuan, D.; Cutler, M.; Beller, D.

    2010-11-01

    Analysis of time sequence data was run for several higher resolution scintillation detectors using a variety of search algorithms, and results were obtained in predicting the relative performance for these detectors, which included a slightly superior performance by CeBr₃. Analysis of several search algorithms shows that inclusion of the RSPRT methodology can improve sensitivity.

  2. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, to compute simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate for detecting fecal contamination on high-speed apple processing lines.

  3. MUlti-Dimensional Spline-Based Estimator (MUSE) for motion estimation: algorithm development and initial results.

    PubMed

    Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F

    2008-12-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10⁻⁴ samples in range and 2.2 × 10⁻³ samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10⁻³ samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE is
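
    A one-dimensional analogue of the spline-based idea, fit a cubic spline to the reference signal and minimize the sum of squared differences against the delayed signal over a continuous shift, is sketched below on synthetic data. MUSE itself generalizes this to multi-dimensional displacement and strain estimation, so this gives only the flavor of the approach, not the published formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def subsample_delay(reference, delayed, search=(-2.0, 2.0)):
    """Estimate a sub-sample delay (in samples) between two 1D signals
    using a continuous cubic-spline model of the reference."""
    n = np.arange(len(reference))
    spline = CubicSpline(n, reference)
    valid = n[5:-5]                                   # stay inside the support when shifting
    cost = lambda tau: np.sum((spline(valid - tau) - delayed[valid]) ** 2)
    return minimize_scalar(cost, bounds=search, method="bounded").x

t = np.arange(128)
ref = np.sin(2 * np.pi * t / 16.0)
true_tau = 0.37                                       # synthetic sub-sample shift
shifted = np.sin(2 * np.pi * (t - true_tau) / 16.0)
print(subsample_delay(ref, shifted))                  # ~0.37
```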

  4. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the Pennsylvania surface potential (PSP) model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
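
    As a schematic of the extraction loop itself, the sketch below fits a toy square-law drain-current model (deliberately not the PSP surface-potential model) to synthetic "measured" data with a bare-bones PSO. The inertia and acceleration constants are generic textbook values, and the parameter bounds and data are invented for illustration.

```python
import numpy as np

def drain_current(vgs, params):
    """Toy square-law model: Id = k*(Vgs - Vth)^2 above threshold (not the PSP model)."""
    k, vth = params
    return np.where(vgs > vth, k * (vgs - vth) ** 2, 0.0)

def pso_extract(vgs, id_meas, n_particles=30, n_iter=200, rng=np.random.default_rng(0)):
    """Minimize the RMS error between measured and modelled Id with a minimal PSO."""
    lo, hi = np.array([1e-5, 0.1]), np.array([1e-2, 1.5])       # bounds on (k, Vth)
    x = rng.uniform(lo, hi, size=(n_particles, 2))
    v = np.zeros_like(x)
    err = lambda p: np.sqrt(np.mean((drain_current(vgs, p) - id_meas) ** 2))
    pbest, pbest_err = x.copy(), np.array([err(p) for p in x])
    gbest = pbest[pbest_err.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        e = np.array([err(p) for p in x])
        better = e < pbest_err
        pbest[better], pbest_err[better] = x[better], e[better]
        gbest = pbest[pbest_err.argmin()].copy()
    return gbest

vgs = np.linspace(0.0, 1.8, 50)
measured = drain_current(vgs, (2e-3, 0.45)) + np.random.default_rng(1).normal(0, 1e-5, 50)
print(pso_extract(vgs, measured))        # ~[2e-3, 0.45]
```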

  5. Quality analysis of the solution produced by dissection algorithms applied to the traveling salesman problem

    SciTech Connect

    Cesari, G.

    1994-12-31

    The aim of this paper is to analyze experimentally the quality of the solution obtained with dissection algorithms applied to the geometric Traveling Salesman Problem. Starting from Karp's results, we apply a divide-and-conquer strategy, first dividing the plane into subregions where we calculate optimal subtours and then merging these subtours to obtain the final tour. The analysis is restricted to problem instances where points are uniformly distributed in the unit square. For relatively small sets of cities we analyze the quality of the solution by calculating the length of the optimal tour and by comparing it with our approximate solution. When the problem instance is too large we perform an asymptotic analysis, estimating the length of the optimal tour. We apply the same dissection strategy also to classical heuristics by calculating approximate subtours and by comparing the results with the average quality of the heuristic. Our main result is the estimate of the rate of convergence of the approximate solution to the optimal solution as a function of the number of dissection steps, of the criterion used for the plane division and of the quality of the subtours. We have implemented our programs on MUSIC (MUlti Signal processor system with Intelligent Communication), a Single-Program-Multiple-Data parallel computer with distributed memory developed at the ETH Zurich.
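
    A toy version of the dissection idea, split the unit square into a grid of cells, build a cheap subtour inside each cell, then stitch the cells together in serpentine order, is sketched below. It illustrates only the divide-and-conquer structure; Karp's actual partitioning and the optimal-subtour computation used in the paper are replaced here by a fixed grid and a nearest-neighbour heuristic.

```python
import numpy as np

def nearest_neighbour_tour(points):
    """Cheap subtour inside one cell (a stand-in for an exact subtour solver)."""
    points, tour = list(points), []
    current = points.pop(0)
    tour.append(current)
    while points:
        nxt = min(points, key=lambda p: np.hypot(p[0] - current[0], p[1] - current[1]))
        points.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

def dissection_tour(points, grid=4):
    """Divide the unit square into grid x grid cells, tour each, merge serpentine-wise."""
    cells = [[[] for _ in range(grid)] for _ in range(grid)]
    for x, y in points:
        cells[min(int(y * grid), grid - 1)][min(int(x * grid), grid - 1)].append((x, y))
    tour = []
    for row in range(grid):
        cols = range(grid) if row % 2 == 0 else reversed(range(grid))
        for col in cols:
            if cells[row][col]:
                tour.extend(nearest_neighbour_tour(cells[row][col]))
    return tour

def tour_length(tour):
    return sum(np.hypot(a[0] - b[0], a[1] - b[1]) for a, b in zip(tour, tour[1:] + tour[:1]))

pts = [tuple(p) for p in np.random.default_rng(2).random((500, 2))]
print(tour_length(dissection_tour(pts)))
```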

  6. Corroboration of mechanoregulatory algorithms for tissue differentiation during fracture healing: Comparison with in vivo results.

    PubMed

    Isaksson, Hanna; van Donkelaar, Corrinus C; Huiskes, Rik; Ito, Keita

    2006-05-01

    Several mechanoregulation algorithms proposed to control tissue differentiation during bone healing have been shown to accurately predict temporal and spatial tissue distributions during normal fracture healing. As these algorithms differ in nature and in their biophysical parameters, the question arises as to which best reflects the actual mechanobiological processes. The aim of this study was to resolve this issue by corroborating the mechanoregulatory algorithms with more extensive in vivo bone healing data from animal experiments. A poroelastic three-dimensional finite element model of an ovine tibia with a 2.4 mm gap and external callus was used to simulate the course of tissue differentiation during fracture healing in an adaptive model. The mechanical conditions applied were similar to those used experimentally, with axial compression or torsional rotation as two distinct cases. Histological data at 4 and 8 weeks, and weekly radiographs, were used for comparison. By applying new mechanical conditions, torsional rotation, the predictions of the algorithms were successfully distinguished. In torsion, the algorithms regulated by strain and hydrostatic pressure failed to predict healing and bone formation as seen in the experimental data. The algorithm regulated by deviatoric strain and fluid velocity predicted bridging and healing in torsion, as observed in vivo. The predictions of the algorithm regulated by deviatoric strain alone did not agree with the in vivo data. None of the algorithms predicted patterns of healing entirely similar to those observed experimentally for both loading modes. However, the patterns predicted by the algorithm based on deviatoric strain and fluid velocity were closest to the experimental results. It was the only algorithm able to predict healing with torsional loading as seen in vivo.
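
    The stimulus behind the best-performing algorithm combines deviatoric strain with interstitial fluid velocity; in the widely used Prendergast-type formulation (quoted here from the general literature, not from this abstract) it is written as:

```latex
S \;=\; \frac{\gamma}{a} \;+\; \frac{v}{b}
```

    where gamma is the octahedral (deviatoric) shear strain, v the interstitial fluid velocity, and a and b empirical scaling constants; low values of S favour bone formation, intermediate values cartilage, and high values fibrous tissue.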

  7. Algorithmic and complexity results for decompositions of biological networks into monotone subsystems.

    PubMed

    DasGupta, Bhaskar; Enciso, German Andres; Sontag, Eduardo; Zhang, Yi

    2007-01-01

    A useful approach to the mathematical analysis of large-scale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated to finding decompositions which are optimal in an appropriate sense. In graph-theoretic language, the problems can be recast in terms of maximal sign-consistent subgraphs. The theoretical results include polynomial-time approximation algorithms as well as constant-ratio inapproximability results. One of the algorithms, which has a worst-case guarantee of 87.9% from optimality, is based on the semidefinite programming relaxation approach of Goemans-Williamson [Goemans, M., Williamson, D., 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42 (6), 1115-1145]. The algorithm was implemented and tested on a Drosophila segmentation network and an Epidermal Growth Factor Receptor pathway model, and it was found to perform close to optimally.

  8. Shuttle Entry Air Data System (SEADS) - Optimization of preflight algorithms based on flight results

    NASA Technical Reports Server (NTRS)

    Wolf, H.; Henry, M. W.; Siemers, Paul M., III

    1988-01-01

    The SEADS pressure model algorithm results were tested against other sources of air data, in particular the Shuttle Best Estimated Trajectory (BET). The algorithm basis was also tested through a comparison of the flight-measured pressure distribution versus the wind tunnel database. It is concluded that the successful flight of SEADS and the subsequent analysis of the data show good agreement between BET and SEADS air data.

  9. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results.

    PubMed

    Otero, Fernando E B; Freitas, Alex A

    2016-01-01

    Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-MinerPB algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-MinerPB algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretation of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure to evaluate the interpretability of the discovered rules to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-MinerPB algorithm producing ordered rules are also presented.

  10. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms

    PubMed Central

    2014-01-01

    Background Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, each possessed its own merit and claimed to outperform the others. However, the claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms. Based on the proposed performance indices, we conducted a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation of the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to

  11. An Algorithm Approach to Determining Smoking Cessation Treatment for Persons Living with HIV/AIDS: Results of a Pilot Trial

    PubMed Central

    Cropsey, Karen L.; Jardin, Bianca; Burkholder, Greer; Clark, C. Brendan; Raper, James L.; Saag, Michael

    2015-01-01

    Background Smoking now represents one of the biggest modifiable risk factors for disease and mortality in PLHIV. To produce significant changes in smoking rates among this population, treatments will need to be both acceptable to the larger segment of PLHIV smokers and feasible to implement in busy HIV clinics. The purpose of this study was to evaluate the feasibility and effects of a novel proactive algorithm-based intervention in an HIV/AIDS clinic. Methods PLHIV smokers (N = 100) were proactively identified via their electronic medical records and were subsequently randomized at baseline to receive a 12-week pharmacotherapy-based algorithm treatment or treatment as usual. Participants were tracked in person for 12 weeks. Participants provided information on smoking behaviors and associated constructs of cessation at each follow-up session. Results The findings revealed that many smokers reported utilizing the prescribed medications when provided with a supply of cessation medication as determined by the algorithm. Compared to smokers receiving treatment as usual, PLHIV smokers prescribed these medications reported more quit attempts and greater reduction in smoking. Proxy measures of cessation readiness (e.g., motivation, self-efficacy) also favored participants receiving the algorithm treatment. Conclusions This algorithm-derived treatment produced positive changes across a number of important clinical markers associated with smoking cessation. Given these promising findings coupled with the brief nature of this treatment, the overall pattern of results suggests strong potential for dissemination into clinical settings as well as significant promise for further advancing clinical health outcomes in this population. PMID:26181705

  12. Sensitivity of Structural Results to Initial Configurations and Quench Algorithms of Lead Silicate Glass

    SciTech Connect

    Hemesath, Eric R.; Corrales, Louis R.

    2005-06-15

    The sensitivity of the resulting structures to starting configurations and quench algorithms was characterized using molecular dynamics (MD) simulations. The classical potential model introduced by Damodaran, Rao, and Rao (DRR) [Phys. Chem. Glasses 31, 212 (1990)] for lead silicate glass was used. Glasses were prepared using five distinct initial configurations and four glass-forming algorithms. In previous MD work on bulk lead silicate glasses, the ability of this potential model to provide good structural results was established by comparison with experimental results. Here the sensitivity of the results to the simulation methodology and the persistence of clustering, with attention to details of molecular structure, are determined.

  13. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    NASA Astrophysics Data System (ADS)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database and percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training the classifiers and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster format data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.
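
    The per-pixel classification core of such a workflow, train a decision tree on composite-image bands plus slope using an existing land cover map as labels and then predict a full scene, can be sketched with scikit-learn. The band count, array shapes and random data below are placeholders for the Landsat composites, DEM slope and National Land Cover Database labels the system actually assembles.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
h, w, n_bands = 100, 100, 6                      # toy scene instead of a Landsat composite
composite = rng.random((h, w, n_bands))          # cloud-free seasonal composite (placeholder)
slope = rng.random((h, w, 1))                    # percent slope from a DEM (placeholder)
nlcd_labels = rng.integers(0, 4, size=(h, w))    # training labels from an existing map (placeholder)

# Stack predictors and flatten to (pixels, features), as a per-pixel classifier expects.
features = np.concatenate([composite, slope], axis=2).reshape(-1, n_bands + 1)
labels = nlcd_labels.reshape(-1)

# Train on a random sample of pixels (the real system stratifies this sample), then map the scene.
idx = rng.choice(features.shape[0], size=2000, replace=False)
clf = DecisionTreeClassifier(max_depth=10, random_state=0).fit(features[idx], labels[idx])
land_cover = clf.predict(features).reshape(h, w)
print(np.unique(land_cover, return_counts=True))
```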

  14. Using a hybrid Monte Carlo/Genetic Algorithm Slip Estimator to produce high resolution models of paleoearthquakes from geodetic data

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nalbant, S. S.; Simao, N.; Murphy, S.; NicBhloscaidh, M.; Steacy, S.

    2013-12-01

    Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on locked sections of an active fault is stored as potential slip. Where this potential slip remains unreleased during earthquakes, a slip deficit can be said to have accrued. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip and indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated instrumentally. To develop the idea of long-term slip-deficit modelling it is necessary to constrain the size and distribution of slip for pre-instrumental events dating back hundreds of years, covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of producing high resolution reconstructions of slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them, which allows them to act as long-term geodetic recorders. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Instead of producing one definite model satisfying the observed coral displacements, a Monte Carlo Slip Estimator based on a Genetic Algorithm (MCSE-GA) accelerating the rate of convergence is used to identify a suite of models consistent with the data. Successive iterations of the MCSE-GA sample different displacements at each coral location, from within the spread of associated uncertainties, producing a catalog of models from the full range of possibilities. The suite of best slip distributions are weighted according to their fitness and stacked to

  15. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether the change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors. PMID:26609383
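
    One simple way to turn the chest-to-camera distance trace such a depth sensor produces into a rate estimate is to take the dominant frequency of the detrended signal. The spectral approach, the frame rate and the synthetic trace below are my own illustrative choices, not the authors' exact distance-comparison logic.

```python
import numpy as np

def respiratory_rate_bpm(chest_distance, fs):
    """Estimate breaths per minute from a chest-to-camera distance time series (metres)."""
    x = chest_distance - np.mean(chest_distance)          # remove the static chest distance
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)                # plausible breathing band, 6-42 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 30.0                                                 # depth frames per second (assumed)
t = np.arange(0, 60, 1.0 / fs)
trace = 0.8 + 0.004 * np.sin(2 * np.pi * 0.25 * t)        # 15 breaths/min, 4 mm chest motion
trace += np.random.default_rng(4).normal(0, 0.001, t.size)
print(respiratory_rate_bpm(trace, fs))                    # ~15
```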

  16. Correlation between standard plate count and somatic cell count milk quality results for Wisconsin dairy producers.

    PubMed

    Borneman, Darand L; Ingham, Steve

    2014-05-01

    The objective of this study was to determine if a correlation exists between standard plate count (SPC) and somatic cell count (SCC) monthly reported results for Wisconsin dairy producers. Such a correlation may indicate that Wisconsin producers effectively controlling sanitation and milk temperature (reflected in low SPC) also have implemented good herd health management practices (reflected in low SCC). The SPC and SCC results for all grade A and B dairy producers who submitted results to the Wisconsin Department of Agriculture, Trade, and Consumer Protection in each month of 2012 were analyzed. Grade A producer SPC results were less dispersed than grade B producer SPC results. Regression analysis showed a highly significant correlation between SPC and SCC, but the R² value was very small (0.02-0.03), suggesting that many other factors, besides SCC, influence SPC. Average SCC (across 12 mo) for grade A and B producers decreased with an increase in the number of monthly SPC results (out of 12) that were ≤25,000 cfu/mL. A chi-squared test of independence showed that the proportion of monthly SCC results >250,000 cells/mL varied significantly depending on whether the corresponding SPC result was ≤25,000 or >25,000 cfu/mL. This significant difference occurred in all months of 2012 for grade A and B producers. The results suggest that a generally consistent level of skill exists across dairy production practices affecting SPC and SCC.
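
    The two statistical checks described, an ordinary regression between the counts and a chi-squared test of independence on the 25,000 cfu/mL and 250,000 cells/mL cut-offs, are easy to reproduce on any paired monthly data set. The sketch below runs them on synthetic counts generated for illustration, not on the Wisconsin data, and the log-log scale of the regression is my own choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
scc = rng.lognormal(mean=12.0, sigma=0.6, size=5000)                              # synthetic cells/mL
spc = rng.lognormal(mean=9.5, sigma=0.9, size=5000) * (scc / scc.mean()) ** 0.1   # weakly linked cfu/mL

# Regression of log SPC on log SCC: slope, p-value and R^2.
res = stats.linregress(np.log10(scc), np.log10(spc))
print(f"slope={res.slope:.3f}  p={res.pvalue:.2e}  R^2={res.rvalue**2:.3f}")

# Chi-squared test of independence on the regulatory cut-offs.
table = np.array([
    [np.sum((spc <= 25_000) & (scc <= 250_000)), np.sum((spc <= 25_000) & (scc > 250_000))],
    [np.sum((spc > 25_000) & (scc <= 250_000)),  np.sum((spc > 25_000) & (scc > 250_000))],
])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.1f}  p={p:.3g}")
```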

  17. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluation. PMID:24951685

  18. A class of collinear scaling algorithms for bound-constrained optimization: Derivation and computational results

    NASA Astrophysics Data System (ADS)

    Ariyawansa, K. A.; Tabor, Wayne L.

    2009-08-01

    A family of algorithms for the approximate solution of the bound-constrained minimization problem is described. These algorithms employ the standard barrier method, with the inner iteration based on trust region methods. Local models are conic functions rather than the usual quadratic functions, and are required to match first and second derivatives of the barrier function at the current iterate. The various members of the family are distinguished by the choice of a vector-valued parameter, which is the zero vector in the degenerate case that quadratic local models are used. Computational results are used to compare the efficiency of various members of the family on a selection of test functions.
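
    The abstract does not write out the conic local model, so, as context only and under the assumption that the paper follows the usual collinear-scaling convention (a Davidon-style conic model), a standard form is

      \[
        c_k(s) = f(x_k) + \frac{\nabla f(x_k)^{\mathsf T} s}{1 - h_k^{\mathsf T} s}
                 + \frac{1}{2}\,\frac{s^{\mathsf T} B_k s}{\bigl(1 - h_k^{\mathsf T} s\bigr)^{2}},
      \]

    which reduces to the familiar quadratic trust-region model when the vector-valued parameter h_k mentioned above is the zero vector. In the barrier setting, f would be replaced by a logarithmic barrier function such as

      \[
        B_\mu(x) = f(x) - \mu \sum_i \bigl[\ln(x_i - \ell_i) + \ln(u_i - x_i)\bigr],
      \]

    with the barrier parameter \mu > 0 driven toward zero over the outer iterations; the exact forms used in the paper may differ.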

  19. Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers

    SciTech Connect

    Seifert, Carolyn E.; He, Zhong

    2005-10-01

    For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.
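
    The sequencing algorithms themselves are not reproduced here, but a common ingredient is the Compton kinematic relation between deposited energies and the scattering angle. The sketch below evaluates that relation for a two-interaction event under the simple assumptions (mine, not the paper's) that the photon is fully absorbed and deposits its energy in the order given.

      import math

      MEC2_KEV = 510.999   # electron rest energy

      def compton_angle_deg(e1_kev, e2_kev):
          """Scattering angle at the first site, assuming deposits occur in the order e1 then e2 and full absorption."""
          e_in = e1_kev + e2_kev          # incident energy under the full-absorption assumption
          e_scat = e2_kev                 # energy carried from the first to the second interaction
          cos_theta = 1.0 - MEC2_KEV * (1.0 / e_scat - 1.0 / e_in)
          if not -1.0 <= cos_theta <= 1.0:
              return None                 # this ordering is kinematically forbidden
          return math.degrees(math.acos(cos_theta))

      # both orderings of a 662 keV event split into 200 keV and 462 keV deposits
      print(compton_angle_deg(200.0, 462.0), compton_angle_deg(462.0, 200.0))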

  20. A few results for using genetic algorithms in the design of electrical machines

    SciTech Connect

    Wurtz, F.; Richomme, M.; Bigeon, J.; Sabonnadiere, J.C.

    1997-03-01

    Genetic algorithms (GAs) seem attractive for the design of electrical machines, but the main difficulty lies in finding a configuration that makes them efficient. This paper presents a criterion and a methodology that the authors have devised to find efficient configurations. The first configuration they obtained is then detailed. The results based on this configuration are presented with an example of a design problem.

  1. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses are in turn analyzed as the spectral data become available and, as a new feature, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  2. Performance analysis results of a battery fuel gauge algorithm at multiple temperatures

    NASA Astrophysics Data System (ADS)

    Balasingam, B.; Avvari, G. V.; Pattipati, K. R.; Bar-Shalom, Y.

    2015-01-01

    Evaluating a battery fuel gauge (BFG) algorithm is a challenging problem due to the fact that there are no reliable mathematical models to represent the complex features of a Li-ion battery, such as hysteresis and relaxation effects, temperature effects on parameters, aging, power fade (PF), and capacity fade (CF) with respect to the chemical composition of the battery. The existing literature is largely focused on developing different BFG strategies and BFG validation has received little attention. In this paper, using hardware in the loop (HIL) data collected from three Li-ion batteries at nine different temperatures ranging from -20 °C to 40 °C, we demonstrate detailed validation results of a BFG algorithm. The validation is based on three different BFG metrics; we provide implementation details of these metrics by proposing three different validation load profiles that satisfy varying levels of user requirements.

  3. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
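
    As a much simpler stand-in for the stochastic-expansion plus interval-optimization machinery described above, the sketch below brackets a mean response by scanning an epistemic interval variable in an outer loop and running plain Monte Carlo over the aleatory variable in an inner loop. The response function, the interval bounds, and the sample sizes are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      def response(x_aleatory, e_epistemic):
          # invented model: aleatory x ~ N(0, 1), epistemic parameter e known only to lie in [0.5, 2.0]
          return np.exp(-e_epistemic * x_aleatory ** 2)

      def aleatory_mean(e, n_samples=20000):
          x = rng.standard_normal(n_samples)       # inner loop: statistics for a fixed epistemic value
          return response(x, e).mean()

      # outer loop: a coarse scan over the epistemic interval stands in for interval optimization
      e_grid = np.linspace(0.5, 2.0, 31)
      means = np.array([aleatory_mean(e) for e in e_grid])
      print(f"interval bounds on the mean response: [{means.min():.3f}, {means.max():.3f}]")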

  4. Algorithms for personalized therapy of type 2 diabetes: results of a web-based international survey

    PubMed Central

    Gallo, Marco; Mannucci, Edoardo; De Cosmo, Salvatore; Gentile, Sandro; Candido, Riccardo; De Micheli, Alberto; Di Benedetto, Antonino; Esposito, Katherine; Genovese, Stefano; Medea, Gerardo; Ceriello, Antonio

    2015-01-01

    Objective In recent years increasing interest in the issue of treatment personalization for type 2 diabetes (T2DM) has emerged. This international web-based survey aimed to evaluate opinions of physicians about tailored therapeutic algorithms developed by the Italian Association of Diabetologists (AMD) and available online, and to get suggestions for future developments. Another aim of this initiative was to assess whether the online advertising and the survey would have increased the global visibility of the AMD algorithms. Research design and methods The web-based survey, which comprised five questions, has been available from the homepage of the web-version of the journal Diabetes Care throughout the month of December 2013, and on the AMD website between December 2013 and September 2014. Participation was totally free and responders were anonymous. Results Overall, 452 physicians (M=58.4%) participated in the survey. Diabetologists accounted for 76.8% of responders. The results of the survey show wide agreement (>90%) by participants on the utility of the algorithms proposed, even if they do not cover all possible needs of patients with T2DM for a personalized therapeutic approach. In the online survey period and in the months after its conclusion, a relevant and durable increase in the number of unique users who visited the websites was registered, compared to the period preceding the survey. Conclusions Patients with T2DM are heterogeneous, and there is interest toward accessible and easy to use personalized therapeutic algorithms. Responders' opinions probably reflect the peculiar organization of diabetes care in each country. PMID:26301097

  5. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point in time during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of SM Orion Main Engine (OME), Auxiliary (Aux) Jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  6. A new algorithm and results of ionospheric delay correction for satellite-based augmentation system

    NASA Astrophysics Data System (ADS)

    Huang, Z.; Yuan, H.

    Ionospheric delay, caused by radio signals traveling through the ionosphere, is the largest source of error for single-frequency users of the Global Positioning System (GPS). In order to improve users' position accuracy, satellite-based augmentation systems have been developed since the nineties to provide accurate calibration. A well-known one is the Wide Area Augmentation System (WAAS), which is aimed at efficient navigation over the conterminous United States and has been operating successfully so far. The main idea of the ionospheric correction algorithm for WAAS is to establish an ionospheric grid model, i.e., the ionosphere is discretized into a set of regularly spaced intervals in latitude and longitude at an altitude of 350 km above the Earth's surface. Users correct their pseudoranges by interpolating estimates of vertical ionospheric delay modeled at the ionospheric grid points. The Chinese crustal deformation monitoring network has been established since the eighties and is now in good operation with 25 permanent GPS stations, which makes it feasible to construct a similar satellite-based augmentation system (SBAS) in China. For the west region of China, however, the distribution of stations is relatively sparse and does not ensure sufficient data. If we follow the ionospheric grid correction algorithm, some grid points cannot obtain their estimates and lose availability. Consequently, the ionospheric correction for users situated in that region cannot be estimated, which constitutes a fatal threat to navigation users. In this paper we present a new algorithm that
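
    The new algorithm itself is cut off above, so the sketch below shows only the baseline grid step the abstract describes: bilinear interpolation of vertical delays at the four surrounding ionospheric grid points, converted to a slant delay with the standard thin-shell obliquity factor. The 5-degree grid spacing and the delay values are invented for illustration.

      import math

      RE_KM, SHELL_KM = 6371.0, 350.0

      def obliquity(elev_deg):
          """Thin-shell slant factor converting vertical ionospheric delay to slant delay."""
          c = RE_KM * math.cos(math.radians(elev_deg)) / (RE_KM + SHELL_KM)
          return 1.0 / math.sqrt(1.0 - c * c)

      def slant_delay(lat_pp, lon_pp, grid, elev_deg):
          """Bilinearly interpolate vertical delays (meters) at the four 5-degree grid points around the pierce point."""
          lat0, lon0 = 5 * math.floor(lat_pp / 5), 5 * math.floor(lon_pp / 5)
          x, y = (lon_pp - lon0) / 5.0, (lat_pp - lat0) / 5.0
          v = ((1 - x) * (1 - y) * grid[(lat0, lon0)] + x * (1 - y) * grid[(lat0, lon0 + 5)]
               + (1 - x) * y * grid[(lat0 + 5, lon0)] + x * y * grid[(lat0 + 5, lon0 + 5)])
          return obliquity(elev_deg) * v

      grid = {(35, 100): 2.1, (35, 105): 2.4, (40, 100): 1.9, (40, 105): 2.2}   # vertical delays, meters (invented)
      print(slant_delay(37.3, 102.6, grid, elev_deg=35.0))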

  7. Active Learning in Large Classes: Can Small Interventions Produce Greater Results than Are Statistically Predictable?

    ERIC Educational Resources Information Center

    Adrian, Lynne M.

    2010-01-01

    Six online postings and six one-minute papers were added to an introductory first-year class, forming 5 percent of the final grade, but they represented a significant intervention in class functioning and in the amount of active learning. Active learning produced results in student performance beyond the percentage of the final grade it constituted. (Contains 1…

  8. Comparative Results of AIRS AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. The objective of this research is to prepare for generation of long-term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  9. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS CDRs using a retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  10. Potential for false positive HIV test results with the serial rapid HIV testing algorithm

    PubMed Central

    2012-01-01

    Background Rapid HIV tests provide same-day results and are widely used in HIV testing programs in areas with limited personnel and laboratory infrastructure. The Uganda Ministry of Health currently recommends the serial rapid testing algorithm with Determine, STAT-PAK, and Uni-Gold for diagnosis of HIV infection. Using this algorithm, individuals who test positive on Determine, negative to STAT-PAK and positive to Uni-Gold are reported as HIV positive. We conducted further testing on this subgroup of samples using qualitative DNA PCR to assess the potential for false positive tests in this situation. Results Of the 3388 individuals who were tested, 984 were HIV positive on two consecutive tests, and 29 were considered positive by a tiebreaker (positive on Determine, negative on STAT-PAK, and positive on Uni-Gold). However, when the 29 samples were further tested using qualitative DNA PCR, 14 (48.2%) were HIV negative. Conclusion Although this study was not primarily designed to assess the validity of rapid HIV tests and thus only a subset of the samples were retested, the findings show a potential for false positive HIV results in the subset of individuals who test positive when a tiebreaker test is used in serial testing. These findings highlight a need for confirmatory testing for this category of individuals. PMID:22429706
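
    The serial decision logic described above is simple enough to transcribe directly; the function below mirrors that description (Determine screens, STAT-PAK confirms, Uni-Gold breaks the tie) and is only a reading of the abstract, not official testing-program code.

      def serial_rapid_hiv_result(determine_pos, statpak_pos, unigold_pos=None):
          """Serial algorithm as described above for the Determine / STAT-PAK / Uni-Gold sequence."""
          if not determine_pos:
              return "HIV negative"
          if statpak_pos:
              return "HIV positive"                       # positive on two consecutive tests
          if unigold_pos is None:
              return "tiebreaker required"
          return "HIV positive (tiebreaker)" if unigold_pos else "HIV negative"

      # the discordant subgroup retested by DNA PCR in this study: Determine+, STAT-PAK-, Uni-Gold+
      print(serial_rapid_hiv_result(True, False, True))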

  11. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 run now at JPL. Our short term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still to be finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  12. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 run now at JPL. Our short term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still to be finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  13. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained by processing this multilook data for the high-resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
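
    The ROC curves referred to above are straightforward to compute from detector confidence scores and ground truth; the sketch below builds an empirical ROC from synthetic scores and labels, which stand in for (and are unrelated to) the ED/PLFA outputs and the SAR ground truth.

      import numpy as np

      def empirical_roc(scores, labels):
          """Probability of detection vs. false-alarm rate as the decision threshold sweeps the scores."""
          order = np.argsort(-scores)
          labels = np.asarray(labels)[order]
          tp = np.cumsum(labels)                  # detections accumulated as the threshold is lowered
          fp = np.cumsum(1 - labels)
          return fp / max(fp[-1], 1), tp / max(tp[-1], 1)

      rng = np.random.default_rng(2)
      labels = rng.integers(0, 2, 1000)                          # synthetic ground truth (target / clutter)
      scores = labels * 1.0 + rng.normal(0, 0.8, 1000)           # synthetic detector confidences
      pfa, pd = empirical_roc(scores, labels)
      print(f"Pd at a 10% false-alarm rate: {pd[np.searchsorted(pfa, 0.1)]:.2f}")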

  14. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
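
    The flight algorithms are detailed in the paper itself; as a generic illustration of the underlying idea, estimating flow angles and dynamic pressure by fitting a modeled surface-pressure distribution to measured port pressures, the sketch below fits a modified-Newtonian pressure model with a nonlinear least-squares solver. The port geometry, the pressure model, the freestream pressure, and the noise level are all assumptions made for the example.

      import numpy as np
      from scipy.optimize import least_squares

      # invented forebody port layout: unit outward normals of seven pressure ports in body axes
      normals = np.array([[1, 0, 0], [.95, .2, 0], [.95, -.2, 0], [.95, 0, .2],
                          [.95, 0, -.2], [.9, .3, .3], [.9, -.3, -.3]], dtype=float)
      normals /= np.linalg.norm(normals, axis=1, keepdims=True)

      def model_pressures(params, p_inf=500.0):
          alpha, beta, qbar = params        # angle of attack, sideslip (radians), dynamic pressure (Pa)
          v_hat = np.array([np.cos(alpha) * np.cos(beta), np.sin(beta), np.sin(alpha) * np.cos(beta)])
          cp = 2.0 * np.clip(normals @ v_hat, 0.0, None) ** 2    # modified-Newtonian pressure coefficient
          return p_inf + qbar * cp

      truth = np.array([np.radians(-11.0), np.radians(1.5), 4000.0])
      measured = model_pressures(truth) + np.random.default_rng(3).normal(0, 5.0, 7)

      fit = least_squares(lambda p: model_pressures(p) - measured, x0=[0.0, 0.0, 1000.0])
      print(np.degrees(fit.x[:2]), fit.x[2])    # recovered angle of attack, sideslip (deg) and dynamic pressure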

  15. Preliminary results on a new method for producing yttrium phosphorous microspheres.

    PubMed

    Ghahramani, M R; Garibov, A A; Agayev, T N

    2014-09-01

    This paper reports on a new method to embed phosphorus particles into the matrix of yttrium aluminum silicate microspheres. Yttrium phosphorus glass microspheres about 20 µm in size were obtained when an aqueous solution of YCl3 and AlCl3 was added to tetraethyl orthosilicate (TEOS) (phosphoric acid was used to catalyze the hydrolysis and condensation of TEOS) and the mixture was pumped into silicone oil under constant stirring. The particles produced by this method are regular and nearly spherical in shape. Paper chromatography was used to determine the radiochemical impurity of the radioactive microspheres. Radionuclide purity was determined using a gamma spectrometry system and an ultra-low-level liquid scintillation spectrometer. The P+ ion implantation stage was eliminated by embedding phosphorus particles in the matrix of the glass microspheres. This paper shows that a high temperature is not required to produce yttrium phosphorus aluminum silicate microspheres. The results show that the silicone oil spheroidization method is a very suitable way to produce yttrium phosphorus glass microspheres. The topographical analysis of the microspheres shows that the Y, P, Si, and Al elements are distributed in the microspheres and the distribution of elements in the samples is homogeneous.

  16. Aircraft-Produced Ice Particles (APIPs): Additional Results and Further Insights.

    NASA Astrophysics Data System (ADS)

    Woodley, William L.; Gordon, Glenn; Henderson, Thomas J.; Vonnegut, Bernard; Rosenfeld, Daniel; Detwiler, Andrew

    2003-05-01

    This paper presents new results from studies of aircraft-produced ice particles (APIPs) in supercooled fog and clouds. Nine aircraft, including a Beech King Air 200T cloud physics aircraft, a Piper Aztec, a Cessna 421-C, two North American T-28s, an Aero Commander, a Piper Navajo, a Beech Turbo Baron, and a second four-bladed King Air were involved in the tests. The instrumented King Air served as the monitoring aircraft for trails of ice particles created, or not created, when the other aircraft were flown through clouds at various temperatures, and served as both the test and monitoring aircraft when it itself was tested. In some cases sulfur hexafluoride (SF6) gas was released by the test aircraft during its test run and was detected by the King Air during its monitoring passes to confirm the location of the test aircraft wake. Ambient temperatures for the tests ranged between -5° and -12°C. The results confirm earlier published results and provide further insights into the APIPs phenomenon. The King Air at ambient temperatures less than -8°C can produce APIPs readily. The Piper Aztec and the Aero Commander also produced APIPs under the test conditions in which they were flown. The Cessna 421, Piper Navajo, and Beech Turbo Baron did not. The APIPs production potential of a T-28 is still indeterminate because a limited range of conditions was tested. Homogeneous nucleation in the adiabatically cooled regions where air is expanding around the rapidly rotating propeller tips is the cause of APIPs. An equation involving the propeller efficiency, engine thrust, and true airspeed of the aircraft is used along with the published thrust characteristics of the propellers to predict when the aircraft will produce APIPs. In most cases the predictions agree well with the field tests. Of all of the aircraft tested, the Piper Aztec, despite its small size and low horsepower, was predicted to be the most prolific producer of APIPs, and this was confirmed in field tests. The

  17. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, University of Florida, Duke University and University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  18. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements, to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.
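
    The ALHAT super-resolution code is not reproduced here; as a loose, generic stand-in for the frame-fusion idea, the sketch below implements plain shift-and-add of low-resolution range images with known sub-pixel offsets onto a finer grid. The offset convention and the NaN handling are my assumptions, and real flash-lidar processing also has to estimate the offsets and fill the gaps.

      import numpy as np

      def shift_and_add(frames, offsets, factor):
          """Fuse low-resolution range images with known sub-pixel offsets (each in [0, 1) pixel)
          onto a grid `factor` times finer; unfilled cells are returned as NaN for later interpolation."""
          h, w = frames[0].shape
          acc = np.zeros((h * factor, w * factor))
          cnt = np.zeros_like(acc)
          for frame, (dy, dx) in zip(frames, offsets):
              ys = np.arange(h)[:, None] * factor + int(dy * factor)
              xs = np.arange(w)[None, :] * factor + int(dx * factor)
              acc[ys, xs] += frame
              cnt[ys, xs] += 1
          return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)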

  19. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements, to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.

  20. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
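
    The paper's propagation schemes are not reproduced here, but the flavor of a high-order finite-difference approximation is easy to show: the sketch below applies the standard sixth-order central stencil for a first derivative to a periodic test signal and prints the error. The stencil choice and the test function are generic, not taken from the paper.

      import numpy as np

      def d1_sixth_order(u, dx):
          """Sixth-order central difference of a periodic signal u sampled with spacing dx."""
          c = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60])   # coefficients for offsets -3..+3
          return sum(ck * np.roll(u, 3 - k) for k, ck in enumerate(c)) / dx

      n = 64
      x = 2 * np.pi * np.arange(n) / n
      u = np.sin(x)
      err = np.max(np.abs(d1_sixth_order(u, x[1] - x[0]) - np.cos(x)))
      print(f"max error of the 6th-order stencil on sin(x): {err:.2e}")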

  1. Short Hairpin RNA Suppression of Thymidylate Synthase Produces DNA Mismatches and Results in Excellent Radiosensitization

    SciTech Connect

    Flanagan, Sheryl A.; Cooper, Kristin S.; Mannava, Sudha; Nikiforov, Mikhail A.; Shewach, Donna S.

    2012-12-01

    Purpose: To determine the effect of short hairpin ribonucleic acid (shRNA)-mediated suppression of thymidylate synthase (TS) on cytotoxicity and radiosensitization and the mechanism by which these events occur. Methods and Materials: shRNA suppression of TS was compared with 5-fluoro-2'-deoxyuridine (FdUrd) inactivation of TS with or without ionizing radiation in HCT116 and HT29 colon cancer cells. Cytotoxicity and radiosensitization were measured by clonogenic assay. Cell cycle effects were measured by flow cytometry. The effects of FdUrd or shRNA suppression of TS on deoxynucleotide triphosphate (dNTP) imbalances and consequent nucleotide misincorporations into deoxyribonucleic acid (DNA) were analyzed by high-pressure liquid chromatography and as pSP189 plasmid mutations, respectively. Results: TS shRNA produced profound (≥90%) and prolonged (≥8 days) suppression of TS in HCT116 and HT29 cells, whereas FdUrd increased TS expression. TS shRNA also produced more specific and prolonged effects on dNTPs compared with FdUrd. TS shRNA suppression allowed accumulation of cells in S-phase, although its effects were not as long-lasting as those of FdUrd. Both treatments resulted in phosphorylation of Chk1. TS shRNA alone was less cytotoxic than FdUrd but was equally effective as FdUrd in eliciting radiosensitization (radiation enhancement ratio: TS shRNA, 1.5-1.7; FdUrd, 1.4-1.6). TS shRNA and FdUrd produced a similar increase in the number and type of pSP189 mutations. Conclusions: TS shRNA produced less cytotoxicity than FdUrd but was equally effective at radiosensitizing tumor cells. Thus, the inhibitory effect of FdUrd on TS alone is sufficient to elicit radiosensitization with FdUrd, but it only partially explains FdUrd-mediated cytotoxicity and cell cycle inhibition. The increase in DNA mismatches after TS shRNA or FdUrd supports a causal and sufficient role for the depletion of thymidine triphosphate (dTTP) and consequent DNA

  2. One-year results of an algorithmic approach to managing failed back surgery syndrome

    PubMed Central

    Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia

    2014-01-01

    BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573

  3. Unrealistic statistics: how average constitutive coefficients can produce non-physical results.

    PubMed

    Robertson, Daniel; Cook, Douglas

    2014-12-01

    The coefficients of constitutive models are frequently averaged in order to concisely summarize the complex, nonlinear, material properties of biomedical materials. However, when dealing with nonlinear systems, average inputs (e.g. average constitutive coefficients) often fail to generate average behavior. This raises an important issue because average nonlinear constitutive coefficients of biomedical materials are commonly reported in the literature. This paper provides examples which demonstrate that average constitutive coefficients applied to nonlinear constitutive laws in the field of biomedical material characterization can fail to produce average stress-strain responses and in some cases produce non-physical responses. Results are presented from a literature survey which indicates that approximately 90% of tissue measurement studies that employ a nonlinear constitutive model report average nonlinear constitutive coefficients. We suggest that reviewers and editors of future measurement studies discourage the reporting of average nonlinear constitutive coefficients. Reporting of individual coefficient sets for each test sample should be considered and discussed as a designated "best practice" in the field of biomedical material characterization.
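
    The paper's point is easy to reproduce numerically. The fragment below uses an invented exponential (Fung-type) stress-strain law with made-up per-specimen coefficients and compares the average of the individual responses with the response computed from averaged coefficients; the two differ noticeably because the law is nonlinear.

      import numpy as np

      def stress(strain, a, b):
          return a * (np.exp(b * strain) - 1.0)       # invented Fung-type nonlinear law

      rng = np.random.default_rng(4)
      a_samples = rng.lognormal(mean=0.0, sigma=0.5, size=200)    # hypothetical per-specimen coefficients
      b_samples = rng.normal(loc=12.0, scale=3.0, size=200)
      strain = 0.2

      mean_of_responses = np.mean([stress(strain, a, b) for a, b in zip(a_samples, b_samples)])
      response_of_means = stress(strain, a_samples.mean(), b_samples.mean())
      print(f"average of individual responses:   {mean_of_responses:.2f}")
      print(f"response of averaged coefficients: {response_of_means:.2f}")   # noticeably different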

  4. Consequences of transfer of an in vitro-produced embryo for the dam and resultant calf.

    PubMed

    Bonilla, L; Block, J; Denicol, A C; Hansen, P J

    2014-01-01

    No reports exist on consequences of in vitro production (IVP) of embryos for the postnatal development of the calf or on postparturient function of the dam of the calf. Three hypotheses were evaluated: calves born as a result of transfer of an IVP embryo have reduced neonatal survival and altered postnatal growth, fertility, and milk yield compared with artificial insemination (AI) calves; cows giving birth to IVP calves have lower milk yield and fertility and higher incidence of postparturient disease than cows giving birth to AI calves; and the medium used for IVP affects the incidence of developmental abnormalities. In the first experiment, calves were produced by AI using conventional semen or by embryo transfer (ET) using a fresh or vitrified embryo produced in vitro with X-sorted semen. Gestation length was longer for cows receiving a vitrified embryo than for cows receiving a fresh embryo or AI. The percentage of dams experiencing calving difficulty was higher for ET than AI. We observed a tendency for incidence of retained placenta to be higher for ET than AI but found no significant effect of treatment on incidence of prolapse or metritis, pregnancy rate at first service, services per conception, or any measured characteristic of milk production in the subsequent lactation. Among Holstein heifers produced by AI or ET, treatment had no effect on birth weight but the variance tended to be greater in the ET groups. More Holstein heifer calves tended to be born dead, died, or were euthanized within the first 20d of life for the ET groups than for AI. Similarly, the proportion of Holstein heifer calves that either died or were culled for poor health after 20d of age was greater for the ET groups than for AI. We observed no effect of ET compared with AI on age at first service or on the percentage of heifers pregnant at first service, calf growth, or milk yield or composition in the first 120d in milk of the first lactation. In a second experiment, embryos were

  5. Different quantification algorithms may lead to different results: a comparison using proton MRS lipid signals.

    PubMed

    Mosconi, E; Sima, D M; Osorio Garcia, M I; Fontanella, M; Fiorini, S; Van Huffel, S; Marzola, P

    2014-04-01

    Proton magnetic resonance spectroscopy (MRS) is a sensitive method for investigating the biochemical compounds in a tissue. The interpretation of the data relies on the quantification algorithms applied to MR spectra. Each of these algorithms has certain underlying assumptions and may allow one to incorporate prior knowledge, which could influence the quality of the fit. The most commonly considered types of prior knowledge include the line-shape model (Lorentzian, Gaussian, Voigt), knowledge of the resonating frequencies, modeling of the baseline, constraints on the damping factors and phase, etc. In this article, we study whether the statistical outcome of a biological investigation can be influenced by the quantification method used. We chose to study lipid signals because of their emerging role in the investigation of metabolic disorders. Lipid spectra, in particular, are characterized by peaks that are in most cases not Lorentzian, because measurements are often performed in difficult body locations, e.g. in visceral fats close to peristaltic movements in humans or very small areas close to different tissues in animals. This leads to spectra with several peak distortions. Linear combination of Model spectra (LCModel), Advanced Method for Accurate Robust and Efficient Spectral fitting (AMARES), quantitation based on QUantum ESTimation (QUEST), Automated Quantification of Short Echo-time MRS (AQSES)-Lineshape and Integration were applied to simulated spectra, and area under the curve (AUC) values, which are proportional to the quantity of the resonating molecules in the tissue, were compared with true values. A comparison between techniques was also carried out on lipid signals from obese and lean Zucker rats, for which the polyunsaturation value expressed in white adipose tissue should be statistically different, as confirmed by high-resolution NMR measurements (considered the gold standard) on the same animals. LCModel, AQSES-Lineshape, QUEST and Integration

  6. Approximation of HRPITS results for SI GaAs by large scale support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Jankowski, Stanisław; Wojdan, Konrad; Szymański, Zbigniew; Kozłowski, Roman

    2006-10-01

    For the first time, large-scale support vector machine algorithms are used to extract defect parameters in semi-insulating (SI) GaAs from high-resolution photoinduced transient spectroscopy experiments. By smart decomposition of the data set, the SVMTorch algorithm made it possible to obtain a good approximation of the analyzed correlation surface with a parsimonious model (a small number of support vectors). The deep-level defect-center parameters extracted from the SVM approximation are of good quality compared with the reference data.
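
    The SVMTorch decomposition itself is not shown here; as an analogous (but not equivalent) illustration of kernel support vector regression of a two-dimensional surface with a parsimonious set of support vectors, the sketch below fits scikit-learn's epsilon-SVR to synthetic data. The surface, kernel, and hyperparameters are assumptions for the example.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(5)
      X = rng.uniform(-1, 1, size=(400, 2))                       # synthetic two-dimensional inputs
      y = np.exp(-3 * X[:, 0] ** 2) * np.sin(3 * X[:, 1]) + rng.normal(0, 0.02, 400)

      model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
      print(f"support vectors kept: {len(model.support_)} of {len(X)}")            # parsimony of the model
      print(f"training RMSE: {np.sqrt(np.mean((model.predict(X) - y) ** 2)):.3f}")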

  7. Deriving rules from activity diary data: A learning algorithm and results of computer experiments

    NASA Astrophysics Data System (ADS)

    Arentze, Theo A.; Hofman, Frank; Timmermans, Harry J. P.

    Activity-based models consider travel as a derived demand from the activities households need to conduct in space and time. Over the last 15 years, computational or rule-based models of activity scheduling have gained increasing interest in time-geography and transportation research. This paper argues that a lack of techniques for deriving rules from empirical data hinders the further development of rule-based systems in this area. To overcome this problem, this paper develops and tests an algorithm for inductively deriving rules from activity-diary data. The decision table formalism is used to exhaustively represent the theoretically possible decision rules that individuals may use in sequencing a given set of activities. Actual activity patterns of individuals are supplied to the system as examples. In an incremental learning process, the system progressively improves on the selection of rules used for reproducing the examples. Computer experiments based on simulated data are performed to fine-tune rule selection and rule value update functions. The results suggest that the system is effective and fairly robust across parameter settings. It is concluded, therefore, that the proposed approach opens up possibilities to derive empirically tested rule-based models of activity scheduling. Follow-up research will be concerned with testing the system on empirical data.

  8. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control-level and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.
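
    The flight algorithm's multi-level structure and compensated thresholds are not reproduced here; the sketch below shows only the textbook parity-vector idea behind vector-based failure detection and isolation for a redundant single-axis sensor array. The sensor geometry, noise level, threshold, and injected fault are all invented.

      import numpy as np
      from scipy.linalg import null_space

      # invented geometry: six skewed single-axis sensors measuring a three-axis quantity
      H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [0.577, 0.577, 0.577], [0.577, -0.577, 0.577], [-0.577, 0.577, 0.577]])
      V = null_space(H.T).T                       # parity matrix: V @ H = 0

      def parity_check(measurements, threshold=0.05):
          p = V @ measurements                    # insensitive to the true state, sensitive to sensor faults
          if np.linalg.norm(p) < threshold:
              return None                         # no failure declared
          scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)   # column most aligned with the parity vector
          return int(np.argmax(scores))

      truth = np.array([0.1, -0.2, 9.8])
      m = H @ truth + np.random.default_rng(6).normal(0, 0.005, 6)
      m[4] += 0.3                                 # inject a bias-shift failure on sensor index 4
      print(parity_check(m))                      # should isolate sensor 4 for this geometry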

  9. Numerical Results for a Polytropic Cosmology Interpreted as a Dust Universe Producing Gravitational Waves

    NASA Astrophysics Data System (ADS)

    Klapp, J.; Cervantes-Cota, J.; Chauvet, P.

    1990-11-01

    A common belief in cosmology is that gravitational radiation is being produced in considerable quantities within the galaxies. If gravitational radiation production has been occurring since the epoch of galaxy formation, at least, its cosmological effects can be assessed with simplicity and elegance by representing the production of radiation and, therefore, its interaction with ordinary matter phenomenologically through a polytropic equation of state, as already shown elsewhere. We present in this paper the numerical results of such a model. Key words: COSMOLOGY - GRAVITATION

  10. Soil chemical changes resulting from irrigation with water co-produced with coalbed natural gas

    SciTech Connect

    Ganjegunte, G.K.; Vance, G.F.; King, L.A.

    2005-12-01

    Land application of coalbed natural gas (CBNG) co-produced water is a popular management option within northwestern Powder River Basin (PRB) of Wyoming. This study evaluated the impacts of land application of CBNG waters on soil chemical properties at five sites. Soil samples were collected from different depths (0-5, 5-15, 15-30, 30-60, 60-90, and 90-120 cm) from sites that were irrigated with CBNG water for 2 to 3 yr and control sites. Chemical properties of CBNG water used for irrigation on the study sites indicate that electrical conductivity of CBNG water (ECw) and sodium adsorption ratio of CBNG water (SARw) values were greater than those recommended for irrigation use on the soils at the study sites. Soil chemical analyses indicated that electrical conductivity of soil saturated paste extracts (ECe) and sodium adsorption ratio of soil saturated paste extracts (SARe) values for irrigated sites were significantly greater (P < 0.05) than control plots in the upper 30-cm soil depths. Mass balance calculations suggested that there has been significant buildup of Na in irrigated soils due to CBNG irrigation water as well as Na mobilization within the soil profiles. Results indicate that irrigation with CBNG water significantly impacts certain soil properties, particularly if amendments are not properly utilized. This study provides information for better understanding changes in soil properties due to land application of CBNG water.

  11. Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

    PubMed Central

    2011-01-01

    Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549

  12. Photometric redshifts with the quasi Newton algorithm (MLPQNA) Results in the PHAT1 contest

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Brescia, M.; Longo, G.; Mercurio, A.

    2012-10-01

    Context. Since the advent of modern multiband digital sky surveys, photometric redshifts (photo-z's) have become relevant if not crucial to many fields of observational cosmology, such as the characterization of cosmic structures and weak and strong lensing. Aims: We describe an application of MLPQNA, a machine-learning method based on the quasi-Newton algorithm, to an astrophysical context, namely the evaluation of photometric redshifts. Methods: Theoretical methods for photo-z evaluation are based on the interpolation of a priori knowledge (spectroscopic redshifts or SED templates), and they represent an ideal comparison ground for neural-network-based methods. The MultiLayer Perceptron with quasi Newton learning rule (MLPQNA) described here is an effective computing implementation of neural networks exploited for the first time to solve regression problems in the astrophysical context. It is offered to the community through the DAMEWARE (DAta Mining & Exploration Web Application REsource) infrastructure. Results: The PHAT contest (Hildebrandt et al. 2010, A&A, 523, A31) provides a standard dataset to test old and new methods for photometric redshift evaluation, together with a set of statistical indicators that allow a straightforward comparison among different methods. The MLPQNA model has been applied to the whole PHAT1 dataset of 1984 objects after an optimization of the model performed with the 515 available spectroscopic redshifts as the training set. When applied to the PHAT1 dataset, MLPQNA obtains the best bias accuracy (0.0006) and very competitive accuracies in terms of scatter (0.056) and outlier percentage (16.3%), scoring as the second most effective empirical method among those that have so far participated in the contest. MLPQNA shows better generalization capabilities than most other empirical methods, especially in the presence of underpopulated regions of the knowledge base.
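
    The DAMEWARE MLPQNA implementation is not reproduced here; to show the flavor of a multilayer perceptron trained with a quasi-Newton rule, the sketch below uses scikit-learn's MLPRegressor with the L-BFGS solver on synthetic colors-to-redshift data. The synthetic relation, network size, and train/test split are assumptions for the example.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(7)
      colors = rng.normal(size=(2000, 4))                        # synthetic photometric colors
      z_spec = 0.4 + 0.15 * colors[:, 0] - 0.1 * colors[:, 1] * colors[:, 2] + rng.normal(0, 0.02, 2000)

      train, test = slice(0, 1500), slice(1500, None)
      mlp = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs", max_iter=2000, random_state=0)
      mlp.fit(colors[train], z_spec[train])

      dz = mlp.predict(colors[test]) - z_spec[test]
      print(f"bias = {dz.mean():.4f}, scatter = {dz.std():.4f}")  # the kinds of statistics quoted above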

  13. An algorithm for tailoring pharmacotherapy for smoking cessation: results from a Delphi panel of international experts

    PubMed Central

    Bader, P; McDonald, P; Selby, P

    2009-01-01

    Background: Evidence-based smoking cessation guidelines recommend nicotine replacement therapy (NRT), bupropion SR and varenicline as first-line therapy in combination with behavioural interventions. However, there are limited data to guide clinicians in recommending one form over another, using combinations, or matching individual smokers to particular forms. Objective: To develop decision rules for clinicians to guide differential prescribing practices and tailoring of pharmacotherapy for smoking cessation. Methods: A Delphi approach was used to build consensus among a panel of 37 international experts from various health disciplines. Through an iterative process, panellists responded to three rounds of questionnaires. Participants identified and ranked “best practices” used by them to tailor pharmacotherapy to aid smoking cessation. An independent panel of 10 experts provided cross-validation of findings. Results: There was a 100% response rate to all three rounds. A high level of consensus was achieved in determining the most important priorities: (1) factors to consider in prescribing pharmacotherapy: evidence, patient preference, patient experience; (2) combinations based on: failed attempt with monotherapy, patients with breakthrough cravings, level of tobacco dependence; (3) specific combinations, main categories: (a) two or more forms of NRT, (b) bupropion + form of NRT; (4) specific combinations, subcategories: (1a) patch + gum, (1b) patch + inhaler, (1c) patch + lozenge; (2a) bupropion + patch, (2b) bupropion + gum; (5) impact of comorbidities on selection of pharmacotherapy: contraindications, specific pharmacotherapy useful for certain comorbidities, dual purpose medications; (6) frequency of monitoring determined by patient needs and type of pharmacotherapy. Conclusion: An algorithm and guide were developed to assist clinicians in prescribing pharmacotherapy for smoking cessation. There appears to be good justification for “off-label” use such

  14. Atomic hydrogen in the mesopause region derived from SABER: Algorithm theoretical basis, measurement uncertainty, and results

    NASA Astrophysics Data System (ADS)

    Mlynczak, Martin G.; Hunt, Linda A.; Marshall, B. Thomas; Mertens, Christopher J.; Marsh, Daniel R.; Smith, Anne K.; Russell, James M.; Siskind, David E.; Gordley, Larry L.

    2014-03-01

    Atomic hydrogen (H) is a fundamental component in the photochemistry and energy balance of the terrestrial mesopause region (80-100 km). H is generated primarily by photolysis of water vapor and participates in a highly exothermic reaction with ozone. This reaction is a significant source of heat in the mesopause region and also creates highly vibrationally excited hydroxyl (OH) from which the Meinel band radiative emission features originate. Concentrations (cm-3) and volume mixing ratios of H are derived from observations of infrared emission from the OH (υ = 9 + 8, Δυ = 2) vibration-rotation bands near 2.0 µm made by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on the NASA Thermosphere Ionosphere Mesosphere Energetics and Dynamics satellite. The algorithms for deriving day and night H are described herein. Day and night concentrations exhibit excellent agreement between 87 and 95 km. SABER H results also exhibit good agreement with observations from the Solar Mesosphere Explorer made nearly 30 years ago. An apparent inverse dependence on the solar cycle is observed in the SABER H concentrations, with the H increasing as solar activity decreases. This increase is shown to be primarily due to the temperature dependence of various reaction rate coefficients for H photochemistry. The SABER H data, coupled with SABER atomic oxygen, ozone, and temperature, enable tests of mesospheric photochemistry and energetics in atmospheric models, studies of formation of polar mesospheric clouds, and studies of atmospheric evolution via escape of hydrogen. These data and studies are made possible by the wide range of parameters measured simultaneously by the SABER instrument.

  15. Hydroclast and Peperite generation: Experimental Results produced using the Silicate Melt Injection Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Downey, W. S.; Mastin, L. G.; Spieler, O.; Kunzmann, T.; Shaw, C. S.; Dingwell, D. B.

    2008-12-01

    The Silicate Melt Injection Laboratory Experiment (SMILE) allows for the effusive and explosive injection of molten glass into a variety of media - air, water, water spray, and wet sediments. Experiments have been performed using the SMILE apparatus to evaluate the mechanisms of "turbulent shedding" during shallow submarine volcanic eruptions and magma/wet-sediment interactions. In these experiments, approximately 0.5 kg of basaltic melt with 5 wt.% Spectromelt (dilithium tetraborate) is produced in an internally heated autoclave at 1150 °C and ambient pressure. The molten charge is ejected via the bursting of a rupture disc at 3.5 MPa into the reaction media, situated within the low pressure tank (atmospheric conditions). Preliminary experiments ejecting melt into a standing water column have yielded hydroclasts of basalt. SEM images of the clasts show ubiquitous discontinuous skins ("rinds") that are flaked, peeled, or smeared away in strips. Adhering to the clast surfaces are flakes, blocks, and blobs of detached material, up to 10 μm in size. The presence of partially detached rinds and rind debris likely reflects repeated bending, scraping, impact, and other disruption through turbulent velocity fluctuations. These textures are comparable to littoral explosive deposits at Kilauea Volcano, Hawaii, where lava tubes are torn apart by wave action, the lava is quenched, and thrown back on the beach as loose fragments (hyaloclastite). Preliminary experiments injecting melt into wet sediments show evidence of sediment ingestion and fluidal textures. These results support the interpretation that peperite generation can be driven by hydrodynamic mixing of a fuel and a coolant.

  16. Modeling transport and dilution of produced water and the resulting uptake and biomagnification in marine biota

    SciTech Connect

    Rye, H.; Reed, M.; Slagstad, D.

    1996-12-31

    The paper explains the numerical modelling efforts undertaken in order to study possible marine biological impacts caused by releases of produced water from the Haltenbanken area outside the western coast of Norway. Acute effects on marine life from releases of produced water appear to be relatively small and confined to areas rather close to the release site. Biomagnification may however be experienced for relatively low concentrations at larger distances from the release point. Such effects can be modeled by a step-wise approach which includes: The use of 3-D hydrodynamic models to determine the ocean current fields; The use of 3-D multi-source numerical models to determine the concentration fields from the produced water releases, given the current field; and The use of biologic models to simulate the behavior of larvae (passive marine biota) and fish (active marine biota) and their interaction with the concentration field. The paper explains the experiences gained by using this approach for the calculation of possible influences on marine life below the EC50 or LC50 concentration levels. The models are used for simulating concentration fields from 5 simultaneous sources at the Haltenbank area and simulation of magnification in some marine species from 2 simultaneous sources in the same area. Naphthalenes and phenols, which are both present in the produced water, were used as the chemical substances in the simulations.
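    The concentration-field step of such a modelling chain can be illustrated with a far simpler calculation than the 3-D multi-source models used in the study: the sketch below advects and diffuses a single continuous release on a small 2-D grid with an explicit upwind/centred finite-difference scheme. Grid spacing, current speed, diffusivity and source strength are invented for illustration only.

```python
import numpy as np

# Toy 2-D advection-diffusion of one continuous release (illustration only;
# the study used 3-D hydrodynamic and multi-source transport models).
nx, ny = 100, 60
dx = dy = 50.0           # grid spacing (m)
u, v = 0.10, 0.02        # ambient current components (m/s), both positive
K = 5.0                  # horizontal eddy diffusivity (m^2/s)
dt = 20.0                # time step (s); CFL u*dt/dx ~ 0.04, diffusion number ~ 0.08
source = (10, 30)        # grid indices of the release point (made up)
q = 1.0                  # source strength in concentration units per step

c = np.zeros((nx, ny))
for _ in range(2000):
    c[source] += q
    # first-order upwind advection (valid for u, v > 0) and centred diffusion
    adv = (u * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / dx
           + v * (c[1:-1, 1:-1] - c[1:-1, :-2]) / dy)
    dif = K * ((c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dx**2
               + (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dy**2)
    c[1:-1, 1:-1] += dt * (dif - adv)

print("peak concentration 500 m or more downstream:", c[20:, :].max())
```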

  17. Results from a first production of enhanced Silicon Sensor Test Structures produced by ITE Warsaw

    NASA Astrophysics Data System (ADS)

    Bergauer, T.; Dragicevic, M.; Frey, M.; Grabiec, P.; Grodner, M.; Hänsel, S.; Hartmann, F.; Hoffmann, K.-H.; Hrubec, J.; Krammer, M.; Kucharski, K.; Macchiolo, A.; Marczewski, J.

    2009-01-01

    Monitoring the manufacturing process of silicon sensors is essential to ensure stable quality of the produced detectors. During the CMS silicon sensor production we were utilising small Test Structures (TS) incorporated on the cut-away of the wafers to measure certain process-relevant parameters. Experience from the CMS production and quality assurance led to enhancements of these TS. Another important application of TS is the commissioning of new vendors. The measurements provide us with a good understanding of the capabilities of a vendor's process. A first batch of the new TS was produced at the Institute of Electron Technology in Warsaw, Poland. We will first review the improvements to the original CMS test structures and then discuss a selection of important measurements performed on this first batch.

  18. Gasbuggy, New Mexico, Natural Gas and Produced Water Sampling and Analysis Results for 2011

    SciTech Connect

    2011-09-01

    The U.S. Department of Energy (DOE) Office of Legacy Management conducted natural gas sampling for the Gasbuggy, New Mexico, site on June 7 and 8, 2011. Natural gas sampling consists of collecting both gas samples and samples of produced water from gas production wells. Water samples from gas production wells were analyzed for gamma-emitting radionuclides, gross alpha, gross beta, and tritium. Natural gas samples were analyzed for tritium and carbon-14. ALS Laboratory Group in Fort Collins, Colorado, analyzed water samples. Isotech Laboratories in Champaign, Illinois, analyzed natural gas samples.

  19. Gasbuggy, New Mexico, Natural Gas and Produced Water Sampling Results for 2012

    SciTech Connect

    2012-12-01

    The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual natural gas sampling for the Gasbuggy, New Mexico, Site on June 20 and 21, 2012. This long-term monitoring of natural gas includes samples of produced water from gas production wells that are located near the site. Water samples from gas production wells were analyzed for gamma-emitting radionuclides, gross alpha, gross beta, and tritium. Natural gas samples were analyzed for tritium and carbon-14. ALS Laboratory Group in Fort Collins, Colorado, analyzed water samples. Isotech Laboratories in Champaign, Illinois, analyzed natural gas samples.

  20. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

  1. Results from CrIS/ATMS obtained using an "AIRS Version-6 like" retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-09-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

  2. Automated analysis of Kokee-Wettzell Intensive VLBI sessions—algorithms, results, and recommendations

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2015-11-01

    The time-dependent variations in the rotation and orientation of the Earth are represented by a set of Earth Orientation Parameters (EOP). Currently, Very Long Baseline Interferometry (VLBI) is the only technique able to measure all EOP simultaneously and to provide direct observation of universal time, usually expressed as UT1-UTC. To produce estimates for UT1-UTC on a daily basis, 1-h VLBI experiments involving two or three stations are organised by the International VLBI Service for Geodesy and Astrometry (IVS), the IVS Intensive (INT) series. There is an ongoing effort to minimise the turn-around time for the INT sessions in order to achieve near real-time and high quality UT1-UTC estimates. As a step further towards true fully automated real-time analysis of UT1-UTC, we carry out an extensive investigation with INT sessions on the Kokee-Wettzell baseline. Our analysis starts with the first versions of the observational files in S- and X-band and includes an automatic group delay ambiguity resolution and ionospheric calibration. Several different analysis strategies are investigated. In particular, we focus on the impact of external information, such as meteorological and cable delay data provided in the station log-files, and a priori EOP information. The latter is studied by extensive Monte Carlo simulations. Our main findings are that it is easily possible to analyse the INT sessions in a fully automated mode to provide UT1-UTC with very low latency. The information found in the station log-files is important for the accuracy of the UT1-UTC results, provided that the data in the station log-files are reliable. Furthermore, to guarantee UT1-UTC with an accuracy of less than 20 μs, it is necessary to use predicted a priori polar motion data in the analysis that are not older than 12 h.
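    The ionospheric calibration step relies on the standard dual-frequency property of group delays: combining simultaneous S- and X-band delays removes the first-order dispersive term. The sketch below shows that generic combination (not the specific analysis software used for the INT sessions); the effective band frequencies and the toy delay values are assumptions.

```python
def iono_free_delay(tau_x, tau_s, f_x=8.4e9, f_s=2.3e9):
    """Standard dual-frequency combination removing the first-order
    ionospheric group delay. tau_x, tau_s are group delays (s) observed
    at X- and S-band effective frequencies f_x, f_s (Hz)."""
    fx2, fs2 = f_x**2, f_s**2
    return (fx2 * tau_x - fs2 * tau_s) / (fx2 - fs2)

# Toy numbers: a 100 ns geometric delay plus a dispersive ionospheric term
# scaling as 1/f^2 (values are illustrative, not real observations).
geom = 100e-9
ion_x, ion_s = 2e-10, 2e-10 * (8.4 / 2.3) ** 2
print(iono_free_delay(geom + ion_x, geom + ion_s))   # ~1.0e-7 s, ionosphere removed
```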

  3. Results of Aging Tests of Vendor-Produced Blended Feed Simulant

    SciTech Connect

    Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.

    2009-04-21

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500-gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To ensure that the quality of the simulant is acceptable, the production method was scaled up from laboratory-prepared simulant through 15-gallon and 250-gallon vendor-prepared simulant before embarking on the vendor's production of the 3500-gallon simulant batch. The 3500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored in an environmentally controlled area within the NOAH Technologies warehouse before blending or shipping. For the 15-gallon, 250-gallon, and 3500-gallon batch 0, the simulant was shipped in ambient-temperature trucks, with shipment requiring nominally 3 days. The 3500-gallon batch 1 traveled in a 70-75°F temperature-controlled truck. Typically the simulant was transferred into a PEP receiving tank within 24 hours of receipt; the first transfer took longer, during which time the simulant was stored outside. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify and determine any changes to the physical characteristics of the simulant when in storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: 1) stored outside in a 250-gallon tote, 2) stored inside in a gallon plastic bottle, 3) stored inside in a well-mixed 5-L tank, and 4) subjected to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following

  4. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

    AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. Monthly mean August 2014 Version-6.22 AIRS and CrIS products agree reasonably well with OMPS, CERES, and with each other. JPL plans to process AIRS and CrIS for many months and compare interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. We are also working with JPL to develop a joint AIRS/CrIS level-1 to level-3 processing system using a still to be finalized Version-7 retrieval algorithm. The NASA Goddard DISC will eventually use this system to reprocess all AIRS and recalibrated CrIS/ATMS.

  5. Horizon Acquisition for Attitude Determination Using Image Processing Algorithms- Results of HORACE on REXUS 16

    NASA Astrophysics Data System (ADS)

    Barf, J.; Rapp, T.; Bergmann, M.; Geiger, S.; Scharf, A.; Wolz, F.

    2015-09-01

    The aim of the Horizon Acquisition Experiment (HORACE) was to prove a new concept for a two-axis horizon sensor using algorithms processing ordinary images, which is also operable at the high spinning rates occurring during emergencies. The difficulty of coping with image distortions, which is avoided by conventional horizon sensors, was introduced on purpose, as we envision a system capable of using any optical data. During the flight on REXUS 16, which provided a suitable platform similar to the future application scenario, a malfunction of the payload cameras caused severe degradation of the collected scientific data. Nevertheless, with the aid of simulations we could show that the concept is accurate (±0.6°), fast (~100 ms/frame) and robust enough for coarse attitude determination during emergencies and also applicable to small satellites. In addition, technical knowledge regarding the design of REXUS experiments, including the detection of interference between SATA and GPS, was gained.
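    The general idea of image-based horizon sensing can be sketched very simply: threshold the frame into bright Earth and dark space, extract the limb edge, fit a curve to it, and convert the fit parameters into two attitude angles. The toy example below does exactly that with a straight-line fit on a synthetic frame; it is not the HORACE flight algorithm, and the threshold and plate-scale values are placeholders.

```python
import numpy as np

def horizon_angles(img, threshold=0.5, deg_per_px=0.1):
    """Very small sketch of image-based horizon sensing.

    img: 2-D array with bright Earth and dark space. For each column the
    first bright pixel from the top is taken as the horizon edge; a straight
    line fitted to these edge points gives a roll-like angle (line slope) and
    a pitch-like offset (edge row at image centre minus centre row). Not the
    HORACE algorithm; threshold and plate scale are placeholders.
    """
    mask = img > threshold
    cols = np.where(mask.any(axis=0))[0]
    rows = mask[:, cols].argmax(axis=0)           # first bright row per column
    slope, intercept = np.polyfit(cols, rows, 1)  # least-squares line fit
    roll = np.degrees(np.arctan(slope))
    pitch = (intercept + slope * img.shape[1] / 2 - img.shape[0] / 2) * deg_per_px
    return roll, pitch

# Synthetic test frame: dark space above a tilted horizon, bright Earth below.
h, w = 120, 160
r, c = np.mgrid[0:h, 0:w]
frame = (r > 0.1 * c + 40).astype(float)
print(horizon_angles(frame))   # roll ~ 5.7 deg plus a small pitch offset
```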

  6. OPTIMAL DNA TIER FOR THE IRT/DNA ALGORITHM DETERMINED BY CFTR MUTATION RESULTS OVER 14 YEARS OF NEWBORN SCREENING

    PubMed Central

    Baker, Mei W.; Groose, Molly; Hoffman, Gary; Rock, Michael; Levy, Hara; Farrell, Philip M.

    2011-01-01

    Background There has been great variation and uncertainty about how many and what CFTR mutations to include in cystic fibrosis (CF) newborn screening algorithms, and very little research on this topic using large populations of newborns. Methods We reviewed Wisconsin screening results for 1994–2008 to identify an ideal panel. Results Upon analyzing approximately 1 million screening results, we found it optimal to use a 23-mutation CFTR panel as a second tier when an immunoreactive trypsinogen (IRT)/DNA algorithm was applied for CF screening. This panel in association with a 96th percentile IRT cutoff gave a sensitivity of 97.3%, whereas restricting the DNA tier to F508del alone was associated with a sensitivity of 90% (P<.0001). Conclusions Although CFTR panel selection has been challenging, our data show that a 23-mutation method optimizes sensitivity and is practically advantageous. The IRT cutoff value, however, is actually more critical than DNA in determining CF newborn screening sensitivity. PMID:21388895
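    The two-tier decision logic described above is simple enough to state in a few lines of Python; the sketch below is purely illustrative (cutoff values and return labels are invented) and is not clinical guidance.

```python
def cf_screen(irt_value, irt_96th_percentile, mutations_found):
    """Toy two-tier IRT/DNA decision rule (illustrative only, not clinical
    guidance). Tier 1: IRT at or above the 96th percentile cutoff. Tier 2: a
    23-mutation CFTR panel applied to tier-1 positives; any detected mutation
    triggers referral for sweat-chloride testing."""
    if irt_value < irt_96th_percentile:
        return "screen negative"
    if mutations_found >= 1:
        return "refer for sweat test"
    return "IRT positive, no panel mutation: screen negative"

print(cf_screen(irt_value=85.0, irt_96th_percentile=60.0, mutations_found=1))
```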

  7. Do dynamic-based MR knee kinematics methods produce the same results as static methods?

    PubMed

    d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P

    2013-06-01

    MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.

  8. Using a hybrid Monte Carlo/ Slip Estimator-Genetic Algorithm (MCSE-GA) to produce high resolution estimates of paleoearthquakes from geodetic data

    NASA Astrophysics Data System (ADS)

    Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic

    2014-05-01

    Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip. Areas of unreleased slip indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long-term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definitive model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a Genetic Algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement. The Genetic

  9. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
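    A minimal sketch of such a hybrid (often called a memetic algorithm) is shown below: a plain generational GA whose offspring are each refined by a greedy bit-flip hill climb, applied to a toy "one-max" objective rather than the geometric model-matching problem mentioned in the presentation. All parameter values are arbitrary.

```python
import random

def fitness(bits):                       # toy objective: count the ones
    return sum(bits)

def hill_climb(bits):
    """Local search: accept any single bit flip that improves fitness."""
    best = bits[:]
    for i in range(len(best)):
        trial = best[:]
        trial[i] ^= 1
        if fitness(trial) > fitness(best):
            best = trial
    return best

def hybrid_ga(n_bits=30, pop_size=20, generations=40, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(hill_climb(child))           # the "hybrid" step
        pop = children
    return max(pop, key=fitness)

best = hybrid_ga()
print(fitness(best), "of 30")
```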

  10. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

    NASA Technical Reports Server (NTRS)

    Swartz, W. H.; Bucsela, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

    2012-01-01

    Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.
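    The core arithmetic of the stratosphere-troposphere separation can be summarised in a few lines: subtract the stratospheric contribution from the fitted slant column and convert the residual to a vertical column with a tropospheric air-mass factor. The sketch below shows that bookkeeping with placeholder numbers; it is not the operational NASA product code, and the air-mass factors are assumed values.

```python
def tropospheric_vcd(slant_column, strat_vcd, amf_strat, amf_trop):
    """Schematic stratosphere-troposphere separation for an NO2 retrieval.

    slant_column : total slant column density (molec/cm^2) from the DOAS fit
    strat_vcd    : estimated stratospheric vertical column (molec/cm^2)
    amf_strat    : stratospheric air-mass factor
    amf_trop     : tropospheric air-mass factor
    Returns the tropospheric vertical column density. The numbers used below
    are round values for illustration only.
    """
    trop_slant = slant_column - strat_vcd * amf_strat
    return trop_slant / amf_trop

print(tropospheric_vcd(slant_column=9.0e15, strat_vcd=3.0e15,
                       amf_strat=2.5, amf_trop=1.2))   # ~1.25e15 molec/cm^2
```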

  11. Adding socioeconomic data to hospital readmissions calculations may produce more useful results.

    PubMed

    Nagasako, Elna M; Reidhead, Mat; Waterman, Brian; Dunagan, W Claiborne

    2014-05-01

    To better understand the degree to which risk-standardized thirty-day readmission rates may be influenced by social factors, we compared results for hospitals in Missouri under two types of models. The first type of model is currently used by the Centers for Medicare and Medicaid Services for public reporting of condition-specific hospital readmission rates of Medicare patients. The second type of model is an "enriched" version of the first type of model with census tract-level socioeconomic data, such as poverty rate, educational attainment, and housing vacancy rate. We found that the inclusion of these factors had a pronounced effect on calculated hospital readmission rates for patients admitted with acute myocardial infarction, heart failure, and pneumonia. Specifically, the models including socioeconomic data narrowed the range of observed variation in readmission rates for the above conditions, in percentage points, from 6.5 to 1.8, 14.0 to 7.4, and 7.4 to 3.7, respectively. Interestingly, the average readmission rates for the three conditions did not change significantly between the two types of models. The results of our exploratory analysis suggest that further work to characterize and report the effects of socioeconomic factors on standardized readmission measures may assist efforts to improve care quality and deliver more equitable care on the part of hospitals, payers, and other stakeholders. PMID:24799575

  12. New results extending the Precessions process to smoothing ground aspheres and producing freeform parts

    NASA Astrophysics Data System (ADS)

    Walker, D. D.; Beaucamp, A. T. H.; Doubrovski, V.; Dunn, C.; Freeman, R.; McCavana, G.; Morton, R.; Riley, D.; Simms, J.; Wei, X.

    2005-09-01

    Zeeko's Precession polishing process uses a bulged, rotating membrane tool, creating a contact-area of variable size. In separate modes of operation, the bonnet rotation-axis is orientated pole-down on the surface, or inclined at an angle and then precessed about the local normal. The bonnet, covered with standard polishing cloth and working with standard slurry, has been found to give superb surface textures in the regime of nanometre to sub-nanometre Ra values, starting with parts directly off precision CNC aspheric grinding machines. This paper reports an important extension of the process to the precision-controlled smoothing (or 'fining') operation required between more conventional diamond milling and subsequent Precession polishing. The method utilises an aggressive surface on the bonnet, again with slurry. This is compared with an alternative approach using diamond abrasives bound onto flexible carriers attached to the bonnets. The results demonstrate the viability of smoothing aspheric surfaces, which extends Precessions processing to parts with inferior input-quality. This may prove of particular importance to large optics where significant volumes of material may need to be removed, and to the creation of more substantial aspheric departures from a parent sphere. The paper continues with a recent update on results obtained, and lessons learnt, processing free-form surfaces, and concludes with an assessment of the relevance of the smoothing and free-form operations to the fabrication of off-axis parts including segments for extremely large telescopes.

  13. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
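    For reference, the basic single-channel veto algorithm that these variants build on can be written in a few lines: sample trial scales from an invertible overestimate g(t) of the true emission density f(t), and accept each trial with probability f(t)/g(t). The sketch below uses toy densities chosen for illustration; it does not reproduce the competing-channel variants analysed in the paper.

```python
import random, math

def veto_algorithm(t_start, t_min, f, g, G_inv, G):
    """Generate one emission scale t < t_start distributed according to
    dP = f(t) * exp(-int_t^{t_start} f) dt, using an overestimate g >= f
    whose primitive G is invertible (the standard Sudakov veto algorithm)."""
    t = t_start
    while True:
        # next trial scale from the g-Sudakov: G(t_new) = G(t) + ln R
        t = G_inv(G(t) + math.log(random.random()))
        if t < t_min:
            return None                     # no emission above the cutoff
        if random.random() < f(t) / g(t):   # veto (accept/reject) step
            return t

# Toy densities on (t_min, 1): f(t) = (a/t)*(1 - t), overestimated by g(t) = a/t,
# so G(t) = a*ln(t) and G_inv(y) = exp(y/a). The values of a and t_min are arbitrary.
a, t_min = 2.0, 1e-3
f = lambda t: (a / t) * (1.0 - t)
g = lambda t: a / t
G = lambda t: a * math.log(t)
G_inv = lambda y: math.exp(y / a)

samples = [veto_algorithm(1.0, t_min, f, g, G_inv, G) for _ in range(5)]
print(samples)
```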

  14. Different Techniques For Producing Precision Holes (>20 mm) In Hardened Steel—Comparative Results

    NASA Astrophysics Data System (ADS)

    Coelho, R. T.; Tanikawa, S. T.

    2009-11-01

    High speed machining (HSM), or high performance machining, has been one of the most recent technological advances. When applied to milling operations, using adequate machines, CAM programs and tooling, it allows cutting hardened steels, which was not feasible just a couple of years ago. The use of very stiff, precise machines has created the possibility of machining holes in hardened steels, such as AISI H13 at 48-50 HRC, using helical interpolation, for example. Such a process is particularly useful for holes with diameters larger than those of commercially available solid carbide drills, around 20 mm or more. Such holes may require narrow tolerances and fine surface finishes, which can be obtained by end milling operations alone. The present work compares some of the strategies used to obtain such holes by end milling, and also some techniques employed to finish them, by milling, boring and also by fine grinding on the same machine. Results indicate that it is possible to obtain holes with less than 0.36 μm circularity, 7.41 μm cylindricity and 0.12 μm surface roughness Ra. Additionally, there is less possibility of producing heat-affected layers when using this technique.

  15. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

    PubMed

    Pancheliuga, V A; Pancheliuga, M S

    2013-01-01

    In the present work a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed on the basis of short segments of a time series of fluctuations and the fractal dimension of the segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows a more precise determination of the fractal dimension by using the "all possible combinations" method. The application of the method to noise-like time series analysis leads to results which could previously be obtained only by means of the histogram method based on human expert comparisons of histogram shapes. PMID:23755565
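    The "all possible combinations" refinement is not spelled out in the abstract, so as a generic point of reference the sketch below implements a standard fractal-dimension estimator for noise-like time series (Higuchi's method); it is not the algorithm proposed in the paper, and k_max and the test signal are arbitrary choices.

```python
import numpy as np

def higuchi_fd(x, k_max=16):
    """Estimate the fractal dimension of a 1-D time series with Higuchi's
    method (shown only as a generic reference estimator, not the algorithm
    developed in the paper)."""
    x = np.asarray(x, float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / (len(idx) - 1) / k      # Higuchi normalisation
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(4096)))   # ~2 for white noise
```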

  16. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  17. Intra-operative ultrasound hand-held strain imaging for the visualization of ablations produced in the liver with a toroidal HIFU transducer: first in vivo results

    PubMed Central

    Chenot, Jérémy; Melodelima, David; N'Djin, William Apoutou; Souchon, Rémi; Rivoire, Michel; Chapelon, Jean-Yves

    2010-01-01

    The use of hand-held ultrasound strain imaging for intra-operative real-time visualization of HIFU ablations produced in the liver by a toroidal transducer was investigated. A linear 12 MHz ultrasound imaging probe was used to obtain radiofrequency signals. Using a fast cross-correlation algorithm, strain images were calculated and displayed at 60 frames/s, allowing the use of hand-held strain imaging intra-operatively. Fourteen HIFU lesions were produced in 4 pigs. Intra-operative strain imaging of HIFU ablations in the liver was feasible owing to the high frame rate. The correlations between dimensions measured on gross pathology and dimensions measured on B-mode images and on strain images were R = 0.72 and R = 0.94, respectively. The contrast between ablated and non-ablated tissue was significantly higher (p<0.05) in the strain images (22 dB) than in the B-mode images (9 dB). Strain images allowed equivalent or improved definition of ablated regions when compared with B-mode images. Real-time intra-operative hand-held strain imaging seems to be a promising complement to conventional B-mode imaging for the guidance of HIFU ablations produced in the liver during an open procedure. These results indicate that hand-held strain imaging outperforms conventional B-mode ultrasound and could potentially be used for assessment of thermal therapies. PMID:20479514
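    The displacement-tracking core of such strain imaging can be sketched with a simple window-based normalized cross-correlation between pre- and post-compression RF lines, with strain taken as the axial gradient of the estimated displacement. The example below is a coarse, integer-lag illustration on a synthetic stretched signal, not the authors' 60 frames/s implementation; window sizes, search range and the simulated 1.5% strain are arbitrary.

```python
import numpy as np

def track_displacement(rf_pre, rf_post, win=128, search=64, step=128):
    """Integer-lag displacement tracking by normalized cross-correlation.
    Returns window-centre indices and displacement estimates (in samples)."""
    centres, disp = [], []
    for start in range(search, len(rf_pre) - win - search, step):
        ref = rf_pre[start:start + win]
        best_lag, best_cc = 0, -np.inf
        for lag in range(-search, search + 1):
            seg = rf_post[start + lag:start + lag + win]
            cc = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        centres.append(start + win // 2)
        disp.append(best_lag)
    return np.array(centres), np.array(disp, float)

# Synthetic "RF" line: smoothed noise; the post signal is a uniformly
# stretched copy, i.e. roughly 1.5% constant axial strain (all values invented).
n = 2048
rng = np.random.default_rng(0)
rf_pre = np.convolve(rng.standard_normal(n + 64), np.ones(8) / 8, mode="same")[:n]
depth = np.arange(n)
rf_post = np.interp(depth, depth * 1.015, rf_pre)

z, d = track_displacement(rf_pre, rf_post)
strain = np.gradient(d, z)        # coarse, integer-lag strain estimate
print("median strain estimate:", np.median(strain))   # should land near 0.015
```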

  18. First results from the COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Mestre, Olivier

    2010-05-01

    As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach) a dataset was generated that serves as a benchmark for homogenisation algorithms. Members of the Action and third parties have been invited to homogenise this dataset. The results of this exercise are analysed by the HOME Working Groups (WG) on detection (WG2) and correction (WG3) algorithms to obtain recommendations for a standard homogenisation procedure for climate data. This talk will shortly describe this benchmark dataset and present first results comparing the quality of the about 25 contributions. Based upon a survey among homogenisation experts we chose to work with monthly values for temperature and precipitation. Temperature and precipitation were selected because most participants consider these elements the most relevant for their studies. Furthermore, they represent two important types of statistics (additive and multiplicative). The benchmark has three different types of datasets: real data, surrogate data and synthetic data. The real datasets allow comparing the different homogenisation methods with the most realistic type of data and inhomogeneities. Thus this part of the benchmark is important for a faithful comparison of algorithms with each other. However, as in this case the truth is not known, it is not possible to quantify the improvements due to homogenisation. Therefore, the benchmark also has two datasets with artificial data to which we inserted known inhomogeneities: surrogate and synthetic data. The aim of surrogate data is to reproduce the structure of measured data accurately enough that it can be used as substitute for measurements. The surrogate climate networks have the spatial and temporal auto- and cross-correlation functions of real homogenised networks as well as the exact (non-Gaussian) distribution for each station. The idealised synthetic data is based on the surrogate networks. The change is that the difference

  19. Remote sensing of gases by hyperspectral imaging: algorithms and results of field measurements

    NASA Astrophysics Data System (ADS)

    Sabbah, Samer; Rusch, Peter; Eichmann, Jens; Gerhard, Jörn-Hinnrich; Harig, Roland

    2012-09-01

    Remote gas detection and visualization provides vital information in scenarios involving chemical accidents, terrorist attacks or gas leaks. Previous work showed how imaging infrared spectroscopy can be used to assess the location, the dimensions, and the dispersion of a potentially hazardous cloud. In this work the latest developments of an infrared hyperspectral imager based on a Michelson interferometer in combination with a focal plane array detector are presented. The performance of the system is evaluated by laboratory measurements. The system was deployed in field measurements to identify industrial gas emissions. Excellent results were obtained by successfully identifying released gases from relatively long distances.

  20. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  1. Deriving Arctic Cloud Microphysics at Barrow, Alaska. Algorithms, Results, and Radiative Closure

    SciTech Connect

    Shupe, Matthew D.; Turner, David D.; Zwink, Alexander; Thieman, Mandana M.; Mlawer, Eli J.; Shippert, Timothy

    2015-07-01

    Cloud phase and microphysical properties control the radiative effects of clouds in the climate system and are therefore crucial to characterize in a variety of conditions and locations. An Arctic-specific, ground-based, multi-sensor cloud retrieval system is described here and applied to two years of observations from Barrow, Alaska. Over these two years, clouds occurred 75% of the time, with cloud ice and liquid each occurring nearly 60% of the time. Liquid water occurred at least 25% of the time even in the winter, and existed up to heights of 8 km. The vertically integrated mass of liquid was typically larger than that of ice. While it is generally difficult to evaluate the overall uncertainty of a comprehensive cloud retrieval system of this type, radiative flux closure analyses were performed where flux calculations using the derived microphysical properties were compared to measurements at the surface and top-of-atmosphere. Radiative closure biases were generally smaller for cloudy scenes relative to clear skies, while the variability of flux closure results was only moderately larger than under clear skies. The best closure at the surface was obtained for liquid-containing clouds. Radiative closure results were compared to those based on a similar, yet simpler, cloud retrieval system. These comparisons demonstrated the importance of accurate cloud phase classification, and specifically the identification of liquid water, for determining radiative fluxes. Enhanced retrievals of liquid water path for thin clouds were also shown to improve radiative flux calculations.

  2. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.

  3. Minimal Sign Representation of Boolean Functions: Algorithms and Exact Results for Low Dimensions.

    PubMed

    Sezener, Can Eren; Oztop, Erhan

    2015-08-01

    Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domain they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation to study this is the so-called polynomial sign representation where -1 and 1 are associated with true and false, respectively. The polynomial is treated as a real-valued function and evaluated at its parameters, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF and more generally aims to find the maximum over this quantity for all BFs in a given dimension. With this work, for the first time exact results for four- and five-variable BFs are obtained, and strong bounds for six-variable BFs are derived. In addition, some connections between the sign representation framework and bent functions are derived, which are generally studied for their desirable cryptographic properties.
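    The representation being counted can be made concrete with a tiny brute-force checker: encode inputs and outputs in {-1, +1}, describe a polynomial as a list of (coefficient, monomial) terms, and verify that sign(p(x)) reproduces the target function on all 2^n inputs. The example below (using a +1 = true convention chosen here purely for illustration) exhibits a three-term sign representation of two-input AND; it is not code from the letter.

```python
from itertools import product

def _prod(values):
    out = 1
    for v in values:
        out *= v
    return out

def sign_represents(terms, f, n):
    """Check whether sign(p(x)) equals f(x) for every x in {-1, +1}^n.

    terms : list of (coefficient, index-tuple) pairs; each monomial is the
            product of the input variables whose indices appear in the tuple
            (the empty tuple denotes the constant term).
    f     : target Boolean function mapping a {-1, +1} tuple to -1 or +1.
    The polynomial must be nonzero on every input for a valid representation.
    """
    for x in product((-1, 1), repeat=n):
        p = sum(c * _prod(x[i] for i in idx) for c, idx in terms)
        if p == 0 or (1 if p > 0 else -1) != f(x):
            return False
    return True

# Two-input AND under the +1 = true convention (chosen only for this example):
# AND(x1, x2) = sign(x1 + x2 - 1), a three-term sign representation.
AND = lambda x: 1 if x[0] == 1 and x[1] == 1 else -1
terms = [(1, (0,)), (1, (1,)), (-1, ())]
print(sign_represents(terms, AND, 2))   # True, so the threshold density is <= 3
```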

  4. The equation of state for stellar envelopes. II - Algorithm and selected results

    NASA Technical Reports Server (NTRS)

    Mihalas, Dimitri; Dappen, Werner; Hummer, D. G.

    1988-01-01

    A free-energy-minimization method for computing the dissociation and ionization equilibrium of a multicomponent gas is discussed. The adopted free energy includes terms representing the translational free energy of atoms, ions, and molecules; the internal free energy of particles with excited states; the free energy of a partially degenerate electron gas; and the configurational free energy from shielded Coulomb interactions among charged particles. Internal partition functions are truncated using an occupation probability formalism that accounts for perturbations of bound states by both neutral and charged perturbers. The entire theory is analytical and differentiable to all orders, so it is possible to write explicit analytical formulas for all derivatives required in a Newton-Raphson iteration; these are presented to facilitate future work. Some representative results for both Saha and free-energy-minimization equilibria are presented for a hydrogen-helium plasma with N(He)/N(H) = 0.10. These illustrate nicely the phenomena of pressure dissociation and ionization, and also demonstrate vividly the importance of choosing a reliable cutoff procedure for internal partition functions.
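    For reference (this display is not taken from the paper itself), the simple Saha balance against which the free-energy-minimization results are compared can be written for hydrogen ionization as

```latex
\frac{n_{\mathrm{H^{+}}}\, n_{e}}{n_{\mathrm{H}}}
  = \frac{2\, U_{\mathrm{H^{+}}}(T)}{U_{\mathrm{H}}(T)}
    \left( \frac{2\pi m_{e} k T}{h^{2}} \right)^{3/2}
    \exp\!\left( -\frac{\chi_{\mathrm{H}}}{kT} \right),
  \qquad \chi_{\mathrm{H}} \approx 13.6\ \mathrm{eV},
```

    where the U are internal partition functions, m_e is the electron mass, and χ_H the hydrogen ionization potential; the free-energy-minimization treatment described above generalizes this balance with electron degeneracy, shielded Coulomb interactions, and occupation-probability-truncated partition functions.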

  5. Minimal Sign Representation of Boolean Functions: Algorithms and Exact Results for Low Dimensions.

    PubMed

    Sezener, Can Eren; Oztop, Erhan

    2015-08-01

    Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domain they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation to study this is the so-called polynomial sign representation where -1 and 1 are associated with true and false, respectively. The polynomial is treated as a real-valued function and evaluated at its parameters, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF and more generally aims to find the maximum over this quantity for all BFs in a given dimension. With this work, for the first time exact results for four- and five-variable BFs are obtained, and strong bounds for six-variable BFs are derived. In addition, some connections between the sign representation framework and bent functions are derived, which are generally studied for their desirable cryptographic properties. PMID:26079754

  6. Safety assessment of EPA-rich oil produced from yeast: Results of a 90-day subchronic toxicity study.

    PubMed

    MacKenzie, Susan A; Belcher, Leigh A; Sykes, Greg P; Frame, Steven R; Mukerji, Pushkor; Gillies, Peter J

    2010-12-01

    The safety of eicosapentaenoic acid (EPA) oil produced from genetically modified Yarrowia lipolytica yeast was evaluated following 90 days of exposure. Groups of rats received 0 (olive oil), 98, 488, or 976 mg EPA/kg/day, or GRAS fish oil or deionized water by oral gavage. Rats were evaluated for in-life, neurobehavioral, anatomic and clinical pathology parameters. Lower serum cholesterol (total and non-HDL) was observed in Medium and High EPA and fish oil groups. Lower HDL was observed in High EPA and fish oil males, only at early time points. Liver weights were increased in High EPA and Medium EPA (female only) groups with no associated clinical or microscopic pathology findings. Nasal lesions, attributed to oil in the nasal cavity, were observed in High and Medium EPA and fish oil groups. No other effects were attributed to test oil exposure. Exposure to EPA oil for 90 days produced no effects at 98 mg EPA/kg/day and no adverse effects at doses up to 976 mg EPA/kg/day. The safety profile of EPA oil was comparable to that of GRAS fish oil. These results support the use of EPA oil produced from yeast as a safe source for use in dietary supplements.

  7. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
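    The flavour of such a calculation can be conveyed by a much cruder sketch than the algorithm flight-tested here: assume an idle descent at a constant flight-path angle with a constant wind, and work out where the descent must begin and how long it takes. The 3-degree path angle, speeds and altitudes below are invented, and the constant-angle simplification replaces the constant Mach/airspeed profile of the paper.

```python
import math

def descent_plan(cruise_alt_ft, fix_alt_ft, tas_knots, gamma_deg=-3.0, wind_knots=0.0):
    """Rough top-of-descent estimate for an idle, constant flight-path-angle
    descent (a simplification of the constant Mach/airspeed profile in the
    paper). tas_knots is an average true airspeed; a tailwind is positive.
    Returns (ground distance before the fix in NM, descent time in minutes)."""
    delta_h_ft = cruise_alt_ft - fix_alt_ft
    air_dist_nm = (delta_h_ft / math.tan(math.radians(-gamma_deg))) / 6076.12
    time_hr = air_dist_nm / tas_knots
    ground_dist_nm = (tas_knots + wind_knots) * time_hr
    return ground_dist_nm, time_hr * 60.0

dist, minutes = descent_plan(cruise_alt_ft=35000, fix_alt_ft=10000,
                             tas_knots=320, wind_knots=-30)
print(f"start descent {dist:.0f} NM before the fix, about {minutes:.1f} min")
```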

  8. Odour reduction strategies for biosolids produced from a Western Australian wastewater treatment plant: results from Phase I laboratory trials.

    PubMed

    Gruchlik, Yolanta; Heitz, Anna; Joll, Cynthia; Driessen, Hanna; Fouché, Lise; Penney, Nancy; Charrois, Jeffrey W A

    2013-01-01

    This study investigated sources of odours from biosolids produced from a Western Australian wastewater treatment plant and examined possible strategies for odour reduction, specifically chemical additions and reduction of centrifuge speed on a laboratory scale. To identify the odorous compounds and assess the effectiveness of the odour reduction measures trialled in this study, headspace solid-phase microextraction gas chromatography-mass spectrometry (HS SPME-GC-MS) methods were developed. The target odour compounds included volatile sulphur compounds (e.g. dimethyl sulphide, dimethyl disulphide and dimethyl trisulphide) and other volatile organic compounds (e.g. toluene, ethylbenzene, styrene, p-cresol, indole and skatole). In our laboratory trials, aluminium sulphate added to anaerobically digested sludge prior to dewatering offered the best odour reduction strategy amongst the options that were investigated, resulting in approximately 40% reduction in the maximum concentration of the total volatile organic sulphur compounds, relative to control. PMID:24355840

  9. RESULTS OF MONITORING METALLO-BETA-LACTAMASE-PRODUCING STRAINS OF PSEUDOMONAS AERUGINOSA IN A MULTI-PROFILE HOSPITAL.

    PubMed

    Shamaeva, S K; Portnyagina, U S; Edelstein, M V; Kuzmina, A A; Maloguloval, S; Varfolomeeva, N A

    2015-01-01

    The authors present the results of long-term monitoring of metallo-beta-lactamase (MBL) producing strains of Pseudomonas aeruginosa in the Republican Hospital No 2 of Yakutsk, Russian Federation. Hospitals across Russia, as well as the rest of the world, face a rapid appearance and a virtually unchecked spread of multiresistant and panresistant nosocomial pathogens. Especially prevalent are multidrug-resistant isolates of P. aeruginosa, most often found among the patients of intensive care and intensive therapy units, as well as surgery departments. The aim of this study is to investigate the prevalence of metallo-beta-lactamase-producing strains of P. aeruginosa in a multi-profile hospital. 2,135 isolates of P. aeruginosa were studied, collected during a time span of seven years (2008-2014) from clinical specimens of hospitalised patients in acute surgery, purulent surgery, neurosurgery, otolaryngology, coloproctology departments, intensive care and intensive therapy, burn units, as well as intensive care unit for patients with acute cerebrovascular accidents and coronary care unit. Strains were identified and re-identified using established methods, NEFERMtest 24 (MICROLATEST) biochemical microtest and API (bioMerieux) test systems were used. For all carbapenem-resistant strains a phenotype screening for MBL was performed using the double-disks method with EDTA. In order to identify VIM-type and IMP-type MBL genes a real-time multiplex polymerase chain reaction was used. Among the investigated strains the largest number of P. aeruginosa - 35.6% (761 isolates) was found in patients at intensive care and intensive therapy units. Clonal expansion of extensively drug-resistant strain P. aeruginosa ST235 (VIM-2) was determined, the resistance mechanism of which is connected to MBL. Sensitivity determination of MBL-producing isolates of P. aeruginosa has shown that isolated strains have a high level of resistance (100%) to all tested antibacterial agents: piperacillin

  10. Multimedia and Training: Practice and Skills of European Producers, (Part 1) Results of the European Project "START-UP."

    ERIC Educational Resources Information Center

    Gutierrez, Christine Gardiol; Boder, Andre

    1992-01-01

    Describes the START-UP project developed by the European Community to identify educational and training multimedia producers in European countries and to define the methodologies that these producers use in developing their products. Highlights include production stages, multimedia skills, teamwork, decision making, learning processes, learner…

  11. Two Measurement Methods of Leaf Dry Matter Content Produce Similar Results in a Broad Range of Species

    PubMed Central

    Vaieretti, María Victoria; Díaz, Sandra; Vile, Denis; Garnier, Eric

    2007-01-01

    Background and Aims Leaf dry matter content (LDMC) is widely used as an indicator of plant resource use in plant functional trait databases. Two main methods have been proposed to measure LDMC, which basically differ in the rehydration procedure to which leaves are subjected after harvesting. These are the ‘complete rehydration’ protocol of Garnier et al. (2001, Functional Ecology 15: 688–695) and the ‘partial rehydration’ protocol of Vendramini et al. (2002, New Phytologist 154: 147–157). Methods To test differences in LDMC due to the use of different methods, LDMC was measured on 51 native and cultivated species representing a wide range of plant families and growth forms from central-western Argentina, following the complete rehydration and partial rehydration protocols. Key Results and Conclusions The LDMC values obtained by both methods were strongly and positively correlated, clearly showing that LDMC is highly conserved between the two procedures. These trends were not altered by the exclusion of plants with non-laminar leaves. Although the complete rehydration method is the safest to measure LDMC, the partial rehydration procedure produces similar results and is faster. It therefore appears as an acceptable option for those situations in which the complete rehydration method cannot be applied. Two notes of caution are given for cases in which different datasets are compared or combined: (1) the discrepancy between the two rehydration protocols is greatest in the case of high-LDMC (succulent or tender) leaves; (2) the results suggest that, when comparing many studies across unrelated datasets, differences in the measurement protocol may be less important than differences among seasons, years and the quality of local habitats. PMID:17353207

  12. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  13. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  14. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  15. Native-sized recombinant spider silk protein produced in metabolically engineered Escherichia coli results in a strong fiber

    PubMed Central

    Xia, Xiao-Xia; Qian, Zhi-Gang; Ki, Chang Seok; Park, Young Hwan; Kaplan, David L.; Lee, Sang Yup

    2010-01-01

    Spider dragline silk is a remarkably strong fiber that makes it attractive for numerous applications. Much has thus been done to make similar fibers by biomimic spinning of recombinant dragline silk proteins. However, success is limited in part due to the inability to successfully express native-sized recombinant silk proteins (250–320 kDa). Here we show that a 284.9 kDa recombinant protein of the spider Nephila clavipes is produced and spun into a fiber displaying mechanical properties comparable to those of the native silk. The native-sized protein, predominantly rich in glycine (44.9%), was favorably expressed in metabolically engineered Escherichia coli within which the glycyl-tRNA pool was elevated. We also found that the recombinant proteins of lower molecular weight versions yielded inferior fiber properties. The results provide insight into evolution of silk protein size related to mechanical performance, and also clarify why spinning lower molecular weight proteins does not recapitulate the properties of native fibers. Furthermore, the silk expression, purification, and spinning platform established here should be useful for sustainable production of natural quality dragline silk, potentially enabling broader applications. PMID:20660779

  16. Native-sized recombinant spider silk protein produced in metabolically engineered Escherichia coli results in a strong fiber.

    PubMed

    Xia, Xiao-Xia; Qian, Zhi-Gang; Ki, Chang Seok; Park, Young Hwan; Kaplan, David L; Lee, Sang Yup

    2010-08-10

    Spider dragline silk is a remarkably strong fiber that makes it attractive for numerous applications. Much has thus been done to make similar fibers by biomimic spinning of recombinant dragline silk proteins. However, success is limited in part due to the inability to successfully express native-sized recombinant silk proteins (250-320 kDa). Here we show that a 284.9 kDa recombinant protein of the spider Nephila clavipes is produced and spun into a fiber displaying mechanical properties comparable to those of the native silk. The native-sized protein, predominantly rich in glycine (44.9%), was favorably expressed in metabolically engineered Escherichia coli within which the glycyl-tRNA pool was elevated. We also found that the recombinant proteins of lower molecular weight versions yielded inferior fiber properties. The results provide insight into evolution of silk protein size related to mechanical performance, and also clarify why spinning lower molecular weight proteins does not recapitulate the properties of native fibers. Furthermore, the silk expression, purification, and spinning platform established here should be useful for sustainable production of natural quality dragline silk, potentially enabling broader applications. PMID:20660779

  17. ICPES analyses using full image spectra and astronomical data fitting algorithms to provide diagnostic and result information

    SciTech Connect

    Spencer, W.A.; Goode, S.R.

    1997-10-01

    ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.

  18. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  19. Formation of trichloromethane in chlorinated water and fresh-cut produce and as a result of reacting with citric acid

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chlorine (sodium hypochlorite) is commonly used by the fresh produce industry to sanitize wash water, fresh and fresh-cut fruits and vegetables. However, possible formation of harmful chlorine by-products is a concern. The objectives of this study were to compare chlorine and chlorine dioxide in t...

  20. Multimedia and Training: Practice and Skills of European Producers, Results of the European Project "START-UP" (Part 2).

    ERIC Educational Resources Information Center

    Gutierrez, Christine Gardiol; Boder, Andre

    1992-01-01

    This second part of a report on European multimedia producers focuses on evaluation criteria and methodologies and the European market for educational and training multimedia materials, including production costs, subcontracting, the production hierarchy, the rationalization of production, and trends in the educational and training multimedia…

  1. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
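
    A minimal sketch of this kind of open-loop descent planning is given below. It is not the NASA algorithm itself; the linear performance coefficients, the assumed weight effect, and the simple head-wind correction are illustrative assumptions standing in for the airplane performance approximations used in the flight tests.

        # Hypothetical sketch of a constant-Mach/airspeed idle-descent planner in the
        # spirit of the algorithm described above. All coefficients are illustrative
        # assumptions, not flight-test data.
        def descent_plan(cruise_alt_ft, fix_alt_ft, cas_kt, gross_weight_lb, head_wind_kt):
            """Estimate descent time (min) and ground distance (n.mi.) to a metering fix."""
            delta_alt = cruise_alt_ft - fix_alt_ft
            # Linear approximation of average descent rate vs. altitude to lose and weight.
            avg_rate_fpm = 1500.0 + 150.0 * (delta_alt / 10000.0)
            avg_rate_fpm *= 1.0 - 0.05 * (gross_weight_lb - 90000.0) / 90000.0
            time_min = delta_alt / avg_rate_fpm
            ground_speed_kt = cas_kt - head_wind_kt          # crude wind correction
            distance_nm = ground_speed_kt * time_min / 60.0
            return time_min, distance_nm

        t, d = descent_plan(35000, 10000, cas_kt=280, gross_weight_lb=95000, head_wind_kt=30)
        print(f"Start the idle descent about {d:.0f} n.mi. before the fix (~{t:.1f} min).")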

  2. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.

  3. The Operational MODIS Cloud Optical and Microphysical Property Product: Overview of the Collection 6 Algorithm and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas

    2012-01-01

    Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 µm bands (instead of differences relative to the standard 2.1 µm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially-complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources including the MODIS L1B uncertainty index for assessing band and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low cloud non-unity emissivity temperature adjustments.

  4. Cosmic ray exposure dating with in situ produced cosmogenic He-3 - Results from young Hawaiian lava flows

    NASA Technical Reports Server (NTRS)

    Kurz, Mark D.; Colodner, Debra; Trull, Thomas W.; Moore, Richard B.; O'Brien, Keran

    1990-01-01

    Cosmogenic helium contents in a suite of Hawaiian radiocarbon-dated lava flows were measured to study the use of the production rate of spallation-produced cosmogenic He-3 as a surface exposure chronometer. Basalt samples from the Mauna Loa and Hualalai volcanoes were analyzed, showing that exposure-age dating is feasible in the 600-13000 year age range. The data suggest a present-day sea-level production rate in olivine of 125 ± 30 atoms/g yr.

  5. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through an ionosphere-free combination and the tropospheric delay was either modeled using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes and stayed below 10 cm even after the vehicle started moving. When the IGS-RTS product and the GPT model were used instead of the IGS precise product, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.

  6. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. The study also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gusts, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  7. Isotope ratio determination of uranium by optical emission spectroscopy on a laser-produced plasma - basic investigations and analytical results

    NASA Astrophysics Data System (ADS)

    Pietsch, W.; Petit, A.; Briand, A.

    1998-05-01

    We report in this paper the first determination of the isotope ratio (238/235) in a uranium sample by optical emission spectroscopy on a laser-produced plasma at reduced pressure (2.67 Pa). Investigations aimed at developing a new application of laser ablation for analytical isotope control of uranium are presented. Optimized experimental conditions allow one to obtain atomic emission spectra characterized by the narrowest possible line widths, of the order of 0.01 nm for the investigated transition UII 424.437 nm. We show that a relative precision in the range of 5% can be achieved for an enrichment of 3.5% 235U. The influence of different relevant plasma parameters on the measured line width is discussed.

  8. Cosmic ray exposure dating with in situ produced cosmogenic 3He: results from young Hawaiian lava flows

    USGS Publications Warehouse

    Kurz, M.D.; Colodner, D.; Trull, T.W.; Moore, R.B.; O'Brien, K.

    1990-01-01

    In an effort to determine the in situ production rate of spallation-produced cosmogenic 3He, and evaluate its use as a surface exposure chronometer, we have measured cosmogenic helium contents in a suite of Hawaiian radiocarbon-dated lava flows. The lava flows, ranging in age from 600 to 13,000 years, were collected from Hualalai and Mauna Loa volcanoes on the island of Hawaii. Because cosmic ray surface-exposure dating requires the complete absence of erosion or soil cover, these lava flows were selected specifically for this purpose. The 3He production rate, measured within olivine phenocrysts, was found to vary significantly, ranging from 47 to 150 atoms g⁻¹ yr⁻¹ (normalized to sea level). Although there is considerable scatter in the data, the samples younger than 10,000 years are well-preserved and exposed, and the production rate variations are therefore not related to erosion or soil cover. Data averaged over the past 2000 years indicate a sea-level 3He production rate of 125 ± 30 atoms g⁻¹ yr⁻¹, which agrees well with previous estimates. The longer record suggests a minimum in sea level normalized 3He production rate between 2000 and 7000 years (55 ± 15 atoms g⁻¹ yr⁻¹), as compared to samples younger than 2000 years (125 ± 30 atoms g⁻¹ yr⁻¹), and those between 7000 and 10,000 years (127 ± 19 atoms g⁻¹ yr⁻¹). The minimum in production rate is similar in age to that which would be produced by variations in geomagnetic field strength, as indicated by archeomagnetic data. However, the production rate variations (a factor of 2.3 ± 0.8) are poorly determined due to the large uncertainties in the youngest samples and questions of surface preservation for the older samples. Calculations using the atmospheric production model of O'Brien (1979) [35], and the method of Lal and Peters (1967) [11], predict smaller production rate variations for similar variation in dipole moment (a factor of 1.15-1.65). Because the production rate variations, archeomagnetic data

  9. Algorithms in Learning, Teaching, and Instructional Design. Studies in Systematic Instruction and Training Technical Report 51201.

    ERIC Educational Resources Information Center

    Gerlach, Vernon S.; And Others

    An algorithm is defined here as an unambiguous procedure which will always produce the correct result when applied to any problem of a given class of problems. This paper gives an extended discussion of the definition of an algorithm. It also explores in detail the elements of an algorithm, the representation of algorithms in standard prose, flow…

  10. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  11. Performance simulation of a combustion engine charged by a variable geometry turbocharger. I - Prerequirements, boundary conditions and model development. II - Simulation algorithm, computed results

    NASA Astrophysics Data System (ADS)

    Malobabic, M.; Buttschardt, W.; Rautenberg, M.

    The paper presents a theoretical derivation of the relationship between a variable geometry turbocharger and the combustion engine, using simplified boundary conditions and model restraints and taking into account the combustion process itself as well as the nonadiabatic operating conditions for the turbine and the compressor. The simulation algorithm is described, and the results computed using this algorithm are compared with measurements performed on a test engine in combination with a controllable turbocharger with adjustable turbine inlet guide vanes. In addition, the results of theoretical parameter studies are presented, which include the simulation of a given turbocharger with variable geometry in combination with different sized combustion engines and the simulation of different sized variable-geometry turbochargers in combination with a given combustion engine.

  12. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
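
    As a rough illustration of the kind of effect described above, the sketch below integrates turbine output from an hourly wind series under a start/stop control with hysteresis; the power curve, cut-in/cut-out thresholds, and synthetic wind series are assumptions, not the Sandia 17-m VAWT model.

        # Minimal sketch: energy capture from a wind time series under a start/stop
        # control algorithm with hysteresis. All numbers are illustrative assumptions.
        import numpy as np

        def simulate_energy(wind_mps, dt_s=3600.0, v_start=5.0, v_stop=4.0,
                            v_rated=12.0, p_rated_kw=60.0):
            running, energy_kwh = False, 0.0
            for v in wind_mps:
                if not running and v >= v_start:
                    running = True                       # start threshold reached
                elif running and v < v_stop:
                    running = False                      # stop threshold reached
                if running:
                    p_kw = min(p_rated_kw, p_rated_kw * (v / v_rated) ** 3)  # cubic curve
                    energy_kwh += p_kw * dt_s / 3600.0
            return energy_kwh

        rng = np.random.default_rng(0)
        wind = np.clip(rng.normal(7.0, 3.0, size=8760), 0.0, None)   # one synthetic year
        print(f"Annual energy with these thresholds: {simulate_energy(wind):.0f} kWh")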

  13. Epidemic diffusion of OXA-23-producing Acinetobacter baumannii isolates in Italy: results of the first cross-sectional countrywide survey.

    PubMed

    Principe, Luigi; Piazza, Aurora; Giani, Tommaso; Bracco, Silvia; Caltagirone, Maria Sofia; Arena, Fabio; Nucleo, Elisabetta; Tammaro, Federica; Rossolini, Gian Maria; Pagani, Laura; Luzzaro, Francesco

    2014-08-01

    Carbapenem-resistant Acinetobacter baumannii (CRAb) is emerging worldwide as a public health problem in various settings. The aim of this study was to investigate the prevalence of CRAb isolates in Italy and to characterize their resistance mechanisms and genetic relatedness. A countrywide cross-sectional survey was carried out at 25 centers in mid-2011. CRAb isolates were reported from all participating centers, with overall proportions of 45.7% and 22.2% among consecutive nonreplicate clinical isolates of A. baumannii from inpatients (n = 508) and outpatients (n = 63), respectively. Most of them were resistant to multiple antibiotics, whereas all remained susceptible to colistin, with MIC50 and MIC90 values of ≤ 0.5 mg/liter. The genes coding for carbapenemase production were identified by PCR and sequencing. OXA-23 enzymes (found in all centers) were by far the most common carbapenemases (81.7%), followed by OXA-58 oxacillinases (4.5%), which were found in 7 of the 25 centers. In 6 cases, CRAb isolates carried both bla(OXA-23-like) and bla(OXA-58-like) genes. A repetitive extragenic palindromic (REP)-PCR technique, multiplex PCRs for group identification, and multilocus sequence typing (MLST) were used to determine the genetic relationships among representative isolates (n = 55). Two different clonal lineages were identified, including a dominant clone of sequence type 2 (ST2) related to the international clone II (sequence group 1 [SG1], SG4, and SG5) and a clone of ST78 (SG6) previously described in Italy. Overall, our results demonstrate that OXA-23 enzymes have become the most prevalent carbapenemases and are now endemic in Italy. In addition, molecular typing profiles showed the presence of international and national clonal lineages in Italy.

  14. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
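
    The third-order-polynomial scaling mentioned above can be sketched as below; the coefficient choice (unit slope at zero, saturation at an assumed actuator limit) is an illustrative assumption, not the tuned gains used in the project.

        # Hedged sketch of a cubic (third-order polynomial) gain: small cues pass nearly
        # unscaled, large cues are compressed to stay within an assumed motion limit.
        def cubic_gain(x, x_max, y_max):
            """Map an aircraft cue x in [-x_max, x_max] to a command bounded by y_max."""
            a1 = 1.0                                   # near-unity slope for small inputs
            a3 = (y_max - a1 * x_max) / x_max ** 3     # forces f(x_max) == y_max
            x = max(-x_max, min(x_max, x))             # clamp to the operational envelope
            return a1 * x + a3 * x ** 3                # coefficients must be tuned so the
                                                       # mapping stays monotonic in practice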

  15. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  16. Solutions of the Two Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    Leblanc, James

    In this talk we present numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit we employ numerous methods including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.

  17. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is strongly affected by different kinds of clutter. Various algorithms have been developed to mitigate clutter, but estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. The model consists of laser pulse emission, clutter jamming, laser pulse reception, and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength, and the analog-to-digital converter integrated in the sampling circuit operates at 250 megasamples per second. Together, the simulation model and the hardware platform form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed that combines a matched filter with constant fraction discrimination (CFD): the laser echo pulse is first processed by the matched filter, CFD is then applied, and clutter from ground and trees is discriminated so that the target image can be produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm, and the new algorithm; the simulation results demonstrate that the new algorithm best mitigates clutter reflected by ground and trees.
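
    A toy version of the compound detector is sketched below: a matched filter followed by constant fraction discrimination. The pulse template and sample rate are taken from the numbers above; the delay and fraction are illustrative assumptions.

        # Illustrative two-stage detection: matched filtering, then constant fraction
        # discrimination (CFD) to time the surviving echo. Parameters are assumptions.
        import numpy as np

        FS = 250e6                                    # 250 MS/s sampling (from the abstract)
        SIGMA = 40e-9 / 2.355                         # 40 ns FWHM Gaussian pulse
        t = np.arange(64) / FS
        TEMPLATE = np.exp(-0.5 * ((t - t[32]) / SIGMA) ** 2)

        def detect(echo, fraction=0.4, delay_samples=8):
            """Return the sample index of the CFD zero crossing, or None if no pulse."""
            filtered = np.convolve(echo, TEMPLATE[::-1], mode="same")  # matched filter
            cfd = fraction * filtered - np.roll(filtered, delay_samples)
            crossings = np.where((cfd[:-1] < 0) & (cfd[1:] >= 0))[0]
            return int(crossings[0]) if crossings.size else None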

  18. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  19. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  20. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) the SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.

  1. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  2. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  3. A splitting algorithm for Vlasov simulation with filamentation filtration

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Farrell, W. M.

    1994-01-01

    A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.

  4. An SMP soft classification algorithm for remote sensing

    NASA Astrophysics Data System (ADS)

    Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.

    2014-07-01

    This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.

  5. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process that requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts, and automating the process with an algorithm, a well-defined rule for solving a problem, can act as a bridge between them. In this study a univariate time series forecasting algorithm was developed in Java and validated against SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (moving average, decomposition, exponential smoothing, time series regression, and ARIMA), and current forecasting practices (such as data partitioning, several error measures, and recursive evaluation) were employed. The results of the algorithm tally with those of SPSS and Excel. The algorithm will benefit not only forecasters but also end users who lack in-depth knowledge of the forecasting process.
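
    The cross-check described above can be reduced to a very small example: compute a forecast in code and verify it against an independently calculated value (here a hand calculation stands in for the SPSS/Excel comparison). The data values are illustrative.

        # Minimal moving-average forecast with a validation check against a hand result.
        def moving_average_forecast(series, window):
            """One-step-ahead forecasts: forecast t uses the mean of the previous `window` points."""
            return [sum(series[i - window:i]) / window for i in range(window, len(series))]

        data = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]   # illustrative series
        fc = moving_average_forecast(data, window=3)
        mae = sum(abs(a - f) for a, f in zip(data[3:], fc)) / len(fc)
        assert abs(fc[0] - (112 + 118 + 132) / 3) < 1e-9            # tallies with hand value
        print([round(f, 1) for f in fc], round(mae, 1))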

  6. MLP iterative construction algorithm

    NASA Astrophysics Data System (ADS)

    Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.

    1997-04-01

    The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds hidden layer nodes one at a time, separating classes on a pair-wise basis, until the data are projected into a linearly separable space by class. MICA then trains the output layer nodes, which results in an MLP that achieves 100% accuracy on the training data. Like backpropagation, MICA produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden-node pruning technique.

  7. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automata (WFA), in particular, the grayness functions of graytone images. Image-data compression results based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  8. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  9. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  10. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization (PSO). Even though these algorithms have been widely applied in many disciplines because of their simplicity, they tend to become trapped in local minima during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments with the proposed algorithm were conducted on the 20 Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and PSO.
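
    The ADDC objective named above can be sketched as follows, assuming TF-IDF document vectors and cosine distance; a firefly's position would encode the k centroids whose brightness is derived from this fitness. This is one common form of ADDC, not necessarily the exact formulation used by the authors.

        # Sketch of an ADDC fitness: lower values mean more compact clusters.
        import numpy as np

        def addc(docs, centroids):
            """Average distance of documents to their closest cluster centroid."""
            d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
            c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
            dist = 1.0 - d @ c.T                       # cosine distance, shape (n_docs, k)
            labels = dist.argmin(axis=1)
            nearest = dist.min(axis=1)
            # Average within each non-empty cluster first, then across clusters.
            per_cluster = [nearest[labels == j].mean()
                           for j in range(centroids.shape[0]) if np.any(labels == j)]
            return float(np.mean(per_cluster))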

  11. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional, and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but it has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise, and versatile enough to be deployed to monitor seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets in which each target class is described by a set of signal attributes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations, and other relevant information. The random forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g., SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
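
    A hedged sketch of the supervised setup is shown below using scikit-learn's random forest on precomputed signal attributes; the file names, attribute matrix, and event classes are placeholders, not the authors' catalog or feature set.

        # Random forest classification of seismic events from precomputed attributes.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        X = np.load("event_features.npy")   # assumed: n_events x n_attributes
        y = np.load("event_labels.npy")     # assumed classes, e.g. quake / rockfall / noise
        clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())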

  12. Profiling wind and greenhouse gases by infrared-laser occultation: algorithm and results from end-to-end simulations in windy air

    NASA Astrophysics Data System (ADS)

    Plach, A.; Proschek, V.; Kirchengast, G.

    2015-01-01

    The new mission concept of microwave and infrared-laser occultation between low-Earth-orbit satellites (LMIO) is designed to provide accurate and long-term stable profiles of atmospheric thermodynamic variables, greenhouse gases (GHGs), and line-of-sight (l.o.s.) wind speed with focus on the upper troposphere and lower stratosphere (UTLS). While the unique quality of GHG retrievals enabled by LMIO over the UTLS has been recently demonstrated based on end-to-end simulations, the promise of l.o.s. wind retrieval, and of joint GHG and wind retrieval, has not yet been analyzed in any realistic simulation setting so far. Here we describe a newly developed l.o.s. wind retrieval algorithm, which we embedded in an end-to-end simulation framework that also includes the retrieval of thermodynamic variables and GHGs, and analyze the performance of both standalone wind retrieval and joint wind and GHG retrieval. The wind algorithm utilizes LMIO laser signals placed on the inflection points at the wings of the highly symmetric C18OO absorption line near 4767 cm⁻¹ and exploits transmission differences from the wind-induced Doppler shift. Based on realistic example cases for a diversity of atmospheric conditions, ranging from tropical to high-latitude winter, we find that the retrieved l.o.s. wind profiles are of high quality over the lower stratosphere under all conditions, i.e., unbiased and accurate to within about 2 m s⁻¹ over about 15 to 35 km. The wind accuracy degrades into the upper troposphere due to decreasing signal-to-noise ratio of the wind-induced differential transmission signals. The GHG retrieval in windy air is not vulnerable to wind speed uncertainties up to about 10 m s⁻¹ but is found to benefit in case of higher speeds from the integrated wind retrieval that enables correction of the wind-induced Doppler shift of GHG signals. Overall, both the l.o.s. wind and GHG retrieval results are strongly encouraging towards further development and implementation of an LMIO mission.
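
    The core relation exploited by the wind algorithm is the classical Doppler shift of the absorption line; a minimal numerical sketch is given below, with the line position taken from the abstract and the shift value chosen purely for illustration.

        # Line-of-sight wind from a wind-induced Doppler shift: v = c * dnu / nu0.
        C = 299_792_458.0                    # speed of light, m/s
        NU0_HZ = 4767.0 * 100.0 * C          # 4767 cm^-1 converted to Hz

        def los_wind(doppler_shift_hz):
            """Line-of-sight wind speed (m/s) corresponding to an observed shift."""
            return C * doppler_shift_hz / NU0_HZ

        print(f"{los_wind(0.95e6):.2f} m/s")   # a ~0.95 MHz shift maps to about 2 m/s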

  13. Mortality prediction in the ICU: can we do better? Results from the Super ICU Learner Algorithm (SICULA) project, a population-based study

    PubMed Central

    Pirracchio, Romain; Petersen, Maya L.; Carone, Marco; Rigon, Matthieu Resche; Chevret, Sylvie; van der LAAN, Mark J.

    2015-01-01

    Background Improved mortality prediction for patients in intensive care units (ICU) remains an important challenge. Many severity scores have been proposed but validation studies have concluded that they are not adequately calibrated. Many flexible algorithms are available, yet none of these individually outperform all others regardless of context. In contrast, the Super Learner (SL), an ensemble machine learning technique that leverages multiple learning algorithms to obtain better prediction performance, has been shown to perform at least as well as the optimal member of its library. It might provide an ideal opportunity to construct a novel severity score with an improved performance profile. The aim of the present study was to provide a new mortality prediction algorithm for ICU patients using an implementation of the Super Learner, and to assess its performance relative to prediction based on the SAPS II, APACHE II and SOFA scores. Methods We used the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database (v26) including all patients admitted to an ICU at Boston’s Beth Israel Deaconess Medical Center from 2001 to 2008. The calibration, discrimination and risk classification of predicted hospital mortality based on SAPS II, on APACHE II, on SOFA and on our Super Learner-based proposal were evaluated. Performance measures were calculated using cross-validation to avoid making biased assessments. Our proposed score was then externally validated on a dataset of 200 randomly selected patients admitted to the ICU of Hôpital Européen Georges-Pompidou in Paris, France between September 2013 and June 2014. The primary outcome was hospital mortality. The explanatory variables were the same as those included in the SAPS II score. Results 24,508 patients were included, with median SAPS II 38 (IQR: 27–51), median SOFA 5 (IQR: 2–8). A total of 3,002/24,508 (12.2%) patients died in the hospital. The two versions of our Super Learner
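
    The Super Learner itself is a cross-validated, weighted combination of a library of candidate learners. As a rough analogue only, the sketch below uses scikit-learn's cross-validated stacking (not the authors' SICULA implementation); the base library, variables, and outcome names are assumptions.

        # Stacked ensemble as a stand-in for a Super Learner-style library combination.
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier

        library = [
            ("logit", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=25)),
        ]
        model = StackingClassifier(estimators=library,
                                   final_estimator=LogisticRegression(max_iter=1000),
                                   cv=5, stack_method="predict_proba")
        # model.fit(X_train, y_hospital_death)
        # risk = model.predict_proba(X_test)[:, 1]    # predicted hospital mortality risk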

  14. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  15. Fast deterministic algorithm for EEE components classification

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L. A.; Antamoshkin, A. N.; Masich, I. S.

    2015-10-01

    The authors consider the problem of automatic classification of electronic, electrical, and electromechanical (EEE) components based on the results of test control. Electronic components of the same type used in a high-quality unit must come from a single production batch made from a single batch of raw materials. Test-control data are used to split a shipped lot of components into several classes representing the production batches. Methods such as k-means++ clustering or evolutionary algorithms combine local search and random search heuristics, whereas the proposed fast algorithm returns a unique and comparatively precise result for each data set. If the data processing is performed by the customer of the EEE components, this feature of the algorithm allows easy checking of the results by the producer or supplier.
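
    For contrast with the deterministic method, the baseline below shows the kind of k-means++ split the abstract refers to; the data file and the number of batches are assumptions, and re-running with a different seed can change the split, which is exactly the reproducibility issue the deterministic algorithm avoids.

        # Baseline only: k-means++ split of test-control data into candidate batches.
        import numpy as np
        from sklearn.cluster import KMeans

        measurements = np.load("test_control.npy")     # assumed: n_parts x n_parameters
        batches = KMeans(n_clusters=3, init="k-means++", n_init=10,
                         random_state=0).fit_predict(measurements)
        # A different random_state may yield a different partition of the same lot.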

  16. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  17. Sprouting Healthy Kids Promotes Local Produce and Healthy Eating Behavior in Austin, Texas, Middle Schools: Promoting the Use of Local Produce and Healthy Eating Behavior in Austin City Schools. Program Results Report

    ERIC Educational Resources Information Center

    Feiden, Karyn

    2010-01-01

    The Sustainable Food Center, which promotes healthy food choices, partnered with six middle schools in Austin, Texas, to implement Sprouting Healthy Kids. The pilot project was designed to increase children's knowledge of the food system, their consumption of fruits and vegetables and their access to local farm produce. Most students at these…

  18. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  19. Damage evaluation on a multi-story framed structures: comparison of results retrieved from algorithms based on modal and non-modal parameters

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria

    2016-04-01

    Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructure and the performance of safety interventions over time. In the case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage to a monitored structure using appropriate algorithms based on the variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors, and mode shapes; the latter are based on parameters related to the geometric characteristics of the monitored structure whose variations can be correlated with damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method is based on the evaluation of the curvature variation (related to the fundamental mode of vibration) over time and compares the variations before, during, and after the earthquake. The interpolation method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using both numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC
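
    A minimal sketch of an interpolation-error damage feature is given below: a spline is fitted to the operational deformed shape (ODS) sampled at the instrumented levels, and the leave-one-out interpolation error at each level is used as the feature. Sensor layout and ODS values are assumptions, not the Basilicata test data.

        # Leave-one-out spline interpolation error along the building height.
        import numpy as np
        from scipy.interpolate import CubicSpline

        def interpolation_error(heights, ods):
            """Interpolation error of the ODS at each interior measurement point."""
            errors = np.zeros(len(ods))
            for i in range(1, len(ods) - 1):
                keep = np.ones(len(ods), dtype=bool)
                keep[i] = False
                spline = CubicSpline(heights[keep], ods[keep])
                errors[i] = abs(spline(heights[i]) - ods[i])
            return errors
        # A statistically significant jump of this error at one level between two
        # inspections would flag the storey where damage has occurred.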

  20. Comparison of ice-sheet satellite altimeter retracking algorithm

    SciTech Connect

    Davis, C.H.

    1996-01-01

    The NASA and ESA retracking algorithms are compared with an algorithm based upon a combined surface and volume (S/V) scattering model. First, the S/V, NASA, and ESA algorithms were used to retrack over 1.3 million altimeter return waveforms from the Greenland and Antarctic ice sheets. The surface elevations from the S/V algorithm were compared with the elevations produced by the NASA and ESA algorithms to determine the relative accuracy of these algorithms when subsurface volume scattering occurs. The results show that the ESA 25% algorithm produced slightly higher surface elevations than the S/V algorithm. The NASA retracking algorithm produced lower surface elevations than the S/V retracking algorithm, with average differences ranging from -0.3 to -0.9 m. The lower NASA elevations can only account for a portion of previously reported differences between altimeter and geoceiver surface elevations, suggesting that the remainder is probably due to orbital differences. Next, by analyzing several thousand satellite crossover points from the Greenland and Antarctic ice sheets, the author estimated the repeatability of the surface elevations derived from the different retracking algorithms. The elevations derived from the ESA 25% and S/V algorithm had the smallest standard deviations for the crossover differences for a time period where no significant change in surface elevation should occur. The NASA standard deviations were approximately 0.2 m larger than those from the ESA 25% and S/V algorithm, which represents an average increase in error of approximately 0.5 m in the datasets.

  1. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
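
    Since simulated annealing emerged as the strongest search technique in these tests, a generic annealing loop of the kind compared here is sketched below. The neighbor and fitness functions, the cooling schedule and the parameter values are placeholders for illustration, not those of the NASA software.

      import math, random

      def simulated_annealing(schedule, neighbor, fitness, t0=1.0, alpha=0.995, steps=20000):
          # schedule : initial assignment of observations to time slots
          # neighbor : function returning a slightly modified copy of a schedule
          # fitness  : function scoring a schedule (higher is better)
          best = current = schedule
          best_f = current_f = fitness(current)
          t = t0
          for _ in range(steps):
              cand = neighbor(current)
              cand_f = fitness(cand)
              # Always accept improvements; accept worse moves with Boltzmann probability
              if cand_f >= current_f or random.random() < math.exp((cand_f - current_f) / t):
                  current, current_f = cand, cand_f
                  if current_f > best_f:
                      best, best_f = current, current_f
              t *= alpha  # geometric cooling
          return best, best_f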

  2. Coronary CTA using scout-based automated tube potential and current selection algorithm, with breast displacement results in lower radiation exposure in females compared to males

    PubMed Central

    Vadvala, Harshna; Kim, Phillip; Mayrhofer, Thomas; Pianykh, Oleg; Kalra, Mannudeep; Hoffmann, Udo

    2014-01-01

    Purpose To evaluate the effect of automatic tube potential selection and automatic exposure control combined with female breast displacement during coronary computed tomography angiography (CCTA) on radiation exposure in women versus men of the same body size. Materials and methods Consecutive clinical exams between January 2012 and July 2013 at an academic medical center were retrospectively analyzed. All examinations were performed using ECG-gating and an automated tube potential and tube current selection algorithm (APS-AEC), with breast displacement in females. Cohorts were stratified by sex and standard World Health Organization body mass index (BMI) ranges. CT dose index volume (CTDIvol), dose length product (DLP), median effective dose (ED), and size specific dose estimate (SSDE) were recorded. Univariable and multivariable regression analyses were performed to evaluate the effect of gender on radiation exposure per BMI. Results A total of 726 exams were included; 343 (47%) were of females; mean BMI was similar by gender (28.6±6.9 kg/m2 females vs. 29.2±6.3 kg/m2 males; P=0.168). Median ED was 2.3 mSv (1.4-5.2) for females and 3.6 mSv (2.5-5.9) for males (P<0.001). Females were exposed to less radiation by a difference in median ED of –1.3 mSv, CTDIvol –4.1 mGy, and SSDE –6.8 mGy (all P<0.001). After adjusting for BMI, patient characteristics, and gating mode, females' exposure was lower by a median ED of –0.7 mSv, CTDIvol –2.3 mGy, and SSDE –3.15 mGy, respectively (all P<0.01). Conclusions We observed a difference in radiation exposure to patients undergoing CCTA with the combined use of AEC-APS and breast displacement in female patients as compared to their BMI-matched male counterparts, with female patients receiving one third less exposure. PMID:25610804

  3. Strengths and weaknesses in the supply of school food resulting from the procurement of family farm produce in a municipality in Brazil.

    PubMed

    Soares, Panmela; Martinelli, Suellen Secchi; Melgarejo, Leonardo; Davó-Blanes, Mari Carmen; Cavalli, Suzi Barletto

    2015-06-01

    The objective of this study was to assess compliance with school food programme recommendations for the procurement of family farm produce. This is an exploratory, descriptive study utilising a qualitative approach based on semistructured interviews with key informants in a municipality in the State of Santa Catarina, Brazil. Study participants were managers and staff of the school food programme and the department of agriculture, and representatives of a farmers' organisation. The produce delivery and demand fulfilment stages of the procurement process were carried out in accordance with the recommendations. However, nonconformities occurred in the elaboration of the public call for proposals, the elaboration of the sales proposal, and the fulfilment of produce quality standards. It was observed that having a diverse range of suppliers, and the exchange of produce between the cooperative and neighbouring municipalities, helped to maintain a regular supply of produce. The elaboration of menus contributed to planning agricultural production. However, agricultural production was not mapped before elaborating the menus in this case study, and an agricultural reform settlement was left out of the programme. A number of weaknesses in the programme were identified which need to be overcome in order to promote local family farming and improve the quality of school food in the municipality.

  4. Severe sepsis and septic shock in pre-hospital emergency medicine: survey results of medical directors of emergency medical services concerning antibiotics, blood cultures and algorithms.

    PubMed

    Casu, Sebastian; Häske, David

    2016-06-01

    Delayed antibiotic treatment for patients in severe sepsis and septic shock decreases the probability of survival. In this survey, medical directors of different emergency medical services (EMS) in Germany were asked whether they are prepared for pre-hospital sepsis therapy with antibiotics or special algorithms, in order to evaluate how well the individual rescue areas are prepared for the treatment of patients with this infectious disease. The objective of the survey was to obtain a general picture of the current status of the EMS with respect to rapid antibiotic treatment for sepsis. A total of 166 medical directors were invited to complete a short survey on behalf of the different rescue service districts in Germany via an electronic cover letter. Of the rescue districts, 25.6 % (n = 20) stated that they keep antibiotics on EMS vehicles. In addition, 2.6 % carry blood cultures on the vehicles. The most common antibiotic is ceftriaxone (a third-generation cephalosporin). In total, 8 (10.3 %) rescue districts use an algorithm for patients with sepsis, severe sepsis or septic shock. Although the German EMS is an emergency physician-based rescue system, special provisions in the form of antibiotics on emergency physician vehicles are missing. At the same time, only 10.3 % of the rescue districts use a special algorithm for sepsis therapy. Sepsis, severe sepsis and septic shock do not appear to be prioritized as highly as these deadly diseases should be in the pre-hospital setting. PMID:26719078

  5. A motif extraction algorithm based on hashing and modulo-4 arithmetic.

    PubMed

    Sheng, Huitao; Mehrotra, Kishan; Mohan, Chilukuri; Raina, Ramesh

    2008-01-01

    We develop an algorithm to identify cis-elements in promoter regions of coregulated genes. This algorithm searches for subsequences of desired length whose frequency of occurrence is relatively high, while accounting for slightly perturbed variants using hash table and modulo arithmetic. Motifs are evaluated using profile matrices and higher-order Markov background model. Simulation results show that our algorithm discovers more motifs present in the test sequences, when compared with two well-known motif-discovery tools (MDScan and AlignACE). The algorithm produces very promising results on real data set; the output of the algorithm contained many known motifs. PMID:20058489
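
    A minimal sketch of the hashing and modulo-4 arithmetic idea is given below: each k-mer is packed into a base-4 integer used as a hash-table key, and single-mismatch variants are enumerated by digit arithmetic on that key. The function names and the one-mismatch neighborhood are illustrative assumptions, not the published implementation.

      from collections import defaultdict

      # Base-4 digit for each nucleotide (the "modulo-4" encoding)
      CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

      def kmer_key(kmer):
          # Pack a k-mer into a base-4 integer usable as a hash-table key
          key = 0
          for base in kmer:
              key = key * 4 + CODE[base]
          return key

      def count_kmers(sequences, k):
          counts = defaultdict(int)
          for seq in sequences:
              for i in range(len(seq) - k + 1):
                  counts[kmer_key(seq[i:i + k])] += 1
          return counts

      def one_mismatch_variants(key, k):
          # Keys of all k-mers differing from `key` at exactly one position,
          # generated by digit arithmetic on the packed representation
          for pos in range(k):
              weight = 4 ** (k - 1 - pos)
              digit = (key // weight) % 4
              for d in range(4):
                  if d != digit:
                      yield key + (d - digit) * weight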

  7. Developing evidence-based algorithms for negative pressure wound therapy in adults with acute and chronic wounds: literature and expert-based face validation results.

    PubMed

    Beitz, Janice M; van Rijswijk, Lia

    2012-04-01

    Negative pressure wound therapy (NPWT) is used extensively in the management of acute and chronic wounds, but concerns persist about its efficacy, effectiveness, and safety. Available guidelines and algorithms are wound type-specific, not evidence-based, and many lack clearly described relative and absolute contraindications and stop criteria. The purpose of this research was to: (1) develop evidence-based algorithms for the safe use of NPWT in adults with acute and chronic wounds by nonwound expert clinicians, and (2) obtain face validity for the algorithms. Using NPWT meta-analyses and systematic reviews (n = 10), NPWT guidelines of care (n = 12), general evidence-based guidelines of wound care (n = 11), and a framework for transitioning between moisture-retentive and NPWT care (n = 1), a set of three algorithms was developed. Literature-based validity for each of the 39 discrete algorithm steps/decision points was obtained by reviewing best available evidence from systematic literature reviews (n = 331 publications) and abstraction of all NPWT-relevant publications (n = 182) using the patient-oriented Strength of Recommendation (SORT) taxonomy. Of the 182 NPWT studies abstracted, 25 met criteria for level 1 and 2 evidence, but only one general assessment step had both level 1 evidence and an "A" strength of recommendation. Next, an Institutional Review Board-approved, cross-sectional mixed-methods survey design face validation pilot study was conducted to solicit comments on, and rate the validity of, the 51 discrete algorithm-related statements, including the 39 decisions/steps. Twelve (12) of the 15 invited interdisciplinary wound experts agreed to participate. The overall algorithm content validity index (CVI) was high (0.96 out of 1). Helpful design suggestions to ensure safe use were made, and participants suggested an examination of commonly used wound definitions in follow-up studies. Results of the literature-based face validation confirm that the

  8. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded by the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method, we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can easily be compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
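
    The sketch below shows one simple way to turn direct frame-to-frame processing of speckle images into a scalar activity index; the specific normalisation is an assumption chosen for illustration and is not the index defined in the paper.

      import numpy as np

      def activity_index(frames):
          # frames : array of shape (n_frames, height, width)
          # Mean absolute frame-to-frame intensity change, normalised by the
          # mean intensity of the stack (an assumed, illustrative definition).
          frames = np.asarray(frames, dtype=float)
          diffs = np.abs(np.diff(frames, axis=0))   # perturbations between frames
          return diffs.mean() / frames.mean()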

  9. SOLIDIFICATION OF THE HANFORD LAW WASTE STREAM PRODUCED AS A RESULT OF NEAR-TANK CONTINUOUS SLUDGE LEACHING AND SODIUM HYDROXIDE RECOVERY

    SciTech Connect

    Reigel, M.; Johnson, F.; Crawford, C.; Jantzen, C.

    2011-09-20

    The U.S. Department of Energy (DOE), Office of River Protection (ORP), is responsible for the remediation and stabilization of the Hanford Site tank farms, including 53 million gallons of highly radioactive mixed waste contained in 177 underground tanks. The plan calls for all waste retrieved from the tanks to be transferred to the Waste Treatment Plant (WTP). The WTP will consist of three primary facilities, including pretreatment facilities for Low Activity Waste (LAW) to remove aluminum, chromium and other solids and radioisotopes that are undesirable in the High Level Waste (HLW) stream. Removal of aluminum from HLW sludge can be accomplished through continuous sludge leaching of the aluminum from the HLW sludge as sodium aluminate; however, this process will introduce a significant amount of sodium hydroxide into the waste stream and consequently will increase the volume of waste to be dispositioned. A sodium recovery process is needed to remove the sodium hydroxide and recycle it back to the aluminum dissolution process. The resulting LAW stream has a high concentration of aluminum and sodium and will require alternative immobilization methods. Five waste forms were evaluated for immobilization of LAW at Hanford after the sodium recovery process. The waste forms considered for these two waste streams include low temperature processes (Saltstone/Cast stone and geopolymers), intermediate temperature processes (steam reforming and phosphate glasses) and high temperature processes (vitrification). These immobilization methods and the waste forms produced were evaluated for (1) compliance with the Performance Assessment (PA) requirements for disposal at the IDF, (2) waste form volume (waste loading), and (3) compatibility with the tank farms and systems. The iron phosphate glasses tested using the product consistency test had normalized release rates lower than the waste form requirements, although the CCC glasses had higher release rates than the

  10. Validation of a treatment algorithm for orthopaedic implant-related infections with device-retention-results from a prospective observational cohort study.

    PubMed

    Tschudin-Sutter, S; Frei, R; Dangel, M; Jakob, M; Balmelli, C; Schaefer, D J; Weisser, M; Elzi, L; Battegay, M; Widmer, A F

    2016-05-01

    Success rates for treatment regimens involving retention of an infected implant are conflicting and failure rates of up to 80% have been reported. We aimed to validate a proposed treatment algorithm, based on strict selection criteria, by assessing long-term outcome of treatment for orthopaedic device-related infection (ODRI) with retention. From January 1999 to December 2009, all patients diagnosed with ODRI at the University Hospital Basel, Switzerland were eligible for treatment with open surgical debridement, implant-retention and antibiotics, if duration of clinical symptoms was ≤3 weeks, the implant was stable, the soft-tissue had no abscess or sinus tract, and the causative pathogen was susceptible to antimicrobial agents with activity against surface-adhering microorganisms. Antimicrobial treatment was administered according to a predefined algorithm. The primary outcome was treatment failure after 2-year follow up. A total of 455 patients were diagnosed with an ODRI, of whom 233 (51.2%) patients were eligible for treatment involving implant-retention. Causative pathogens were mainly Staphylococcus aureus (41.6%) and coagulase-negative staphylococci (33.9%). Among patients with ODRIs related to prostheses, failure was documented in 10.8% (12/111) and in patients with ODRIs related to osteosyntheses, failure occurred in 9.8% (12/122) after 2 years of follow up. In all, 90% of ODRIs were successfully cured with surgical debridement and implant-retention in addition to long-term antimicrobial therapy according to a predefined treatment algorithm: if patients fulfilled strict selection criteria and there was susceptibility to rifampin for Gram-positive pathogens and ciprofloxacin for Gram-negative pathogens. PMID:26806134

  11. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    PubMed

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated against the dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparison of the accuracies of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed better than the other two algorithms, TSA and FSA. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results in comparison with the TSA and FSA algorithms. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
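
    For reference, the classical three-band semi-analytical form that the TSA generalises is often written as Chla ≈ a * [R(λ1)^-1 - R(λ2)^-1] * R(λ3) + b. The sketch below implements that form with placeholder calibration coefficients; the paper derives its own coefficients and band choices.

      def three_band_chla(r1, r2, r3, a=1.0, b=0.0):
          # r1, r2, r3 : remote-sensing reflectance at the three chosen bands
          # a, b       : regression coefficients calibrated against in situ Chla
          #              (placeholder values here, not the paper's calibration)
          index = (1.0 / r1 - 1.0 / r2) * r3
          return a * index + b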

  12. Long-term results of adrenalectomy in patients with aldosterone-producing adenomas: multivariate analysis of factors affecting unresolved hypertension and review of the literature.

    PubMed

    Lumachi, Franco; Ermani, Mario; Basso, Stefano M M; Armanini, Decio; Iacobone, Maurizio; Favia, Gennaro

    2005-10-01

    The long-term surgical cure rate of patients with primary aldosteronism varies widely, and the causes of persistent hypertension are not completely established. We retrospectively reviewed the charts of 98 patients (range, 19-70 years old) with aldosterone-producing adenomas who underwent unilateral adrenalectomy. At a median follow-up of 81 months (range, 18-186 months), the mean blood pressure values improved in 95 out of 98 (96.9%) patients, although hypertension was cured in only 71 out of 98 (72.4%) patients. Multivariate analysis using a logistic regression model adjusted for duration of follow-up showed that only the age of the patients and the duration of the disease independently correlated with unresolved hypertension. The cumulative odds ratio (OR), obtained using the logistic regression function, was 5.38 (95% CI 1.78-16.22), and the ORs of the single variables were 1.32 (95% CI 0.36-19.83) and 4.56 (95% CI 1.41-14.78), respectively. By using discriminant analysis to derive a classification function for the prediction of unresolved hypertension, a maximum predictive power of 75 per cent was achieved. In conclusion, in patients with an aldosterone-producing adenoma undergoing surgery, the combination of age and duration of hypertension gave the best predictive power of a linear classification function and represented the main independent risk factors affecting the hypertension cure rate. PMID:16468537

  13. The effect of sub-surface volume scattering on the accuracy of ice-sheet altimeter retracking algorithms

    NASA Technical Reports Server (NTRS)

    Davis, Curt H.

    1993-01-01

    The NASA and ESA retracking algorithms are compared with an algorithm based upon a combined surface and volume (S/V) scattering model. First, the S/V, NASA, and ESA algorithms were used to retrack over 400,000 altimeter return waveforms from the Greenland and Antarctic ice sheets. The surface elevations from the S/V algorithm were compared with the elevations produced by the NASA and ESA algorithms to determine the relative accuracy of these algorithms when subsurface volume-scattering occurs. The results show that the NASA algorithm produced surface elevations within 35 to 50 cm of the S/V algorithm, while the performance of the ESA algorithm was slightly worse. Next, by analyzing several thousand satellite crossover points from the Antarctic data set, we determined the retracking algorithm that produced the most repeatable surface elevations. The elevations derived from the S/V algorithm had the smallest RMS error for the region of the East Antarctic plateau examined here. The ESA algorithm produced erroneous estimates of elevation change when seasonal variations were present; it measured 0.7 to 1.6-m change in elevation over a 6-month period on the East Antarctic plateau where accumulation rates are only 10 cm/year.

  14. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  15. Identifying Risk Factors for Recent HIV Infection in Kenya Using a Recent Infection Testing Algorithm: Results from a Nationally Representative Population-Based Survey

    PubMed Central

    Kim, Andrea A.; Parekh, Bharat S.; Umuro, Mamo; Galgalo, Tura; Bunnell, Rebecca; Makokha, Ernest; Dobbs, Trudy; Murithi, Patrick; Muraguri, Nicholas; De Cock, Kevin M.; Mermin, Jonathan

    2016-01-01

    Introduction A recent infection testing algorithm (RITA) that can distinguish recent from long-standing HIV infection can be applied to nationally representative population-based surveys to characterize and identify risk factors for recent infection in a country. Materials and Methods We applied a RITA using the Limiting Antigen Avidity Enzyme Immunoassay (LAg) on stored HIV-positive samples from the 2007 Kenya AIDS Indicator Survey. The case definition for recent infection included testing recent on LAg and having no evidence of antiretroviral therapy use. Multivariate analysis was conducted to determine factors associated with recent and long-standing infection compared to HIV-uninfected persons. All estimates were weighted to adjust for sampling probability and nonresponse. Results Of 1,025 HIV-antibody-positive specimens, 64 (6.2%) met the case definition for recent infection and 961 (93.8%) met the case definition for long-standing infection. Compared to HIV-uninfected individuals, factors associated with higher adjusted odds of recent infection were living in Nairobi (adjusted odds ratio [AOR] 11.37; confidence interval [CI] 2.64–48.87) and Nyanza (AOR 4.55; CI 1.39–14.89) provinces compared to Western province; being widowed (AOR 8.04; CI 1.42–45.50) or currently married (AOR 6.42; CI 1.55–26.58) compared to being never married; having had ≥ 2 sexual partners in the last year (AOR 2.86; CI 1.51–5.41); not using a condom at last sex in the past year (AOR 1.61; CI 1.34–1.93); reporting a sexually transmitted infection (STI) diagnosis or symptoms of STI in the past year (AOR 1.97; CI 1.05–8.37); and being aged <30 years with: 1) HSV-2 infection (AOR 8.84; CI 2.62–29.85), 2) male genital ulcer disease (AOR 8.70; CI 2.36–32.08), or 3) lack of male circumcision (AOR 17.83; CI 2.19–144.90). Compared to HIV-uninfected persons, factors associated with higher adjusted odds of long-standing infection included living in Coast (AOR 1.55; CI 1.04–2

  16. Speckle imaging algorithms for planetary imaging

    SciTech Connect

    Johansson, E.

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  17. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  18. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm, in tandem with the Filtered-X Least Mean Square algorithm, for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
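
    A bare-bones LMS identification loop of the kind described is sketched below; the tap count and step size are illustrative assumptions rather than the values used for the satellite simulation.

      import numpy as np

      def lms_identify(x, d, n_taps=32, mu=1e-3):
          # x : signal driving the servo-mechanism
          # d : measured error-sensor output
          # Returns the adapted transversal filter weights approximating the
          # servo-to-sensor transfer function.
          w = np.zeros(n_taps)
          buf = np.zeros(n_taps)
          for k in range(len(x)):
              buf = np.roll(buf, 1)
              buf[0] = x[k]
              y = w @ buf            # filter output
              e = d[k] - y           # estimation error
              w += mu * e * buf      # LMS weight update
          return w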

  19. Phenotypical and molecular responses of Arabidopsis thaliana roots as a result of inoculation with the auxin-producing bacterium Azospirillum brasilense.

    PubMed

    Spaepen, Stijn; Bossuyt, Stijn; Engelen, Kristof; Marchal, Kathleen; Vanderleyden, Jos

    2014-02-01

    The auxin-producing bacterium Azospirillum brasilense Sp245 can promote the growth of several plant species. The model plant Arabidopsis thaliana was chosen as host plant to gain an insight into the molecular mechanisms that govern this interaction. The determination of differential gene expression in Arabidopsis roots after inoculation with either A. brasilense wild-type or an auxin biosynthesis mutant was achieved by microarray analysis. Arabidopsis thaliana inoculation with A. brasilense wild-type increases the number of lateral roots and root hairs, and elevates the internal auxin concentration in the plant. The A. thaliana root transcriptome undergoes extensive changes on A. brasilense inoculation, and the effects are more pronounced at later time points. The wild-type bacterial strain induces changes in hormone- and defense-related genes, as well as in plant cell wall-related genes. The A. brasilense mutant, however, does not elicit these transcriptional changes to the same extent. There are qualitative and quantitative differences between A. thaliana responses to the wild-type A. brasilense strain and the auxin biosynthesis mutant strain, based on both phenotypic and transcriptomic data. This illustrates the major role played by auxin in the Azospirillum-Arabidopsis interaction, and possibly also in other bacterium-plant interactions.

  20. New Attitude Sensor Alignment Calibration Algorithms

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.; Sedlak, Joseph E.; Harman, Richard (Technical Monitor)

    2002-01-01

    Accurate spacecraft attitudes may only be obtained if the primary attitude sensors are well calibrated. Launch shock, relaxation of gravitational stresses and similar effects often produce large enough alignment shifts so that on-orbit alignment calibration is necessary if attitude accuracy requirements are to be met. A variety of attitude sensor alignment algorithms have been developed to meet the need for on-orbit calibration. Two new algorithms are presented here: ALICAL and ALIQUEST. Each of these has advantages in particular circumstances. ALICAL is an attitude independent algorithm that uses near simultaneous measurements from two or more sensors to produce accurate sensor alignments. For each set of simultaneous observations the attitude is overdetermined. The information content of the extra degrees of freedom can be combined over numerous sets to provide the sensor alignments. ALIQUEST is an attitude dependent algorithm that combines sensor and attitude data into a loss function that has the same mathematical form as the Wahba problem. Alignments can then be determined using any of the algorithms (such as the QUEST quaternion estimator) that have been developed to solve the Wahba problem for attitude. Results from the use of these methods on active missions are presented.
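
    Because ALIQUEST reduces alignment estimation to a Wahba-type loss, the standard SVD solution of Wahba's problem is sketched below for context; this is the textbook method, not the ALICAL or ALIQUEST code itself.

      import numpy as np

      def wahba_svd(body_vecs, ref_vecs, weights=None):
          # Find the rotation matrix minimising the weighted misfit between unit
          # vectors measured in the body frame and the same vectors in the
          # reference frame (classic SVD solution of Wahba's problem).
          body_vecs = np.asarray(body_vecs, float)
          ref_vecs = np.asarray(ref_vecs, float)
          if weights is None:
              weights = np.ones(len(body_vecs))
          B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          U, _, Vt = np.linalg.svd(B)
          M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
          return U @ M @ Vt   # rotation taking reference-frame vectors into the body frame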

  1. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor with respect to both the quality (error) and the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using information/knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
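
    To make the parameters concrete, the sketch below shows a bare-bones real-coded GA whose population size and crossover and mutation probabilities are exactly the kind of quantities such a preprocessor would choose per problem; the defaults and operators here are illustrative assumptions, not the authors' GA.

      import random

      def genetic_algorithm(fitness, bounds, pop_size=50, p_cross=0.8,
                            p_mut=0.05, generations=200):
          # fitness : objective to minimise, mapping a list of floats to a float
          # bounds  : list of (low, high) pairs, one per decision variable
          dim = len(bounds)
          pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
          for _ in range(generations):
              scored = sorted(pop, key=fitness)
              parents = scored[:pop_size // 2]               # truncation selection
              children = []
              while len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, dim) if dim > 1 else 1
                  child = a[:cut] + b[cut:] if random.random() < p_cross else a[:]
                  for i, (lo, hi) in enumerate(bounds):      # mutation
                      if random.random() < p_mut:
                          child[i] = random.uniform(lo, hi)
                  children.append(child)
              pop = children
          return min(pop, key=fitness)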

  2. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are converted into an intensity value of zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster, in terms of processing time, than the standard Fast Scanning algorithm.
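
    The single-pass scanning rule is sketched below with a fixed threshold and without the cluster-merging step, purely to illustrate where an adaptive threshold (for example, one derived from local gray-level variance) would plug in; it is an assumption-laden simplification, not the authors' implementation.

      import numpy as np

      def fast_scanning(gray, threshold):
          # gray : 2-D array of gray values in [0, 255]
          # Each pixel joins the upper or left cluster whose running mean is
          # within `threshold`; otherwise it starts a new cluster.
          h, w = gray.shape
          labels = np.zeros((h, w), dtype=int)
          means, counts = {}, {}
          next_label = 1
          for y in range(h):
              for x in range(w):
                  v = float(gray[y, x])
                  best = None
                  for ny, nx in ((y - 1, x), (y, x - 1)):        # upper, left neighbours
                      if ny >= 0 and nx >= 0:
                          lab = labels[ny, nx]
                          if abs(v - means[lab]) <= threshold:
                              best = lab
                              break
                  if best is None:                               # start a new cluster
                      best = next_label
                      next_label += 1
                      means[best], counts[best] = v, 0
                  labels[y, x] = best
                  counts[best] += 1
                  means[best] += (v - means[best]) / counts[best]  # update running mean
          return labels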

  3. Risk factors for bloodstream infections due to colistin-resistant KPC-producing Klebsiella pneumoniae: results from a multicenter case-control-control study.

    PubMed

    Giacobbe, D R; Del Bono, V; Trecarichi, E M; De Rosa, F G; Giannella, M; Bassetti, M; Bartoloni, A; Losito, A R; Corcione, S; Bartoletti, M; Mantengoli, E; Saffioti, C; Pagani, N; Tedeschi, S; Spanu, T; Rossolini, G M; Marchese, A; Ambretti, S; Cauda, R; Viale, P; Viscoli, C; Tumbarello, M

    2015-12-01

    The increasing prevalence of colistin-resistant (ColR), Klebsiella pneumoniae carbapenemase (KPC)-producing K. pneumoniae (Kp) is a matter of concern because of its unfavourable impact on the mortality of KPC-Kp bloodstream infections (BSI) and the shortage of alternative therapeutic options. A matched case-control-control analysis was conducted. The primary study end point was to assess risk factors for ColR KPC-Kp BSI. The secondary end point was to describe the mortality and clinical characteristics of these infections. To assess risk factors for ColR, 142 patients with ColR KPC-Kp BSI were compared to two control groups: 284 controls without infections caused by KPC-Kp (control group A) and 284 controls with colistin-susceptible (ColS) KPC-Kp BSI (control group B). In the first multivariate analysis (cases vs. group A), previous colistin therapy, previous KPC-Kp colonization, ≥3 previous hospitalizations, Charlson score ≥3 and neutropenia were found to be associated with the development of ColR KPC-Kp BSI. In the second multivariate analysis (cases vs. group B), only previous colistin therapy, previous KPC-Kp colonization and Charlson score ≥3 were associated with ColR. Overall, ColR among KPC-Kp blood isolates increased more than threefold during the 4.5-year study period, and the 30-day mortality of ColR KPC-Kp BSI was as high as 51%. Strict rules for the use of colistin are mandatory to staunch the dissemination of ColR in KPC-Kp-endemic hospitals.

  4. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  5. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible for real time and yield reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
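
    The sketch below illustrates the LFSR-driven rate-coding idea: during each time slot a pixel emits an address event when a pseudorandom byte falls below its intensity, so the long-run event rate is proportional to the pixel value. The LFSR taps, slot count and data layout are assumptions for illustration, not the hardware design of the paper.

      def lfsr16(state):
          # One step of a 16-bit Galois LFSR (taps 16, 14, 13, 11 -> mask 0xB400)
          lsb = state & 1
          state >>= 1
          if lsb:
              state ^= 0xB400
          return state

      def frame_to_events(frame, slots=256, seed=0xACE1):
          # frame : dict mapping (row, col) -> 8-bit intensity 0..255
          # Returns a list of (slot, row, col) address events.
          events = []
          state = seed
          for slot in range(slots):
              for (row, col), intensity in frame.items():
                  state = lfsr16(state)
                  if (state & 0xFF) < intensity:   # fire with rate ~ intensity/256
                      events.append((slot, row, col))
          return events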

  6. Acoustic design of rotor blades using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Han, A. Y.; Crossley, W. A.

    1995-01-01

    A genetic algorithm coupled with a simplified acoustic analysis was used to generate low-noise rotor blade designs. The model includes thickness, steady loading and blade-vortex interaction noise estimates. The paper presents solutions for several variations in the fitness function, including thickness noise only, loading noise only, and combinations of the noise types. Preliminary results indicate that the analysis provides reasonable assessments of the noise produced, and that the genetic algorithm successfully searches for 'good' designs. The results show that, for a given required thrust coefficient, proper blade design can noticeably reduce the noise produced, at some expense to the power requirements.

  7. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  8. A New Approximate Chimera Donor Cell Search Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop a chimera-based full potential methodology that is compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver, and to develop a fast donor cell search algorithm that is compatible with the chimera full potential approach. Results of this work include a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple, producing donor cells at rates of up to 60,000 per second.

  9. A VLSI optimal constructive algorithm for classification problems

    SciTech Connect

    Beiu, V.; Draghici, S.; Sethi, I.K.

    1997-10-01

    If neural networks are to be used on a large scale, they have to be implemented in hardware. However, the cost of the hardware implementation is critically sensitive to factors like the precision used for the weights, the total number of bits of information and the maximum fan-in used in the network. This paper presents a version of the Constraint Based Decomposition training algorithm which is able to produce networks using limited precision integer weights and units with limited fan-in. The algorithm is tested on the 2-spiral problem and the results are compared with other existing algorithms.

  10. ON THE VERIFICATION AND VALIDATION OF GEOSPATIAL IMAGE ANALYSIS ALGORITHMS

    SciTech Connect

    Roberts, Randy S.; Trucano, Timothy G.; Pope, Paul A.; Aragon, Cecilia R.; Jiang , Ming; Wei, Thomas; Chilton, Lawrence; Bakel, A. J.

    2010-07-25

    Verification and validation (V&V) of geospatial image analysis algorithms is a difficult task and is becoming increasingly important. While there are many types of image analysis algorithms, we focus on developing V&V methodologies for algorithms designed to provide textual descriptions of geospatial imagery. In this paper, we present a novel methodological basis for V&V that employs a domain-specific ontology, which provides a naming convention for a domain-bounded set of objects and a set of named relationships between these objects. We describe a validation process that proceeds through objectively comparing benchmark imagery, produced using the ontology, with algorithm results. As an example, we describe how the proposed V&V methodology would be applied to algorithms designed to provide textual descriptions of facilities

  11. Detecting Danger: The Dendritic Cell Algorithm

    NASA Astrophysics Data System (ADS)

    Greensmith, Julie; Aickelin, Uwe; Cayzer, Steve

    The "Dendritic Cell Algorithm" (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of dendritic cell (DC) behavior is developed and subsequently used to form an algorithm—the DCA. The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an "artificial DC". Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter, the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.

  12. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  13. Depletion of mucin in mucin-producing human gastrointestinal carcinoma: Results from in vitro and in vivo studies with bromelain and N-acetylcysteine.

    PubMed

    Amini, Afshin; Masoumi-Moghaddam, Samar; Ehteda, Anahid; Liauw, Winston; Morris, David L

    2015-10-20

    Aberrant expression of membrane-associated and secreted mucins, as evident in epithelial tumors, is known to facilitate tumor growth, progression and metastasis, and to provide protection against adverse growth conditions, chemotherapy and immune surveillance. Emerging evidence provides support for the oncogenic role of MUC1 in gastrointestinal carcinomas and relates its expression to an invasive phenotype. Similarly, mucinous differentiation of gastrointestinal tumors, in particular increased or de novo expression of MUC2 and/or MUC5AC, is widely believed to imply an adverse clinicopathological feature. Through formation of viscous gels, too, MUC2 and MUC5AC significantly contribute to the biology and pathogenesis of mucin-secreting gastrointestinal tumors. Here, we investigated the mucin-depleting effects of bromelain (BR) and N-acetylcysteine (NAC), in nine different regimens as single or combination therapy, in in vitro (MKN45, KATOIII and LS174T cell lines) and in vivo (female nude mice bearing intraperitoneal MKN45 and LS174T) settings. The inhibitory effects of the treatment on cancer cell growth and proliferation were also evaluated in vivo. Our results suggest that a combination of BR and NAC with dual effects on growth and mucin products of mucin-expressing tumor cells is a promising candidate towards the development of novel approaches to gastrointestinal malignancies with the involvement of mucin pathology. This capability supports the use of this combination formulation in locoregional approaches for reducing the adverse effects of the aberrantly secreted gel-forming mucins, as in pseudomyxoma peritonei and similar pathologies with ectopic production of mucin.

  14. Depletion of mucin in mucin-producing human gastrointestinal carcinoma: Results from in vitro and in vivo studies with bromelain and N-acetylcysteine

    PubMed Central

    Amini, Afshin; Masoumi-Moghaddam, Samar; Ehteda, Anahid; Liauw, Winston; Morris, David L.

    2015-01-01

    Aberrant expression of membrane-associated and secreted mucins, as evident in epithelial tumors, is known to facilitate tumor growth, progression and metastasis, and to provide protection against adverse growth conditions, chemotherapy and immune surveillance. Emerging evidence provides support for the oncogenic role of MUC1 in gastrointestinal carcinomas and relates its expression to an invasive phenotype. Similarly, mucinous differentiation of gastrointestinal tumors, in particular increased or de novo expression of MUC2 and/or MUC5AC, is widely believed to imply an adverse clinicopathological feature. Through formation of viscous gels, too, MUC2 and MUC5AC significantly contribute to the biology and pathogenesis of mucin-secreting gastrointestinal tumors. Here, we investigated the mucin-depleting effects of bromelain (BR) and N-acetylcysteine (NAC), in nine different regimens as single or combination therapy, in in vitro (MKN45, KATOIII and LS174T cell lines) and in vivo (female nude mice bearing intraperitoneal MKN45 and LS174T) settings. The inhibitory effects of the treatment on cancer cell growth and proliferation were also evaluated in vivo. Our results suggest that a combination of BR and NAC with dual effects on growth and mucin products of mucin-expressing tumor cells is a promising candidate towards the development of novel approaches to gastrointestinal malignancies with the involvement of mucin pathology. This capability supports the use of this combination formulation in locoregional approaches for reducing the adverse effects of the aberrantly secreted gel-forming mucins, as in pseudomyxoma peritonei and similar pathologies with ectopic production of mucin. PMID:26436698

  16. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
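
    A compact sketch of a chaotic firefly variant is given below, in which a logistic map modulates the attractiveness coefficient; this is only one of several possible chaotic tunings, and the parameter values are illustrative, not those benchmarked in the paper.

      import random, math

      def chaotic_firefly(fitness, bounds, n=25, iters=200,
                          beta0=1.0, gamma=1.0, alpha=0.2):
          # fitness : function to minimise; bounds : list of (low, high) per dimension
          dim = len(bounds)
          x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
          c = 0.7                                        # chaotic variable in (0, 1)
          for _ in range(iters):
              c = 4.0 * c * (1.0 - c)                    # logistic map drives attractiveness
              f = [fitness(p) for p in x]
              for i in range(n):
                  for j in range(n):
                      if f[j] < f[i]:                    # move firefly i toward brighter j
                          r2 = sum((x[i][k] - x[j][k]) ** 2 for k in range(dim))
                          beta = beta0 * c * math.exp(-gamma * r2)
                          for k in range(dim):
                              lo, hi = bounds[k]
                              step = beta * (x[j][k] - x[i][k]) + alpha * (random.random() - 0.5)
                              x[i][k] = min(hi, max(lo, x[i][k] + step))
                  f[i] = fitness(x[i])                   # refresh brightness after moving
          best = min(range(n), key=lambda i: f[i])
          return x[best], f[best]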

  17. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  18. On Learning Algorithms for Nash Equilibria

    NASA Astrophysics Data System (ADS)

    Daskalakis, Constantinos; Frongillo, Rafael; Papadimitriou, Christos H.; Pierrakos, George; Valiant, Gregory

    Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or non-convergence properties of such dynamics may inform our understanding of the applicability of Nash equilibria as a plausible solution concept in some settings. A second reason for asking this question is in the hope of being able to prove an impossibility result, not dependent on complexity assumptions, for computing Nash equilibria via a restricted class of reasonable algorithms. In this work, we begin to answer this question by considering the dynamics of the standard multiplicative weights update learning algorithms (which are known to converge to a Nash equilibrium for zero-sum games). We revisit a 3×3 game defined by Shapley [10] in the 1950s in order to establish that fictitious play does not converge in general games. For this simple game, we show via a potential function argument that in a variety of settings the multiplicative updates algorithm impressively fails to find the unique Nash equilibrium, in that the cumulative distributions of players produced by learning dynamics actually drift away from the equilibrium.
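
    For concreteness, a minimal multiplicative-weights loop for the row player of a finite game is sketched below; the best-responding opponent and the step size are simplifying assumptions for illustration, not the exact dynamic analysed in the paper.

      import math

      def multiplicative_weights(payoffs, rounds=1000, eta=0.1):
          # payoffs[i][j] : row player's payoff for row i against column j.
          # The column player simply best-responds to the current mixed strategy
          # (a simplifying assumption). Returns the row player's final mixed strategy.
          n = len(payoffs)
          w = [1.0] * n
          for _ in range(rounds):
              total = sum(w)
              p = [wi / total for wi in w]
              # column player best-responds (minimises the row player's expected payoff)
              col = min(range(len(payoffs[0])),
                        key=lambda j: sum(p[i] * payoffs[i][j] for i in range(n)))
              # exponential update on each row's realised payoff
              w = [wi * math.exp(eta * payoffs[i][col]) for i, wi in enumerate(w)]
          total = sum(w)
          return [wi / total for wi in w]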

  19. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns that the bees use to adjust their flying trajectories. The first extension introduces a repulsion factor and penalized fitness (RP) to mitigate the stagnation problem. The second introduces time-varying weights (TVW) into the BSO algorithm to efficiently maintain the balance between exploration and exploitation. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing honey-bee-inspired algorithms on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated.

  20. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in the existing literature. For problems of similar computational difficulty and complexity, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
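
    For readers unfamiliar with the underlying mechanism, the following is a small sketch of the classical ant-based pick/drop rule (Lumer-Faieta style) that ant colony clustering and its abstraction variants build on; the neighborhood-density function and the constants k1, k2, and alpha are illustrative assumptions rather than the paper's settings.

      import numpy as np

      def local_density(item, neighbors, alpha=0.5):
          """Average similarity of `item` to the data items currently in its
          grid neighborhood (Lumer-Faieta style); an empty neighborhood gives 0."""
          if len(neighbors) == 0:
              return 0.0
          d = np.array([np.linalg.norm(item - n) for n in neighbors])
          f = np.mean(1.0 - d / alpha)
          return max(f, 0.0)

      def p_pick(f, k1=0.1):
          return (k1 / (k1 + f)) ** 2      # likely to pick up items that fit their surroundings poorly

      def p_drop(f, k2=0.15):
          return (f / (k2 + f)) ** 2       # likely to drop items where they fit well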

  2. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research, the CRO algorithm is tested on several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open new lines of research for further application of the algorithm to real-world problems. PMID:25147860

  5. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  6. Teamwork Produces Results That Attract National Attention.

    ERIC Educational Resources Information Center

    Andrle, Michele

    1980-01-01

    Recounts the way a high school newspaper staff used teamwork in investigating child labor law violations, writing news stories about the situation, and participating in airings of the issue for local and national television programs. (GT)

  7. A digitally reconstructed radiograph algorithm calculated from first principles

    SciTech Connect

    Staub, David; Murphy, Martin J.

    2013-01-15

    Purpose: To develop an algorithm for computing realistic digitally reconstructed radiographs (DRRs) that match real cone-beam CT (CBCT) projections with no artificial adjustments. Methods: The authors used measured attenuation data from cone-beam CT projection radiographs of different materials to obtain a function to convert CT number to linear attenuation coefficient (LAC). The effects of scatter, beam hardening, and veiling glare were first removed from the attenuation data. Using this conversion function the authors calculated the line integral of LAC through a CT along rays connecting the radiation source and detector pixels with a ray-tracing algorithm, producing raw DRRs. The effects of scatter, beam hardening, and veiling glare were then included in the DRRs through postprocessing. Results: The authors compared actual CBCT projections to DRRs produced with all corrections (scatter, beam hardening, and veiling glare) and to uncorrected DRRs. Algorithm accuracy was assessed through visual comparison of projections and DRRs, pixel intensity comparisons, intensity histogram comparisons, and correlation plots of DRR-to-projection pixel intensities. In general, the fully corrected algorithm provided a small but nontrivial improvement in accuracy over the uncorrected algorithm. The authors also investigated both measurement- and computation-based methods for determining the beam hardening correction, and found the computation-based method to be superior, as it accounted for nonuniform bowtie filter thickness. The authors benchmarked the algorithm for speed and found that it produced DRRs in about 0.35 s for full detector and CT resolution at a ray step-size of 0.5 mm. Conclusions: The authors have demonstrated a DRR algorithm calculated from first principles that accounts for scatter, beam hardening, and veiling glare in order to produce accurate DRRs. The algorithm is computationally efficient, making it a good candidate for iterative CT reconstruction techniques
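
    A minimal sketch of the raw-DRR step described above: convert CT numbers to linear attenuation coefficients (LAC) with a calibration function, integrate the LAC along a source-to-pixel ray, and exponentiate. The linear HU-to-LAC mapping, the nearest-neighbor sampling, and all constants are simplifying assumptions, and the scatter, beam-hardening, and veiling-glare corrections are omitted.

      import numpy as np

      def hu_to_lac(hu, mu_water=0.02):
          """Simplified linear conversion from CT number (HU) to LAC (mm^-1)."""
          return np.clip(mu_water * (1.0 + hu / 1000.0), 0.0, None)

      def raw_drr_pixel(ct_hu, voxel_mm, src, pix, step_mm=0.5):
          """Line integral of LAC from source to one detector pixel (nearest-neighbor
          sampling), returning the primary intensity I = I0 * exp(-integral), I0 = 1."""
          direction = np.asarray(pix, dtype=float) - np.asarray(src, dtype=float)
          length = np.linalg.norm(direction)
          direction /= length
          n_steps = int(length / step_mm)
          integral = 0.0
          for k in range(n_steps):
              p = src + (k + 0.5) * step_mm * direction          # sample point in mm
              idx = np.round(p / voxel_mm).astype(int)           # nearest voxel index
              if np.all(idx >= 0) and np.all(idx < ct_hu.shape):
                  integral += hu_to_lac(ct_hu[tuple(idx)]) * step_mm
          return np.exp(-integral)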

  8. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near the optimum for many applications; however, in some cases, they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  9. Project Produce

    ERIC Educational Resources Information Center

    Wolfinger, Donna M.

    2005-01-01

    The grocery store produce section used to be a familiar but rather dull place. There were bananas next to the oranges next to the limes. Broccoli was next to corn and lettuce. Apples and pears, radishes and onions, eggplants and zucchinis all lay in their appropriate bins. Those days are over. Now, broccoli may be next to bok choy, potatoes beside…

  10. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite-time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite of the perturbation. ACTA also keeps its convergence properties even when an upper bound on the derivative of the perturbation exists but is unknown.

  11. Some Practical Payments Clearance Algorithms

    NASA Astrophysics Data System (ADS)

    Kumlander, Deniss

    The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimisation of these transfers, known as payment clearance, can produce significant savings in the costs associated with their transfer and handling. The paper reviews some common practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic distribution of totals.
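
    A small sketch of the simplest clearing idea such algorithms build on: treat inter-company invoices as a directed graph, net them to a balance per company, and settle only the net balances. The greedy pairing below is illustrative and does not attempt to minimize the number of transfers, which is the harder optimization the paper targets.

      from collections import defaultdict

      def net_positions(invoices):
          """invoices: list of (payer, payee, amount). Returns the net balance per
          company (positive = net receiver, negative = net payer)."""
          balance = defaultdict(float)
          for payer, payee, amount in invoices:
              balance[payer] -= amount
              balance[payee] += amount
          return dict(balance)

      def settlement_transfers(balance):
          """Greedy pairing of net payers with net receivers; settles all balances
          but does not try to minimize the number of transfers."""
          payers = sorted((b, c) for c, b in balance.items() if b < 0)
          receivers = sorted(((b, c) for c, b in balance.items() if b > 0), reverse=True)
          transfers = []
          while payers and receivers:
              debt, payer = payers.pop(0)
              credit, receiver = receivers.pop(0)
              amount = min(-debt, credit)
              transfers.append((payer, receiver, amount))
              if -debt > amount:
                  payers.insert(0, (debt + amount, payer))
              elif credit > amount:
                  receivers.insert(0, (credit - amount, receiver))
          return transfers

      invoices = [("A", "B", 100.0), ("B", "C", 80.0), ("C", "A", 30.0)]
      print(settlement_transfers(net_positions(invoices)))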

  12. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  13. Three hypothesis algorithm with occlusion reasoning for multiple people tracking

    NASA Astrophysics Data System (ADS)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Medina-Carnicer, Rafael

    2015-01-01

    This work proposes a detection-based tracking algorithm able to locate and keep the identity of multiple people, who may be occluded, in uncontrolled stationary environments. Our algorithm builds a tracking graph that models spatio-temporal relationships among attributes of interacting people to predict and resolve partial and total occlusions. When a total occlusion occurs, the algorithm generates various hypotheses about the location of the occluded person considering three cases: (a) the person keeps the same direction and speed, (b) the person follows the direction and speed of the occluder, and (c) the person remains motionless during occlusion. By analyzing the graph, our algorithm can detect trajectories produced by false alarms and estimate the location of missing or occluded people. Our algorithm performs acceptably under complex conditions, such as partial visibility of individuals getting inside or outside the scene, continuous interactions and occlusions among people, wrong or missing information on the detection of persons, as well as variation of the person's appearance due to illumination changes and background-clutter distracters. Our algorithm was evaluated on test sequences in the field of intelligent surveillance achieving an overall precision of 93%. Results show that our tracking algorithm outperforms even trajectory-based state-of-the-art algorithms.
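
    A toy sketch of the three occlusion hypotheses listed above, assuming a simple (position, velocity) state; the real algorithm scores these hypotheses within its tracking graph, which is not reproduced here.

      import numpy as np

      def occlusion_hypotheses(person_pos, person_vel, occluder_vel, dt=1.0):
          """Predict three candidate locations for a totally occluded person:
          (a) keeps own direction and speed, (b) follows the occluder, (c) stays still."""
          person_pos = np.asarray(person_pos, dtype=float)
          return {
              "own_motion": person_pos + dt * np.asarray(person_vel, dtype=float),
              "follow_occluder": person_pos + dt * np.asarray(occluder_vel, dtype=float),
              "motionless": person_pos.copy(),
          }

      hyps = occlusion_hypotheses(person_pos=(120, 80), person_vel=(4, 0), occluder_vel=(1, 2))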

  14. GOES-R Space Environment In-Situ Suite: instruments overview, calibration results, data processing algorithms, and expected on-orbit performance

    NASA Astrophysics Data System (ADS)

    Galica, G. E.; Dichter, B. K.; Tsui, S.; Golightly, M. J.; Lopate, C.; Connell, J. J.

    2016-05-01

    The space weather instruments (Space Environment In-Situ Suite - SEISS) on the soon to be launched, NOAA GOES-R series spacecraft offer significant space weather measurement performance advances over the previous GOES N-P series instruments. The specifications require that the instruments ensure proper operation under the most stressful high flux conditions corresponding to the largest solar particle event expected during the program, while maintaining high sensitivity at low flux levels. Since the performance of remote sensing instruments is sensitive to local space weather conditions, the SEISS data will be of use to a broad community of users. The SEISS suite comprises five individual sensors and a data processing unit: Magnetospheric Particle Sensor-Low (0.03-30 keV electrons and ions), Magnetospheric Particle Sensor-High (0.05-4 MeV electrons, 0.08-12 MeV protons), two Solar And Galactic Proton Sensors (1 to >500 MeV protons), and the Energetic Heavy ion Sensor (10-200 MeV for H, H to Fe with single element resolution). We present comparisons between the enhanced GOES-R instruments and the current GOES space weather measurement capabilities. We provide an overview of the sensor configurations and performance. Results of extensive sensor modeling with GEANT, FLUKA and SIMION are compared with calibration data measured over nearly the entire energy range of the instruments. The calibration results and models are combined to calculate the geometric factors of the various energy channels. The calibrated geometric factors and typical and extreme space weather environments are used to calculate the expected on-orbit performance.

  15. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  16. A fast neural-network algorithm for VLSI cell placement.

    PubMed

    Aykanat, Cevdet; Bultan, Tevfik; Haritaoğlu, Ismail

    1998-12-01

    Cell placement is an important phase of current VLSI circuit design styles such as standard cell, gate array, and Field Programmable Gate Array (FPGA). Although nondeterministic algorithms such as Simulated Annealing (SA) were successful in solving this problem, they are known to be slow. In this paper, a neural network algorithm is proposed that produces solutions as good as SA in substantially less time. This algorithm is based on Mean Field Annealing (MFA) technique, which was successfully applied to various combinatorial optimization problems. A MFA formulation for the cell placement problem is derived which can easily be applied to all VLSI design styles. To demonstrate that the proposed algorithm is applicable in practice, a detailed formulation for the FPGA design style is derived, and the layouts of several benchmark circuits are generated. The performance of the proposed cell placement algorithm is evaluated in comparison with commercial automated circuit design software Xilinx Automatic Place and Route (APR) which uses SA technique. Performance evaluation is conducted using ACM/SIGDA Design Automation benchmark circuits. Experimental results indicate that the proposed MFA algorithm produces comparable results with APR. However, MFA is almost 20 times faster than APR on the average.

  17. Avoiding spurious submovement decompositions: a globally optimal algorithm.

    SciTech Connect

    Rohrer, Brandon Robinson; Hogan, Neville

    2003-07-01

    Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.

  18. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
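
    A rough sketch of the spatial-search half of the idea, under the assumption that search-sector centers are spaced by a fixed great-circle stride (so high latitudes are covered like any other) and that each sector is tested with a simple vorticity-threshold criterion; the criterion, radius, and data layout are illustrative, not the published configuration.

      import numpy as np

      R_EARTH_KM = 6371.0

      def great_circle_km(lat1, lon1, lat2, lon2):
          p1, p2 = np.radians(lat1), np.radians(lat2)
          dlon = np.radians(lon2 - lon1)
          c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
          return R_EARTH_KM * np.arccos(np.clip(c, -1.0, 1.0))

      def stride_centers(stride_km=500.0):
          """Search-sector centers with roughly uniform great-circle spacing.
          Longitude spacing widens toward the poles so the physical stride stays fixed."""
          dlat = np.degrees(stride_km / R_EARTH_KM)
          centers = []
          for lat in np.arange(-90.0 + dlat / 2, 90.0, dlat):
              circumference = 2 * np.pi * R_EARTH_KM * np.cos(np.radians(lat))
              n_lon = max(1, int(circumference / stride_km))
              centers += [(lat, lon) for lon in np.linspace(0.0, 360.0, n_lon, endpoint=False)]
          return centers

      def detect(lats, lons, vorticity, radius_km=500.0, threshold=5e-5):
          """Flag sectors whose maximum vorticity exceeds a threshold (toy criterion).
          vorticity: 2-D array indexed [lat, lon]; stride is set equal to the radius here."""
          hits = []
          grid_lat, grid_lon = np.meshgrid(lats, lons, indexing="ij")
          for clat, clon in stride_centers(radius_km):
              mask = great_circle_km(clat, clon, grid_lat, grid_lon) <= radius_km
              if mask.any() and vorticity[mask].max() > threshold:
                  hits.append((clat, clon))
          return hits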

  20. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  1. A fast algorithm for sparse matrix computations related to inversion

    NASA Astrophysics Data System (ADS)

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions Gr and G< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  2. Advisory Algorithm for Scheduling Open Sectors, Operating Positions, and Workstations

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Drew, Michael; Lai, Chok Fung; Bilimoria, Karl D.

    2012-01-01

    Air traffic controller supervisors configure available sector, operating position, and work-station resources to safely and efficiently control air traffic in a region of airspace. In this paper, an algorithm for assisting supervisors with this task is described and demonstrated on two sample problem instances. The algorithm produces configuration schedule advisories that minimize a cost. The cost is a weighted sum of two competing costs: one penalizing mismatches between configurations and predicted air traffic demand and another penalizing the effort associated with changing configurations. The problem considered by the algorithm is a shortest path problem that is solved with a dynamic programming value iteration algorithm. The cost function contains numerous parameters. Default values for most of these are suggested based on descriptions of air traffic control procedures and subject-matter expert feedback. The parameter determining the relative importance of the two competing costs is tuned by comparing historical configurations with corresponding algorithm advisories. Two sample problem instances for which appropriate configuration advisories are obvious were designed to illustrate characteristics of the algorithm. Results demonstrate how the algorithm suggests advisories that appropriately utilize changes in airspace configurations and changes in the number of operating positions allocated to each open sector. The results also demonstrate how the advisories suggest appropriate times for configuration changes.
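
    A compact sketch of the shortest-path formulation described above: states are configurations, the stage cost is a weighted sum of a demand-mismatch penalty and a reconfiguration penalty, and backward dynamic programming (value iteration over a finite horizon) recovers a minimum-cost schedule. The cost functions and the single weight w are placeholders for the paper's parameterized costs.

      def schedule_configurations(configs, demand, mismatch_cost, change_cost, w=0.5):
          """configs: list of configuration ids; demand: predicted demand per time step.
          mismatch_cost(c, d) penalizes a configuration/demand mismatch at one step;
          change_cost(c_prev, c_next) penalizes switching configurations.
          Returns (total cost, configuration per time step) via backward DP."""
          T = len(demand)
          value = {c: (1 - w) * mismatch_cost(c, demand[T - 1]) for c in configs}
          policy = [dict() for _ in range(T)]
          for t in range(T - 2, -1, -1):
              new_value = {}
              for c in configs:
                  best_cost, best_next = float("inf"), None
                  for c_next in configs:
                      cost = w * change_cost(c, c_next) + value[c_next]
                      if cost < best_cost:
                          best_cost, best_next = cost, c_next
                  new_value[c] = (1 - w) * mismatch_cost(c, demand[t]) + best_cost
                  policy[t][c] = best_next
              value = new_value
          start = min(value, key=value.get)           # best configuration at t = 0
          schedule = [start]
          for t in range(T - 1):
              schedule.append(policy[t][schedule[-1]])
          return value[start], schedule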

  3. Development of a Scoring Algorithm To Replace Expert Rating for Scoring a Complex Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S.

    1997-01-01

    Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated to actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…

  4. Estimation of Contextual Effects through Nonlinear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Yang, Ji Seung; Cai, Li

    2014-01-01

    The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…

  5. Naive Bayes-Guided Bat Algorithm for Feature Selection

    PubMed Central

    Taha, Ahmed Majid; Mustapha, Aida; Chen, Soong-Der

    2013-01-01

    With the amount of data and information said to double every 20 months or so, feature selection has become highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, and signal processing. A bio-inspired method, the Bat Algorithm hybridized with a Naive Bayes classifier (BANB), is presented in this work. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. Discussion focused on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, hence removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved more stable than the other methods and is capable of producing more general feature subsets. PMID:24396295

  6. Algorithm for remote sensing of land surface temperature

    NASA Astrophysics Data System (ADS)

    AlSultan, Sultan; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.

    2008-10-01

    This study employs a newly developed algorithm for retrieving land surface temperature (LST) from Landsat TM over Saudi Arabia. The algorithm is a mono-window algorithm because Landsat TM has only one thermal band, covering wavelengths of 10.44-12.42 μm. The proposed algorithm includes three parameters (brightness temperature, surface emissivity, and incoming solar radiation) in its regression analysis. The LST estimated by the proposed algorithm was compared with the LST values produced using ATCORT2_T in the PCI Geomatica 9.1 image processing software. The mono-window algorithm produced high-accuracy LST values from Landsat TM data.
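
    A sketch of a common single-channel workflow consistent with the description above: invert the Planck function for TM band 6 to get brightness temperature, then apply an emissivity correction (Artis and Carnahan style). The calibration constants assumed here are the published Landsat-5 TM band 6 values, and the correction shown is a generic mono-window form, not the authors' exact regression.

      import numpy as np

      # Published calibration constants for Landsat-5 TM band 6 (assumed here).
      K1 = 607.76      # W m^-2 sr^-1 um^-1
      K2 = 1260.56     # K

      def brightness_temperature(radiance):
          """Invert the Planck function for TM band 6 spectral radiance."""
          return K2 / np.log(K1 / radiance + 1.0)

      def lst_mono_window(radiance, emissivity, wavelength_um=11.5):
          """Simple emissivity-corrected LST; a common single-channel correction,
          not the exact algorithm developed in the paper."""
          bt = brightness_temperature(radiance)
          rho = 1.438e-2            # h*c/k_B in m*K
          lam = wavelength_um * 1e-6
          return bt / (1.0 + (lam * bt / rho) * np.log(emissivity))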

  7. Algorithms, games, and evolution.

    PubMed

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-07-22

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.

  8. A robust threshold retracking algorithm for measuring ice-sheet surface elevation change from satellite radar altimeters

    SciTech Connect

    Davis, C.H.

    1997-07-01

    A threshold retracking algorithm for processing ice-sheet altimeter data is presented. The primary purpose for developing this algorithm is detection of ice-sheet elevation change, where it is critical that a retracking algorithm produce repeatable elevations. The more consistent an algorithm is in selecting the retracking point, the less likely it is that errors and/or biases will be introduced by the retracking scheme into the elevation-change measurement. The authors performed extensive comparisons between the threshold algorithm and two other widely used ice-sheet retracking algorithms on Geosat datasets comprising over 60,000 crossover points. The results show that the threshold retracking algorithm, with a 10% threshold level, produces ice-sheet surface elevations that are more repeatable than the elevations derived from the other retracking algorithms. For this reason, the threshold retracking algorithm has been adopted by NASA/GSFC as an alternative to their existing algorithm for production of ice-sheet altimeter datasets under the NASA Pathfinder program. The threshold algorithm will be used to re-process existing ice-sheet altimeter datasets and to process the datasets from future altimeter missions.
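
    A minimal sketch of a threshold retracker on a single return waveform: estimate the noise floor from the first few gates, set the threshold at 10% of the peak amplitude above that floor, and interpolate the gate where the leading edge crosses it. The gate counts and noise-estimation window are illustrative.

      import numpy as np

      def threshold_retrack(waveform, threshold=0.10, noise_gates=5):
          """Return the (fractional) retracking gate where the leading edge first
          crosses `threshold` of the peak amplitude above the noise floor."""
          w = np.asarray(waveform, dtype=float)
          noise = w[:noise_gates].mean()
          peak = w.max()
          level = noise + threshold * (peak - noise)
          above = np.nonzero(w > level)[0]
          if len(above) == 0:
              return None
          i = above[0]
          if i == 0:
              return 0.0
          # linear interpolation between gate i-1 (below level) and gate i (above level)
          return (i - 1) + (level - w[i - 1]) / (w[i] - w[i - 1])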

  9. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    SciTech Connect

    Schmidtlein, CR; Beattie, B; Humm, J; Li, S; Wu, Z; Xu, Y; Zhang, J; Shen, L; Vogelsang, L; Feiglin, D; Krol, A

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st to 4th order gradients, to reduce artificial piece-wise constant regions (“staircase” artifacts typical for TV) seen in PAPA images penalized with only the 1st order gradient. Simulated data were used to test for “staircase” artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution recovery projectors. GPAPA images were compared to PAPA and RMSE-optimally filtered OSEM (fully converged) in simulations and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the 'staircase' artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved
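
    A short sketch of the penalty term itself, the ℓ1-norm total variation summed over 1st- through 4th-order finite differences of a 2-D image; the per-order weights are illustrative and the GPAPA iteration that minimizes the penalized likelihood is not reproduced.

      import numpy as np

      def multi_order_tv(image, weights=(1.0, 0.5, 0.25, 0.125)):
          """Sum of l1 norms of k-th order finite differences, k = 1..4, over both axes."""
          u = np.asarray(image, dtype=float)
          penalty = 0.0
          for k, w in enumerate(weights, start=1):
              for axis in (0, 1):
                  penalty += w * np.abs(np.diff(u, n=k, axis=axis)).sum()
          return penalty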

  10. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  11. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  12. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  13. The NRMP matching algorithm revisited: theory versus practice. National Resident Matching Program.

    PubMed

    Peranson, E; Randlett, R R

    1995-06-01

    The authors examine the algorithm used by the National Resident Matching Program (NRMP) in its centralized matching of applicants to U.S. residency programs ("the Match"). Their goal is to evaluate the current NRMP matching algorithm to determine whether it still fulfills its intended purpose adequately and whether changes could be made that would improve the Match. They describe the basic NRMP algorithm and many of the variations of the matching process ("match variations") incorporated over the last 20 years to meet participants' requirements. An overview of the current state of the theory of preference matching is presented, including descriptions of the characteristics of stable matches in general, program-optimal and applicant-optimal matchings, and strategies for formulating preference lists. The characteristics of the current NRMP algorithm are then compared with the theoretical findings. Research conducted long after the original NRMP algorithm was devised has shown that an algorithm that produces stable matches is the best approach for matching applicants to positions. In the absence of requirements to satisfy match variations, the NRMP's deferred-acceptance algorithm produces a program-optimal stable match. When match variations, such as those handled by the NRMP, must be introduced, it is possible that no stable matching exists, and the resulting matching produced by the NRMP algorithm may not be program-optimal. The question of program-optimal versus applicant-optimal matchings is discussed. Theoretical and empirical evidence currently available suggest that differences between these two kinds of matchings are likely to be small. However, further tests and research are needed to assess the real differences in the results produced by different stable matching algorithms that produce program-optimal or applicant-optimal stable matches.(ABSTRACT TRUNCATED AT 250 WORDS)
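
    A compact sketch of the program-proposing deferred-acceptance procedure at the core of the basic NRMP algorithm, without any of the match variations (couples, reversions, and so on) discussed above; data structures are simplified.

      def deferred_acceptance(program_prefs, applicant_prefs, capacities):
          """Program-proposing deferred acceptance (Gale-Shapley).
          program_prefs[p]  : list of applicants in order of preference
          applicant_prefs[a]: list of programs in order of preference
          capacities[p]     : number of positions at program p
          Returns a stable assignment {applicant: program} (program-optimal)."""
          rank = {a: {p: i for i, p in enumerate(prefs)} for a, prefs in applicant_prefs.items()}
          next_offer = {p: 0 for p in program_prefs}          # next applicant to propose to
          matched = {p: [] for p in program_prefs}            # tentative accepts per program
          assigned = {}                                       # applicant -> program
          free = [p for p in program_prefs if capacities[p] > 0]
          while free:
              p = free.pop()
              prefs = program_prefs[p]
              while len(matched[p]) < capacities[p] and next_offer[p] < len(prefs):
                  a = prefs[next_offer[p]]
                  next_offer[p] += 1
                  if p not in rank.get(a, {}):
                      continue                                 # applicant did not rank program p
                  current = assigned.get(a)
                  if current is None:
                      assigned[a] = p
                      matched[p].append(a)
                  elif rank[a][p] < rank[a][current]:          # applicant prefers p to current match
                      matched[current].remove(a)
                      if current not in free:
                          free.append(current)                 # displaced program proposes again later
                      assigned[a] = p
                      matched[p].append(a)
              # a program that exhausts its list with open slots simply stays unfilled
          return assigned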

  14. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
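
    A back-of-the-envelope sketch of the relationship the algorithm exploits: an incremental normal acceleration implies an incremental normal-force coefficient, which, through the lift-curve slope, gives an angle-of-attack change and hence an estimated vertical gust velocity. All aircraft parameters below are hypothetical, and the filtering and compensation present in the real algorithm are omitted.

      def vertical_gust_estimate(delta_az, mass, rho, tas, wing_area, cl_alpha):
          """Estimate vertical gust velocity (m/s) from the incremental normal
          acceleration delta_az (m/s^2), using delta_CN = m*delta_az / (q*S) and
          delta_alpha ~ w_gust / V (small-angle, quasi-steady assumptions)."""
          q = 0.5 * rho * tas ** 2            # dynamic pressure, Pa
          delta_cn = mass * delta_az / (q * wing_area)
          delta_alpha = delta_cn / cl_alpha   # rad
          return delta_alpha * tas            # w_gust = V * delta_alpha

      # Hypothetical numbers for a transport-class airplane:
      w_g = vertical_gust_estimate(delta_az=3.0, mass=90000.0, rho=0.65,
                                   tas=230.0, wing_area=185.0, cl_alpha=5.0)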

  15. A new image enhancement algorithm with applications to forestry stand mapping

    NASA Technical Reports Server (NTRS)

    Kan, E. P. F. (Principal Investigator); Lo, J. K.

    1975-01-01

    The author has identified the following significant results. The new algorithm produced cleaner classification maps in which holes below a predesignated size were eliminated and significant boundary information was preserved. These cleaner post-processed maps better resemble true-to-life timber stand maps and are thus more usable products than the maps prior to post-processing. Compared to an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.
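
    This is not the authors' algorithm, just a generic sketch of the kind of post-processing described: relabel connected components smaller than a predesignated size to the dominant class on their border, leaving the boundaries of larger regions intact. It assumes scipy is available, and the minimum size is arbitrary.

      import numpy as np
      from scipy import ndimage

      def remove_small_holes(class_map, min_size=9):
          """Reassign connected components smaller than `min_size` pixels to the most
          common class found on their immediate border (a simple hole-filling pass)."""
          out = class_map.copy()
          for cls in np.unique(class_map):
              labels, n = ndimage.label(class_map == cls)
              for i in range(1, n + 1):
                  comp = labels == i
                  if comp.sum() >= min_size:
                      continue
                  border = ndimage.binary_dilation(comp) & ~comp
                  neighbor_classes = out[border]
                  if neighbor_classes.size:
                      values, counts = np.unique(neighbor_classes, return_counts=True)
                      out[comp] = values[np.argmax(counts)]
          return out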

  16. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
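
    A schematic sketch of the sequence described above, with a simple linear tie-point interpolation standing in for the Bootstrap concentration step: mix 6 GHz ice and water emissivities using a first-guess concentration, convert the 6 GHz brightness temperature to a surface temperature, turn the 37 GHz brightness temperature into an emissivity with that temperature, and re-derive concentration. All tie-point values are placeholders, not calibrated ones.

      import numpy as np

      # Placeholder tie points (NOT calibrated values).
      E_ICE_6, E_WATER_6 = 0.92, 0.55        # 6 GHz emissivities of ice and open water
      E_ICE_37, E_WATER_37 = 0.90, 0.70      # 37 GHz tie points in emissivity space

      def surface_temperature(tb6, first_guess_conc):
          """Effective emissivity from a linear ice/water mix, then T_s = TB6 / e_eff."""
          e_eff = first_guess_conc * E_ICE_6 + (1.0 - first_guess_conc) * E_WATER_6
          return tb6 / e_eff

      def temperature_corrected_concentration(tb6, tb37, first_guess_conc):
          """Convert TB37 to emissivity with the estimated surface temperature and
          interpolate between water and ice tie points (clipped to [0, 1])."""
          t_s = surface_temperature(tb6, first_guess_conc)
          e37 = tb37 / t_s
          conc = (e37 - E_WATER_37) / (E_ICE_37 - E_WATER_37)
          return np.clip(conc, 0.0, 1.0)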

  17. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  18. Raytracing Based upon the Symplectic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C.

    2014-12-01

    Raytracing is a basic problem in seismic imaging: the reliability of the imaging depends on the accuracy of both the spatial trajectory and the traveltime of the ray, and raytracing is used broadly in seismology. A seismic ray traveling through inhomogeneous media follows the eikonal equation, a first-order differential equation for traveltime that satisfies a Hamiltonian system. In the Cartesian coordinate system, we use a separable Hamiltonian function. In this paper, the symplectic algorithm method, combined with a bi-cubic convolution algorithm, was used to solve the Hamiltonian system and address the raytracing problem. Compared with the Fast Marching Method (FMM), the results show that the symplectic algorithm method (SAM) can keep the solution of the eikonal equation stable. Owing to the use of the symplectic algorithm, the method can produce a reliable seismic wavefront with an accurate ray trajectory (Fig. 1). Meanwhile, the numerical modeling shows that SAM can not only keep the Hamiltonian system stable with fast computation but also improve the accuracy of the seismic ray tracing (Fig. 2).

  19. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  20. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  1. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold on numerical algorithms. Such algorithms are used for example in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst case analysis considers that no significant bit remains. We are using PVS as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.

  2. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  3. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems for decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery costs or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA) and hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and have a high risk of resulting in a local optimum. In this paper, a new search algorithm based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems, is proposed to solve the MDVSP. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  4. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
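
    The sketch below is not the DNABIT scheme itself (which assigns variable-length bit codes to repeat and non-repeat fragments); it is only a baseline illustration of why bit-level coding compresses DNA, packing bases at 2 bits each instead of 8-bit ASCII.

      BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
      BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

      def pack(seq):
          """Pack an ACGT string into bytes at 2 bits per base (plus its length)."""
          out = bytearray()
          buf, nbits = 0, 0
          for base in seq:
              buf = (buf << 2) | BASE_TO_BITS[base]
              nbits += 2
              if nbits == 8:
                  out.append(buf)
                  buf, nbits = 0, 0
          if nbits:
              out.append(buf << (8 - nbits))      # pad the final partial byte
          return len(seq), bytes(out)

      def unpack(n, data):
          bases = []
          for byte in data:
              for shift in (6, 4, 2, 0):
                  bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
          return "".join(bases[:n])

      n, packed = pack("ACGTACGTTG")
      assert unpack(n, packed) == "ACGTACGTTG"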

  5. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  6. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce substantial performance gains while extracting only slightly less energy than the
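
    As a concrete reference for the classical MPD loop described above (cross-correlate, pick the best atom, subtract, repeat), here is a minimal NumPy sketch with a random unit-norm dictionary; it deliberately omits the MPD++ pruning, Coarse-Fine Grids, and multiple-atom extensions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10, tol=1e-6):
    """Greedy MPD: dictionary columns are unit-norm atoms; returns (coefficients, residual)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # cross-correlation with every atom
        k = np.argmax(np.abs(corr))             # best-fitting atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # remove that atom's contribution
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                  # normalize atoms
x = 2.0 * D[:, 10] - 1.5 * D[:, 200]            # signal built from two known atoms
c, r = matching_pursuit(x, D, n_iter=20)
print("largest coefficients at atoms:", np.argsort(np.abs(c))[-2:])
print("residual norm:", round(float(np.linalg.norm(r)), 4))
```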

  7. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg, which is also available in his open-source Leptonica library. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.
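
    Bloomberg's multiresolution algorithm and the paper's modifications are not reproduced in the abstract; as a hedged illustration of the general morphological idea only (consolidate dense halftone/image regions at a coarse scale, then remove them from the text mask), here is a small scipy.ndimage sketch. The thresholds and helper names are invented for the example and do not correspond to the Leptonica implementation.

```python
import numpy as np
from scipy import ndimage

def split_text_nontext(binary_page, block_size=15, min_area=2000):
    """binary_page: 2-D bool array, True where ink is present.
    Returns (text_mask, nontext_mask). Thresholds are purely illustrative."""
    # A coarse closing merges dense halftone/image regions into solid blobs.
    closed = ndimage.binary_closing(binary_page, structure=np.ones((block_size, block_size)))
    labels, n = ndimage.label(closed)
    sizes = ndimage.sum(closed, labels, index=np.arange(1, n + 1))
    nontext = np.isin(labels, np.flatnonzero(sizes >= min_area) + 1)
    text = binary_page & ~nontext
    return text, nontext

rng = np.random.default_rng(1)
page = np.zeros((200, 200), dtype=bool)
page[20:30, 20:180] = True                          # a thin text-like line
page[100:180, 60:140] = rng.random((80, 80)) > 0.4  # a dense halftone-like block
text, nontext = split_text_nontext(page)
print("text pixels:", int(text.sum()), "non-text pixels:", int(nontext.sum()))
```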

  8. Evolutionary Algorithm for Optimal Vaccination Scheme

    NASA Astrophysics Data System (ADS)

    Parousis-Orthodoxou, K. J.; Vlachos, D. S.

    2014-03-01

    This work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy in a user-specified network. The algorithm is a basic genetic algorithm with crossover and mutation operators that locates certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spread of the disease.
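
    The paper's chromosome encoding and SIR parameters are not given in the abstract, so the following is only a minimal sketch of the idea: each individual is a set of nodes to immunize, and its fitness is the mean outbreak size of a simple discrete-time SIR simulation on a networkx graph. The graph, budget, and epidemic parameters are illustrative.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(200, 2, seed=0)       # stand-in for the user-specified network
NODES = list(G)
BUDGET, BETA, GAMMA = 10, 0.2, 0.3                 # vaccine budget, infection prob., recovery prob.

def outbreak_size(immunized, trials=10):
    """Mean number of infections in a discrete-time SIR process, given immunized nodes."""
    total = 0
    for _ in range(trials):
        infected = {random.choice([n for n in NODES if n not in immunized])}
        removed = set(immunized)                   # vaccinated nodes never get infected
        while infected:
            new = {v for u in infected for v in G[u]
                   if v not in infected and v not in removed and random.random() < BETA}
            removed |= {u for u in infected if random.random() < GAMMA}
            infected = (infected | new) - removed
            total += len(new)
    return total / trials

def evolve(pop_size=20, generations=20):
    pop = [set(random.sample(NODES, BUDGET)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=outbreak_size)                # smaller outbreaks = fitter individuals
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = set(random.sample(sorted(a | b), BUDGET))   # crossover: mix both parents
            if random.random() < 0.3:                           # mutation: swap one node
                child.discard(random.choice(sorted(child)))
                child.add(random.choice([n for n in NODES if n not in child]))
            children.append(child)
        pop = parents + children
    return min(pop, key=outbreak_size)

best = evolve()
print("immunize nodes:", sorted(best))
print("expected outbreak size:", round(outbreak_size(best), 1))
```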

  9. Modal parameters estimation using ant colony optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Sitarz, Piotr; Powałka, Bartosz

    2016-08-01

    The paper puts forward a new estimation method for the modal parameters of dynamical systems. The parameter estimation problem is reduced to an optimisation problem, which is solved using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints on the solution space with modified ant colony system algorithms produced excellent results both for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, obtained with the proposed method and with the PolyMAX algorithm.

  10. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  11. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.

  12. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm that minimizes an energy function, calculating the gradient projection with a step size determined by the Barzilai-Borwein formulation, and can therefore estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two-pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  13. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah

    In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message passing library and a master/slave scheme. Every processor performs the same sequential algorithm but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
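
    The paper's code is not reproduced in the abstract; the following is a minimal mpi4py sketch of the same master/slave pattern: the root process splits the image into horizontal strips, every rank filters its strip with a Sobel operator, and the pieces are gathered back. The file name and image are hypothetical; run with, for example, mpiexec -n 4 python sobel_mpi.py.

```python
# sobel_mpi.py -- run with: mpiexec -n 4 python sobel_mpi.py
import numpy as np
from mpi4py import MPI
from scipy import ndimage

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    image = np.random.default_rng(0).random((512, 512))  # stand-in for the input image
    strips = np.array_split(image, size, axis=0)          # master splits the work row-wise
else:
    strips = None

strip = comm.scatter(strips, root=0)                      # each slave receives one strip
sx = ndimage.sobel(strip, axis=0)
sy = ndimage.sobel(strip, axis=1)
edges = np.hypot(sx, sy)                                  # gradient magnitude of the strip
# NOTE: halo (ghost-row) exchange at strip boundaries is omitted for brevity.

pieces = comm.gather(edges, root=0)                       # master reassembles the result
if rank == 0:
    result = np.vstack(pieces)
    print("edge image:", result.shape, "max response:", round(float(result.max()), 3))
```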

  14. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  15. Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas; Jiang, Wei

    2008-03-31

    This document provides the algorithms for CHP system performance monitoring and commissioning verification (CxV). It starts by presenting system-level and component-level performance metrics, followed by descriptions of algorithms for performance monitoring and commissioning verification that use the metrics presented earlier. Verification of commissioning is accomplished essentially by comparing actual measured performance to benchmarks for performance provided by the system integrator and/or component manufacturers. The results of these comparisons are then automatically interpreted to provide conclusions regarding whether the CHP system and its components have been properly commissioned and, where problems are found, guidance is provided for corrections. A discussion of uncertainty handling is then provided, followed by a description of how simulation models can be used to generate data for testing the algorithms. A model is described for simulating a CHP system consisting of a micro-turbine, an exhaust-gas heat recovery unit that produces hot water, an absorption chiller and a cooling tower. The process for using this model to generate data for testing the algorithms against a selected set of faults is described. The next section applies the algorithms developed to CHP laboratory and field data to illustrate their use. The report then concludes with a discussion of the need for laboratory testing of the algorithms on physical CHP systems and identification of the recommended next steps.

  16. An efficient clustering algorithm for partitioning Y-short tandem repeats data

    PubMed Central

    2012-01-01

    Background Y-Short Tandem Repeats (Y-STR) data consist of many similar and almost similar objects. This characteristic of Y-STR data causes two problems with partitioning: non-unique centroids and local minima problems. As a result, the existing partitioning algorithms produce poor clustering results. Results Our new algorithm, called k-Approximate Modal Haplotypes (k-AMH), obtains the highest clustering accuracy scores for five out of six datasets, and produces an equal performance for the remaining dataset. Furthermore, clustering accuracy scores of 100% are achieved for two of the datasets. The k-AMH algorithm records the highest mean accuracy score of 0.93 overall, compared to that of other algorithms: k-Population (0.91), k-Modes-RVF (0.81), New Fuzzy k-Modes (0.80), k-Modes (0.76), k-Modes-Hybrid 1 (0.76), k-Modes-Hybrid 2 (0.75), Fuzzy k-Modes (0.74), and k-Modes-UAVM (0.70). Conclusions The partitioning performance of the k-AMH algorithm for Y-STR data is superior to that of other algorithms, owing to its ability to solve the non-unique centroids and local minima problems. Our algorithm is also efficient in terms of time complexity, which is recorded as O(km(n-k)) and considered to be linear. PMID:23039132

  17. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the Linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm only involves vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
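
    The GPU kernel itself is not shown in the abstract; below is a minimal CPU NumPy sketch of the linearized Bregman iteration the paper parallelizes, a loop of matrix-vector products plus a soft-thresholding step. The step size, mu, and the test problem are illustrative.

```python
import numpy as np

def soft_threshold(x, mu):
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, delta=None, n_iter=5000):
    """Approximately solve min ||u||_1 s.t. Au = b by the linearized Bregman iteration."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ u)                         # only matrix-vector products ...
        u = delta * soft_threshold(v, mu)              # ... and a thresholding step
    return u

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)           # compressed-sensing style matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_rec = linearized_bregman(A, b)
print("recovered support (|x| > 0.1):", np.flatnonzero(np.abs(x_rec) > 0.1))
```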

  18. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  19. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730
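
    The paper benchmarks several specific swarm optimizers; the fragment below only sketches the general hybrid pattern it describes: an optimizer searches over candidate centroid sets (here a deliberately simple move-towards-the-best perturbation search standing in for Bat/Firefly/Cuckoo dynamics), and the best candidate then seeds ordinary K-means. It is not a faithful implementation of any of the published hybrids.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=0)
K, rng = 4, np.random.default_rng(0)

def sse(centroids):
    """Clustering objective: sum of squared distances to the nearest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

# Toy "swarm": agents are centroid sets; each step they move towards the best agent
# with random perturbation (a crude stand-in for nature-inspired search dynamics).
agents = [X[rng.choice(len(X), K, replace=False)] for _ in range(15)]
for _ in range(40):
    best = min(agents, key=sse)
    agents = [a + 0.3 * (best - a) + rng.normal(scale=0.2, size=a.shape) for a in agents]
    agents.append(best)                                # keep the incumbent
    agents = sorted(agents, key=sse)[:15]

seed = min(agents, key=sse)
km = KMeans(n_clusters=K, init=seed, n_init=1, random_state=0).fit(X)
print("seeded K-means SSE:", round(km.inertia_, 1))
print("default K-means SSE:", round(KMeans(n_clusters=K, n_init=10, random_state=0).fit(X).inertia_, 1))
```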

  20. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  2. A method for the dynamic analysis of the heart using a Lyapounov based denoising algorithm.

    PubMed

    Nascimento, Jacinto C; Sanches, João M; Marques, Jorge S

    2006-01-01

    Heart tracking in ultrasound sequences is a difficult task due to speckle noise, low SNR and lack of contrast. Therefore it is usually difficult to obtain robust estimates of the heart cavities since feature detectors produce a large number of outliers. This paper presents an algorithm which combines two main operations: i) a novel denoising algorithm based on the Lyapounov equation and ii) a robust tracker, recently proposed by the authors, based on a model of the outlier features. Experimental results are provided, showing that the proposed algorithm is computationally efficient and leads to accurate estimates of the left ventricle during the cardiac cycle.

  3. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off decision quality against the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.

  4. Cultural algorithms, an alternative heuristic to solve the job shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Cortes Rivera, Daniel; Landa Becerra, Ricardo; Coello Coello, Carlos A.

    2007-01-01

    In this work, an approach for solving the job shop scheduling problem using a cultural algorithm is proposed. Cultural algorithms are evolutionary computation methods that extract domain knowledge during the evolutionary process. In addition to this extracted knowledge, the proposed approach also uses domain knowledge given a priori (based on specific domain knowledge available for the job shop scheduling problem). The proposed approach is compared with a Greedy Randomized Adaptive Search Procedure (GRASP), a Parallel GRASP, a Genetic Algorithm, a Hybrid Genetic Algorithm, and a deterministic method called shifting bottleneck. The cultural algorithm proposed in this article is able to produce competitive results with respect to the two approaches previously indicated, at a significantly lower computational cost than at least one of them and without using any sort of parallel processing.

  5. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.

  6. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320

  8. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  9. An adaptive algorithm for noise rejection.

    PubMed

    Lovelace, D E; Knoebel, S B

    1978-01-01

    An adaptive algorithm for the rejection of noise artifact in 24-hour ambulatory electrocardiographic recordings is described. The algorithm is based on increased amplitude distortion or increased frequency of fluctuations associated with an episode of noise artifact. The results of application of the noise rejection algorithm on a high noise population of test tapes are discussed.

  10. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous X-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of the observed objects. Recently, a new PDE algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented by the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for X-ray and multiple-wavelength images. What distinguishes Tschumperle's algorithm from the algorithms currently used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable to, those of the current denoising/smoothing algorithms. From our early stages of testing, the results of the new algorithm will provide insight into its capabilities on multiple-wavelength astronomy data sets.
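
    GREYCstoration's tensor-driven PDE is not reproduced here; as a hedged illustration of the same family of edge-preserving PDE methods, the sketch below implements classic Perona-Malik anisotropic diffusion, which also smooths noise while limiting diffusion across strong gradients. It is an analogue for illustration, not Tschumperle's algorithm.

```python
import numpy as np

def conductance(d, kappa):
    """Perona-Malik conductance: close to 1 for small gradients, near 0 across strong edges."""
    return np.exp(-(d / kappa) ** 2)

def perona_malik(img, n_iter=50, kappa=0.3, step=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences towards the four neighbours (periodic borders via roll).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (conductance(dn, kappa) * dn + conductance(ds, kappa) * ds
                     + conductance(de, kappa) * de + conductance(dw, kappa) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                        # a bright square "source"
noisy = clean + rng.normal(scale=0.15, size=clean.shape)
denoised = perona_malik(noisy)
print("error std before:", round(float((noisy - clean).std()), 3),
      "after:", round(float((denoised - clean).std()), 3))
```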

  11. Reduction Algorithm

    2010-12-31

    Conventional methods used for modeling a transmission network have resulted in a high degree of error and instability. This methodology condenses the network for analysis purposes without a loss of precision.

  12. Leaf Sequencing Algorithm Based on MLC Shape Constraint

    NASA Astrophysics Data System (ADS)

    Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui

    2012-06-01

    Intensity modulated radiation therapy (IMRT) requires the determination of appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to regulate the shape between adjacent multileaf collimator apertures by means of a leaf sequencing algorithm. To qualify and validate this algorithm, the integral test for the multileaf collimator segmentation of ARTS was performed with clinical intensity map experiments. Comparison and analysis of the total number of monitor units and the number of segments against benchmark results showed that the proposed algorithm performed well, while the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.

  13. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
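
    The Flex/32 implementations from the paper are not reproduced; below is a minimal Python sketch of the simplest variant discussed, a centralized sense-reversing barrier built on a condition variable, exercised by a few threads. (Python's global interpreter lock means this only illustrates the synchronization logic, not parallel performance.)

```python
import threading

class CentralBarrier:
    """Centralized sense-reversing barrier: the last arriving thread releases the others."""
    def __init__(self, n):
        self.n, self.count, self.sense = n, 0, False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            arrival_sense = self.sense
            self.count += 1
            if self.count == self.n:               # last thread: reset count and flip the sense
                self.count = 0
                self.sense = not self.sense
                self.cond.notify_all()
            else:
                while self.sense == arrival_sense: # wait until the sense flips
                    self.cond.wait()

barrier = CentralBarrier(4)

def worker(i):
    for step in range(3):
        # ... do this phase's share of the work ...
        barrier.wait()                             # nobody starts the next phase early
        print(f"thread {i} finished phase {step}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```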

  14. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied to detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  15. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based on human visual characteristics, for assessing image restoration accuracy, and to compare the subjective results with predictions from several objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of restoration accuracy was carried out both subjectively and objectively. The former methodology relies on the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods reproduced the HR image more accurately in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.

  16. MLEM algorithm adaptation for improved SPECT scintimammography

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Feiglin, David H.; Lee, Wei; Kunniyur, Vikram R.; Gangal, Kedar R.; Coman, Ioana L.; Lipson, Edward D.; Karczewski, Deborah A.; Thomas, F. Deaver

    2005-04-01

    Standard MLEM and OSEM algorithms used in SPECT Tc-99m sestamibi scintimammography produce hot-spot artifacts (HSA) at the peripheries of the image support. We investigated a suitable adaptation of the MLEM and OSEM algorithms needed to reduce HSA. Patients with suspicious breast lesions were administered 10 mCi of Tc-99m sestamibi and SPECT scans were acquired with the patients in the prone position with uncompressed breasts. In addition, to simulate breast lesions, some patients were imaged with a number of breast skin markers each containing 1 mCi of Tc-99m. In order to reduce HSA in the reconstruction, we removed from the backprojection step those rays that traverse the periphery of the support region on the way to a detector bin when their path length through this region is shorter than some critical length. Such very short paths result in very low projection counts contributed to the detector bin, and consequently in overestimation of the activity in the peripheral voxels in the backprojection step, thus creating HSA. We analyzed the breast-lesion contrast and the suppression of HSA in the images reconstructed using the standard and modified MLEM and OSEM algorithms vs. the critical path length (CPL). For CPL >= 0.01 pixel size, we observed improved breast-lesion contrast and lower noise in the reconstructed images, and a very significant reduction of HSA in the maximum intensity projection (MIP) images.

  17. Ozone ensemble forecast with machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Mallet, Vivien; Stoltz, Gilles; Mauricette, Boris

    2009-03-01

    We apply machine learning algorithms to perform sequential aggregation of ozone forecasts. The latter rely on a multimodel ensemble built for ozone forecasting with the modeling system Polyphemus. The ensemble simulations are obtained by changes in the physical parameterizations, the numerical schemes, and the input data to the models. The simulations are carried out for summer 2001 over western Europe in order to forecast ozone daily peaks and ozone hourly concentrations. On the basis of past observations and past model forecasts, the learning algorithms produce a weight for each model. A convex or linear combination of the model forecasts is then formed with these weights. This process is repeated for each round of forecasting and is therefore called sequential aggregation. The aggregated forecasts demonstrate good results; for instance, they always show better performance than the best model in the ensemble and they even compete against the best constant linear combination. In addition, the machine learning algorithms come with theoretical guarantees with respect to their performance, that hold for all possible sequences of observations, even nonstochastic ones. Our study also demonstrates the robustness of the methods. We therefore conclude that these aggregation methods are very relevant for operational forecasts.
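
    The specific learning algorithms used in the paper are not reproduced here; as a minimal sketch of the kind of sequential aggregation it describes, the following implements a standard exponentially weighted average forecaster on synthetic ensemble forecasts. The learning rate and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, M = 200, 5                                     # forecasting rounds, ensemble members
truth = 60 + 20 * np.sin(np.arange(T) / 10.0)     # synthetic "ozone peak" series
# Each ensemble member = truth + its own bias and noise (synthetic forecasts).
forecasts = truth[:, None] + rng.normal(0, 8, size=(T, M)) + np.array([10, -5, 3, 0, -8])

eta = 0.01                                        # learning rate (illustrative)
weights = np.ones(M) / M
agg_losses = []
for t in range(T):
    agg = weights @ forecasts[t]                  # convex combination of the members
    agg_losses.append((agg - truth[t]) ** 2)
    losses = (forecasts[t] - truth[t]) ** 2
    weights *= np.exp(-eta * losses)              # down-weight members by their past loss
    weights /= weights.sum()

per_model_rmse = np.sqrt(((forecasts - truth[:, None]) ** 2).mean(axis=0))
print("best single model RMSE:", round(float(per_model_rmse.min()), 2))
print("aggregated forecast RMSE:", round(float(np.sqrt(np.mean(agg_losses))), 2))
```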

  18. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  19. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging is that the output signal can attain the maximum signal-to-noise ratio (SNR) by using optimal imaging parameters. Traditional imaging algorithms achieve the best focusing effect but introduce decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes are focused using consistent imaging parameters. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed in experiments with this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  1. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging is that the output signal can attain the maximum signal-to-noise ratio (SNR) by using optimal imaging parameters. Traditional imaging algorithms achieve the best focusing effect but introduce decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes are focused using consistent imaging parameters. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed in experiments with this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  2. Upwind relaxation algorithms for Euler/Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Walters, R. W.; Rudy, D. H.; Swanson, R. C.

    1986-01-01

    A description of and results from a solution algorithm for the compressible Navier-Stokes equations are presented. The main features of the algorithm are second or third order accurate upwind discretization of the convection and pressure derivatives and a relaxation scheme for the unfactored implicit backward Euler time method, implemented in a finite-volume formulation. Upwind methods have been successfully used to obtain solutions to the Euler equations for flows with strong shock waves. The particular upwind method used here is based on the flux vector splitting technique developed by Van Leer, and both second and third order accurate discretizations were developed. Currently, the most widely used implicit solution techniques for the Navier-Stokes equations use approximate factorization (AF) methods to treat multidimensional problems. The time integration scheme used in the present algorithm corresponds to a line Gauss-Seidel relaxation method. This method produces good convergence rates for steady-state flows, and most of the algorithm was vectorized on the NASA Langley VPS 32 computer. The Navier-Stokes algorithm was tested on several two-dimensional flow problems. The solutions of these problems gave excellent results. The present effort is directed toward the extension of the scheme to the full three-dimensional Navier-Stokes equations.

  3. Generating folded protein structures with a lattice chain growth algorithm

    NASA Astrophysics Data System (ADS)

    Gan, Hin Hark; Tropsha, Alexander; Schlick, Tamar

    2000-10-01

    We present a new application of the chain growth algorithm to lattice generation of protein structure and thermodynamics. Given the difficulty of ab initio protein structure prediction, this approach provides an alternative to current folding algorithms. The chain growth algorithm, unlike Metropolis folding algorithms, generates independent protein structures to achieve rapid and efficient exploration of configurational space. It is a modified version of the Rosenbluth algorithm where the chain growth transition probability is a normalized Boltzmann factor; it was previously applied only to simple polymers and protein models with two residue types. The independent protein configurations, generated segment-by-segment on a refined cubic lattice, are based on a single interaction site for each amino acid and a statistical interaction energy derived by Miyazawa and Jernigan. We examine for several proteins the algorithm's ability to produce nativelike folds and its effectiveness for calculating protein thermodynamics. Thermal transition profiles associated with the internal energy, entropy, and radius of gyration show characteristic folding/unfolding transitions and provide evidence for unfolding via partially unfolded (molten-globule) states. From the configurational ensembles, the protein structures with the lowest distance root-mean-square deviations (dRMSD) vary between 2.2 to 3.8 Å, a range comparable to results of an exhaustive enumeration search. Though the ensemble-averaged dRMSD values are about 1.5 to 2 Å larger, the lowest dRMSD structures have similar overall folds to the native proteins. These results demonstrate that the chain growth algorithm is a viable alternative to protein simulations using the whole chain.
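
    The Miyazawa-Jernigan energies and refined lattice are beyond a short example; the sketch below shows only the underlying Rosenbluth-style chain growth move on a plain 2-D square lattice with a single attractive contact energy: each residue is added to an unoccupied neighbour with probability proportional to its Boltzmann factor, and the running Rosenbluth weight corrects the sampling bias. All parameters are illustrative.

```python
import math
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]
EPS, BETA = -1.0, 1.0                        # contact energy and inverse temperature (illustrative)

def contact_energy(site, occupied, bonded):
    """Energy from non-bonded nearest-neighbour contacts made by the new site."""
    neighbours = {(site[0] + dx, site[1] + dy) for dx, dy in MOVES}
    return EPS * len((neighbours & occupied) - {bonded})

def grow_chain(n):
    """Grow one self-avoiding chain; returns (chain, Rosenbluth weight) or (None, 0) if stuck."""
    chain = [(0, 0), (1, 0)]
    occupied = set(chain)
    weight = 1.0
    for _ in range(n - 2):
        head = chain[-1]
        candidates = [(head[0] + dx, head[1] + dy) for dx, dy in MOVES]
        candidates = [c for c in candidates if c not in occupied]
        if not candidates:
            return None, 0.0                                  # dead end: attrition
        boltz = [math.exp(-BETA * contact_energy(c, occupied, head)) for c in candidates]
        weight *= sum(boltz)                                  # running Rosenbluth weight
        site = random.choices(candidates, weights=boltz)[0]   # grow with normalized probability
        chain.append(site)
        occupied.add(site)
    return chain, weight

random.seed(0)
samples = [grow_chain(20) for _ in range(2000)]
samples = [(c, w) for c, w in samples if c is not None]
# Weighted ensemble average of the squared end-to-end distance.
num = sum(w * ((c[-1][0] - c[0][0]) ** 2 + (c[-1][1] - c[0][1]) ** 2) for c, w in samples)
den = sum(w for _, w in samples)
print(f"{len(samples)} chains grown, <R_end^2> = {num / den:.2f}")
```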

  4. The Loop Algorithm

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd

    1998-03-01

    Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.

  5. Evaluation of five non-rigid image registration algorithms using the NIREP framework

    NASA Astrophysics Data System (ADS)

    Wei, Ying; Christensen, Gary E.; Song, Joo Hyun; Rudrauf, David; Bruss, Joel; Kuhl, Jon G.; Grabowski, Thomas J.

    2010-03-01

    Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project (NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized databases of well-characterized images and standard evaluation statistics (methods) which are implemented in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency error and transitivity error) were used to evaluate and compare image registration performance. The results indicate that the Demons registration algorithm produced the best registration results with respect to the relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst for another illustrates the need to use multiple evaluation statistics to fully assess performance.
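
    NIREP's statistics are implemented in its software package and are not reproduced in the abstract; the fragment below only sketches the region-overlap idea behind a "relative overlap" style measure, intersection over union computed per labelled region between two segmentations. Whether NIREP averages regions in exactly this way is not stated here, so treat the formula as illustrative.

```python
import numpy as np

def relative_overlap(labels_a, labels_b, region_ids):
    """Per-region intersection-over-union between two label maps."""
    scores = {}
    for r in region_ids:
        a, b = labels_a == r, labels_b == r
        union = np.logical_or(a, b).sum()
        scores[r] = np.logical_and(a, b).sum() / union if union else np.nan
    return scores

# Tiny synthetic example: a shifted copy stands in for a registered label map.
truth = np.zeros((64, 64), dtype=int)
truth[10:30, 10:30] = 1
truth[40:60, 35:55] = 2
warped = np.roll(truth, shift=(2, -1), axis=(0, 1))
scores = relative_overlap(truth, warped, region_ids=[1, 2])
print({r: round(s, 3) for r, s in scores.items()},
      "mean:", round(float(np.nanmean(list(scores.values()))), 3))
```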

  6. Calibration of FRESIM for Singapore expressway using genetic algorithm

    SciTech Connect

    Cheu, R.L.; Jin, X.; Srinivasa, D.; Ng, K.C.; Ng, Y.L.

    1998-11-01

    FRESIM is a microscopic time-stepping simulation model for freeway corridor traffic operations. To enable FRESIM to realistically simulate expressway traffic flow in Singapore, parameters that govern the movement of vehicles needed to be recalibrated for local traffic conditions. This paper presents the application of a genetic algorithm as an optimization method for finding a suitable combination of FRESIM parameter values. The calibration is based on field data collected on weekdays over a 5.8 km segment of the Ayer Rajar Expressway. Independent calibrations have been made for evening peak and midday off-peak traffic. The results show that the genetic algorithm is able to search for two sets of parameter values that enable FRESIM to produce 30-s loop-detector volume and speed (averaged across all lanes) closely matching the field data under two different traffic conditions. The two sets of parameter values are found to produce a consistently good match for data collected in different days.

  7. Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.

  8. Parallelized dilate algorithm for remote sensing image.

    PubMed

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give us a more connected view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and the data quantities have become very large. This leads to a decrease in algorithm running speed, or the algorithm cannot obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm.

  9. Geochemical effects of CO2 injection on produced water chemistry at an enhanced oil recovery site in the Permian Basin of northwest Texas, USA: Preliminary geochemical and Li isotope results

    NASA Astrophysics Data System (ADS)

    Pfister, S.; Gardiner, J.; Phan, T. T.; Macpherson, G. L.; Diehl, J. R.; Lopano, C. L.; Stewart, B. W.; Capo, R. C.

    2014-12-01

    Injection of supercritical CO2 for enhanced oil recovery (EOR) presents an opportunity to evaluate the effects of CO2 on reservoir properties and formation waters during geologic carbon sequestration. Produced water from oil wells tapping a carbonate-hosted reservoir at an active EOR site in the Permian Basin of Texas both before and after injection were sampled to evaluate geochemical and isotopic changes associated with water-rock-CO2 interaction. Produced waters from the carbonate reservoir rock are Na-Cl brines with TDS levels of 16.5-34 g/L and detectable H2S. These brines are potentially diluted with shallow groundwater from earlier EOR water flooding. Initial lithium isotope data (δ7Li) from pre-injection produced water in the EOR field fall within the range of Gulf of Mexico Coastal sedimentary basin and Appalachian basin values (Macpherson et al., 2014, Geofluids, doi: 10.1111/gfl.12084). Pre-injection produced water 87Sr/86Sr ratios (0.70788-0.70795) are consistent with mid-late Permian seawater/carbonate. CO2 injection took place in October 2013, and four of the wells sampled in May 2014 showed CO2 breakthrough. Preliminary comparison of pre- and post-injection produced waters indicates no significant changes in the major inorganic constituents following breakthrough, other than a possible drop in K concentration. Trace element and isotope data from pre- and post-breakthrough wells are currently being evaluated and will be presented.

  10. Basis for a neuronal version of Grover's quantum algorithm.

    PubMed

    Clark, Kevin B

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church-Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical "subroutines" involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N (1/2))) needed to find some "target" solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca(2+) response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca(2+)-induced Ca(2+) release and the search (or signaling) velocity of Ca(2+) wave propagation. As chemical processes, such as the duration of Ca(2+) mobilization, become rate-limiting over interstore distances, Ca(2+) waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca(2+) diffusion coefficient, D (1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca(2+) signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional
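
    As a concrete reference point for the quadratic speed-up discussed above, here is a small NumPy simulation of Grover's iteration itself, an oracle phase flip followed by inversion about the mean, on an N = 64 search space; it illustrates the O(sqrt(N)) iteration count, not the biological analogy drawn in the paper.

```python
import numpy as np

N, target = 64, 37
state = np.full(N, 1 / np.sqrt(N))              # uniform superposition over N basis states
n_iter = int(round(np.pi / 4 * np.sqrt(N)))     # optimal number of Grover iterations (~6 for N=64)

for _ in range(n_iter):
    state[target] *= -1                         # oracle: phase-flip the marked state
    state = 2 * state.mean() - state            # diffusion: inversion about the mean

print(f"after {n_iter} iterations, P(target) = {state[target] ** 2:.3f}")
```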

  11. Basis for a neuronal version of Grover's quantum algorithm

    PubMed Central

    Clark, Kevin B.

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church–Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical “subroutines” involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N^(1/2))) needed to find some “target” solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca2+ response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca2+-induced Ca2+ release and the search (or signaling) velocity of Ca2+ wave propagation. As chemical processes, such as the duration of Ca2+ mobilization, become rate-limiting over interstore distances, Ca2+ waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca2+ diffusion coefficient, D^(1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca2+ signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional response

  12. Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?

    NASA Astrophysics Data System (ADS)

    Petković, Dušan

    The article compares two different approaches for the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to solve the ordering of join operations for large join queries, i.e. joins with more than a dozen join operations. A key property of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs), as a data mining technique, have been shown to be promising for solving the ordering of join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that the use of a genetic algorithm is a better solution for optimization of large join queries, i.e., that such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.
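
    As an illustration of the genetic-algorithm side of this comparison, the sketch below evolves a left-deep join order with a permutation-encoded GA. The toy cost model, table sizes, selectivity, and GA parameters are all assumptions and are unrelated to the module evaluated in the article.

        # Minimal sketch of a permutation-encoded genetic algorithm for ordering the
        # joins of a large join query. The cost model (sum of estimated intermediate
        # result sizes) and all parameters are illustrative assumptions.
        import random

        TABLES = {"A": 1000, "B": 50000, "C": 200, "D": 10000, "E": 3000}
        SELECTIVITY = 0.001  # assumed uniform join selectivity

        def plan_cost(order):
            """Sum of estimated intermediate-result sizes for a left-deep join order."""
            size, cost = TABLES[order[0]], 0.0
            for name in order[1:]:
                size = size * TABLES[name] * SELECTIVITY
                cost += size
            return cost

        def crossover(p1, p2):
            """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
            i, j = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[i:j] = p1[i:j]
            rest = [t for t in p2 if t not in child]
            for k in range(len(child)):
                if child[k] is None:
                    child[k] = rest.pop(0)
            return child

        def mutate(order, rate=0.2):
            if random.random() < rate:
                a, b = random.sample(range(len(order)), 2)
                order[a], order[b] = order[b], order[a]
            return order

        def ga_join_order(pop_size=30, generations=200):
            names = list(TABLES)
            pop = [random.sample(names, len(names)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=plan_cost)                 # rank by estimated cost
                survivors = pop[: pop_size // 2]
                children = [mutate(crossover(*random.sample(survivors, 2)))
                            for _ in range(pop_size - len(survivors))]
                pop = survivors + children
            return min(pop, key=plan_cost)

        best = ga_join_order()
        print("best left-deep order:", " JOIN ".join(best), "cost:", plan_cost(best))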

  13. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  14. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The minimax problem is examined in the representation min t subject to f_i(x) - t <= 0 for all i. An active set strategy is designed that partitions the functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point inside which the local approximation of the function is trusted. The algorithm is implemented in a working computer program, and numerical results are provided.
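
    For clarity, the representation mentioned in the summary is the standard epigraph form of the discrete minimax problem min_x max_i f_i(x); a cleaned-up statement (textbook form, not copied from the report) is:

        % Epigraph reformulation of the discrete minimax problem min_x max_i f_i(x):
        \[
          \min_{x,\,t} \; t
          \quad \text{subject to} \quad
          f_i(x) - t \le 0, \qquad i = 1, \dots, m,
        \]
        % where some of the inequality constraints may be handled with slack
        % variables s_i >= 0, i.e. f_i(x) - t + s_i = 0.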

  15. Producing anaglyphs from synthetic images

    NASA Astrophysics Data System (ADS)

    Sanders, William R.; McAllister, David F.

    2003-05-01

    Distance learning and virtual laboratory applications have motivated the use of inexpensive visual stereo solutions for computer displays. The anaglyph method is such a solution. Several techniques have been proposed for the production of anaglyphs. We discuss three approaches: the Photoshop algorithm and its variants, the least squares algorithm proposed by Eric Dubois that optimizes in the CIE color space, and the midpoint algorithm that minimizes the sum of the distances between the anaglyph color and the left and right eye colors in CIE L*a*b*. Our results show that each method has its advantages and disadvantages in faithful color representation and in stereo quality as it relates to region merging and ghosting.
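
    For orientation, the simplest of the three approaches, channel mixing of the kind usually associated with the Photoshop method, can be sketched in a few lines; the least-squares and midpoint methods, which optimize in perceptual color spaces, are not reproduced here. The file names in the usage comment are hypothetical.

        # Minimal sketch of the simple channel-mixing ("Photoshop-style") anaglyph:
        # red from the left view, green and blue from the right view.
        import numpy as np

        def simple_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
            """left_rgb, right_rgb: aligned (H, W, 3) uint8 arrays for the two views."""
            out = right_rgb.copy()
            out[..., 0] = left_rgb[..., 0]   # red channel from the left-eye image
            return out                       # green/blue remain from the right eye

        # usage (hypothetical file names):
        # from imageio.v3 import imread, imwrite
        # imwrite("anaglyph.png", simple_anaglyph(imread("left.png"), imread("right.png")))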

  16. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queues based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  17. Presence of Shiga toxin-producing Escherichia coli O-groups in small and very-small beef-processing plants and resulting ground beef detected by a multiplex polymerase chain reaction assay.

    PubMed

    Svoboda, Amanda L; Dudley, Edward G; Debroy, Chitrita; Mills, Edward W; Cutter, Catherine N

    2013-09-01

    Shiga toxin-producing Escherichia coli (STEC) are associated with foodborne illnesses, including hemolytic uremic syndrome in humans. Cattle and consequently, beef products are considered a major source of STEC. E. coli O157:H7 has been regulated as an adulterant in ground beef since 1996. The United States Department of Agriculture Food Safety and Inspection Service began regulating six additional STEC (O145, O121, O111, O103, O45, and O26) as adulterants in beef trim and raw ground beef in June 2012. Little is known about the presence of STEC in small and very-small beef-processing plants. Therefore, we propose to determine whether small and very-small beef-processing plants are a potential source of non-O157:H7 STEC. Environmental swabs, carcass swabs, hide swabs, and ground beef from eight small and very-small beef-processing plants were obtained from October 2010 to December 2011. A multiplex polymerase chain reaction assay was used to determine the presence of STEC O-groups: O157, O145, O121, O113, O111, O103, O45, and O26 in the samples. Results demonstrated that 56.6% (154/272) of the environmental samples, 35.0% (71/203) of the carcass samples, 85.2% (23/27) of the hide samples, and 17.0% (20/118) of the ground beef samples tested positive for one or more of the serogroups. However, only 7.4% (20/272) of the environmental samples, 4.4% (9/203) of the carcass samples, and 0% (0/118) ground beef samples tested positive for both the serogroup and Shiga toxin genes. Based on this survey, small and very-small beef processors may be a source of non-O157:H7 STEC. The information from this study may be of interest to regulatory officials, researchers, public health personnel, and the beef industry that are interested in the presence of these pathogens in the beef supply. PMID:23742295

  18. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI, and the SU algorithm developed by Swansea University) - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS) - 1 for POLDER over ocean (LOA) - 1 for synergetic retrieval (SYNAER by DLR) - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI) - 1 for GOMOS stratospheric extinction profile retrieval (BIRA). The first seven algorithms aim at the retrieval of the AOD. However, each of the algorithms differs in its approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  19. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focussed for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.

  20. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithm on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are looked at on the Cray XMP and on the MIT static data flow machine proposed by DENNIS.
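
    As a concrete illustration of the first kernel, a sparse symmetric matrix-vector product storing only the upper triangle can be sketched as follows; this is a generic CSR formulation, not the vectorized Cyber 205 or Cray X-MP code examined in the study.

        # Generic sketch of a sparse symmetric matrix-vector product, storing only the
        # upper triangle (including the diagonal) in CSR form.
        import numpy as np

        def sym_csr_matvec(indptr, indices, data, x):
            """y = A @ x for symmetric A, with only the upper triangle stored."""
            y = np.zeros_like(x, dtype=float)
            n = len(indptr) - 1
            for i in range(n):
                for k in range(indptr[i], indptr[i + 1]):
                    j = indices[k]
                    y[i] += data[k] * x[j]
                    if j != i:                 # mirror the off-diagonal entry
                        y[j] += data[k] * x[i]
            return y

        # 3x3 example: A = [[4, 1, 0], [1, 3, 2], [0, 2, 5]], upper triangle stored.
        indptr = np.array([0, 2, 4, 5])
        indices = np.array([0, 1, 1, 2, 2])
        data = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
        x = np.array([1.0, 2.0, 3.0])
        print(sym_csr_matvec(indptr, indices, data, x))  # -> [ 6. 13. 19.]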

  1. A synthesized heuristic task scheduling algorithm.

    PubMed

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally the algorithm chooses tasks with fewer predecessors. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance.
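
    For context, the HEFT-family baselines that HCPPEFT is compared against prioritize tasks by an upward rank (mean cost plus the longest downstream path); a minimal sketch of that rank computation on a hypothetical task graph is shown below. It is not the HCPPEFT three-level rule itself.

        # Sketch of the upward-rank priority used by HEFT-family list schedulers.
        # The DAG and costs are hypothetical.
        from functools import lru_cache

        # task -> list of (successor, communication cost)
        SUCCESSORS = {
            "t1": [("t2", 4), ("t3", 1)],
            "t2": [("t4", 3)],
            "t3": [("t4", 2)],
            "t4": [],
        }
        MEAN_COST = {"t1": 5, "t2": 6, "t3": 4, "t4": 7}  # mean execution times

        @lru_cache(maxsize=None)
        def upward_rank(task: str) -> float:
            succ = SUCCESSORS[task]
            if not succ:
                return MEAN_COST[task]                    # exit task
            return MEAN_COST[task] + max(c + upward_rank(s) for s, c in succ)

        # Higher rank = scheduled earlier; critical-path tasks naturally rank first.
        priority_order = sorted(SUCCESSORS, key=upward_rank, reverse=True)
        print(priority_order)  # -> ['t1', 't2', 't3', 't4']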

  2. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  3. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  4. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute force method of searching through the combinations of features in a data set for features that produce

  5. Producing Uniform Lesion Pattern in HIFU Ablation

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng; Kargl, Steven G.; Hwang, Joo Ha

    2009-04-01

    High intensity focused ultrasound (HIFU) is emerging as a modality for treatment of solid tumors. The temperature at the focus can reach over 65 °C, denaturing cellular proteins and resulting in coagulative necrosis. Typically, HIFU parameters are the same for each treated spot in most HIFU control systems. Because of thermal diffusion from nearby spots, the size of lesions will gradually become larger as the HIFU therapy progresses, which may cause insufficient treatment of initial spots and over-treatment of later ones. It is found that the produced lesion pattern also depends on the scanning pathway. From the viewpoint of the physician, creating uniform lesions and minimizing energy exposure are preferred in tumor ablation. An algorithm has been developed to adaptively determine the treatment parameters for every spot in a theoretical model in order to maintain similar lesion size throughout the HIFU therapy. In addition, the exposure energy needed using traditional raster scanning is compared with that of two other scanning pathways, spiral scanning from the center to the outside and from the outside to the center. The theoretical prediction and proposed algorithm were further evaluated using transparent gel phantoms as a target. Digital images of the lesions were obtained, quantified, and then compared with each other. Altogether, dynamically changing treatment parameters can improve the efficacy and safety of HIFU ablation.

  6. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  7. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and, if it can, the found reaction network can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  8. Modified-INSAT Multi-Spectral Rainfall Algorithm (M-IMSRA) - A New Satellite Rainfall Estimation Algorithm based on Climatic region

    NASA Astrophysics Data System (ADS)

    Upadhyaya, S. A.; Ramsankaran, R.

    2015-12-01

    A new, simple geostationary-satellite-based hybrid rainfall estimation algorithm called the Modified-INSAT Multi-Spectral Rainfall Algorithm (M-IMSRA) has been developed and evaluated in the present study. The algorithm has been developed and evaluated to address the following questions: Can simple geostationary-satellite-based SREs perform equivalently to merged techniques that use all the available satellite datasets? If so (or not), how well can they perform? Do SREs perform differently over different climate regions? Does incorporating topography in an SRE improve performance equally over all climate regions? M-IMSRA incorporates topographic information in the IMSRA (INSAT Multi-Spectral Rainfall Algorithm) algorithm using 20 different variables extracted from a Digital Elevation Model by means of the Least Absolute Shrinkage and Selection Operator (LASSO) technique. The results show that simple algorithms like M-IMSRA can perform similarly to the highly computationally expensive merged algorithms such as TRMM 3B42 and TRMM 3B42-RT over some climatic regions of India. It has been observed that, by incorporating static topographic information, the estimates over orographic regions of India such as the Western Ghats and North-East India have significantly improved, with only a reduction in the additive bias over other regions. The relative performance of the tested satellite rainfall estimates (TRMM 3B42, TRMM 3B42-RT, and M-IMSRA) differs considerably across climatic regions, with better performance over moderate-rainfall climate regions and relatively poor performance over low- and high-rainfall climate regions. The obtained results highlight that a single algorithm with the same input variables cannot produce the best rainfall estimates over all climatic regions, since the driving variables for each region will be different. Therefore, the imminent development of SREs must give attention to this fact and consider this to
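
    The LASSO screening step described above can be illustrated generically: the sketch below selects a sparse subset of DEM-derived predictors against a rainfall residual target with scikit-learn. The synthetic data, variable count, and cross-validation settings are assumptions, not the study's configuration.

        # Generic sketch of LASSO-based screening of DEM-derived predictors.
        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_pixels, n_topo_vars = 500, 20
        X = rng.normal(size=(n_pixels, n_topo_vars))        # elevation, slope, aspect, ...
        true_coef = np.zeros(n_topo_vars)
        true_coef[[0, 3, 7]] = [1.5, -0.8, 0.6]             # only a few variables matter
        y = X @ true_coef + 0.1 * rng.normal(size=n_pixels)  # synthetic rainfall residual

        X_std = StandardScaler().fit_transform(X)
        lasso = LassoCV(cv=5).fit(X_std, y)                 # cross-validated penalty
        selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
        print("selected topographic variables:", selected)  # typically [0 3 7]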

  9. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.

  10. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.

  11. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
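
    The defining idea, search regions fixed in physical space rather than in grid indices, can be illustrated schematically: the sketch below generates circular search-region centres at a fixed spacing in kilometres, widening the longitudinal step toward the poles. It illustrates the concept only and is not the published implementation.

        # Schematic of the Stride Search idea: centres spaced by a fixed physical
        # distance, independent of the data grid's resolution.
        import math

        EARTH_RADIUS_KM = 6371.0

        def stride_search_centres(region_radius_km, lat_min=-90.0, lat_max=90.0):
            """Yield (lat, lon) centres spaced roughly one region radius apart in km."""
            dlat = math.degrees(region_radius_km / EARTH_RADIUS_KM)
            lat = lat_min
            while lat <= lat_max:
                if abs(lat) >= 90.0 - 1e-9:
                    yield (math.copysign(90.0, lat), 0.0)   # single centre at a pole
                else:
                    # longitudinal spacing widens toward the poles (constant in km)
                    dlon = dlat / math.cos(math.radians(lat))
                    lon = -180.0
                    while lon < 180.0:
                        yield (lat, lon)
                        lon += dlon
                lat += dlat

        centres = list(stride_search_centres(region_radius_km=450.0))
        print(f"{len(centres)} search regions cover the globe at 450 km spacing")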

  12. Developing NASA's VIIRS LST and Emissivity EDRs using a physics based Temperature Emissivity Separation (TES) algorithm

    NASA Astrophysics Data System (ADS)

    Islam, T.; Hulley, G. C.; Malakar, N.; Hook, S. J.

    2015-12-01

    Land Surface Temperature and Emissivity (LST&E) data are acknowledged as critical Environmental Data Records (EDRs) by the NASA Earth Science Division. The current operational LST EDR for the recently launched Suomi National Polar-orbiting Partnership's (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) payload utilizes a split-window algorithm that relies on previously-generated fixed emissivity dependent coefficients and does not produce a dynamically varying and multi-spectral land surface emissivity product. Furthermore, this algorithm deviates from its MODIS counterpart (MOD11) resulting in a discontinuity in the MODIS/VIIRS LST time series. This study presents an alternative physics based algorithm for generation of the NASA VIIRS LST&E EDR in order to provide continuity with its MODIS counterpart algorithm (MOD21). The algorithm, known as temperature emissivity separation (TES) algorithm, uses a fast radiative transfer model - Radiative Transfer for (A)TOVS (RTTOV) in combination with an emissivity calibration model to isolate the surface radiance contribution retrieving temperature and emissivity. Further, a new water-vapor scaling (WVS) method is developed and implemented to improve the atmospheric correction process within the TES system. An independent assessment of the VIIRS LST&E outputs is performed against in situ LST measurements and laboratory measured emissivity spectra samples over dedicated validation sites in the Southwest USA. Emissivity retrievals are also validated with the latest ASTER Global Emissivity Database Version 4 (GEDv4). An overview and current status of the algorithm as well as the validation results will be discussed.

  13. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in a very inefficient execution of those rules since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be developed incrementally and then ordered automatically for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to optimally order the rules to minimize inference cycles. An algorithm was developed that optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, thus resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
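
    A minimal sketch of the producer/consumer idea (not the SHINE implementation) is to draw an edge from each rule that produces a fact to every rule that consumes it and then emit the rules in topological order; the rule set below is hypothetical.

        # Producer/consumer rule ordering: producers come before consumers.
        from graphlib import TopologicalSorter

        # rule -> (antecedent facts consumed, consequent facts produced)
        RULES = {
            "r1": ({"sensor_reading"}, {"calibrated_value"}),
            "r2": ({"calibrated_value"}, {"alarm_state"}),
            "r3": ({"calibrated_value", "alarm_state"}, {"report"}),
        }

        producers = {}
        for rule, (_, produced) in RULES.items():
            for fact in produced:
                producers.setdefault(fact, set()).add(rule)

        graph = {rule: set() for rule in RULES}            # rule -> rules it depends on
        for rule, (consumed, _) in RULES.items():
            for fact in consumed:
                graph[rule] |= producers.get(fact, set())

        ordered = list(TopologicalSorter(graph).static_order())
        print(ordered)  # -> ['r1', 'r2', 'r3']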

  14. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  15. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  16. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we carry out an experiment on mobile user data with the algorithm; we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm we propose in this paper has higher accuracy and greater simplicity. PMID:24688389

  17. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we carry out an experiment on mobile user data with the algorithm; we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm we propose in this paper has higher accuracy and greater simplicity. PMID:24688389

  18. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we carry out an experiment on mobile user data with the algorithm; we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm we propose in this paper has higher accuracy and greater simplicity.

  19. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  20. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into the star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it able to reduce the star identification time. The logarithmic values of the plane distances between the navigation and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to make it able to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness by the proposed algorithm are better than those by the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant. No CXY1350(4)).

  1. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  2. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. The HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, as its fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. To evaluate the effectiveness of the dependency generator, we applied the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368
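
    For readers unfamiliar with harmony search, its core loop can be sketched on a toy continuous objective; the WSD version maximizes a semantic-similarity fitness over sense assignments instead, and all parameter values below (HMS, HMCR, PAR, bandwidth) are assumptions.

        # Generic harmony search sketch on a toy objective (minimize sum of squares).
        import random

        DIM, LOW, HIGH = 5, -5.0, 5.0
        HMS, HMCR, PAR, BW, ITERS = 20, 0.9, 0.3, 0.1, 2000

        def fitness(x):
            return sum(v * v for v in x)

        memory = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(HMS)]
        for _ in range(ITERS):
            new = []
            for d in range(DIM):
                if random.random() < HMCR:                  # memory consideration
                    value = random.choice(memory)[d]
                    if random.random() < PAR:               # pitch adjustment
                        value += random.uniform(-BW, BW)
                else:                                       # random selection
                    value = random.uniform(LOW, HIGH)
                new.append(min(max(value, LOW), HIGH))
            worst = max(range(HMS), key=lambda i: fitness(memory[i]))
            if fitness(new) < fitness(memory[worst]):       # replace worst harmony
                memory[worst] = new

        best = min(memory, key=fitness)
        print("best harmony:", [round(v, 3) for v in best], "fitness:", round(fitness(best), 6))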

  3. Effects of Sample Size and Dimensionality on the Performance of Four Algorithms for Inference of Association Networks in Metabonomics.

    PubMed

    Suarez-Diez, Maria; Saccenti, Edoardo

    2015-12-01

    We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations, depending on the algorithm and the number of measured metabolites. The CLR and PCLRC methods produce similar results, whereas network inference based on correlations provides sparse networks; we found ARACNE to be unsuitable for this application, being unable to recover the underlying metabolite association network. We recommend the PCLRC algorithm for the inference of metabolite association networks.
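
    The simplest of the four approaches, correlation-based inference (CORR), amounts to thresholding absolute pairwise correlations; a sketch on synthetic data follows, with the 0.6 threshold chosen purely for illustration.

        # Correlation-thresholded metabolite association network (CORR-style sketch).
        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_metabolites = 100, 8
        data = rng.normal(size=(n_samples, n_metabolites))
        data[:, 1] = data[:, 0] + 0.2 * rng.normal(size=n_samples)   # correlated pair

        corr = np.corrcoef(data, rowvar=False)                       # metabolite x metabolite
        adjacency = (np.abs(corr) > 0.6) & ~np.eye(n_metabolites, dtype=bool)
        edges = [(i, j) for i in range(n_metabolites)
                 for j in range(i + 1, n_metabolites) if adjacency[i, j]]
        print("inferred edges:", edges)      # expected to include (0, 1)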

  4. A convergent hybrid decomposition algorithm model for SVM training.

    PubMed

    Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco

    2009-06-01

    Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be huge and the Hessian matrix cannot be stored. In order to take into account this issue, a common strategy consists in using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potentiality of the approach.

  5. Locomotive assignment problem with train precedence using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Noori, Siamak; Ghannadpour, Seyed Farid

    2012-07-01

    This paper aims to study the locomotive assignment problem, which is very important for railway companies in view of the high cost of operating locomotives. This problem is to determine the minimum cost assignment of homogeneous locomotives located in some central depots to a set of pre-scheduled trains in order to provide sufficient power to pull the trains from their origins to their destinations. These trains have different degrees of priority for servicing, and higher-class trains should be serviced earlier than others. This problem is modeled using a vehicle routing and scheduling problem where trains, representing the customers, are supposed to be serviced in pre-specified hard/soft fuzzy time windows. A two-phase approach is used: in the first phase, the multi-depot locomotive assignment is converted into a set of single-depot problems, and each single-depot problem is then solved heuristically by a hybrid genetic algorithm. In the genetic algorithm, various heuristics and efficient operators are used in the evolutionary search. The suggested algorithm is applied to a medium-sized numerical example to check the capabilities of the model and algorithm. Moreover, some of the results are compared with solutions produced by a branch-and-bound technique to determine the validity and quality of the model. Results show that the suggested approach is effective with respect to solution quality and time.

  6. Masseter segmentation using an improved watershed algorithm with unsupervised classification.

    PubMed

    Ng, H P; Ong, S H; Foong, K W C; Goh, P S; Nowinski, W L

    2008-02-01

    The watershed algorithm always produces a complete division of the image. However, it is susceptible to over-segmentation and sensitive to false edges. In medical images this leads to unfavorable representations of the anatomy. We address these drawbacks by introducing automated thresholding and post-segmentation merging. The automated thresholding step is based on the histogram of the gradient magnitude map, while post-segmentation merging is based on a criterion which measures the similarity in intensity values between two neighboring partitions. Our improved watershed algorithm is able to merge more than 90% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced. To further improve the segmentation results, we make use of K-means clustering to provide an initial coarse segmentation of the highly textured image before the improved watershed algorithm is applied to it. When applied to the segmentation of the masseter from 60 magnetic resonance images of 10 subjects, the proposed algorithm achieved an overlap index (kappa) of 90.6%, and was able to merge 98% of the initial partitions on average. The segmentation results are comparable to those obtained using the gradient vector flow snake. PMID:17950265
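
    A marker-controlled watershed on a gradient-magnitude map, in the spirit of the automated thresholding step described above, can be sketched with scikit-image as follows; the merging criterion and the K-means pre-clustering are not reproduced, and the synthetic image and threshold value are assumptions.

        # Watershed on a gradient-magnitude map with markers from low-gradient regions.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        # synthetic image: two bright blobs on a dark background
        image = np.zeros((80, 80))
        image[20:35, 20:35] = 1.0
        image[50:70, 45:65] = 1.0
        image += 0.05 * np.random.default_rng(0).normal(size=image.shape)

        gradient = sobel(image)                        # gradient-magnitude map
        markers, _ = ndi.label(gradient < 0.1)         # markers where the gradient is low
        labels = watershed(gradient, markers)          # flood from markers on the gradient
        print("number of regions:", labels.max())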

  7. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  8. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  9. Perturbation resilience and superiorization of iterative algorithms

    NASA Astrophysics Data System (ADS)

    Censor, Y.; Davidi, R.; Herman, G. T.

    2010-06-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.

  10. Kalman plus weights: a time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.

  11. IUS guidance algorithm gamma guide assessment

    NASA Technical Reports Server (NTRS)

    Bray, R. E.; Dauro, V. A.

    1980-01-01

    The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission and suggested improvements are also included.

  12. A sustainable genetic algorithm for satellite resource allocation

    NASA Technical Reports Server (NTRS)

    Abbott, R. J.; Campbell, M. L.; Krenz, W. C.

    1995-01-01

    A hybrid genetic algorithm is used to schedule tasks for 8 satellites, which can be modelled as a robot whose task is to retrieve objects from a two dimensional field. The objective is to find a schedule that maximizes the value of objects retrieved. Typical of the real-world tasks to which this corresponds is the scheduling of ground contacts for a communications satellite. An important feature of our application is that the amount of time available for running the scheduler is not necessarily known in advance. This requires that the scheduler produce reasonably good results after a short period but that it also continue to improve its results if allowed to run for a longer period. We satisfy this requirement by developing what we call a sustainable genetic algorithm.
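
    The "sustainable" requirement described here is essentially anytime behaviour: the scheduler must return a usable schedule whenever it is stopped, yet keep improving if given more time. A minimal generic sketch of such a loop (a plain steady-state bit-string GA on a toy objective, not the authors' hybrid scheduler) is:

```python
import random, time

def anytime_ga(fitness, n_bits, pop_size=40, mutation=0.02, budget_s=1.0):
    """Run a simple GA until the time budget expires; the best-so-far
    individual can be reported at any moment (anytime behaviour)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    deadline = time.time() + budget_s
    while time.time() < deadline:
        # Tournament selection of two parents.
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(2)]
        cut = random.randrange(1, n_bits)             # one-point crossover
        child = parents[0][:cut] + parents[1][cut:]
        child = [b ^ (random.random() < mutation) for b in child]
        # Replace the worst individual (steady-state update).
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        pop[worst] = child
        if fitness(child) > fitness(best):
            best = child[:]
    return best

# Toy objective: maximise the number of ones.
print(sum(anytime_ga(sum, n_bits=64, budget_s=0.2)))
```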

  13. An Upperbound to the Performance of Ranked-Output Searching: Optimal Weighting of Query Terms Using A Genetic Algorithm.

    ERIC Educational Resources Information Center

    Robertson, Alexander M.; Willett, Peter

    1996-01-01

    Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…

  14. Programming environment for parallel vision algorithms. Annual report, February 1986-February 1987

    SciTech Connect

    Brown, C.

    1987-02-01

    During the second year of the award period, the Computer Science Department of the University of Rochester continued work in: 1) systems support algorithms, 2) the Butterfly programming environment, and 3) vision applications. This research produced several internal and external reports as well as much exportable code. The University of Rochester also employed DARPA Parallel Architecture Benchmark problems to test different algorithms using four different Butterfly programming environments. These tests produced several interesting results and demonstrated that the Butterfly architecture is a flexible general-purpose architecture that can be effectively programmed by non-experts, using tools developed at BBN and Rochester. The University of Rochester is continuing to study the issues and concerns surrounding the effective implementation of parallel algorithms.

  15. Evaluation of Various Radar Data Quality Control Algorithms Based on Accumulated Radar Rainfall Statistics

    NASA Technical Reports Server (NTRS)

    Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation, from spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.

  16. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2015-01-01

    This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
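
    The core estimation step described above, fitting atmospheric states to pressure measurements through a nonlinear model with measurement weights, can be illustrated with a generic Gauss-Newton weighted least-squares iteration. The measurement model below is a made-up stand-in, not the flush-air-data pressure model, and no prior term is included.

```python
import numpy as np

def gauss_newton_wls(h, jac, y, W, x0, iters=10):
    """Generic nonlinear weighted least squares:
    minimise (y - h(x))^T W (y - h(x)) by Gauss-Newton iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - h(x)                       # residual
        J = jac(x)                         # measurement Jacobian
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x = x + dx
    return x

# Hypothetical 2-parameter model: y_i = a * exp(b * t_i) + noise.
t = np.linspace(0.0, 1.0, 20)
true = np.array([2.0, -1.3])
y = true[0] * np.exp(true[1] * t) + 0.01 * np.random.randn(t.size)

h = lambda x: x[0] * np.exp(x[1] * t)
jac = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
W = np.eye(t.size)                         # weights, e.g. inverse variances

print(gauss_newton_wls(h, jac, y, W, [1.0, -1.0]))
```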

  17. Adaptive Routing Algorithm in Wireless Communication Networks Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Wu, Qinghua; Cai, Zhihua

    At present, mobile communications traffic routing designs are complicated because more and more systems are interconnected with one another. For example, mobile communication in wireless communication networks has two routing design conditions to consider, i.e. circuit switching and packet switching. The difficulty in packet-switching routing design lies in its use of high-speed transmission links and its dynamic routing nature. In this paper, evolutionary algorithms are used to determine the best solution and the shortest communication paths. We developed a genetic optimization process that helps network planners find good solutions, or the best routing-table paths, in wireless communication networks easily and quickly. The experimental results show that the evolutionary algorithm not only obtains good solutions, but also has a more predictable running time when compared to a sequential genetic algorithm.

  18. A discrete artificial bee colony algorithm for detecting transcription factor binding sites in DNA sequences.

    PubMed

    Karaboga, D; Aslan, S

    2016-01-01

    The great majority of biological sequences share significant similarity with other sequences as a result of evolutionary processes, and identifying these sequence similarities is one of the most challenging problems in bioinformatics. In this paper, we present a discrete artificial bee colony (ABC) algorithm, which is inspired by the intelligent foraging behavior of real honey bees, for the detection of highly conserved residue patterns or motifs within sequences. Experimental studies on three different data sets showed that the proposed discrete model, by adhering to the fundamental scheme of the ABC algorithm, produced competitive or better results than other metaheuristic motif discovery techniques. PMID:27173272

  19. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm. The simulation results show that, compared with the simple genetic algorithm and the dynamic allocation algorithm, the coupled chaotic genetic algorithm reduces the total transmission power and the bit error rate in a cognitive radio system, and converges faster.
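
    As a hedged sketch of the chaotic ingredient referred to here, coupled logistic maps used to generate diverse, ergodic candidate values for the GA, one possible two-map coupled iteration is shown below; the coupling constant, seeds, and the mapping of the sequence to allocation variables are illustrative assumptions, not the paper's exact formulation.

```python
def coupled_logistic(n, x0=0.3, y0=0.7, r=4.0, eps=0.1):
    """Generate n pairs from two linearly coupled logistic maps:
    x' = (1-eps)*f(x) + eps*f(y), y' = (1-eps)*f(y) + eps*f(x), f(z) = r*z*(1-z)."""
    f = lambda z: r * z * (1.0 - z)
    x, y = x0, y0
    seq = []
    for _ in range(n):
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        seq.append((x, y))
    return seq

# Use the chaotic sequence to seed an initial population in [0, 1)^2,
# e.g. normalised power levels for two channels per individual.
population = coupled_logistic(20)
print(population[:3])
```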

  20. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method, designing and realizing a fast Euclidean transform algorithm, and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406

  1. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
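
    As an illustration of the shingle technique mentioned here, applied to document structure rather than text, one can shingle the sequence of element tags and compare shingle sets with a Jaccard coefficient; the tag sequences below are hypothetical and the shingle length is an arbitrary choice.

```python
def tag_shingles(tags, k=3):
    """Set of k-grams ('shingles') over a document's tag sequence."""
    return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

def structural_similarity(tags_a, tags_b, k=3):
    """Jaccard similarity of the two shingle sets."""
    a, b = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
    return len(a & b) / len(a | b) if a | b else 1.0

doc1 = ["html", "body", "div", "table", "tr", "td", "tr", "td"]
doc2 = ["html", "body", "div", "table", "tr", "td", "tr", "th"]
print(structural_similarity(doc1, doc2))
```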

  2. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  3. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.

  4. An Improved Neutron Transport Algorithm for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.

    2010-01-01

    Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.

  5. A linear-time algorithm for computing inversion distance between signed permutations with an experimental study.

    PubMed

    Bader, D A; Moret, B M; Yan, M

    2001-01-01

    Hannenhalli and Pevzner gave the first polynomial-time algorithm for computing the inversion distance between two signed permutations, as part of the larger task of determining the shortest sequence of inversions needed to transform one permutation into the other. Their algorithm (restricted to distance calculation) proceeds in two stages: in the first stage, the overlap graph induced by the permutation is decomposed into connected components; then, in the second stage, certain graph structures (hurdles and others) are identified. Berman and Hannenhalli avoided the explicit computation of the overlap graph and gave an O(n alpha(n)) algorithm, based on a Union-Find structure, to find its connected components, where alpha is the inverse Ackermann function. Since for all practical purposes alpha(n) is a constant no larger than four, this algorithm has been the fastest practical algorithm to date. In this paper, we present a new linear-time algorithm for computing the connected components, which is more efficient than that of Berman and Hannenhalli in both theory and practice. Our algorithm uses only a stack and is very easy to implement. We give the results of computational experiments over a large range of permutation pairs produced through simulated evolution; our experiments show a speed-up by a factor of 2 to 5 in the computation of the connected components and by a factor of 1.3 to 2 in the overall distance computation.
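
    The data-structure point, that connected components can be found with nothing more than a stack, can be illustrated on an explicit graph with an iterative depth-first search. Note that the paper's contribution is doing this directly on the permutation without materialising the overlap graph, which the generic sketch below does not attempt.

```python
def connected_components(adj):
    """Label connected components of an undirected graph given as an
    adjacency list, using only an explicit stack (iterative DFS)."""
    comp = {}
    label = 0
    for start in adj:
        if start in comp:
            continue
        stack = [start]
        comp[start] = label
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in comp:
                    comp[v] = label
                    stack.append(v)
        label += 1
    return comp

g = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}
print(connected_components(g))   # {0,1,2} -> 0, {3,4} -> 1, {5} -> 2
```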

  6. Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization

    NASA Astrophysics Data System (ADS)

    Sang, Hongyan; Gao, Liang; Pan, Quanke

    2012-09-01

    Unlike a traditional flowshop problem where a job is assumed to be indivisible, in the lot-streaming flowshop problem a job is allowed to overlap its operations between successive machines by splitting it into a number of smaller sub-lots and moving the completed portion of the sub-lots to the downstream machine. In this way, production is accelerated. This paper presents a discrete artificial bee colony (DABC) algorithm for a lot-streaming flowshop scheduling problem with the total flowtime criterion. Unlike the basic ABC algorithm, the proposed DABC algorithm represents a solution as a discrete job permutation. An efficient initialization scheme based on the extended Nawaz-Enscore-Ham heuristic is utilized to produce an initial population with a certain level of quality and diversity. Employed and onlooker bees generate new solutions in their neighborhood, whereas scout bees generate new solutions by applying insert and swap operators to the best solution found so far. Moreover, a simple but effective local search is embedded in the algorithm to enhance local exploitation capability. A comparative experiment is carried out with existing discrete particle swarm optimization, hybrid genetic algorithm, threshold accepting, simulated annealing and ant colony optimization algorithms on a total of 160 randomly generated instances. The experimental results show that the proposed DABC algorithm is quite effective for the lot-streaming flowshop with the total flowtime criterion in terms of search quality, robustness and effectiveness. This research provides a reference for optimization research on the lot-streaming flowshop.
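
    Two of the moves used by the scout bees and the local search, the insert and swap operators on a job permutation, can be written as short generic helpers (a hedged sketch, not the paper's full DABC; the random-position defaults are an assumption):

```python
import random

def insert_move(perm, i=None, j=None):
    """Remove the job at position i and re-insert it at position j."""
    p = list(perm)
    i = random.randrange(len(p)) if i is None else i
    job = p.pop(i)
    j = random.randrange(len(p) + 1) if j is None else j
    p.insert(j, job)
    return p

def swap_move(perm, i=None, j=None):
    """Exchange the jobs at positions i and j."""
    p = list(perm)
    i = random.randrange(len(p)) if i is None else i
    j = random.randrange(len(p)) if j is None else j
    p[i], p[j] = p[j], p[i]
    return p

jobs = [0, 1, 2, 3, 4]
print(insert_move(jobs, i=0, j=3), swap_move(jobs, i=1, j=4))
```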

  7. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

    In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint satisfaction problem (CSP) area could also be easily plugged into this algorithm for further efficiency enhancements. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.

  8. A Breeder Algorithm for Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.

    2003-10-01

    An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and differential evolution algorithm) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost-function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether or not this is a local or global minimum is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases the evolutionary algorithms are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using a LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.

  9. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  10. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it on the basis of AFSA theory. Experimental results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it achieves good performance and has broad application prospects.

  11. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free

  12. Genetic algorithms and their use in Geophysical Problems

    SciTech Connect

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free

  13. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
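
    As a hedged illustration of two of the combination rules named above, majority voting and Boltzmann multiplication, the sketch below combines action preferences supplied by several RL algorithms; the preference values are made up, whereas in the paper they come from each algorithm's learned value function.

```python
import numpy as np

def majority_vote(preferences):
    """Each algorithm votes for its greedy action; ties broken by argmax."""
    votes = np.zeros(preferences.shape[1])
    for prefs in preferences:                 # one row per RL algorithm
        votes[np.argmax(prefs)] += 1
    return int(np.argmax(votes))

def boltzmann_multiplication(preferences, tau=1.0):
    """Multiply the per-algorithm Boltzmann policies and renormalise."""
    policies = np.exp(preferences / tau)
    policies /= policies.sum(axis=1, keepdims=True)
    combined = policies.prod(axis=0)
    return combined / combined.sum()

# Rows: e.g. Q-learning, Sarsa, actor-critic (hypothetical preferences).
prefs = np.array([[0.1, 0.9, 0.3],
                  [0.2, 0.8, 0.5],
                  [0.6, 0.4, 0.2]])
print(majority_vote(prefs), boltzmann_multiplication(prefs))
```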

  14. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  15. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
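
    The simplest of the three estimators described here, the straight-line fit from the digital AGC reading and temperature to SDR input power, amounts to an ordinary least-squares fit. The sketch below uses synthetic AGC/temperature data with made-up coefficients, since the actual calibration data are not reproduced in the abstract.

```python
import numpy as np

def fit_linear_estimator(agc, temp, power):
    """Least-squares fit: power ~ c0 + c1*agc + c2*temp."""
    X = np.column_stack([np.ones_like(agc), agc, temp])
    coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coeffs

def estimate_power(coeffs, agc, temp):
    return coeffs[0] + coeffs[1] * agc + coeffs[2] * temp

# Synthetic calibration data (hypothetical, roughly linear regime only).
rng = np.random.default_rng(0)
agc = rng.uniform(10, 60, 200)
temp = rng.uniform(15, 40, 200)
power = -120 + 0.8 * agc - 0.05 * temp + rng.normal(0, 0.2, 200)

c = fit_linear_estimator(agc, temp, power)
print(c, estimate_power(c, 35.0, 25.0))
```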

  17. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  18. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  19. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  20. Performance study of a new time-delay estimation algorithm in ultrasonic echo signals and ultrasound elastography.

    PubMed

    Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan

    2016-07-01

    Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm based on the summation of the sign function to compute the time-delay estimate (Shaswary et al., 2015). We reported that the proposed algorithm performs similarly to the normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms, even though it is significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used to generate ultrasound radio frequency (RF) echo signals for the statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for the image quality analysis. The statistical analysis results confirmed that, overall, the proposed algorithm performs similarly to the NCC and SSD algorithms. The image quality analysis results indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios compared to the NCC and SSD algorithms. PMID:27010697
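
    For context, the normalized cross-correlation baseline that the proposed sign-summation estimator is compared against can be sketched as a simple lag search over a windowed echo; the synthetic signals, window length, and lag range below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ncc_delay(ref, delayed, max_lag):
    """Return the integer lag (in samples) maximising the normalized
    cross-correlation between a reference window and a delayed copy."""
    ref = (ref - ref.mean()) / ref.std()
    best_lag, best_score = 0, -np.inf
    for lag in range(max_lag + 1):
        seg = delayed[lag:lag + ref.size]
        seg = (seg - seg.mean()) / seg.std()
        score = float(np.dot(ref, seg)) / ref.size
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

rng = np.random.default_rng(1)
echo = rng.normal(size=2000)                          # synthetic RF echo
shifted = np.concatenate([rng.normal(size=7), echo])  # true delay: 7 samples
print(ncc_delay(echo[:256], shifted, max_lag=20))
```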

  1. Fast training algorithms for multilayer neural nets.

    PubMed

    Brent, R P

    1991-01-01

    An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.

  2. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of existing data and related research material to work with. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm; flexibility is the end goal, but we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from our algorithm running a few of the traffic-based scenarios we designed.

  3. A novel chaos danger model immune algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Qingyang; Wang, Song; Zhang, Li; Liang, Ying

    2013-11-01

    Making use of the ergodicity and randomness of chaos, a novel chaos danger model immune algorithm (CDMIA) is presented by combining the benefits of chaos and the danger model immune algorithm (DMIA). To maintain the diversity of antibodies and ensure the performance of the algorithm, two chaotic operators are proposed. Chaotic disturbance is used for updating the danger antibody to exploit the local solution space, and chaotic regeneration is applied to the safe antibody for exploring the entire solution space. In addition, the performance of the algorithm is examined on several benchmark problems. The experimental results indicate that the diversity of the population is improved noticeably, and the CDMIA exhibits a higher efficiency than the danger model immune algorithm and other optimization algorithms.

  4. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
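
    For reference, standard orthogonal matching pursuit, the algorithm that LPMP modifies, greedily picks the dictionary atom most correlated with the current residual and then re-fits over the selected support. A minimal generic sketch (random dictionary and an exactly sparse synthetic signal, with no Lorentzian matching) is shown below.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Standard orthogonal matching pursuit for y ~ A @ x with sparse x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the residual.
        k = int(np.argmax(np.abs(A.T @ residual)))
        support.append(k)
        # Least-squares re-fit on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.flatnonzero(omp(A, y, 3)))   # typically recovers {10, 100, 200}
```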

  5. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  6. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  7. NASA, Navy, and AES/York sea ice concentration comparison of SSM/I algorithms with SAR derived values

    NASA Technical Reports Server (NTRS)

    Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James

    1991-01-01

    Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in both the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident to the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).

  8. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been used intensively for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has scarcely been studied using clustering validation indexes. In this paper, the recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, fcm, som networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. The experimental results show that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  9. Compression algorithm for multideterminant wave functions.

    PubMed

    Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J

    2014-02-01

    A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.

  10. Algorithm to search for genomic rearrangements

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2013-10-01

    The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements are described: the Smith-Waterman algorithm, and a new method of searching for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program with a client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, in preparation for comparing the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
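
    For the marker-search step, the Knuth-Morris-Pratt algorithm mentioned above amounts to computing a failure function for the pattern and then scanning the text without ever backing up in it. A generic sketch on a toy nucleotide string follows; the marker sequence is hypothetical.

```python
def kmp_search(text, pattern):
    """Return all start indices of pattern in text in O(len(text) + len(pattern))."""
    # Failure function: length of the longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, never moving backwards in it.
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("ACGTACGATGACGATG", "ACGATG"))   # marker occurrences at 4 and 10
```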

  11. Generation of attributes for learning algorithms

    SciTech Connect

    Hu, Yuh-Jyh; Kibler, D.

    1996-12-31

    Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results which demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.

  12. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  13. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  14. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First of all, the complex characteristics of check seals are analyzed. To eliminate the differences caused by producing conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing of check seal verification, such as color component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, closing operations, and a labeling algorithm from mathematical morphology. After the processes above, a good binary seal image can be obtained. On the basis of the traditional registration algorithm, a double-level registration method, including rough and precise registration, is proposed. The deflection angle of the precise registration method is precise to 0.1°. This paper introduces the concepts of inside difference and outside difference and uses the percentages of inside difference and outside difference to judge whether a seal is real or fake. The experimental results on a large number of check seals are satisfactory. They show that the methods and algorithms presented are robust to noisy sealing conditions and have a satisfactory tolerance of within-class differences.

  15. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed. PMID:17932542
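
    The thresholding step described here, fitting a two-component Gaussian mixture to detection scores with an expectation-maximization algorithm and placing the decision threshold between the components, can be sketched generically as below; the one-dimensional synthetic scores are made up, no lidar preprocessing is included, and the midpoint threshold rule is a simplification of the probabilistic one in the paper.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Fit a 1-D two-component Gaussian mixture by EM."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each score.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Synthetic detection scores: background plus a cloud ("anomaly") component.
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(0, 1, 900), rng.normal(6, 1, 100)])
pi, mu, sigma = em_two_gaussians(scores)
threshold = mu.mean()                 # crude threshold between the two means
print(mu, threshold)
```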

  17. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  18. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
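
    As a rough illustration of the Gaussian-mixture/expectation-maximization step described above, the Python sketch below fits a two-component 1-D mixture to a vector of detection scores and derives a threshold as the score at which the higher-mean ("anomaly") component becomes the more likely hypothesis. It is a minimal, hypothetical example, not the authors' implementation; the initialisation, the convergence test and the function names are all assumptions.

        import numpy as np

        def em_two_gaussians(scores, n_iter=200, tol=1e-8):
            """Fit a two-component 1-D Gaussian mixture to detection scores with EM."""
            x = np.asarray(scores, dtype=float)
            mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])   # crude split
            var = np.array([x.var(), x.var()]) + 1e-12
            w = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibilities of each component for each score
                pdf = (w / np.sqrt(2 * np.pi * var)) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
                r = pdf / (pdf.sum(axis=1, keepdims=True) + 1e-300)
                # M-step: re-estimate weights, means and variances
                nk = r.sum(axis=0)
                mu_new = (r * x[:, None]).sum(axis=0) / nk
                var = (r * (x[:, None] - mu_new) ** 2).sum(axis=0) / nk + 1e-12
                w = nk / len(x)
                if np.max(np.abs(mu_new - mu)) < tol:
                    mu = mu_new
                    break
                mu = mu_new
            return w, mu, var

        def detection_threshold(w, mu, var, n_grid=10000):
            """Score at which the higher-mean (anomaly) component overtakes the background one."""
            lo = mu.min() - 5 * np.sqrt(var.max())
            hi = mu.max() + 5 * np.sqrt(var.max())
            grid = np.linspace(lo, hi, n_grid)
            pdf = (w / np.sqrt(2 * np.pi * var)) * np.exp(-(grid[:, None] - mu) ** 2 / (2 * var))
            target = int(np.argmax(mu))                  # anomaly hypothesis
            crossing = grid[pdf[:, target] >= pdf[:, 1 - target]]
            return crossing.min() if crossing.size else hi

    Probabilities of detection and false alarm would then follow from the tail areas of the two fitted components on either side of the threshold.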

  19. SETI Pulse Detection Algorithm: Analysis of False-alarm Rates

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1983-01-01

    Some earlier work by the Search for Extraterrestrial Intelligence (SETI) Science Working Group (SWG) on the derivation of spectrum analyzer thresholds for a pulse detection algorithm based on an analysis of false alarm rates is extended. The algorithm previously analyzed was intended to detect a finite sequence of i periodically spaced pulses that did not necessarily occupy the entire observation interval. This algorithm would recognize the presence of such a signal only if all i received pulse powers exceeded a threshold T(i): these thresholds were selected to achieve a desired false alarm rate, independent of i. To simplify the analysis, it was assumed that the pulses were synchronous with the spectrum sample times. This analysis extends the earlier effort to include infinite and/or asynchronous pulse trains. Furthermore, to decrease the possibility of missing an extraterrestrial intelligence signal, the algorithm was modified to detect a pulse train even if some of the received pulse powers fall below the threshold. The analysis employs geometrical arguments that make it conceptually easy to incorporate boundary conditions imposed on the derivation of the false alarm rates. While the exact results can be somewhat complex, simple closed-form approximations are derived that produce a negligible loss of accuracy.

  20. Investigation of range extension with a genetic algorithm

    SciTech Connect

    Austin, A. S., LLNL

    1998-03-04

    Range optimization is one of the tasks associated with the development of cost-effective, stand-off, air-to-surface munitions systems. The search for the optimal input parameters that will result in the maximum achievable range often employs conventional Monte Carlo techniques. Monte Carlo approaches can be time-consuming, costly, and insensitive to mutually dependent parameters and epistatic parameter effects. An alternative search and optimization technique is available in genetic algorithms. In the experiments discussed in this report, a simplified platform motion simulator was the fitness function for a genetic algorithm. The parameters to be optimized were the inputs to this motion generator and the simulator's output (terminal range) was the fitness measure. The parameters of interest were initial launch altitude, initial launch speed, wing angle-of-attack, and engine ignition time. The parameter values the GA produced were validated by Monte Carlo investigations employing a full-scale six-degree-of-freedom (6 DOF) simulation. The best results produced by Monte Carlo processes using values based on the GA-derived parameters were within 1% of the ranges generated by the simplified model using the evolved parameter values. This report has five sections. Section 2 discusses the motivation for the range extension investigation and reviews the surrogate flight model developed as a fitness function for the genetic algorithm tool. Section 3 details the representation and implementation of the task within the genetic algorithm framework. Section 4 discusses the results. Section 5 concludes the report with a summary and suggestions for further research.
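
    The pattern described in this report, a genetic algorithm driven by a cheap surrogate of the flight model, can be sketched in a few lines of Python. Everything below is hypothetical: the bounds, the toy surrogate_range function standing in for the simplified platform-motion simulator, and the GA settings are illustrative assumptions, not values from the report.

        import random

        # Hypothetical bounds: launch altitude (m), launch speed (m/s), angle of attack (deg), ignition time (s)
        BOUNDS = [(5000.0, 12000.0), (150.0, 300.0), (0.0, 10.0), (0.0, 30.0)]

        def surrogate_range(genes):
            """Toy stand-in for the simplified platform-motion simulator (the fitness function)."""
            alt, speed, aoa, t_ign = genes
            return 0.02 * alt + 40.0 * speed - 500.0 * (aoa - 4.0) ** 2 - 100.0 * abs(t_ign - 10.0)

        def random_individual():
            return [random.uniform(lo, hi) for lo, hi in BOUNDS]

        def mutate(genes, rate=0.2):
            return [min(hi, max(lo, g + random.gauss(0, 0.05 * (hi - lo)))) if random.random() < rate else g
                    for g, (lo, hi) in zip(genes, BOUNDS)]

        def crossover(a, b):
            return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

        def evolve(pop_size=60, generations=100):
            pop = [random_individual() for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=surrogate_range, reverse=True)
                elite = ranked[: pop_size // 5]                       # keep the best 20%
                children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                            for _ in range(pop_size - len(elite))]
                pop = elite + children
            return max(pop, key=surrogate_range)

        best_inputs = evolve()   # candidate inputs to validate with the full 6-DOF Monte Carlo runs

    The evolved parameter vector would then be handed to the full-scale simulation for validation, mirroring the workflow in the report.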

  1. Population Induced Instabilities in Genetic Algorithms for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Vlachos, D. S.; Parousis-Orthodoxou, K. J.

    2013-02-01

    Evolutionary computation techniques, like genetic algorithms, have received a lot of attention as optimization techniques but, although they exhibit a very promising potential in curing the problem, they have not produced a significant breakthrough in the area of systematic treatment of constraints. There are two main ways of handling the constraints: the first is to produce an infeasibility measure and add it to the general cost function (the well-known penalty methods), and the other is to modify the mutation and crossover operations so that they only produce feasible members. Both methods have their drawbacks and are strongly tied to the problem to which they are applied. In this work, we propose a different treatment of the constraints: we induce instabilities in the evolving population, in such a way that infeasible solutions cannot survive as they are. Preliminary results are presented on a set of constrained optimization problems that are well known from the literature.

  2. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inversed watershed (subsequently referred to as watershed based), while the second type is based on the simultaneous representation of a tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI equipped with a SONY a5100 was used to acquire images over an area from central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  3. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel images. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
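
    The FFT Map Matching sub-algorithm amounts to correlating a template against the map in the frequency domain. The sketch below is a simplified stand-in (zero-mean circular cross-correlation only; a full normalized correlation would also divide by the local image energy), not the flight code.

        import numpy as np

        def fft_match(map_image, template):
            """Locate `template` in `map_image` via cross-correlation computed with FFTs."""
            H, W = map_image.shape
            m = map_image - map_image.mean()
            t = template - template.mean()
            M = np.fft.rfft2(m, s=(H, W))
            T = np.fft.rfft2(t, s=(H, W))             # template zero-padded to the map size
            corr = np.fft.irfft2(M * np.conj(T), s=(H, W))
            row, col = np.unravel_index(np.argmax(corr), corr.shape)
            return row, col, corr[row, col]           # best offset and its correlation score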

  4. The prototype SMOS soil moisture Algorithm

    NASA Astrophysics Data System (ADS)

    Kerr, Y.; Waldteufel, P.; Richaume, P.; Cabot, F.; Wigneron, J. P.; Ferrazzoli, P.; Mahmoodi, A.; Delwart, S.

    2009-04-01

    The Soil Moisture and Ocean Salinity (SMOS) mission is ESA's (European Space Agency) second Earth Explorer Opportunity mission, to be launched in September 2007. It is a joint programme between ESA, CNES (Centre National d'Etudes Spatiales), and CDTI (Centro para el Desarrollo Tecnologico Industrial). SMOS carries a single payload, an L-band 2D interferometric radiometer in the 1400-1427 MHz protected band. This wavelength penetrates well through the atmosphere, and hence the instrument probes the Earth surface emissivity. Surface emissivity can then be related to the moisture content in the first few centimeters of soil and, after some surface roughness and temperature corrections, to the sea surface salinity over the ocean. In order to prepare the data use and dissemination, the ground segment will produce level 1 and 2 data. Level 1 will consist mainly of angular brightness temperatures, while level 2 will consist of geophysical products. In this context, a group of institutes prepared the soil moisture and ocean salinity Algorithm Theoretical Basis Documents (ATBD) to be used to produce the operational algorithm. The consortium of institutes preparing the soil moisture algorithm is led by CESBIO (Centre d'Etudes Spatiales de la BIOsphère) and Service d'Aéronomie and consists of the institutes represented by the authors. The principle of the soil moisture retrieval algorithm is based on an iterative approach which aims at minimizing a cost function given by the sum of the squared weighted differences between measured and modelled brightness temperature (TB) data, for a variety of incidence angles. This is achieved by finding the best suited set of the parameters which drive the direct TB model, e.g. soil moisture (SM) and vegetation characteristics. Despite the simplicity of this principle, the main reason for the complexity of the algorithm is that SMOS "pixels" can correspond to rather large, inhomogeneous surface areas whose contribution to the radiometric
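
    The retrieval principle stated above, iteratively minimizing the sum of squared weighted differences between measured and modelled multi-angular brightness temperatures, can be sketched as follows. The forward model below is a deliberately crude placeholder, not the SMOS L2 model, and the parameter bounds and uncertainties are illustrative assumptions.

        import numpy as np
        from scipy.optimize import least_squares

        def tb_model(params, angles_deg):
            """Toy stand-in for the direct TB model (soil moisture and vegetation optical depth)."""
            sm, tau = params
            theta = np.radians(angles_deg)
            return 280.0 - 120.0 * sm * np.cos(theta) ** 0.3 + 15.0 * tau * np.sin(theta)

        def retrieve(tb_meas, angles_deg, sigma_tb=2.0):
            """Minimise the cost: sum over angles of ((TB_model - TB_meas) / sigma)^2."""
            def residuals(params):
                return (tb_model(params, angles_deg) - tb_meas) / sigma_tb
            sol = least_squares(residuals, x0=[0.2, 0.1], bounds=([0.0, 0.0], [0.6, 1.0]))
            return sol.x    # retrieved (soil moisture, optical depth)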

  5. Portfolio optimisation for hydropower producers that balances riverine ecosystem protection and producer needs

    NASA Astrophysics Data System (ADS)

    Yin, X. A.; Yang, Z. F.; Liu, C. L.

    2013-12-01

    In deregulated electricity markets, hydropower portfolio design has become an essential task for producers. The previous research on hydropower portfolio optimisation focused mainly on the maximisation of profits but did not take into account riverine ecosystem protection. Although profit maximisation is the major objective for producers in deregulated markets, protection of riverine ecosystems must be incorporated into the process of hydropower portfolio optimisation, especially against a background of increasing attention to environmental protection and stronger opposition to hydropower generation. This research seeks mainly to remind hydropower producers of the requirement of river protection when they design portfolios and help shift portfolio optimisation from economically oriented to ecologically friendly. We establish a framework to determine the optimal portfolio for a hydropower reservoir, accounting for both economic benefits and ecological needs. In this framework, the degree of natural flow regime alteration is adopted as a constraint on hydropower generation to protect riverine ecosystems, and the maximisation of mean annual revenue is set as the optimisation objective. The electricity volumes assigned in different electricity sub-markets are optimised by the noisy genetic algorithm. The proposed framework is applied to China's Wangkuai Reservoir to test its effectiveness. The results show that the new framework could help to design eco-friendly portfolios that can ensure a planned profit and reduce alteration of the natural flow regime.

  6. Portfolio optimisation for hydropower producers that balances riverine ecosystem protection and producer needs

    NASA Astrophysics Data System (ADS)

    Yin, X. A.; Yang, Z. F.; Liu, C. L.

    2014-04-01

    In deregulated electricity markets, hydropower portfolio design has become an essential task for producers. The previous research on hydropower portfolio optimisation focused mainly on the maximisation of profits but did not take into account riverine ecosystem protection. Although profit maximisation is the major objective for producers in deregulated markets, protection of riverine ecosystems must be incorporated into the process of hydropower portfolio optimisation, especially against a background of increasing attention to environmental protection and stronger opposition to hydropower generation. This research seeks mainly to remind hydropower producers of the requirement of river protection when they design portfolios and help shift portfolio optimisation from economically oriented to ecologically friendly. We establish a framework to determine the optimal portfolio for a hydropower reservoir, accounting for both economic benefits and ecological needs. In this framework, the degree of natural flow regime alteration is adopted as a constraint on hydropower generation to protect riverine ecosystems, and the maximisation of mean annual revenue is set as the optimisation objective. The electricity volumes assigned in different electricity submarkets are optimised by the noisy genetic algorithm. The proposed framework is applied to China's Wangkuai Reservoir to test its effectiveness. The results show that the new framework could help to design eco-friendly portfolios that can ensure a planned profit and reduce alteration of the natural flow regime.

  7. A genetic algorithm for layered multisource video distribution

    NASA Astrophysics Data System (ADS)

    Cheok, Lai-Tee; Eleftheriadis, Alexandros

    2005-03-01

    We propose a genetic algorithm -- MckpGen -- for rate scaling and adaptive streaming of layered video streams from multiple sources in a bandwidth-constrained environment. A genetic algorithm (GA) consists of several components: a representation scheme; a generator for creating an initial population; a crossover operator for producing offspring solutions from parents; a mutation operator to promote genetic diversity and a repair operator to ensure feasibility of solutions produced. We formulated the problem as a Multiple-Choice Knapsack Problem (MCKP), a variant of Knapsack Problem (KP) and a decision problem in combinatorial optimization. MCKP has many successful applications in fault tolerance, capital budgeting, resource allocation for conserving energy on mobile devices, etc. Genetic algorithms have been used to solve NP-complete problems effectively, such as the KP, however, to the best of our knowledge, there is no GA for MCKP. We utilize a binary chromosome representation scheme for MCKP and design and implement the components, utilizing problem-specific knowledge for solving MCKP. In addition, for the repair operator, we propose two schemes (RepairSimple and RepairBRP). Results show that RepairBRP yields significantly better performance. We further show that the average fitness of the entire population converges towards the best fitness (optimal) value and compare the performance at various bit-rates.

  8. Pyroelectric sensors and classification algorithms for border / perimeter security

    NASA Astrophysics Data System (ADS)

    Jacobs, Eddie L.; Chari, Srikant; Halford, Carl; McClellan, Harry

    2009-09-01

    It has been shown that useful classifications can be made with a sensor that detects the shape of moving objects. This type of sensor has been referred to as a profiling sensor. In this research, two configurations of pyroelectric detectors are considered for use in a profiling sensor, a linear array and a circular array. The linear array produces crude images representing the shape of objects moving through the field of view. The circular array produces a temporal motion vector. A simulation of the output of each detector configuration is created and used to generate simulated profiles. The simulation is performed by convolving the pyroelectric detector response with images derived from calibrated thermal infrared video sequences. Profiles derived from these simulations are then used to train and test classification algorithms. Classification algorithms examined in this study include a naive Bayesian (NB) classifier and Linear discriminant analysis (LDA). Each classification algorithm assumes a three class problem where profiles are classified as either human, animal, or vehicle. Simulation results indicate that these systems can reliably classify outputs from these types of sensors. These types of sensors can be used in applications involving border or perimeter security.

  9. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  10. A modified Fuzzy C-Means (FCM) Clustering algorithm and its application on carbonate fluid identification

    NASA Astrophysics Data System (ADS)

    Liu, Lifeng; Sun, Sam Zandong; Yu, Hongyu; Yue, Xingtong; Zhang, Dong

    2016-06-01

    Considering the fact that fluid distribution in carbonate reservoirs is very complicated and that existing fluid prediction methods are not able to produce ideal results, this paper proposes a new fluid identification method for carbonate reservoirs based on a modified Fuzzy C-Means (FCM) clustering algorithm. Both the initialization and the globally optimum cluster centers are produced by a Chaotic Quantum Particle Swarm Optimization (CQPSO) algorithm, which effectively avoids the disadvantages of sensitivity to initial values and easily falling into local convergence in the traditional FCM clustering algorithm. The modified algorithm is then applied to fluid identification in the carbonate X area in the Tarim Basin of China, and a mapping relation between fluid properties and pre-stack elastic parameters is built in multi-dimensional space. It has been proven that this modified algorithm has a good fuzzy clustering ability and that its total coincidence rate of fluid prediction reaches 97.10%. Besides, the memberships of different fluids can be accumulated to obtain respective probabilities, which can be used to evaluate the uncertainty in the fluid identification result.
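
    For reference, the classic alternating FCM updates that the paper modifies are shown below in Python; the modification described above replaces the random initialisation and the search for the global cluster centres with CQPSO, which is not reproduced here.

        import numpy as np

        def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
            """Standard fuzzy C-means: alternate membership and centre updates."""
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], c))
            U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships, rows sum to 1
            for _ in range(n_iter):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))              # inverse-distance memberships
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

    In the fluid-identification setting, the rows of X would be pre-stack elastic parameter vectors, and accumulating the memberships per fluid class gives the probabilities mentioned above.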

  11. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve the search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations.
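
    The two ingredients that the improved variants adjust, probabilistic path construction and the global pheromone update, look roughly like the following Python sketch. The data structures (dictionaries keyed by edges of the program graph under test) and all constants are assumptions for illustration, not the paper's parameter choices.

        import random

        def choose_next(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
            """Roulette-wheel selection proportional to pheromone^alpha * heuristic^beta."""
            weights = [pheromone[c] ** alpha * heuristic[c] ** beta for c in candidates]
            total = sum(weights)
            r, acc = random.uniform(0, total), 0.0
            for c, w in zip(candidates, weights):
                acc += w
                if acc >= r:
                    return c
            return candidates[-1]

        def global_update(pheromone, best_path, best_quality, rho=0.1):
            """Evaporate everywhere, then reinforce the best path (an IGPACO-style global update)."""
            for edge in pheromone:
                pheromone[edge] *= (1.0 - rho)
            for edge in best_path:
                pheromone[edge] += rho * best_quality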

  12. Neural network implementations of data association algorithms for sensor fusion

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time varying set of entities in a fixed field when the entities are sensed at discrete time instances. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the location of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementation of the algorithms, along with simulation results comparing the approaches are provided.
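
    A minimal baseline for the association step, before any neural-network implementation, is Mahalanobis gating with greedy nearest-neighbour assignment; the sketch below assumes each report is a (mean, covariance) pair and is only an illustration of the problem set-up, not one of the algorithms evaluated in the paper.

        import numpy as np

        def associate(db_reports, new_reports, gate=9.21):
            """Greedy gating/nearest-neighbour association of bivariate Gaussian reports.

            gate=9.21 is roughly the 99% chi-square bound for 2 degrees of freedom.
            A database report may be matched more than once here; a full solution would
            use an assignment method (e.g. the Hungarian algorithm) instead.
            """
            pairs = []
            for j, (mj, Pj) in enumerate(new_reports):
                best, best_d = None, gate
                for i, (mi, Pi) in enumerate(db_reports):
                    innov = mj - mi
                    S = Pi + Pj                                    # combined uncertainty
                    d2 = float(innov @ np.linalg.solve(S, innov))  # squared Mahalanobis distance
                    if d2 < best_d:
                        best, best_d = i, d2
                if best is not None:
                    pairs.append((best, j))
            return pairs      # unmatched new reports would start new database entries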

  13. Systematic afterpulsing-estimation algorithms for gated avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Wiechers, Carlos; Ramírez-Alarcón, Roberto; Muñiz-Sánchez, Oscar R.; Yépiz, Pablo Daniel; Arredondo-Santos, Alejandro; Hirsch, Jorge G.; U'Ren, Alfred B.

    2016-09-01

    We present a method designed to efficiently extract optical signals from InGaAs avalanche photodiodes (APDs) operated in gated mode. In particular, our method permits an estimation of the fraction of counts which actually results from the signal being measured, as opposed to being produced by noise mechanisms, specifically by afterpulsing. Our method in principle allows the use of InGaAs APDs at high detection efficiencies, with the full operation bandwidth, either with or without resorting to the application of a dead time. As we show below, our method can be used in configurations where afterpulsing exceeds the genuine signal by orders of magnitude, even near saturation. The algorithms which we have developed are suitable to be used either in real-time processing of raw detection probabilities or in post-processing applications, after a calibration step has been performed. The algorithms which we propose here can complement technologies designed for the reduction of afterpulsing.

  14. A novel 3D algorithm for VLSI floorplanning

    NASA Astrophysics Data System (ADS)

    Rani, D. Gracia N.; Rajaram, S.; Sudarasan, Athira

    2013-01-01

    3-D VLSI circuits are becoming a hot issue because of their potential for enhancing performance, while also facing challenges such as the increased complexity of floorplanning and placement in VLSI physical design. Efficient 3-D floorplan representations are needed to handle placement optimization in new circuit designs. We analyze and categorize some state-of-the-art 3-D representations, and propose a ternary tree model for 3-D nonslicing floorplans, extending the B*-tree from 2-D. This paper proposes a novel optimization algorithm for the packing of 3-D rectangular blocks. The technique considered is the Differential Evolution (DE) algorithm, which is very fast at evaluating the feasibility of a ternary tree representation. Experimental results based on the MCNC benchmarks with constraints show that the proposed Differential Evolution (DE) approach can quickly produce optimal solutions.

  15. Systematic afterpulsing-estimation algorithms for gated avalanche photodiodes.

    PubMed

    Wiechers, Carlos; Ramírez-Alarcón, Roberto; Muñiz-Sánchez, Oscar R; Yépiz, Pablo Daniel; Arredondo-Santos, Alejandro; Hirsch, Jorge G; U'Ren, Alfred B

    2016-09-10

    We present a method designed to efficiently extract optical signals from InGaAs avalanche photodiodes (APDs) operated in gated mode. In particular, our method permits an estimation of the fraction of counts that actually results from the signal being measured, as opposed to being produced by noise mechanisms, specifically by afterpulsing. Our method in principle allows the use of InGaAs APDs at high detection efficiencies, with the full operation bandwidth, either with or without resorting to the application of a dead-time. As we show below, our method can be used in configurations where afterpulsing exceeds the genuine signal by orders of magnitude, even near saturation. The algorithms that we have developed are suitable to be used either in real-time processing of raw detection probabilities or in post-processing applications, after a calibration step has been performed. The algorithms that we propose here can complement technologies designed for the reduction of afterpulsing. PMID:27661361

  16. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  17. An MM-Based Algorithm for ℓ1-Regularized Least-Squares Estimation With an Application to Ground Penetrating Radar Image Reconstruction.

    PubMed

    Ndoye, Mandoye; Anderson, John M M; Greene, David J

    2016-05-01

    An estimation method known as least absolute shrinkage and selection operator (LASSO) or ℓ1-regularized LS estimation has been found to perform well in a number of applications. In this paper, we use the majorize-minimize method to develop an algorithm for minimizing the LASSO objective function, which is the sum of a linear LS objective function plus an ℓ1 penalty term. The proposed algorithm, which we call the LASSO estimation via majorization-minimization (LMM) algorithm, is straightforward to implement, parallelizable, and guaranteed to produce LASSO objective function values that monotonically decrease. In addition, we formulate an extension of the LMM algorithm for reconstructing ground penetrating radar (GPR) images, that is much faster than the standard LMM algorithm and utilizes significantly less memory. Thus, the GPR specific LMM (GPR-LMM) algorithm is able to accommodate the big data associated with GPR imaging. We compare our proposed algorithms to the state-of-the-art ℓ1-regularized LS algorithms using a time and space complexity analysis. The GPR-LMM greatly outperforms the competing algorithms in terms of the performance metrics we considered. In addition, the reconstruction results of the standard LMM and GPR-LMM algorithms are evaluated using both simulated and real GPR data.

  18. An MM-Based Algorithm for ℓ1-Regularized Least-Squares Estimation With an Application to Ground Penetrating Radar Image Reconstruction.

    PubMed

    Ndoye, Mandoye; Anderson, John M M; Greene, David J

    2016-05-01

    An estimation method known as least absolute shrinkage and selection operator (LASSO) or ℓ1-regularized LS estimation has been found to perform well in a number of applications. In this paper, we use the majorize-minimize method to develop an algorithm for minimizing the LASSO objective function, which is the sum of a linear LS objective function plus an ℓ1 penalty term. The proposed algorithm, which we call the LASSO estimation via majorization-minimization (LMM) algorithm, is straightforward to implement, parallelizable, and guaranteed to produce LASSO objective function values that monotonically decrease. In addition, we formulate an extension of the LMM algorithm for reconstructing ground penetrating radar (GPR) images, that is much faster than the standard LMM algorithm and utilizes significantly less memory. Thus, the GPR specific LMM (GPR-LMM) algorithm is able to accommodate the big data associated with GPR imaging. We compare our proposed algorithms to the state-of-the-art ℓ1-regularized LS algorithms using a time and space complexity analysis. The GPR-LMM greatly outperforms the competing algorithms in terms of the performance metrics we considered. In addition, the reconstruction results of the standard LMM and GPR-LMM algorithms are evaluated using both simulated and real GPR data. PMID:26800538
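
    One common way to build an MM iteration for the LASSO objective 0.5*||y - Ax||^2 + lambda*||x||_1 is to majorize the least-squares term with a separable quadratic of curvature L >= ||A||^2, which reduces each step to a soft-thresholding update with monotonically decreasing objective values. The sketch below follows that construction; the paper's LMM majorizer and its GPR-specific acceleration may be built differently.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def lasso_mm(A, y, lam, n_iter=500):
            """Monotone MM-style iteration for 0.5*||y - A x||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2            # curvature bound (squared spectral norm)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)             # gradient of the least-squares term
                x = soft_threshold(x - grad / L, lam / L)
            return x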

  19. Advancing-Front Algorithm For Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Efficient algorithm performs Delaunay triangulation to generate unstructured grids for use in computing two-dimensional flows. Once grid generated, one can optionally call upon additional subalgorithm that removes diagonal lines from quadrilateral cells nearly rectangular. Resulting approximately rectangular grid reduces cost per iteration of flow-computing algorithm.

  20. Algorithm for genome contig assembly. Final report

    SciTech Connect

    1995-09-01

    An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.

  1. Quality control algorithms for rainfall measurements

    NASA Astrophysics Data System (ADS)

    Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs

    2005-09-01

    One of the basic requirements for a scientific use of rain data from raingauges, ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.), if the achievable data quality could be improved. This depends on the available data quality delivered by the measuring devices and the data quality enhancement procedures. To get an overview of the existing algorithms a literature review and literature pool have been produced. The diverse algorithms have been evaluated to meet VOLTAIRE objectives and sorted in different groups. To test the chosen algorithms an algorithm pool has been established, where the software is collected. A large part of this work presented here is implemented in the scope of the EU-project VOLTAIRE (Validation of multisensors precipitation fields and numerical modeling in Mediterranean test sites).

  2. A parallel unmixing algorithm for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Robila, Stefan A.; Maciak, Lukasz G.

    2006-10-01

    We present a new algorithm for feature extraction in hyperspectral images based on source separation and parallel computing. In source separation, given a linear mixture of sources, the goal is to recover the components by producing an unmixing matrix. In hyperspectral imagery, the mixing transform and the separated components can be associated with endmembers and their abundances. Source separation based methods have been employed for target detection and classification of hyperspectral images. However, these methods usually involve restrictive conditions on the nature of the results, such as orthogonality of the endmembers (in Principal Component Analysis - PCA and Orthogonal Subspace Projection - OSP) or statistical independence of the abundances (in Independent Component Analysis - ICA), and they do not fully satisfy all the conditions included in the Linear Mixing Model. Compared to this, our approach is based on the Nonnegative Matrix Factorization (NMF), a less constraining unmixing method. NMF has the advantage of producing positively defined data and, with several modifications that we introduce, also ensures that the abundances add to one. The endmember vectors and the abundances are obtained through a gradient based optimization approach. The algorithm is further modified to run in a parallel environment. The parallel NMF (P-NMF) significantly reduces the time complexity and is shown to also easily port to a distributed environment. Experiments with in-house and Hydice data suggest that NMF outperforms ICA, PCA and OSP for unsupervised endmember extraction. Coupled with its parallel implementation, the new method provides an efficient way for unsupervised unmixing, further supporting our efforts in the development of a real time hyperspectral sensing environment with applications to industry and life sciences.
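
    As a point of reference for the unmixing step, the standard multiplicative NMF updates, with a simple renormalisation standing in for the sum-to-one modification mentioned above, are sketched below; the parallel (P-NMF) partitioning and the paper's exact constraint handling are not reproduced.

        import numpy as np

        def nmf_unmix(X, r, n_iter=500, seed=0, eps=1e-9):
            """Multiplicative NMF updates: X (bands x pixels) ~ W (endmembers) @ H (abundances)."""
            rng = np.random.default_rng(seed)
            n, m = X.shape
            W = rng.random((n, r)) + eps
            H = rng.random((r, m)) + eps
            for _ in range(n_iter):
                H *= (W.T @ X) / (W.T @ W @ H + eps)
                W *= (X @ H.T) / (W @ H @ H.T + eps)
                H /= H.sum(axis=0, keepdims=True) + eps   # crude sum-to-one renormalisation
            return W, H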

  3. Conditional disruption of miR17-92 cluster in collagen type I-producing osteoblasts results in reduced periosteal bone formation and bone anabolic response to exercise.

    PubMed

    Mohan, Subburaman; Wergedal, Jon E; Das, Subhashri; Kesavan, Chandrasekhar

    2015-02-01

    In this study, we evaluated the role of the microRNA (miR)17-92 cluster in osteoblast lineage cells using a Cre-loxP approach in which Cre expression is driven by the entire regulatory region of the type I collagen α2 gene. Conditional knockout (cKO) mice showed a 13-34% reduction in total body bone mineral content and area with little or no change in bone mineral density (BMD) by DXA at 2, 4, and 8 wk in both sexes. Micro-CT analyses of the femur revealed an 8% reduction in length and 25-27% reduction in total volume at the diaphyseal and metaphyseal sites. Neither cortical nor trabecular volumetric BMD was different in the cKO mice. Bone strength (maximum load) was reduced by 10% with no change in bone toughness. Quantitative histomorphometric analyses revealed a 28% reduction in the periosteal bone formation rate and in the mineral apposition rate but with no change in the resorbing surface. Expression levels of periostin, Elk3, Runx2 genes that are targeted by miRs from the cluster were decreased by 25-30% in the bones of cKO mice. To determine the contribution of the miR17-92 cluster to the mechanical strain effect on periosteal bone formation, we subjected cKO and control mice to 2 wk of mechanical loading by four-point bending. We found that the periosteal bone response to mechanical strain was significantly reduced in the cKO mice. We conclude that the miR17-92 cluster expressed in type I collagen-producing cells is a key regulator of periosteal bone formation in mice. PMID:25492928

  4. Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Pospisil, Lukas; Nowakova, Jana

    2016-06-01

    An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing and crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives of the minimized node. A disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements. This is caused by the search for the minimal potential energy of the whole system, which is minimized node by node. The node with the highest energy is minimized against all nodes until the local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
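
    The Barzilai-Borwein step only needs successive iterates and gradients, which is what makes it attractive here; a generic sketch (not tied to the Kamada-Kawai energy, whose gradient would be supplied as `grad`) is shown below.

        import numpy as np

        def bb_minimise(grad, x0, n_iter=200, step0=1e-3):
            """Gradient descent with the BB step alpha_k = (s.s)/(s.y), s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
            x_prev = np.asarray(x0, dtype=float)
            g_prev = grad(x_prev)
            x = x_prev - step0 * g_prev                 # one plain gradient step to start
            for _ in range(n_iter):
                g = grad(x)
                s, y = x - x_prev, g - g_prev
                denom = float(s @ y)
                alpha = float(s @ s) / denom if abs(denom) > 1e-12 else step0
                x_prev, g_prev = x, g
                x = x - alpha * g
            return x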

  5. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
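
    The core idea, searching for a shift/mask combination under which every key maps to a distinct slot so that membership becomes a single constant-time lookup, can be illustrated with the brute-force Python sketch below; the actual synthesizer also tries offsets and rotating masks and selects the most compact solution, which this toy version does not.

        def synthesize_shift_mask(keys, max_shift=32, max_mask_bits=12):
            """Find (shift, mask) such that (key >> shift) & mask is unique for every key."""
            for bits in range(1, max_mask_bits + 1):        # prefer the smallest table
                mask = (1 << bits) - 1
                for shift in range(max_shift + 1):
                    mapped = {(k >> shift) & mask for k in keys}
                    if len(mapped) == len(keys):            # collision-free mapping found
                        return shift, mask
            return None

        # usage: build a direct-lookup table once a solution is found
        keys = [0x10, 0x22, 0x47, 0x81, 0x9C]
        shift, mask = synthesize_shift_mask(keys)
        table = [None] * (mask + 1)
        for k in keys:
            table[(k >> shift) & mask] = k

        def member(x):
            return table[(x >> shift) & mask] == x          # constant time, no collision search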

  6. Human vision-based algorithm to hide defective pixels in LCDs

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert

    2006-02-01

    Producing displays without pixel defects or repairing defective pixels is technically not possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible for the user by using image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First of all, the theoretical response of the human eye was analyzed on a series of images, both before and after applying the defective pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. A second method was to perform a psycho-visual test where users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as a pixel-data-processing algorithm inside the display, in for instance an FPGA, a DSP or a microprocessor. The described techniques are also valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD TV applications.

  7. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, its practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial reaction and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement

  8. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fischer discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.

  9. Phase-distortion correction based on stochastic parallel proportional-integral-derivative algorithm for high-resolution adaptive optics

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Wu, Ke-nan; Gao, Hong; Jin, Yu-qi

    2015-02-01

    A novel optimization method, the stochastic parallel proportional-integral-derivative (SPPID) algorithm, is proposed for high-resolution phase-distortion correction in wave-front sensorless adaptive optics (WSAO). To enhance the global search and self-adaptation of the stochastic parallel gradient descent (SPGD) algorithm, the residual error of the performance metric and its temporal integration are added into the calculation of the incremental control signal. On the basis of the maximum fitting rate between the real wave-front and the corrector, a goal value of the metric is set as the reference. The residual error of the metric relative to the reference is transformed into proportional and integration terms to produce an adaptive step-size updating law for the SPGD algorithm. The adaptation of the step size leads the blind optimization to the desired goal and helps it escape from local extrema. Different from the conventional proportional-integral-derivative (PID) algorithm, the SPPID algorithm designs the incremental control signal as PI-by-D for adaptive adjustment of the control law in the SPGD algorithm. Experiments on high-resolution phase-distortion correction in "frozen" turbulences, based on influence-function coefficient optimization, were carried out using a 128-by-128 spatial light modulator, a photo detector and a control computer. Results revealed that the presented algorithm offered better performance in both cases. The step-size update based on the residual error and its temporal integration was shown to resolve the severe local lock-in problem of the SPGD algorithm used in high-resolution adaptive optics.
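
    A single SPGD iteration with a PI-adapted gain of the kind described above might look like the following sketch; the gains, perturbation amplitude and the exact way the residual enters the step size are illustrative assumptions rather than the paper's control law.

        import numpy as np

        def sppid_step(u, metric, goal, state, gamma0=0.5, kp=0.2, ki=0.05, sigma=0.05):
            """One stochastic parallel gradient-descent step with a PI-scaled gain.

            u      : corrector coefficients (e.g. influence-function weights on the SLM)
            metric : callable returning the measured image-quality metric for given u
            goal   : reference value of the metric; state holds the running error integral
            """
            err = goal - metric(u)
            state["int_err"] = state.get("int_err", 0.0) + err
            gamma = gamma0 * (1.0 + kp * err + ki * state["int_err"])   # adaptive step size

            du = sigma * np.random.choice([-1.0, 1.0], size=u.shape)    # parallel random perturbation
            dJ = metric(u + du) - metric(u - du)                        # two-sided metric difference
            return u + gamma * dJ * du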

  10. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which are referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  11. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. Then these endmember spectra are projected sequentially on the HIP spectral engine in sync with the projection of the abundance images on the HIP spatial engine, during the singleframe exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.

  12. Decomposition of Large Scale Semantic Graphsvia an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    's decomposition algorithm, much more efficiently, leading to significantly reduced computation time. Test runs on a desktop computer have shown reductions of up to 89%. Our focus this year has been on the implementation of parallel graph clustering on one of LLNL's supercomputers. In order to achieve efficiency in parallel computing, we have exploited the fact that large semantic graphs tend to be sparse, comprising loosely connected dense node clusters. When implemented on distributed memory computers, our approach performed well on several large graphs with up to one billion nodes, as shown in Table 2. The rightmost column of Table 2 contains the associated Newman's modularity [1], a metric that is widely used to assess the quality of community structure. Existing algorithms produce results that merely approximate the optimal solution, i.e., maximum modularity. We have developed a verification tool for decomposition algorithms, based upon a novel integer linear programming (ILP) approach, that computes an exact solution. We have used this ILP methodology to find the maximum modularity and corresponding optimal community structure for several well-studied graphs in the literature (e.g., Figure 1) [3]. The above approaches assume that modularity is the best measure of quality for community structure. In an effort to enhance this quality metric, we have also generalized Newman's modularity based upon an insightful random walk interpretation that allows us to vary the scope of the metric. Generalized modularity has enabled us to develop new, more flexible versions of our algorithms. In developing these methodologies, we have made several contributions to both graph theoretic algorithms and software engineering. We have written two research papers for refereed publication [3-4] and are working on another one [5]. In addition, we have presented our research findings at three academic and professional conferences.
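
    For reference, Newman's modularity itself is a short computation once a community assignment is given; the sketch below evaluates Q for an undirected (possibly weighted) adjacency matrix and is independent of any particular decomposition or verification method discussed above.

        import numpy as np

        def modularity(A, communities):
            """Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
            A = np.asarray(A, dtype=float)
            k = A.sum(axis=1)                       # (weighted) degrees
            two_m = k.sum()                         # total edge weight counted both ways
            labels = np.asarray(communities)
            same = labels[:, None] == labels[None, :]
            return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

        # tiny example: two triangles joined by a single edge
        A = np.zeros((6, 6))
        for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
            A[i, j] = A[j, i] = 1
        print(modularity(A, [0, 0, 0, 1, 1, 1]))    # ~0.357 for the natural two-community split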

  13. CME Prediction Using SDO, SoHO, and STEREO data with a Machine Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Bobra, M.; Ilonidis, S.

    2015-12-01

    It is unclear whether a flaring active region will also produce a Coronal Mass Ejection (CME). Usually, active regions that produce large flares will also produce a CME, but this is not always the case. For example, the largest active region from the last 24 years, NOAA Active Region 12192 of October 2014, produced many X-class flares but not a single CME. We attempt to forecast whether an active region that produces an M- or X-class flare will also produce a CME. We do this by analyzing data from three solar observatories -- SDO, STEREO, and SoHO -- using a machine-learning algorithm. We find that the horizontal component of the photospheric magnetic field plays a crucial role in driving a CME, a result corroborated by Sun et al. (2015). We present the success rate of our method and the potential applications to space weather forecasts.

  14. A family of algorithms for computing consensus about node state from network data.

    PubMed

    Brush, Eleanor R; Krakauer, David C; Flack, Jessica C

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes-from ranking websites to determining critical species in ecosystems-yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that that how an algorithm measures consensus-through breadth or depth- impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the

  16. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    Using a GA to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult because the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which affect the efficiency and quality of the algorithm. This paper presents an improved MKGA that includes three operators: preprocessing, bid insertion, and exchange recombination, and that uses a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA outperforms the SGA in required population size and computation. Problems that the traditional branch-and-bound algorithm finds hard to solve can be solved by the improved MKGA with better results.

  17. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  18. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and Other Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2012-01-01

    The lightning jump algorithm has a robust history in correlating upward trends in lightning to severe and hazardous weather occurrence. The algorithm exploits the physical relationship between an updraft's ability to produce the microphysical and kinematic conditions conducive to electrification and that updraft's role in the development of severe weather conditions. Recent work has demonstrated that the lightning jump algorithm concept holds significant promise in the operational realm, aiding in the identification of thunderstorms that have the potential to produce severe or hazardous weather. However, a large amount of work still needs to be completed in spite of these positive results. The total lightning jump algorithm is not a stand-alone concept that can be used independent of other meteorological measurements, parameters, and techniques. For example, the algorithm is highly dependent upon thunderstorm tracking to build lightning histories on convective cells. Current tracking methods show that thunderstorm cell tracking is most reliable and cell histories are most accurate when radar information is incorporated with lightning data. In the absence of radar data, the cell tracking is somewhat less reliable, but the value added by the lightning information is much greater. For optimal application, the algorithm should be integrated with other measurements that assess storm scale properties (e.g., satellite, radar). Therefore, the recent focus of this research effort has been assessing the lightning jump's relation to thunderstorm tracking, meteorological parameters, and its potential uses in operational meteorology. Furthermore, the algorithm must be tailored for the optically-based GOES-R Geostationary Lightning Mapper (GLM), as what has been observed using Very High Frequency Lightning Mapping Array (VHF LMA) measurements will not exactly translate to what will be observed by GLM due to resolution and other instrument differences. Herein, we present some of

  19. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard that uses the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
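
    The report itself concerns a portable C implementation; purely as a hedged illustration of what the algorithm produces, the snippet below computes the SHA-1 digest of the standard test message "abc" with Python's built-in hashlib and compares it to the published FIPS 180-1 test vector.

    ```python
    # Quick sanity check of SHA-1 output using Python's standard library
    # (this is not the report's C code).
    import hashlib

    digest = hashlib.sha1(b"abc").hexdigest()
    print(digest)
    # Published test vector for the message "abc":
    assert digest == "a9993e364706816aba3e25717850c26c9cd0d89d"
    ```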

  20. CARVE--a constructive algorithm for real-valued examples.

    PubMed

    Young, S; Downs, T

    1998-01-01

    A constructive neural-network algorithm is presented. For any consistent classification task on real-valued training vectors, the algorithm constructs a feedforward network with a single hidden layer of threshold units which implements the task. The algorithm, which we call CARVE, extends the "sequential learning" algorithm of Marchand et al. from Boolean inputs to the real-valued input case, and uses convex hull methods for the determination of the network weights. The algorithm is an efficient training scheme for producing near-minimal network solutions for arbitrary classification tasks. The algorithm is applied to a number of benchmark problems including Gorman and Sejnowski's sonar data, the Monks problems and Fisher's iris data. A significant application of the constructive algorithm is in providing an initial network topology and initial weights for other neural-network training schemes and this is demonstrated by application to backpropagation.

  1. The First Results of Testing Methods and Algorithms for Automatic Real Time Identification of Waveforms Introduction from Local Earthquakes in Increased Level of Man-induced Noises for the Purposes of Ultra-short-term Warning about an Occurred Earthquake

    NASA Astrophysics Data System (ADS)

    Gravirov, V. V.; Kislov, K. V.

    2009-12-01

    The chief hazard posed by earthquakes consists in their suddenness. The number of earthquakes annually recorded is in excess of 100,000; of these, over 1000 are strong ones. Great human losses usually occur because no devices exist for advance warning of earthquakes. It is therefore high time that automatic mobile information systems be developed for the analysis of seismic information at high levels of manmade noise. The systems should be operated in real time with the minimum possible computational delays and be able to make fast decisions. The chief statement of the project is that sufficiently complete information about an earthquake can be obtained in real time by examining its first onset as recorded by a single seismic sensor or a local seismic array. The essential difference from the existing systems consists in the following: analysis of local seismic data at high levels of manmade noise (that is, when the noise level may be above the seismic signal level), as well as self-contained operation. The algorithms developed during the project can be used successfully in individual personal protection kits and for warning the population in earthquake-prone areas around the world. The system being developed for this project uses P and S waves as well. The difference in the velocities of these seismic waves permits a technique to be developed for identifying a damaging earthquake. Real time analysis of first onsets yields the time that remains before surface waves arrive and the damage potential of these waves. Estimates show that, when the distance between the earthquake epicenter and the monitored site is on the order of 200 km, the time difference between the arrivals of P waves and surface waves will be about 30 seconds, which is quite sufficient to evacuate people from potentially hazardous spaces, insert moderators at nuclear power stations, interlock pipelines, stop transportation, and issue warnings to rescue services
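
    The roughly 30-second warning window quoted above follows directly from the travel-time difference between wave types. The sketch below reproduces that back-of-envelope estimate; the wave speeds (P waves about 7 km/s, surface waves about 3.5 km/s) are assumed typical crustal values, not figures taken from the abstract.

    ```python
    # Rough estimate of the warning window at 200 km epicentral distance,
    # using assumed representative wave speeds.
    distance_km = 200.0
    v_p = 7.0          # km/s, typical crustal P-wave speed (assumed)
    v_surface = 3.5    # km/s, typical surface-wave speed (assumed)

    t_p = distance_km / v_p              # ~28.6 s after origin time
    t_surface = distance_km / v_surface  # ~57.1 s after origin time
    print(f"Warning window: {t_surface - t_p:.1f} s")  # ~28.6 s, i.e. about 30 s
    ```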

  2. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.

  3. A genetic algorithm to reduce stream channel cross section data

    USGS Publications Warehouse

    Berenbrock, C.

    2006-01-01

    A genetic algorithm (GA) was used to reduce cross section data for a hypothetical example consisting of 41 data points and for 10 cross sections on the Kootenai River. The number of data points for the Kootenai River cross sections ranged from about 500 to more than 2,500. The GA was applied to reduce the number of data points to a manageable dataset because most models and other software require fewer than 100 data points for management, manipulation, and analysis. Results indicated that the program successfully reduced the data. Fitness values from the genetic algorithm were lower (better) than those in a previous study that used standard procedures of reducing the cross section data. On average, fitnesses were 29 percent lower, and several were about 50 percent lower. Results also showed that cross sections produced by the genetic algorithm were representative of the original section and that near-optimal results could be obtained in a single run, even for large problems. Other data also can be reduced in a method similar to that for cross section data.

  4. Inconsistent Denoising and Clustering Algorithms for Amplicon Sequence Data.

    PubMed

    Koskinen, Kaisa; Auvinen, Petri; Björkroth, K Johanna; Hultman, Jenni

    2015-08-01

    Natural microbial communities have been studied for decades using the 16S rRNA gene as a marker. In recent years, the application of second-generation sequencing technologies has revolutionized our understanding of the structure and function of microbial communities in complex environments. Using these highly parallel techniques, detailed descriptions of community characteristics can be constructed, and even the rare biosphere can be detected. The new approaches carry numerous advantages and avoid many of the features that skewed results obtained with traditional techniques, but we still face serious bias and a lack of reliable comparability between produced results. Here, we contrasted publicly available amplicon sequence data analysis algorithms using two different data sets: one with a defined clone-based structure, and one from a well-studied food spoilage community. We aimed to assess which software and parameters produce results that resemble the benchmark community best, how large the differences between methods are, and whether these differences are statistically significant. The results suggest that commonly accepted denoising and clustering methods used in different combinations produce significantly different outcomes: the clustering method greatly impacts the number of operational taxonomic units (OTUs), while the denoising algorithm has more influence on taxonomic affiliations. The magnitude of the difference in OTU numbers was up to 40-fold, and the disparity between results seemed highly dependent on the community structure and diversity. Statistically significant differences in taxonomies between methods were seen even at the phylum level. However, the application of an effective denoising method seemed to even out the differences produced by clustering. PMID:25525895

  5. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  6. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    PubMed

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrade the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such kinds of complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark and newly designed MOPs with complex POF shapes in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
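
    For readers unfamiliar with MOEA/D, the sketch below illustrates the decomposition idea it builds on: the Tchebycheff scalarization turns a multiobjective problem into one scalar subproblem per weight vector. This is the generic textbook construction, not the improved two-phase algorithm of the paper; the objective vectors, weights, and ideal point are invented.

    ```python
    # Generic Tchebycheff scalarization used by MOEA/D-style decomposition.
    import numpy as np

    def tchebycheff(f_values, weights, z_star):
        """Scalar fitness of objective vector f(x) for one weight vector.
        weights: subproblem weight vector; z_star: current ideal point."""
        return np.max(weights * np.abs(f_values - z_star))

    # Two candidate solutions competing on one bi-objective subproblem
    f_a = np.array([0.2, 0.9])
    f_b = np.array([0.5, 0.5])
    w = np.array([0.5, 0.5])
    z = np.array([0.0, 0.0])
    print(tchebycheff(f_a, w, z), tchebycheff(f_b, w, z))  # 0.45 vs 0.25: f_b wins
    ```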

  7. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and component properties and representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  8. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously in an adaptive process; (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate that the approach provides effective optimization, considerable convergence speed, and high accuracy for the DNA computing algorithm.
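
    As background for the tuning step, the sketch below shows one generic QPSO position update (a mean-best attractor with a contraction-expansion term). It is a sketch of the optimizer only, not the paper's full adaptive DNA-computing loop; the particle count, dimensionality, and beta coefficient are illustrative assumptions.

    ```python
    # One generic QPSO update step (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)

    def qpso_step(positions, pbest, gbest, beta=0.75):
        """positions, pbest: (n, dim) arrays; gbest: (dim,) array."""
        n, dim = positions.shape
        mbest = pbest.mean(axis=0)                      # mean of personal bests
        phi = rng.random((n, dim))
        attractor = phi * pbest + (1.0 - phi) * gbest   # per-particle attractor
        u = rng.uniform(1e-12, 1.0, (n, dim))
        sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
        return attractor + sign * beta * np.abs(mbest - positions) * np.log(1.0 / u)

    # Toy usage: 5 particles tuning 3 DNA-computing parameters scaled to [0, 1]
    x = rng.random((5, 3))
    pbest, gbest = x.copy(), x[0]
    x = qpso_step(x, pbest, gbest)
    print(x.shape)
    ```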

  9. A seed-based plant propagation algorithm: the feeding station model.

    PubMed

    Sulaiman, Muhammad; Salhi, Abdellah

    2015-01-01

    The seasonal production of fruit and seeds is akin to opening a feeding station, such as a restaurant. Agents coming to feed on the fruit are like customers attending the restaurant; they arrive at a certain rate and get served at a certain rate following some appropriate processes. The same applies to birds and animals visiting and feeding on ripe fruit produced by plants such as the strawberry plant. This phenomenon underpins the seed dispersion of the plants. Modelling it as a queuing process results in a seed-based search/optimisation algorithm. This variant of the Plant Propagation Algorithm is described, analysed, tested on nontrivial problems, and compared with well established algorithms. The results are included.

  10. Generation of Referring Expressions: Assessing the Incremental Algorithm

    ERIC Educational Resources Information Center

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  11. An Algorithm for Generating Structural Surrogates of English Text.

    ERIC Educational Resources Information Center

    Strong, Suzanne Marvin

    An algorithm for generating a structural syntactic surrogate of English text is defined in this paper. The "performance" of the surrogate is judged empirically according to adherence to the following criteria: (1) The surrogate is an organized representation of natural language text; (2) The algorithm which produces the surrogate is equally…

  12. A constraint consensus memetic algorithm for solving constrained optimization problems

    NASA Astrophysics Data System (ADS)

    Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.

    2014-11-01

    Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanism used for constraint handling with evolutionary algorithms mainly assists the selection process, but not the actual search process. In this article, first a genetic algorithm is combined with a class of search methods, known as constraint consensus methods, that assist infeasible individuals to move towards the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to solve a practical economic load dispatch problem, where it also shows superior performance over other algorithms.

  13. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying the Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
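
    A minimal sketch of the identification objective is given below: simulate the Bouc-Wen hysteretic variable z(t) for candidate parameters and score the misfit against reference data. The ODE form, Euler integration, and sum-of-squares error are standard choices assumed here; none of the parameter values come from the paper, and the optimizer itself (the Modified Firefly Algorithm) is not shown.

    ```python
    # Sketch of a Bouc-Wen misfit objective that a firefly-type optimizer
    # could minimize (illustrative assumptions throughout).
    import numpy as np

    def bouc_wen_z(x, dt, A, beta, gamma, n):
        """Euler integration of dz/dt = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n."""
        z = np.zeros_like(x)
        for k in range(1, len(x)):
            dx = (x[k] - x[k - 1]) / dt
            dz = (A * dx
                  - beta * abs(dx) * abs(z[k - 1]) ** (n - 1) * z[k - 1]
                  - gamma * dx * abs(z[k - 1]) ** n)
            z[k] = z[k - 1] + dz * dt
        return z

    def misfit(params, x, dt, z_measured):
        A, beta, gamma, n = params
        return np.sum((bouc_wen_z(x, dt, A, beta, gamma, n) - z_measured) ** 2)

    t = np.linspace(0.0, 10.0, 2001)
    x = np.sin(2 * np.pi * 0.5 * t)                         # imposed displacement
    z_ref = bouc_wen_z(x, t[1] - t[0], 1.0, 0.5, 0.5, 1.0)  # synthetic "measured" data
    print(misfit((0.9, 0.4, 0.6, 1.0), x, t[1] - t[0], z_ref))
    ```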

  14. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    ERIC Educational Resources Information Center

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  15. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434

  16. A systematic comparison of genome-scale clustering algorithms

    PubMed Central

    2012-01-01

    Background A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance to algorithm selection, development and implementation. Most prior work on comparative clustering evaluation has focused on parametric methods. Graph theoretical methods are recent additions to the tool set for the global analysis and decomposition of microarray co-expression matrices that have not generally been included in earlier methodological comparisons. In the present study, a variety of parametric and graph theoretical clustering algorithms are compared using well-characterized transcriptomic data at a genome scale from Saccharomyces cerevisiae. Methods For each clustering method under study, a variety of parameters were tested. Jaccard similarity was used to measure each cluster's agreement with every GO and KEGG annotation set, and the highest Jaccard score was assigned to the cluster. Clusters were grouped into small, medium, and large bins, and the Jaccard scores of the top five scoring clusters in each bin were averaged and reported as the best average top 5 (BAT5) score for the particular method. Results Clusters produced by each method were evaluated based upon the positive match to known pathways. This produces a readily interpretable ranking of the relative effectiveness of clustering on the genes. Methods were also tested to determine whether they were able to identify clusters consistent with those identified by other clustering methods. Conclusions Validation of clusters against known gene classifications demonstrates that for this data, graph-based techniques outperform conventional clustering approaches, suggesting that further
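
    To make the scoring step concrete, the toy sketch below computes the Jaccard similarity between one cluster and a few annotation sets and keeps the best match, as described above. The gene identifiers and annotation memberships are invented for illustration; this is not the study's code.

    ```python
    # Jaccard similarity of a cluster against annotation sets (toy data).
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    cluster = {"YAL001C", "YAL002W", "YAL003W"}           # hypothetical cluster
    annotations = {                                        # hypothetical gene sets
        "GO:example_process": {"YAL003W", "YAL005C", "YAL007C"},
        "KEGG:example_pathway": {"YAL001C", "YAL002W", "YAL040C"},
    }
    best = max(annotations, key=lambda k: jaccard(cluster, annotations[k]))
    print(best, round(jaccard(cluster, annotations[best]), 2))  # KEGG:example_pathway 0.5
    ```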

  17. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  18. Network-Control Algorithm

    NASA Technical Reports Server (NTRS)

    Chan, Hak-Wai; Yan, Tsun-Yee

    1989-01-01

    Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.

  19. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent hierarchical and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  20. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.

  1. New formulations of monotonically convergent quantum control algorithms

    NASA Astrophysics Data System (ADS)

    Maday, Yvon; Turinici, Gabriel

    2003-05-01

    Most numerical simulations in quantum (bilinear) control have used one of the monotonically convergent algorithms of Krotov (introduced by Tannor et al.) or of Zhu and Rabitz. However, until now no explicit relationship had been revealed between the two algorithms that would explain their common properties. Within this framework, we propose in this paper a unified formulation that comprises both algorithms and that extends to a new class of monotonically convergent algorithms. Numerical results show that the newly derived algorithms behave as well as (and sometimes better than) the well-known algorithms cited above.

  2. An algorithm to estimate the object support in truncated images

    SciTech Connect

    Hsieh, Scott S.; Nett, Brian E.; Cao, Guangzhi; Pelc, Norbert J.

    2014-07-15

    Purpose: Truncation artifacts in CT occur if the object to be imaged extends past the scanner field of view (SFOV). These artifacts impede diagnosis and could possibly introduce errors in dose plans for radiation therapy. Several approaches exist for correcting truncation artifacts, but existing correction algorithms do not accurately recover the skin line (or support) of the patient, which is important in some dose planning methods. The purpose of this paper was to develop an iterative algorithm that recovers the support of the object. Methods: The authors assume that the truncated portion of the image is made up of soft tissue of uniform CT number and attempt to find a shape consistent with the measured data. Each known measurement in the sinogram is interpreted as an estimate of missing mass along a line. An initial estimate of the object support is generated by thresholding a reconstruction made using a previous truncation artifact correction algorithm (e.g., water cylinder extrapolation). This object support is iteratively deformed to reduce the inconsistency with the measured data. The missing data are estimated using this object support to complete the dataset. The method was tested on simulated and experimentally truncated CT data. Results: The proposed algorithm produces a better defined skin line than water cylinder extrapolation. On the experimental data, the RMS error of the skin line is reduced by about 60%. For moderately truncated images, some soft tissue contrast is retained near the SFOV. As the extent of truncation increases, the soft tissue contrast outside the SFOV becomes unusable although the skin line remains clearly defined, and in reformatted images it varies smoothly from slice to slice as expected. Conclusions: The support recovery algorithm provides a more accurate estimate of the patient outline than thresholded, basic water cylinder extrapolation, and may be preferred in some radiation therapy applications.

  3. Contextual classification of multispectral image data: Approximate algorithm

    NASA Technical Reports Server (NTRS)

    Tilton, J. C. (Principal Investigator)

    1980-01-01

    An approximation to a classification algorithm incorporating spatial context information in a general, statistical manner is presented which is computationally less intensive. Classifications that are nearly as accurate are produced.

  4. A new approach to optic disc detection in human retinal images using the firefly algorithm.

    PubMed

    Rahebi, Javad; Hardalaç, Fırat

    2016-03-01

    There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the use of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific light intensity, or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one insect is selected as the most attractive, and this insect represents the optimum response to the problem in question. Here, we used the intensity of the retinal image pixels in place of the fireflies' light intensity. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in a retinal image, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results show that the proposed algorithm achieves an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset, revealing the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The average time required to detect the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.
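
    For context, the sketch below implements the generic firefly movement rule (distance-attenuated attraction toward brighter fireflies plus a small random step), with brightness standing in for local pixel intensity. It is a sketch of the metaheuristic under assumed coefficient values, not the authors' retinal-image pipeline.

    ```python
    # Generic firefly movement step (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(1)

    def firefly_step(pos, brightness, beta0=1.0, gamma=0.01, alpha=0.05):
        """pos: (n, 2) candidate pixel locations; brightness: (n,) intensities."""
        new_pos = pos.copy()
        for i in range(len(pos)):
            for j in range(len(pos)):
                if brightness[j] > brightness[i]:
                    r2 = np.sum((pos[j] - pos[i]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)     # distance-attenuated pull
                    new_pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(2) - 0.5)
        return new_pos

    pos = rng.random((10, 2)) * 100.0      # toy candidate locations in a 100x100 image
    brightness = rng.random(10)            # stand-in for local pixel intensity
    print(firefly_step(pos, brightness)[:2])
    ```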

  5. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  7. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.

    PubMed

    Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan

    2015-01-01

    In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm that is based on a recursive Kalman filter and employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City was employed. By comparing the forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of an eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
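
    The recursive forecasting idea behind KF-BCLF can be illustrated with a scalar Kalman filter: predict the channel load, then correct the prediction with the newest measurement. The sketch below uses a simple random-walk state model with assumed noise variances; the paper's actual state equation is a multiple regression and is not reproduced here.

    ```python
    # Scalar predict/update Kalman recursion (illustrative only).
    def kalman_step(x_est, p_est, z, q=0.01, r=0.1):
        """x_est, p_est: current estimate and variance; z: new load measurement."""
        # Predict (random-walk model: state carried forward, uncertainty grows)
        x_pred, p_pred = x_est, p_est + q
        # Update with the measured channel load
        k = p_pred / (p_pred + r)            # Kalman gain
        x_new = x_pred + k * (z - x_pred)
        p_new = (1.0 - k) * p_pred
        return x_new, p_new

    x, p = 0.5, 1.0                              # initial load estimate and variance
    for z in [0.52, 0.61, 0.58, 0.70, 0.74]:     # made-up channel-load measurements
        x, p = kalman_step(x, p, z)
    print(round(x, 3))                           # forecast used to pre-adjust power
    ```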

  8. Comparison of human observer and algorithmic target detection in nonurban forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.

    2005-07-01

    We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.

  9. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs

    PubMed Central

    Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan

    2015-01-01

    In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm that is based on a recursive Kalman filter and employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City was employed. By comparing the forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of an eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042

  10. A composition algorithm based on crossmodal taste-music correspondences

    PubMed Central

    Mesz, Bruno; Sigman, Mariano; Trevisan, Marcos A.

    2012-01-01

    While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In a recent work, we found that taste words were consistently mapped to musical parameters. Bitter is associated with low-pitched and continuous music (legato), salty is characterized by silences between notes (staccato), sour is high pitched, dissonant and fast and sweet is consonant, slow and soft (Mesz et al., 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste-words. We developed and implemented an algorithm that exploits a large corpus of classic and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. In order to test the capability of the produced music to elicit significant associations with the different tastes, musical pieces were produced and judged by a group of non-musicians. Results showed that participants could decode well above chance the taste-word of the composition. We also discuss how our findings can be expressed in a performance bridging music and cognitive science. PMID:22557952

  11. A composition algorithm based on crossmodal taste-music correspondences.

    PubMed

    Mesz, Bruno; Sigman, Mariano; Trevisan, Marcos A

    2012-01-01

    While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In a recent work, we found that taste words were consistently mapped to musical parameters. Bitter is associated with low-pitched and continuous music (legato), salty is characterized by silences between notes (staccato), sour is high pitched, dissonant and fast and sweet is consonant, slow and soft (Mesz et al., 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste-words. We developed and implemented an algorithm that exploits a large corpus of classic and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. In order to test the capability of the produced music to elicit significant associations with the different tastes, musical pieces were produced and judged by a group of non-musicians. Results showed that participants could decode well above chance the taste-word of the composition. We also discuss how our findings can be expressed in a performance bridging music and cognitive science.

  12. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Because malaria requires prompt and accurate diagnosis, the current study proposes an unsupervised pixel segmentation based on a clustering algorithm to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on a clustering algorithm is applied to the intensity component of the malaria image to segment the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and to remove small unwanted background regions. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that still appear in the image and that, because of their size, cannot be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms is analyzed qualitatively and quantitatively by comparing the proposed cascaded clustering algorithm with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy compared to the segmentation results provided by the MKM and FCM algorithms.
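
    As a point of reference for the clustering stage, the sketch below runs a bare-bones k-means on a one-dimensional intensity channel with three assumed classes (parasite, cell, background). It is not the cascaded MKM+FCM method of the paper; the synthetic intensity values are invented.

    ```python
    # Plain k-means on a 1-D intensity channel (illustrative sketch).
    import numpy as np

    def kmeans_1d(values, k=3, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = rng.choice(values, size=k, replace=False).astype(float)
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = values[labels == c].mean()
        return labels, centers

    rng = np.random.default_rng(1)
    intensity = np.concatenate([
        rng.normal(40, 5, 200),     # dark parasite pixels (synthetic)
        rng.normal(120, 10, 500),   # red blood cell pixels (synthetic)
        rng.normal(220, 8, 300),    # bright background pixels (synthetic)
    ]).clip(0, 255)
    labels, centers = kmeans_1d(intensity, k=3)
    print(np.sort(centers).round(1))   # three intensity cluster centres
    ```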

  13. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    SciTech Connect

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (approximately 3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
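
    A back-of-envelope calculation makes the magnitude of the slope effect plausible: the gravitational term alone contributes m*g*(grade*distance) of energy per mile climbed. The sketch below assumes a vehicle mass of 1800 kg and a lossless drivetrain; neither figure is taken from the thesis.

    ```python
    # Gravitational energy per mile for a steady grade (assumed vehicle mass).
    mass_kg = 1800.0            # assumed PHEV mass
    g = 9.81                    # m/s^2
    grade = 0.02                # 2% slope
    meters_per_mile = 1609.34

    climb_m = grade * meters_per_mile
    extra_wh_per_mile = mass_kg * g * climb_m / 3600.0
    print(round(extra_wh_per_mile, 1))   # roughly 158 Wh/mile at a 2% grade
    ```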

  14. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
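
    A toy sketch of the three-step "direct method" described above is given below; the word combinations, codes, and invalid-combination rule are invented for illustration and do not reflect the Survey's actual coding tables.

    ```python
    # Toy three-step lithology coder (hypothetical lookup and rules).
    lookup = {"fine sand": "ZF", "coarse sand": "ZC", "clay": "K", "sand": "Z"}
    bad_pairs = {("ZF", "ZC")}              # combinations rejected in step 3

    def encode(description):
        text = description.lower()
        codes = []
        # Step 1: search for defined word combinations, longest phrases first
        for phrase in sorted(lookup, key=len, reverse=True):
            if phrase in text:
                codes.append(lookup[phrase])
                text = text.replace(phrase, " ")
        # Step 2: delete duplicated codes (keep first occurrence)
        codes = list(dict.fromkeys(codes))
        # Step 3: drop codes that form an invalid combination
        return [c for c in codes if all((c, d) not in bad_pairs for d in codes)]

    print(encode("Fine sand with some clay"))   # ['ZF', 'K']
    ```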

  15. A Learning Algorithm for Multimodal Grammar Inference.

    PubMed

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.

  16. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.

  17. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at ``massively parallel'' screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of ``rational'' drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems-formulated as (chemical) graph reconstruction problems-related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  18. Influence of image resolution and evaluation algorithm on estimates of the lacunarity of porous media.

    PubMed

    Pendleton, D E; Dathe, A; Baveye, P

    2005-10-01

    In recent years, experience has demonstrated that the classical fractal dimensions are not sufficient to describe uniquely the interstitial geometry of porous media. At least one additional index or dimension is necessary. Lacunarity, a measure of the degree to which a data set is translationally invariant, is a possible candidate. Unfortunately, several approaches exist to evaluate it on the basis of binary images of the object under study, and it is unclear to what extent the lacunarity estimates that these methods produce are dependent on the resolution of the images used. In the present work, the gliding-box algorithm of Allain and Cloitre [Phys. Rev. A 44, 3552 (1991)] and two variants of the sandbox algorithm of Chappard et al. [J. Pathol. 195, 515 (2001)], along with three additional algorithms, are used to evaluate the lacunarity of images of a textbook fractal, the Sierpinski carpet, of scanning electron micrographs of a thin section of a European soil, and of light transmission photographs of a Togolese soil. The results suggest that lacunarity estimates, as well as the ranking of the three tested systems according to their lacunarity, are affected strongly by the algorithm used, by the resolution of the images to which these algorithms are applied, and, at least for three of the algorithms (producing scale-dependent lacunarity estimates), by the scale at which the images are observed. Depending on the conditions under which the estimation of the lacunarity is carried out, lacunarity values range from 1.02 to 2.14 for the three systems tested, and all three of the systems used can be viewed alternatively as the most or the least "lacunar." Some of this indeterminacy and dependence on image resolution is alleviated in the averaged lacunarity estimates yielded by Chappard et al.'s algorithm. Further research will be needed to determine if these lacunarity estimates allow an improved, unique characterization of porous media.
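    Of the algorithms compared above, the gliding-box method of Allain and Cloitre is the simplest to state: slide a box of side r across the binary image, record the occupied "mass" in every box position, and take the ratio of the second moment of the masses to the square of the first. A compact sketch:

```python
import numpy as np

def gliding_box_lacunarity(binary_img, box_size):
    """Gliding-box lacunarity (Allain & Cloitre, 1991): the ratio of the second
    moment to the squared first moment of the box masses recorded while an
    r-by-r box glides over the binary image."""
    img = np.asarray(binary_img, dtype=float)
    r = box_size
    # Sum of pixels inside every r-by-r box via a 2-D cumulative sum (integral image).
    s = np.cumsum(np.cumsum(np.pad(img, ((1, 0), (1, 0))), axis=0), axis=1)
    masses = (s[r:, r:] - s[:-r, r:] - s[r:, :-r] + s[:-r, :-r]).ravel()
    m1, m2 = masses.mean(), (masses ** 2).mean()
    return m2 / m1 ** 2 if m1 > 0 else np.inf

# Usage: lacunarity of a random binary field at a few box sizes.
rng = np.random.default_rng(0)
img = rng.random((256, 256)) < 0.2
print([round(gliding_box_lacunarity(img, r), 3) for r in (2, 4, 8, 16)])
```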

  19. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced, using the generic control Hamiltonians H (r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally-symmetric state of an n-qubit system the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H (r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  20. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and we proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to unique solutions. Because the obtained algorithm exhibits slow convergence speed, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.

  1. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes bit-pattern errors introduced in wireless transmission, if any, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits the ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameter where peaks are important for diagnostic purposes.
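    The sketch below illustrates the general idea of a moving average that is suspended around detected peaks; the crude threshold-based peak detector and all parameters are illustrative assumptions, not the published PRASMMA algorithm:

```python
import numpy as np

def peak_sparing_moving_average(ecg, fs, window_ms=40, qrs_halfwidth_ms=60):
    """Moving-average smoothing that is suspended around detected peaks, in the
    spirit of a 'modified moving average except in the region of each QRS complex'.
    Peak detection and parameters here are simple illustrative assumptions."""
    w = max(1, int(fs * window_ms / 1000))
    half = int(fs * qrs_halfwidth_ms / 1000)

    # Crude R-peak detection: local maxima above an amplitude threshold
    # (a stand-in for a proper QRS detector).
    thr = np.mean(ecg) + 2 * np.std(ecg)
    peaks = [i for i in range(1, len(ecg) - 1)
             if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1]]

    protected = np.zeros(len(ecg), dtype=bool)
    for p in peaks:
        protected[max(0, p - half):p + half + 1] = True

    smoothed = np.convolve(ecg, np.ones(w) / w, mode="same")
    return np.where(protected, ecg, smoothed)   # keep QRS regions untouched
```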

  2. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  3. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  4. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  5. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.

  6. Dosimetric algorithm to reproduce isodose curves obtained from a LINAC.

    PubMed

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained by the use of a new dosimetric algorithm using numerical data from percentage depth dose (PDD) and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with a good approximation. To validate the results, an 18 MV LINAC with its whole geometry and a water phantom were modeled. With this model, the distinct simulations were processed by the MCNPX code, and the PDD and profiles were obtained for all depths of the radiation beam. These data were then used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even when the voxels are as small as a single pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam over a water phantom, considering the PDD and profiles, whose maximum percentage value lies in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days required when the calculation is carried out by Monte Carlo.
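    A minimal sketch of how dose percentages can be reconstructed from tabulated PDD and profile data is given below; it uses the common separable approximation D(x, z) ≈ PDD(z) · OAR(x) with made-up numbers, and is only an illustration of the idea, not the paper's algorithm:

```python
import numpy as np

def dose_percentage(x_cm, z_cm, pdd, profile, depths_cm, offaxis_cm):
    """Relative absorbed dose at off-axis position x and depth z from tabulated
    percentage-depth-dose (PDD) and off-axis profile data, assuming the common
    separable approximation D(x, z) ~ PDD(z) * OAR(x)."""
    pdd_z = np.interp(z_cm, depths_cm, pdd)          # % of maximum dose at depth z
    oar_x = np.interp(x_cm, offaxis_cm, profile)     # off-axis ratio (1.0 on axis)
    return pdd_z * oar_x

# Usage with made-up tabulated data roughly shaped like an 18 MV beam.
depths = np.array([0, 1, 3.2, 5, 10, 20, 30])        # cm; ~3.2 cm build-up maximum
pdd    = np.array([30, 70, 100, 97, 82, 56, 38])     # percent
offax  = np.array([-10, -5, 0, 5, 10])               # cm
prof   = np.array([0.05, 0.98, 1.00, 0.98, 0.05])    # off-axis ratio
print(dose_percentage(x_cm=2.0, z_cm=10.0, pdd=pdd, profile=prof,
                      depths_cm=depths, offaxis_cm=offax))
```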

  7. Dosimetric Algorithm to Reproduce Isodose Curves Obtained from a LINAC

    PubMed Central

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained by the use of a new dosimetric algorithm using numerical data from percentage depth dose (PDD) and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with a good approximation. To validate the results, an 18 MV LINAC with its whole geometry and a water phantom were modeled. With this model, the distinct simulations were processed by the MCNPX code, and the PDD and profiles were obtained for all depths of the radiation beam. These data were then used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even when the voxels are as small as a single pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam over a water phantom, considering the PDD and profiles, whose maximum percentage value lies in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days required when the calculation is carried out by Monte Carlo. PMID:25045398

  8. The "Juggler" algorithm: a hybrid deformable image registration algorithm for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Xia, Junyi; Chen, Yunmei; Samant, Sanjiv S.

    2007-03-01

    Fast deformable registration can potentially facilitate the clinical implementation of adaptive radiation therapy (ART), which allows for daily organ deformations not accounted for in radiotherapy treatment planning, which typically utilizes a static organ model, to be incorporated into the fractionated treatment. Existing deformable registration algorithms typically utilize a specific diffusion model, and require a large number of iterations to achieve convergence. This limits the online applications of deformable image registration for clinical radiotherapy, such as daily patient setup variations involving organ deformation, where high registration precision is required. We propose a hybrid algorithm, the "Juggler", based on a multi-diffusion model to achieve fast convergence. The Juggler achieves fast convergence by applying two different diffusion models: i) one being optimized quickly for matching high gradient features, i.e. bony anatomies; and ii) the other being optimized for further matching low gradient features, i.e. soft tissue. The regulation of these 2 competing criteria is achieved using a threshold of a similarity measure, such as cross correlation or mutual information. A multi-resolution scheme was applied for faster convergence involving large deformations. Comparisons of the Juggler algorithm were carried out with demons method, accelerated demons method, and free-form deformable registration using 4D CT lung imaging from 5 patients. Based on comparisons of difference images and similarity measure computations, the Juggler produced a superior registration result. It achieved the desired convergence within 30 iterations, and typically required <90sec to register two 3D image sets of size 256×256×40 using a 3.2 GHz PC. This hybrid registration strategy successfully incorporates the benefits of different diffusion models into a single unified model.

  9. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem. PMID:15355604

  10. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem.

  11. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and of the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  12. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.

  13. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, a fetal heart rate baseline correction algorithm was presented to make the existing baseline more accurate and fit to the tracings. Firstly, the deviation of the existing FHR baseline was found and corrected. And then a new baseline was obtained finally after treatment with some smoothing methods. To assess the performance of FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined baseline estimation algorithm and the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm did well in both accuracy and efficiency. And the results also proved the effectiveness of the FHR baseline correction algorithm.

  14. A Revision of the NASA Team Sea Ice Algorithm

    NASA Technical Reports Server (NTRS)

    Markus, T.; Cavalieri, Donald J.

    1998-01-01

    In a recent paper, two operational algorithms to derive ice concentration from satellite multichannel passive microwave sensors have been compared. Although the results of these, known as the NASA Team algorithm and the Bootstrap algorithm, have been validated and are generally in good agreement, there are areas where the ice concentrations differ by up to 30%. These differences can be explained by shortcomings in one or the other algorithm. Here, we present an algorithm which, in addition to the 19 and 37 GHz channels used by both the Bootstrap and NASA Team algorithms, makes use of the 85 GHz channels. Atmospheric effects, particularly at 85 GHz, are reduced by using a forward atmospheric radiative transfer model. Comparisons with the NASA Team and Bootstrap algorithms show that the individual shortcomings of these algorithms are not apparent in this new approach. The results further show better quantitative agreement with ice concentrations derived from NOAA AVHRR infrared data.
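    For orientation, the sketch below computes the polarization ratio (PR) and spectral gradient ratio (GR) that NASA Team style algorithms form from the 19 and 37 GHz brightness temperatures; the tie-point step that maps these ratios to ice concentration, and the 85 GHz radiative transfer correction described above, are omitted:

```python
import numpy as np

def nasa_team_ratios(tb19v, tb19h, tb37v):
    """Polarization ratio (PR) and spectral gradient ratio (GR) used by NASA Team
    style sea-ice algorithms, from 19 and 37 GHz brightness temperatures (K)."""
    pr = (tb19v - tb19h) / (tb19v + tb19h)
    gr = (tb37v - tb19v) / (tb37v + tb19v)
    return pr, gr

# Open water has a high PR; consolidated ice has a low PR. A full retrieval maps
# (PR, GR) through tie points for open water, first-year and multiyear ice; that
# step, and the revised algorithm's 85 GHz handling, are not shown here.
pr, gr = nasa_team_ratios(tb19v=252.0, tb19h=232.0, tb37v=246.0)
print(round(pr, 3), round(gr, 3))
```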

  15. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved standard bat algorithm, where our modifications add some elements from the differential evolution and from the artificial bee colony algorithm. Our new proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving quality of results in all cases and significantly improving convergence speed. PMID:25165733

  16. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.

  17. Exploring Students' Conceptual Understanding of the Averaging Algorithm.

    ERIC Educational Resources Information Center

    Cai, Jinfa

    1998-01-01

    Examines 250 sixth-grade students' understanding of arithmetic average by assessing their understanding of the computational algorithm. Results indicate that the majority of the students knew the "add-them-all-up-and-divide" averaging algorithm, but only half of the students were able to correctly apply the algorithm to solve a…

  18. Feedback algorithm for simulation of multi-segmented cracks

    SciTech Connect

    Chady, T.; Napierala, L.

    2011-06-23

    In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.

  19. A hybrid genetic algorithm for resolving closely spaced objects

    NASA Technical Reports Server (NTRS)

    Abbott, R. J.; Lillo, W. E.; Schulenburg, N.

    1995-01-01

    A hybrid genetic algorithm is described for performing the difficult optimization task of resolving closely spaced objects appearing in space based and ground based surveillance data. This application of genetic algorithms is unusual in that it uses a powerful domain-specific operation as a genetic operator. Results of applying the algorithm to real data from telescopic observations of a star field are presented.

  20. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions were examined and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values were surveyed. A Fortran implementation of the synchronization algorithm used in the simulation was also included.

  1. Interdisciplinary Research Produces Results in the Understanding of Planetary Dunes

    NASA Astrophysics Data System (ADS)

    Titus, Timothy N.; Hayward, Rosalyn Kay; Bourke, Mary C.

    2010-08-01

    Second International Planetary Dunes Workshop: Planetary Analogs—Integrating Models, Remote Sensing, and Field Data; Alamosa, Colorado, 18-21 May 2010; Dunes and other eolian bed forms are prominent on several planetary bodies in our solar system. Despite 4 decades of study, many questions remain regarding the composition, age, and origins of these features, as well as the climatic conditions under which they formed. Recently acquired data from orbiters and rovers, together with terrestrial analogs and numerical models, are providing new insights into Martian sand dunes, as well as eolian bed forms on other terrestrial planetary bodies (e.g., Titan). As a means of bringing together terrestrial and planetary researchers from diverse backgrounds with the goal of fostering collaborative interdisciplinary research, the U.S. Geological Survey (USGS), the Carl Sagan Center for the Study of Life in the Universe, the Desert Research Institute, and the U.S. National Park Service held a workshop in Colorado. The small group setting facilitated intensive discussion of problems and issues associated with eolian processes on Earth, Mars, and Titan.

  2. Interdisciplinary research produces results in the understanding of planetary caves

    NASA Astrophysics Data System (ADS)

    Titus, Timothy; Boston, Penelope J.

    2012-05-01

    First International Planetary Cave Research Workshop: Implications for Astrobiology, Climate, Detection, and Exploration; Carlsbad, New Mexico, 25-28 October 2011 With the advent of high-resolution spatial imaging, the idea of caves on other planets has moved from the pages of science fiction into the realm of hard-core science—complete with hypotheses, models, experiments, and observational data. Recently acquired data from spacecraft, together with terrestrial analogs and numerical models, are providing new insights into caves on Earth as well as caves on other terrestrial planetary bodies (e.g., Moon, Mars, and Titan).

  3. A digital system to produce imagery from SAR data. [Synthetic Aperture Radar

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1976-01-01

    This paper describes a digital processing algorithm and its associated system design for producing images from Synthetic Aperture Radar (SAR) data. The proposed system uses the Fast Fourier Transform (FFT) approach to perform the two-dimensional correlation process. The range migration problem, which is often a major obstacle to efficient processing, can be alleviated by approximating the locus of echoes from a point target by several linear segments. SAR data corresponding to each segment is correlated separately, and the results are coherently summed to produce full-resolution images. This processing approach exhibits greatly improved computation efficiency relative to conventional digital processing methods.
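    The core of such a processor, two-dimensional correlation performed in the frequency domain, can be sketched as follows (range-migration handling and the segment-wise coherent summation described above are omitted):

```python
import numpy as np
from scipy.signal import fftconvolve

def fft_correlate_2d(data, reference):
    """Two-dimensional cross-correlation computed as a convolution in the frequency
    domain, the core operation of an FFT-based SAR correlator."""
    return fftconvolve(data, np.conj(reference)[::-1, ::-1], mode="same")

# Usage: correlate simulated echo data against a point-target reference.
rng = np.random.default_rng(1)
reference = np.outer(np.hanning(16), np.hanning(16))
scene = rng.normal(size=(128, 128)) * 0.1
scene[40:56, 70:86] += reference                      # embed one 'target'
image = fft_correlate_2d(scene, reference)
print(np.unravel_index(np.argmax(image), image.shape))  # peak near the target centre
```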

  4. Fourth Order Algorithms for Solving Diverse Many-Body Problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Forbert, Harald A.; Chen, Chia-Rong; Kidwell, Donald W.; Ciftja, Orion

    2001-03-01

    We show that the method of factorizing an evolution operator of the form e^ɛ(A+B) to fourth order with purely positive coefficients yields new classes of symplectic algorithms for solving classical dynamical problems, unitary algorithms for solving the time-dependent Schrödinger equation, norm preserving algorithms for solving the Langevin equation and large time step convergent Diffusion Monte Carlo algorithms. Results for each class of problems will be presented and discussed.

  5. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three degree of freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.

  6. An incremental clustering algorithm based on Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is insufficient to cluster non-spherical or elliptically distributed datasets. The paper replaces the Euclidean distance in classical fuzzy c-means clustering with the Mahalanobis distance and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm not only remedies this defect of the fuzzy c-means algorithm but also increases training accuracy.
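    A simplified sketch of fuzzy c-means with the Euclidean distance replaced by a per-cluster Mahalanobis distance is given below; the incremental-learning part of the proposed algorithm is omitted and the implementation details are assumptions:

```python
import numpy as np

def mahalanobis_sq(X, center, cov):
    """Squared Mahalanobis distance of each row of X to a cluster centre."""
    diff = X - center
    return np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)

def fuzzy_cmeans_mahalanobis(X, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means in which the usual Euclidean distance is replaced by a
    Mahalanobis distance built from each cluster's weighted covariance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.dirichlet(np.ones(c), size=n)            # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist2 = np.empty((n, c))
        for k in range(c):
            diff = X - centers[k]
            cov = (W[:, k][:, None] * diff).T @ diff / W[:, k].sum()
            cov += 1e-6 * np.eye(d)                  # keep the covariance invertible
            dist2[:, k] = mahalanobis_sq(X, centers[k], cov)
        dist2 = np.maximum(dist2, 1e-12)
        # Standard FCM membership update driven by the Mahalanobis distances.
        ratio = (dist2[:, :, None] / dist2[:, None, :]) ** (1.0 / (m - 1))
        U = 1.0 / ratio.sum(axis=2)
    return centers, U
```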

  7. Lensless optical data hiding system based on phase encoding algorithm in the Fresnel domain.

    PubMed

    Chen, Yen-Yu; Wang, Jian-Hong; Lin, Cheng-Chung; Hwang, Hone-Ene

    2013-07-20

    A novel and efficient algorithm based on a modified Gerchberg-Saxton algorithm (MGSA) in the Fresnel domain is presented, together with mathematical derivation, and two pure phase-only masks (POMs) are generated. The algorithm's application to data hiding is demonstrated by a simulation procedure, in which a hidden image/logo is encoded into phase forms. A hidden image/logo can be extracted by the proposed high-performance lensless optical data-hiding system. The reconstructed image shows good quality and the errors are close to zero. In addition, the robustness of our data-hiding technique is illustrated by simulation results. The position coordinates of the POMs as well as the wavelength are used as secure keys that can ensure sufficient information security and robustness. The main advantages of this proposed watermarking system are that it uses fewer iterative processes to produce the masks, and the image-hiding scheme is straightforward.

  8. A comparative study of algorithms for radar imaging from gapped data

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojian; Luan, Ruixue; Jia, Li; Huang, Ying

    2007-09-01

    In ultra wideband (UWB) radar imagery, there are often cases where the radar's operating bandwidth is interrupted due to various reasons, either periodically or randomly. Such interruption produces phase history data gaps, which in turn result in artifacts in the image if conventional image reconstruction techniques are used. The higher level artifacts severely degrade the radar images. In this work, several novel techniques for artifacts suppression in gapped data imaging were discussed. These include: (1) A maximum entropy based gap filling technique using a modified Burg algorithm (MEBGFT); (2) An alternative iteration deconvolution based on minimum entropy (AIDME) and its modified version, a hybrid max-min entropy procedure; (3) A windowed coherent CLEAN algorithm; and (4) Two-dimensional (2-D) periodically-gapped Capon (PG-Capon) and APES (PG-APES) algorithms. Performance of various techniques is comparatively studied.

  9. Stable reduced-order models of generalized dynamical systems using coordinate-transformed Arnoldi algorithms

    SciTech Connect

    Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J.

    1996-12-31

    Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm that gives a numerically stable procedure for finding Pade approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.
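    The basic Arnoldi projection step that such reduced-order modeling builds on can be sketched as follows; the coordinate transformation that the paper uses to guarantee stability is not reproduced here, and the example system is made up:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi iteration: build an orthonormal basis V of the Krylov subspace
    span{b, Ab, ..., A^(m-1) b} together with the small Hessenberg matrix H."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: subspace is exact
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Reduced-order model by projection of x' = Ax + bu, y = c^T x onto m states.
rng = np.random.default_rng(0)
n, m = 200, 10
A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))    # a made-up large, stable-ish system
b = rng.normal(size=n)
c = rng.normal(size=n)
V, H = arnoldi(A, b, m)
A_r, b_r, c_r = V.T @ A @ V, V.T @ b, V.T @ c      # m-by-m reduced system
print(A_r.shape, b_r.shape)
```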

  10. Modeling algorithm execution time on processor arrays

    NASA Technical Reports Server (NTRS)

    Adams, L. M.; Crockett, T. W.

    1984-01-01

    An approach to modelling the execution time of algorithms on parallel arrays is presented. This time is expressed as a function of the number of processors and system parameters. The resulting model has been applied to a parallel implementation of the conjugate-gradient algorithm on NASA's FEM. Results of experiments performed to compare the model predictions against actual behavior show that the floating-point arithmetic, communication, and synchronization components of the parallel algorithm execution time were correctly modelled. The results also show that the overhead caused by the interaction of the system software and the actual parallel hardware must be reflected in the model parameters. The model has been used to predict the performance of the conjugate gradient algorithm on a given problem as the number of processors and machine characteristics varied.

  11. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good or better in prediction ability as the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
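    For reference, the baseline classical least squares (CLS) step that the hybrid and augmented methods build on can be sketched as below; this is only the textbook CLS calibration and prediction, not the ACLS or CLS/PLS algorithms developed in the project:

```python
import numpy as np

def cls_calibrate(C, A):
    """Classical least squares (CLS) calibration: estimate pure-component spectra K
    from known concentrations C (samples x components) and measured spectra A
    (samples x channels) via K = (C^T C)^-1 C^T A."""
    return np.linalg.lstsq(C, A, rcond=None)[0]          # components x channels

def cls_predict(K, A_new):
    """Predict concentrations for new spectra: C_hat = A_new K^T (K K^T)^-1."""
    return np.linalg.lstsq(K.T, A_new.T, rcond=None)[0].T

# Tiny synthetic example (two components, 50 spectral channels).
rng = np.random.default_rng(0)
channels = np.linspace(0, 1, 50)
pure = np.vstack([np.exp(-((channels - 0.3) / 0.05) ** 2),
                  np.exp(-((channels - 0.7) / 0.08) ** 2)])
C_train = rng.uniform(0, 1, size=(20, 2))
A_train = C_train @ pure + 0.001 * rng.normal(size=(20, 50))
K = cls_calibrate(C_train, A_train)
print(cls_predict(K, np.array([[0.2, 0.8]]) @ pure))     # ~ [0.2, 0.8]
```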

  12. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, FAR of 51%, CSI of 41%, and HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
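    The flavor of a 2σ-style jump test can be sketched as follows; the window lengths, the flash-rate bookkeeping, and the exact thresholding differ in the operational configurations, so this is an illustration rather than the evaluated algorithm:

```python
import numpy as np

def sigma_lightning_jump(flash_counts, dt_min=2.0, sigma_level=2.0, history=5):
    """Flag a lightning jump when the current rate of change of the total flash
    rate (DFRDT) exceeds `sigma_level` times the standard deviation of the recent
    DFRDT history. Parameters are illustrative, not the operational settings."""
    rate = np.asarray(flash_counts, dtype=float) / dt_min      # flashes per minute
    dfrdt = np.diff(rate) / dt_min                             # flashes / min^2
    jumps = []
    for t in range(history, len(dfrdt)):
        recent = dfrdt[t - history:t]
        if dfrdt[t] > sigma_level * recent.std(ddof=1):
            jumps.append(t + 1)                                # index into flash_counts
    return jumps

# Usage: a storm whose flash counts ramp up sharply near the end of the series.
counts = [4, 5, 6, 5, 7, 6, 8, 9, 10, 24, 40]
print(sigma_lightning_jump(counts))
```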

  13. Applications and development of new algorithms for displacement analysis using InSAR time series

    NASA Astrophysics Data System (ADS)

    Osmanoglu, Batuhan

    -dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform

  14. Static algorithm based on MPLS and QoS routing

    NASA Astrophysics Data System (ADS)

    Yang, Ting; Sun, Yugeng; Liu, Bin

    2004-04-01

    This paper proposes a new static routing algorithm applying traffic engineering, which integrates Multiprotocol Label Switching (MPLS) and Quality of Service (QoS) routing. By using MPLS, the algorithm applies centralized control to the transmission paths of different service types. At the same time, by selecting LSPs based on the state of the network and the QoS requirements, the algorithm makes resource usage globally optimal. It avoids the shortcoming of traditional routing, in which network congestion is produced by unbalanced resource usage. A unified objective strategy in the algorithm produces effective solutions for the NP-hard problem of satisfying multiple requirements in a single routing computation. Finally, the paper shows through computer simulation and theoretical deduction that the algorithm is feasible and preferable.

  15. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of desired thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to the segmentation algorithm is a good choice due to its fast computational ability. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu's method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm obtains more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
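    A sketch of the overall idea, multilevel Otsu thresholding with the thresholds searched by differential evolution, is given below; SciPy's stock differential evolution stands in for the improved, balance-strategy DE proposed in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution

def between_class_variance(thresholds, hist, bin_centers):
    """Otsu's criterion for a set of thresholds: the weighted between-class
    variance of the classes the thresholds induce (to be maximised)."""
    t = np.sort(thresholds)
    edges = np.concatenate(([bin_centers[0] - 1], t, [bin_centers[-1] + 1]))
    total_mean = np.sum(hist * bin_centers) / np.sum(hist)
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (bin_centers > lo) & (bin_centers <= hi)
        w = hist[sel].sum() / hist.sum()
        if w > 0:
            mu = np.sum(hist[sel] * bin_centers[sel]) / hist[sel].sum()
            var += w * (mu - total_mean) ** 2
    return var

def de_multilevel_otsu(image, n_thresholds=3):
    """Multilevel Otsu thresholding with thresholds found by differential evolution."""
    hist, edges = np.histogram(image.ravel(), bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    result = differential_evolution(
        lambda t: -between_class_variance(t, hist, centers),
        bounds=[(centers[0], centers[-1])] * n_thresholds, seed=0, tol=1e-6)
    return np.sort(result.x)

# Usage on synthetic data with four grey-level populations.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(m, 8, 4000) for m in (40, 100, 160, 220)])
print(de_multilevel_otsu(img, n_thresholds=3))   # thresholds near 70, 130, 190
```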

  16. A Study of a Network-Flow Algorithm and a Noncorrecting Algorithm for Test Assembly.

    ERIC Educational Resources Information Center

    Armstrong, R. D.; And Others

    1996-01-01

    When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)

  17. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  18. Fungi producing significant mycotoxins.

    PubMed

    2012-01-01

    Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors

  19. Genetic algorithm-based form error evaluation

    NASA Astrophysics Data System (ADS)

    Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng

    2007-07-01

    Form error evaluation of geometrical products is a nonlinear optimization problem that has been attacked by a variety of rather complex methods. A genetic algorithm (GA) was developed for the problem; it proved simple to understand and implement, and its key techniques were investigated in detail. First, the fitness function of the GA was discussed as the bridge between the GA and the concrete problem to be solved. Second, a real-number representation of candidate solutions for this continuous optimization problem was described. Third, several improved evolutionary strategies of the GA were described in detail: a selection operation of 'odd number selection plus roulette wheel selection', a crossover operation of 'arithmetic crossover between near relatives and far relatives', and an 'adaptive Gaussian' mutation operation. Evolving from generation to generation with these strategies, an initial population generated stochastically around the least-squares solution was updated and improved iteratively until the best chromosome, or individual, emerged. Finally, examples were given to verify the method. Experimental results show that the GA-based method finds solutions superior to the least-squares solutions, except in a few examples where the two methods obtain similar results. Compared with other optimization techniques, the GA-based method obtains nearly equal results but with simpler models and less computation time.
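    As an illustration of the operators the abstract names, the sketch below applies a real-coded GA with roulette-wheel selection, arithmetic crossover, and a decaying ("adaptive") Gaussian mutation to one concrete form error, the minimum-zone flatness of a measured point set. The choice of flatness, the helper names (flatness_error, ga_flatness), and all parameter values are assumptions for illustration; the paper's exact 'odd number selection' and near/far-relative crossover rules are not reproduced.

        # Minimal sketch (assumed names): real-coded GA for minimum-zone flatness.
        import numpy as np

        def flatness_error(params, pts):
            """Peak-to-valley residual about the plane z = a*x + b*y (the offset cancels)."""
            a, b = params
            residuals = pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1])
            return residuals.max() - residuals.min()

        def ga_flatness(pts, pop_size=40, gens=200, sigma=0.01, seed=0):
            rng = np.random.default_rng(seed)
            # Seed the population around the least-squares plane, as the abstract suggests.
            A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
            ls, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
            pop = ls[:2] + rng.normal(0.0, sigma, size=(pop_size, 2))
            for g in range(gens):
                errors = np.array([flatness_error(ind, pts) for ind in pop])
                fitness = 1.0 / (1.0 + errors)                  # minimization -> maximization
                probs = fitness / fitness.sum()
                parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]  # roulette wheel
                alpha = rng.random((pop_size // 2, 1))
                mums, dads = parents[0::2], parents[1::2]
                kids = np.vstack([alpha * mums + (1 - alpha) * dads,         # arithmetic crossover
                                  (1 - alpha) * mums + alpha * dads])
                step = sigma * (1.0 - g / gens)                              # decaying Gaussian mutation
                pop = kids + rng.normal(0.0, step, size=kids.shape)
            errors = np.array([flatness_error(ind, pts) for ind in pop])
            return pop[np.argmin(errors)], errors.min()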

  20. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks that randomly drops units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, a formulation general enough to accommodate dropout on units or connections and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality of normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products, of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, regarding the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
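    To make the Bernoulli-gating formulation and the ensemble-averaging approximation concrete, here is a minimal sketch of dropout on a single logistic layer: units are kept with probability p during training, and at test time the weights are scaled by p to approximate the normalized-geometric-mean ensemble average described in the abstract. The toy dimensions and names are illustrative assumptions, not taken from the paper.

        # Minimal sketch (assumed names): Bernoulli-gated dropout on one logistic layer.
        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def forward_train(x, W, b, p=0.5):
            """Training pass: each input unit is kept with probability p (Bernoulli gating)."""
            mask = rng.random(x.shape) < p
            return sigmoid((x * mask) @ W + b)

        def forward_test(x, W, b, p=0.5):
            """Test pass: scale the weights by p to approximate the ensemble average."""
            return sigmoid(x @ (p * W) + b)

        # Tiny demonstration on random data.
        x = rng.normal(size=(4, 10))      # 4 samples, 10 input units
        W = rng.normal(size=(10, 1))
        b = np.zeros(1)
        print(forward_train(x, W, b))     # stochastic; changes from call to call
        print(forward_test(x, W, b))      # deterministic ensemble approximation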