Sample records for advanced fitting algorithms

  1. 2D Automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS's Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  3. Advanced Physiological Estimation of Cognitive Status (APECS)

    DTIC Science & Technology

    2009-09-15

    Advanced Physiological Estimation of Cognitive Status (APECS), Final Report. ... fitness and transmit data to command and control systems. Some of the signals that the physiological sensors measure are readily interpreted, such as ... electroencephalogram (EEG) and other signals requires a complex series of mathematical transformations or algorithms. Overall, research on algorithms ...

  4. 2D automatic body-fitted structured mesh generation using advancing extraction method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape at each level, can be extracted in an advancing scheme. Several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, as well as the implementation of the method.

  5. Inferring Boolean network states from partial information

    PubMed Central

    2013-01-01

    Networks of molecular interactions regulate key processes in living cells. Therefore, understanding their functionality is a high priority in advancing biological knowledge. Boolean networks are often used to describe cellular networks mathematically and are fitted to experimental datasets. The fitting often results in ambiguities since the interpretation of the measurements is not straightforward and since the data contain noise. In order to facilitate a more reliable mapping between datasets and Boolean networks, we develop an algorithm that infers network trajectories from a dataset distorted by noise. We analyze our algorithm theoretically and demonstrate its accuracy using simulation and microarray expression data. PMID:24006954
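
    The inference step described above can be sketched in miniature: with a known (here invented) three-node Boolean network and noisy binary measurements, a brute-force search over initial states recovers the trajectory closest to the data in Hamming distance. This is only a toy stand-in for the paper's algorithm; the update rules and observations are hypothetical.

```python
from itertools import product

# Toy 3-node Boolean network (hypothetical update rules, not from the paper):
# A' = NOT C,  B' = A AND C,  C' = A OR B
def step(state):
    a, b, c = state
    return (int(not c), int(a and c), int(a or b))

def infer_trajectory(noisy_obs):
    """Brute force: simulate from every initial state and return the
    trajectory closest (in total Hamming distance) to the noisy data."""
    T = len(noisy_obs)
    best, best_dist = None, float("inf")
    for init in product((0, 1), repeat=3):
        traj, s = [init], init
        for _ in range(T - 1):
            s = step(s)
            traj.append(s)
        dist = sum(x != y for obs, st in zip(noisy_obs, traj)
                   for x, y in zip(obs, st))
        if dist < best_dist:
            best, best_dist = traj, dist
    return best, best_dist

# Observations of a true run from (1,0,0) with one bit flipped at t=1
obs = [(1, 0, 0), (1, 1, 1), (0, 1, 1)]
traj, dist = infer_trajectory(obs)
```

    The brute-force search is exponential in network size; the paper's contribution is doing this inference efficiently and with noise handled explicitly.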

  6. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis are presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
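
    The compression idea is easy to sketch with synthetic data: project mean-removed "radiance" spectra onto a few leading principal components and work with the scores instead of the channels. The spectra, factor count, and noise level below are invented for illustration; a real forward model operates on physically simulated radiances.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "radiance" spectra: 500 channels driven by 3 latent factors
# (a stand-in for temperature/moisture variability; purely illustrative)
basis = rng.normal(size=(3, 500))
spectra = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 500))

mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

k = 3                                # retain 3 principal components
pcs = Vt[:k]                         # (3, 500) leading eigenvectors
scores = (spectra - mean) @ pcs.T    # (200, 3): the compressed representation

# Reconstruction from scores shows how little information is lost
recon = scores @ pcs + mean
rel_err = np.linalg.norm(recon - spectra) / np.linalg.norm(spectra)
```

    Fitting in the 3-dimensional score space rather than the 500-channel space is what makes both forward modeling and inversion cheaper.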

  7. Micromagnetic measurement for characterization of ferromagnetic materials' microstructural properties

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming

    2018-05-01

    Magnetic Barkhausen noise (MBN) is measured in low carbon steels and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a better linear relationship with the carbon content of the samples in the experiment. The result has been validated with Monte Carlo simulation. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) has been used to optimize the magnetic core of the sensor.
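
    A minimal sketch of the two-Gaussian profile fit and the peak-gap parameter ΔG, using synthetic data in place of a measured MBN envelope (all amplitudes, centres, and widths below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((x - mu1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)))

# Synthetic MBN-like profile with two overlapping peaks (illustrative values)
x = np.linspace(-5, 5, 400)
true = two_gaussians(x, 1.0, -1.2, 0.8, 0.7, 1.5, 1.0)
rng = np.random.default_rng(1)
y = true + 0.02 * rng.normal(size=x.size)

p0 = [1, -1, 1, 1, 1, 1]                 # rough initial guess
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
delta_g = abs(popt[4] - popt[1])         # gap between fitted peak centres
```

    In the paper ΔG is then regressed against the known carbon contents of the sample set.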

  8. Differential evolution enhanced with multiobjective sorting-based mutation operators.

    PubMed

    Wang, Jiahai; Liao, Jianjun; Zhou, Ying; Cai, Yiqiao

    2014-12-01

    Differential evolution (DE) is a simple and powerful population-based evolutionary algorithm. The salient feature of DE lies in its mutation mechanism. Generally, the parents in the mutation operator of DE are randomly selected from the population. Hence, all vectors are equally likely to be selected as parents, without any selective pressure. Additionally, diversity information is always ignored. In order to fully exploit the fitness and diversity information of the population, this paper presents a DE framework with a multiobjective sorting-based mutation operator. In the proposed mutation operator, individuals in the current population are first sorted according to their fitness and diversity contribution by nondominated sorting. Parents in the mutation operators are then selected proportionally according to their rankings based on fitness and diversity; thus, promising individuals with better fitness and diversity have more opportunity to be selected as parents. Since fitness and diversity information is simultaneously considered for parent selection, a good balance between exploration and exploitation can be achieved. The proposed operator is applied to original DE algorithms, as well as several advanced DE variants. Experimental results on 48 benchmark functions and 12 real-world application problems show that the proposed operator is an effective approach to enhance the performance of most DE algorithms studied.
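
    A simplified, single-objective sketch of the idea: a DE/rand/1/bin loop in which mutation parents are chosen with probability proportional to fitness rank, as a stand-in for the paper's nondominated fitness-plus-diversity sorting (the diversity objective is omitted here). Objective, population size, and control parameters are invented.

```python
import random

def sphere(x):                     # toy objective to minimise
    return sum(v * v for v in x)

def rank_select(pop, fits, rng):
    """Pick an index with probability proportional to fitness rank
    (best individual gets the highest rank, hence the most selection
    pressure) -- a single-objective stand-in for nondominated sorting."""
    order = sorted(range(len(pop)), key=lambda i: fits[i], reverse=True)
    ranks = [0] * len(pop)
    for r, i in enumerate(order):  # worst gets rank 1, best gets rank N
        ranks[i] = r + 1
    pick = rng.uniform(0, sum(ranks))
    acc = 0.0
    for i, r in enumerate(ranks):
        acc += r
        if pick <= acc:
            return i
    return len(pop) - 1

def de(dim=5, np_=30, gens=200, f=0.5, cr=0.9, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(np_)]
    fits = [sphere(ind) for ind in pop]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = (rank_select(pop, fits, rng) for _ in range(3))
            mutant = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d])
                      for d in range(dim)]
            jrand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < cr or d == jrand)
                     else pop[i][d] for d in range(dim)]
            tf = sphere(trial)
            if tf <= fits[i]:      # greedy one-to-one survivor selection
                pop[i], fits[i] = trial, tf
    return min(fits)

best = de()
```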

  9. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols.
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
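
    The preference for Powell's gradient-free method can be illustrated on a deliberately tiny stand-in model: a two-parameter mono-exponential decay rather than a full multi-compartment dMRI model. The b-values, noise level, and starting point are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy signal model S(b) = S0 * exp(-b * D), a stand-in for the far more
# complex multi-compartment models discussed above.
b = np.linspace(0, 3000, 20)              # "b-values" in s/mm^2
true_s0, true_d = 1.0, 1e-3
rng = np.random.default_rng(2)
signal = true_s0 * np.exp(-b * true_d) + 0.005 * rng.normal(size=b.size)

def sse(params):
    s0, d = params
    return float(np.sum((s0 * np.exp(-b * d) - signal) ** 2))

# Gradient-free Powell conjugate-direction optimisation, as favoured above
res = minimize(sse, x0=[0.5, 5e-4], method="Powell")
s0_hat, d_hat = res.x
```

    The cascaded initialization the note recommends amounts to running a simpler model first and seeding x0 of the complex model from its result.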

  10. Novel particle tracking algorithm based on the Random Sample Consensus Model for the Active Target Time Projection Chamber (AT-TPC)

    NASA Astrophysics Data System (ADS)

    Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco

    2018-02-01

    The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, as well as to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
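
    The core of the approach, RANSAC with a straight-line model, can be sketched in a few lines: repeatedly fit a 3-D line through two randomly sampled hits and keep the line with the largest consensus set, which simultaneously classifies the track and rejects uncorrelated noise hits. The toy "track" and noise hits below are synthetic.

```python
import random

def fit_line(p, q):
    """Line through two 3-D points, returned as (point, unit direction)."""
    d = [q[i] - p[i] for i in range(3)]
    n = sum(v * v for v in d) ** 0.5
    return p, [v / n for v in d]

def point_line_dist(pt, origin, direction):
    w = [pt[i] - origin[i] for i in range(3)]
    t = sum(w[i] * direction[i] for i in range(3))
    proj = [origin[i] + t * direction[i] for i in range(3)]
    return sum((pt[i] - proj[i]) ** 2 for i in range(3)) ** 0.5

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        p, q = rng.sample(points, 2)      # minimal sample for a line
        if p == q:
            continue
        origin, direction = fit_line(p, q)
        inliers = [pt for pt in points
                   if point_line_dist(pt, origin, direction) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# A straight "track" along x plus a few uncorrelated noise hits
track = [(float(i), 0.0, 0.0) for i in range(20)]
noise = [(3.0, 7.0, -2.0), (10.0, -6.0, 4.0), (15.0, 5.0, 9.0)]
inliers = ransac_line(track + noise)
```

    Handling pile-up, as in the paper, amounts to removing the inliers of the first model and re-running RANSAC on the remainder.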

  11. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    The performance of a telescope with a segmented primary mirror strongly depends on how well the primary mirror segments can be phased. Segment phasing, a critical step of segment alignment, requires sensing and correcting the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of such a telescope, using auto-collimating flats (ACFs) is more economical. Dispersed fringe sensing (DFS) is one process for phasing primary mirror segments in the axial piston direction, and it can also be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method of coarse phasing segmented mirrors, but its accuracy depends upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line: in essence, ADFS applies an angular dithering procedure to the extraction line and combines this dithering with an error function while minimizing the phase term of the fitted signal.

  12. Advancing X-ray scattering metrology using inverse genetic algorithms.

    PubMed

    Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph

    2016-01-01

    We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
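
    The goodness-of-fit criterion the study favours, a mean-absolute error of log-intensities, is easy to sketch. SciPy has no CMA-ES, so the sketch below uses its differential_evolution routine (the other heuristic compared) on an invented two-parameter "scattering" curve; all values are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

q = np.linspace(0.1, 2.0, 50)

def model(params):
    a, w = params                        # amplitude and feature width
    return a * np.exp(-(q * w) ** 2)     # toy curve spanning decades of range

measured = model([100.0, 1.5])           # noiseless synthetic "measurement"

def log_mae(params):
    """Mean-absolute error of log-intensities: weights the weak high-q
    diffraction orders as strongly as the intense low-q ones."""
    return float(np.mean(np.abs(np.log(model(params)) - np.log(measured))))

result = differential_evolution(log_mae, bounds=[(1, 200), (0.1, 3)], seed=3)
a_hat, w_hat = result.x
```

    Taking the log before the error metric is the key design choice: a plain least-squares objective would be dominated by the brightest diffraction orders.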

  13. Structure solution of network materials by solid-state NMR without knowledge of the crystallographic space group.

    PubMed

    Brouwer, Darren H

    2013-01-01

    An algorithm is presented for solving the structures of silicate network materials such as zeolites or layered silicates from solid-state (29)Si double-quantum NMR data for situations in which the crystallographic space group is not known. The algorithm is explained and illustrated in detail using a hypothetical two-dimensional network structure as a working example. The algorithm involves an atom-by-atom structure building process in which candidate partial structures are evaluated according to their agreement with Si-O-Si connectivity information, symmetry restraints, and fits to (29)Si double quantum NMR curves followed by minimization of a cost function that incorporates connectivity, symmetry, and quality of fit to the double quantum curves. The two-dimensional network material is successfully reconstructed from hypothetical NMR data that can be reasonably expected to be obtained for real samples. This advance in "NMR crystallography" is expected to be important for structure determination of partially ordered silicate materials for which diffraction provides very limited structural information. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Stochastic Forcing for Ocean Uncertainty Prediction

    DTIC Science & Technology

    2013-09-30

    ... using the desired dynamics and the fitting of that velocity field to the bathymetry, coasts and discretization for the desired simulation. New algorithms ... numerical bias is removed. PDFs of the forecast errors are shown to capture and evolve non-Gaussian statistics. Comparing the Kullback-Leibler ... advances in collaborative sea exercises of opportunity; vi) strengthen existing and initiate new collaborations with NRL, using and leveraging the MIT ...

  15. WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, L; Thomas, C; Syme, A

    2016-06-15

    Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam’s-eye-view (BEV) fitting of targets in VMAT procedures, including multiple metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed, which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of target volume(s). Central to this algorithm is a concept denoted “whitespace”, defined as area within the jaw-defined BEV field, outside of the PTV, and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to minimum possible (i.e. calculated without consideration of collimator motion constraints) accrued whitespace. On an acoustic neuroma case, a random walk algorithm generated a trajectory with 151% whitespace; random walk including a mandatory anchor point improved this to 148%; gradient search produced a trajectory with 137%; and bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, a fixed collimator angle of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation.
Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial metastases, without significant safety concerns that hinder immediate clinical implementation.
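
    The whitespace-map search can be sketched as a greedy walk over a random cost map: rows are VMAT control points, columns are collimator-angle bins, and a motion constraint limits how far the collimator may rotate between control points. The map values and constraint below are invented; the abstract's random-walk and gradient searches are more elaborate.

```python
import random

# Hypothetical "whitespace" map: rows = control points, cols = collimator
# angle bins; entries = unshielded area outside the PTV for that pairing.
rng = random.Random(8)
n_cp, angles = 50, list(range(0, 180, 5))
wmap = [[rng.uniform(0, 10) for _ in angles] for _ in range(n_cp)]

MAX_STEP = 2   # collimator may move at most 2 angle bins per control point

def greedy_trajectory(wmap, max_step=MAX_STEP):
    """Greedy search: at each control point take the reachable angle bin
    with the least whitespace (a simple stand-in for the random-walk and
    gradient searches compared in the abstract)."""
    j = min(range(len(wmap[0])), key=lambda k: wmap[0][k])
    path, cost = [j], wmap[0][j]
    for row in wmap[1:]:
        lo, hi = max(0, j - max_step), min(len(row) - 1, j + max_step)
        j = min(range(lo, hi + 1), key=lambda k: row[k])
        path.append(j)
        cost += row[j]
    return path, cost

path, cost = greedy_trajectory(wmap)
```

    The accrued cost of any constrained trajectory is bounded below by the sum of per-row minima, which is the "minimum possible whitespace" the abstract normalizes against.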

  16. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559

  17. Frapbot: An open-source application for FRAP data.

    PubMed

    Kohze, Robin; Dieteren, Cindy E J; Koopman, Werner J H; Brock, Roland; Schmidt, Samuel

    2017-08-01

    We introduce Frapbot, a free-of-charge open source software web application written in R, which provides manual and automated analyses of fluorescence recovery after photobleaching (FRAP) datasets. For automated operation, starting from data tables containing columns of time-dependent intensity values for various regions of interest within the images, a pattern recognition algorithm recognizes the relevant columns and identifies the presence or absence of prebleach values and the time point of photobleaching. Raw data, residuals, normalization, and boxplots indicating the distribution of half times of recovery (t1/2) of all uploaded files are visualized instantly in a batch-wise manner using a variety of user-definable fitting options. The fitted results are provided as a .zip file, which contains .csv formatted output tables. Alternatively, the user can manually control any of the options described earlier. © 2017 International Society for Advancement of Cytometry.

  18. Multitarget stool DNA testing for colorectal-cancer screening.

    PubMed

    Imperiale, Thomas F; Ransohoff, David F; Itzkowitz, Steven H; Levin, Theodore R; Lavin, Philip; Lidgard, Graham P; Ahlquist, David A; Berger, Barry M

    2014-04-03

    An accurate, noninvasive test could improve the effectiveness of colorectal-cancer screening. We compared a noninvasive, multitarget stool DNA test with a fecal immunochemical test (FIT) in persons at average risk for colorectal cancer. The DNA test includes quantitative molecular assays for KRAS mutations, aberrant NDRG4 and BMP3 methylation, and β-actin, plus a hemoglobin immunoassay. Results were generated with the use of a logistic-regression algorithm, with values of 183 or more considered to be positive. FIT values of more than 100 ng of hemoglobin per milliliter of buffer were considered to be positive. Tests were processed independently of colonoscopic findings. Of the 9989 participants who could be evaluated, 65 (0.7%) had colorectal cancer and 757 (7.6%) had advanced precancerous lesions (advanced adenomas or sessile serrated polyps measuring ≥1 cm in the greatest dimension) on colonoscopy. The sensitivity for detecting colorectal cancer was 92.3% with DNA testing and 73.8% with FIT (P=0.002). The sensitivity for detecting advanced precancerous lesions was 42.4% with DNA testing and 23.8% with FIT (P<0.001). The rate of detection of polyps with high-grade dysplasia was 69.2% with DNA testing and 46.2% with FIT (P=0.004); the rates of detection of serrated sessile polyps measuring 1 cm or more were 42.4% and 5.1%, respectively (P<0.001). Specificities with DNA testing and FIT were 86.6% and 94.9%, respectively, among participants with nonadvanced or negative findings (P<0.001) and 89.8% and 96.4%, respectively, among those with negative results on colonoscopy (P<0.001). The numbers of persons who would need to be screened to detect one cancer were 154 with colonoscopy, 166 with DNA testing, and 208 with FIT. In asymptomatic persons at average risk for colorectal cancer, multitarget stool DNA testing detected significantly more cancers than did FIT but had more false positive results. (Funded by Exact Sciences; ClinicalTrials.gov number, NCT01397747.).

  19. Dynamic Analysis of Sounding Rocket Pneumatic System Revision

    NASA Technical Reports Server (NTRS)

    Armen, Jerald

    2010-01-01

    The recent fusion of decades of advancements in mathematical models, numerical algorithms and curve fitting techniques marked the beginning of a new era in the science of simulation, which is becoming indispensable to the study of rockets and aerospace analysis. In the pneumatic system, which is the main focus of this paper, particular emphasis will be placed on the effects of compressible flow in the Attitude Control System of a sounding rocket.

  20. Smart Adaptive Socket to Improve Fit and Relieve Pain in Wounded Warriors

    DTIC Science & Technology

    2016-10-01

    ... applications were developed for wireless interaction with the socket system firmware. A control algorithm was designed and tested. Clinical trial ... interface, dynamic segmental volume control, wireless connection, pressure control system. ... charging jack, and power button are included in the design. A Bluetooth 4 radio is also included to allow for advanced user control via smartphone. The ...

  1. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
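
    A stripped-down sketch of the idea behind SFT: tile the image, treat the least-variable tiles as background, and derive the signal threshold from their statistics. The published method additionally fits trends between segment statistics; the tile size, the k factor, and the synthetic image below are invented.

```python
import numpy as np

def sft_threshold(img, seg=8, k=5.0):
    """Simplified sketch of Segment-and-Fit Thresholding: tile the image,
    call the tiles with the lowest local variability 'background', and set
    the signal threshold from their statistics. (The published method also
    fits best-fit trends between the segment statistics; this keeps only
    the core segment-then-threshold idea.)"""
    h, w = img.shape
    tiles = [img[i:i + seg, j:j + seg]
             for i in range(0, h, seg) for j in range(0, w, seg)]
    stds = np.array([t.std() for t in tiles])
    means = np.array([t.mean() for t in tiles])
    bg = stds <= np.percentile(stds, 50)   # least-variable half = background
    thresh = means[bg].mean() + k * stds[bg].mean()
    return img > thresh

rng = np.random.default_rng(4)
image = rng.normal(100.0, 5.0, size=(64, 64))   # flat noisy background
image[20:28, 30:38] += 80.0                     # one bright 8x8 "spot"
mask = sft_threshold(image)
```

    Because the threshold is derived per image from its own background segments, no manual parameter adjustment is needed across images with different intensity ranges, which is the property the abstract emphasizes.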

  2. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  3. Fast and Robust STEM Reconstruction in Complex Environments Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.

    2016-06-01

    Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. This algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. The algorithm is further facilitated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% is already able to retain the stem fitting quality and simultaneously reduce the computation time significantly, by ~88%.
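
    DBH retrieval ultimately reduces to fitting circular cross-sections (or cylinders) to stem points. A minimal sketch using the algebraic Kasa least-squares circle fit on synthetic cross-section points; the centre, radius, and noise level are invented, and the paper's self-adaptive cylinder growing is far more involved.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b), then recover the
    radius from r^2 = c + a^2 + b^2."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r

# Noisy points on a 0.15 m-radius stem cross-section at breast height
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 100)
xs = 1.0 + 0.15 * np.cos(theta) + rng.normal(0, 0.005, 100)
ys = 2.0 + 0.15 * np.sin(theta) + rng.normal(0, 0.005, 100)

cx, cy, r = fit_circle(xs, ys)
dbh_cm = 2 * r * 100
```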

  4. Utilization of Solar Dynamics Observatory space weather digital image data for comparative analysis with application to Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Shekoyan, V.; Dehipawala, S.; Liu, Ernest; Tulsee, Vivek; Armendariz, R.; Tremberger, G.; Holden, T.; Marchese, P.; Cheung, T.

    2012-10-01

    Digital solar image data is available to users with access to standard, mass-market software. Many scientific projects utilize the Flexible Image Transport System (FITS) format, which requires specialized software typically used in astrophysical research. Data in the FITS format includes photometric and spatial calibration information, which may not be useful to researchers working with self-calibrated, comparative approaches. This project examines the advantages of using mass-market software with readily downloadable image data from the Solar Dynamics Observatory for comparative analysis over the use of specialized software capable of reading data in the FITS format. Comparative analyses of brightness statistics that describe the solar disk in the study of magnetic energy using algorithms included in mass-market software have been shown to give results similar to analyses using FITS data. The entanglement of magnetic energy associated with solar eruptions, as well as the development of such eruptions, has been characterized successfully using mass-market software. The proposed algorithm would help to establish a publicly accessible computing network that could assist in exploratory studies of all FITS data. The advances in computer, cell phone and tablet technology could incorporate such an approach readily for the enhancement of high school and first-year college space weather education on a global scale. Application to ground-based data such as that contained in the Baryon Oscillation Spectroscopic Survey is discussed.

  5. Bayesian inference and decision theory - A framework for decision making in natural resource management

    USGS Publications Warehouse

    Dorazio, R.M.; Johnson, F.A.

    2003-01-01

    Bayesian inference and decision theory may be used in the solution of relatively complex problems of natural resource management, owing to recent advances in statistical theory and computing. In particular, Markov chain Monte Carlo algorithms provide a computational framework for fitting models of adequate complexity and for evaluating the expected consequences of alternative management actions. We illustrate these features using an example based on management of waterfowl habitat.
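
    The workflow the authors describe, fit by MCMC and then evaluate the expected consequences of actions over the posterior, can be sketched with a one-parameter model. The nest-success counts, the candidate actions, and their utilities below are entirely hypothetical.

```python
import random, math

# Hypothetical monitoring data: 27 of 40 nests succeeded
successes, trials = 27, 40

def log_post(p):
    """Binomial log-likelihood with a flat prior on the success rate p."""
    if not 0 < p < 1:
        return -math.inf
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

def metropolis(n=20000, step=0.05, seed=6):
    rng = random.Random(seed)
    p, lp = 0.5, log_post(0.5)
    samples = []
    for _ in range(n):
        q = p + rng.gauss(0, step)          # random-walk proposal
        lq = log_post(q)
        if math.log(rng.random()) < lq - lp:
            p, lp = q, lq
        samples.append(p)
    return samples[5000:]                   # discard burn-in

draws = metropolis()
post_mean = sum(draws) / len(draws)

# Expected consequences of two hypothetical management actions, averaged
# over the posterior rather than evaluated at a single point estimate:
def utility(action, p):
    return p * 10 - 4 if action == "restore" else p * 6

eu = {a: sum(utility(a, p) for p in draws) / len(draws)
      for a in ("restore", "maintain")}
```

    Averaging utilities over the posterior draws, rather than plugging in a point estimate, is what lets the decision account for parameter uncertainty.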

  6. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  7. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  8. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    PubMed

    Sethi, Gaurav; Saini, B S

    2015-12-01

This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents are the most likely to produce healthy offspring, which leads to the least fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Accordingly, each of the least fit parent chromosomes is combined with a highly fit parent to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images; 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
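    The worst-with-best mating rule described above can be sketched as follows. This is a toy illustration on a OneMax objective (counting 1-bits), not the authors' curvelet-scale optimization; the population size, mutation rate, and generation count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
L, POP, GENS = 24, 40, 60

def fitness(pop):                      # OneMax: number of 1-bits (toy objective)
    return pop.sum(axis=1)

def crossover(a, b):                   # single-point crossover
    cut = rng.integers(1, L)
    return np.concatenate([a[:cut], b[cut:]])

pop = rng.integers(0, 2, size=(POP, L))
best_initial = int(fitness(pop).max())

for _ in range(GENS):
    order = np.argsort(fitness(pop))   # ascending: worst individuals first
    pop = pop[order]
    children = []
    for i in range(POP // 2):
        # pair the i-th worst with the i-th best, as in the improved GA
        child = crossover(pop[i], pop[POP - 1 - i])
        flip = rng.random(L) < 0.02    # mutation
        child = np.where(flip, 1 - child, child)
        children.append(child)
    # modified elitism: keep the fitter half, replace the weaker half
    pop[: POP // 2] = np.array(children)

best_final = int(fitness(pop).max())
print(best_initial, best_final)
```

    Because the fitter half of the population survives each generation, the best fitness never decreases, while every weak chromosome is mated with a strong one instead of being discarded outright.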

  9. TransFit: Finite element analysis data fitting software

    NASA Technical Reports Server (NTRS)

    Freeman, Mark

    1993-01-01

The Advanced X-Ray Astrophysics Facility (AXAF) mission support team has made extensive use of geometric ray tracing to analyze the performance of AXAF developmental and flight optics. One important aspect of this performance modeling is the incorporation of finite element analysis (FEA) data into the surface deformations of the optical elements. TransFit is software designed for the fitting of FEA data of Wolter I optical surface distortions with a continuous surface description which can then be used by SAO's analytic ray tracing software, currently OSAC (Optical Surface Analysis Code). The improved capabilities of TransFit over previous methods include bicubic spline fitting of FEA data to accommodate higher spatial frequency distortions, fitted data visualization for assessing the quality of fit, the ability to accommodate input data from three FEA codes plus other standard formats, and options for alignment of the model coordinate system with the ray trace coordinate system. TransFit uses the AnswerGarden graphical user interface (GUI) to edit input parameters and then access routines written in PV-WAVE, C, and FORTRAN to allow the user to interactively create, evaluate, and modify the fit. The topics covered include an introduction to TransFit: requirements, design philosophy, and implementation; design specifics: modules, parameters, fitting algorithms, and data displays; a procedural example; verification of performance; future work; and appendices on online help and ray trace results of the verification section.

  10. Advanced fitness landscape analysis and the performance of memetic algorithms.

    PubMed

    Merz, Peter

    2004-01-01

Memetic algorithms (MAs) have proven very effective in combinatorial optimization. This paper offers explanations as to why this is so by investigating the performance of MAs in terms of efficiency and effectiveness. A special class of MAs is used to discuss efficiency and effectiveness for local search and evolutionary meta-search. It is shown that the efficiency of MAs can be increased drastically with the use of domain knowledge. However, effectiveness highly depends on the structure of the problem. As is well-known, identifying this structure is made easier with the notion of fitness landscapes: the local properties of the fitness landscape strongly influence the effectiveness of the local search while the global properties strongly influence the effectiveness of the evolutionary meta-search. This paper also introduces new techniques for analyzing the fitness landscapes of combinatorial problems; these techniques focus on the investigation of random walks in the fitness landscape starting at locally optimal solutions as well as on the escape from the basins of attraction of current local optima. It is shown for NK-landscapes and landscapes of the unconstrained binary quadratic programming problem (BQP) that a random walk to another local optimum can be used to explain the efficiency of recombination in comparison to mutation. Moreover, the paper shows that other aspects like the size of the basins of attraction of local optima are important for the efficiency of MAs and a local search escape analysis is proposed. These simple analysis techniques have several advantages over previously proposed statistical measures and provide valuable insight into the behaviour of MAs on different kinds of landscapes.
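    The random-walk style of fitness landscape analysis can be illustrated on a small NK landscape. Note that this sketch computes the classical random-walk fitness autocorrelation from a random starting point; the paper's new techniques, which start walks at locally optimal solutions, are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 12, 2
# Each bit's fitness contribution depends on itself and its K right-hand neighbours.
tables = rng.random((N, 2 ** (K + 1)))

def fitness(s):
    total = 0.0
    for i in range(N):
        idx = 0
        for j in range(K + 1):
            idx = (idx << 1) | int(s[(i + j) % N])
        total += tables[i, idx]
    return total / N

# Random walk: flip one random bit per step and record the fitness series.
s = rng.integers(0, 2, N)
series = []
for _ in range(4000):
    s[rng.integers(N)] ^= 1
    series.append(fitness(s))
series = np.array(series)

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

r1 = autocorr(series, 1)
print(round(r1, 3))  # close to 1 - (K+1)/N = 0.75 in expectation for NK landscapes
```

    A faster-decaying autocorrelation signals a more rugged landscape, which in turn predicts weaker performance of local search within the memetic algorithm.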

  11. Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.

    PubMed

    Hu, Yue; Allen, Genevera I

    2015-12-01

Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.

  12. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    PubMed

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that the image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed to assign initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society
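    The segmentation-seeded initial-guess idea can be sketched for mono-exponential decays: each image segment gets a lifetime guess from a log-linear fit, which would then seed the global fit. The two-segment setup, time gates, and lifetimes below are hypothetical, the data are noise-free, and the global fitting step itself is omitted:

```python
import numpy as np

t = np.linspace(0.05, 8.0, 40)          # time gates (ns), hypothetical

# Two "morphological" segments with distinct lifetimes (ns).
taus_true = {0: 1.0, 1: 3.0}

def decay(tau):
    return 100.0 * np.exp(-t / tau)     # ideal mono-exponential decay

def initial_guess(y):
    # Log-linear fit: ln y = ln A - t/tau, so tau = -1/slope.
    slope, _ = np.polyfit(t, np.log(y), 1)
    return -1.0 / slope

guesses = {seg: initial_guess(decay(tau)) for seg, tau in taus_true.items()}
print({k: round(v, 2) for k, v in guesses.items()})  # ≈ {0: 1.0, 1: 3.0}
```

    With noisy data the log-linear estimates are only approximate, but as starting points for the coupled global fit they avoid the slow convergence the abstract describes.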

  13. Optimal sensor placement for deployable antenna module health monitoring in SSPS using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Zhang, Xuepan; Huang, Xiaoqi; Cheng, ZhengAi; Zhang, Xinghua; Hou, Xinbin

    2017-11-01

The concept of the space solar power satellite (SSPS) is an advanced system for collecting solar energy in space and transmitting it wirelessly to earth. However, due to the long service life, in-orbit damage may occur in the structural system of SSPS. Therefore, sensor placement layouts for structural health monitoring should first be considered in this concept. In this paper, an optimal sensor placement method for deployable antenna module health monitoring in SSPS, based on a genetic algorithm, is proposed. According to the characteristics of the deployable antenna module, candidate sensor placement designs are listed. Furthermore, based on the effective independence method and an effective interval index, a combined fitness function is defined to maximize linear independence in targeted modes while simultaneously avoiding redundant information at nearby positions. In addition, by considering the reliability of sensors located at deployable mechanisms, another fitness function is constituted. Moreover, the solution process of optimal sensor placement using a genetic algorithm is clearly demonstrated. Finally, a numerical example of the sensor placement layout in a deployable antenna module of SSPS is presented, which synthetically considers all of the above-mentioned performance measures. All results illustrate the effectiveness and feasibility of the proposed sensor placement method in SSPS.
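    The effective independence ingredient of the combined fitness function can be sketched in isolation. The backward-elimination sketch below removes, one at a time, the candidate sensor contributing least to the linear independence of the targeted modes; the mode-shape matrix is random stand-in data, and the GA wrapper, interval index, and reliability terms from the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
n_candidates, n_modes, n_sensors = 30, 4, 8
Phi = rng.normal(size=(n_candidates, n_modes))   # mode shapes at candidate sites

keep = list(range(n_candidates))
while len(keep) > n_sensors:
    Ps = Phi[keep]
    # Effective independence: diagonal of the projection matrix
    # E = Ps (Ps^T Ps)^-1 Ps^T ; a small E_ii means that sensor contributes
    # least to the linear independence of the targeted mode shapes.
    E = Ps @ np.linalg.solve(Ps.T @ Ps, Ps.T)
    drop = keep[int(np.argmin(np.diag(E)))]
    keep.remove(drop)

print(sorted(keep))   # retained sensor locations
```

    In the paper this independence measure is folded into a GA fitness function together with the spacing and reliability criteria, rather than applied greedily as here.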

  14. New web-based algorithm to improve rigid gas permeable contact lens fitting in keratoconus.

    PubMed

    Ortiz-Toquero, Sara; Rodriguez, Guadalupe; de Juan, Victoria; Martin, Raul

    2017-06-01

To calculate and validate a new web-based algorithm for selecting the back optic zone radius (BOZR) of spherical gas permeable (GP) lenses in keratoconus eyes. A retrospective calculation (n=35; multiple regression analysis) and a posterior prospective validation (new sample of 50 keratoconus eyes) of a new algorithm to select the BOZR of spherical KAKC design GP lenses (Conoptica) in keratoconus were conducted. BOZR calculated with the new algorithm, manufacturer guidelines and APEX software were compared with the BOZR that was finally prescribed. The number of diagnostic lenses, ordered lenses and visits to achieve an optimal fitting were recorded and compared with those obtained for a control group [50 healthy eyes fitted with spherical GP (BIAS design; Conoptica)]. The new algorithm highly correlated with the final BOZR fitted (r²=0.825, p<0.001). The BOZR of the first diagnostic lens using the new algorithm showed a lower difference from the final BOZR prescribed (-0.01±0.12mm, p=0.65; 58% difference≤0.05mm) than the manufacturer guidelines (+0.12±0.22mm, p<0.001; 26% difference≤0.05mm) and APEX software (-0.14±0.16mm, p=0.001; 34% difference≤0.05mm). Comparable numbers of diagnostic lenses (1.6±0.8 vs 1.3±0.5; p=0.02), ordered lenses (1.4±0.6 vs 1.1±0.3; p<0.001), and visits (3.4±0.7 vs 3.2±0.4; p=0.08) were required to fit keratoconus and healthy eyes, respectively. This new algorithm (free access at www.calculens.com) improves spherical KAKC GP fitting in keratoconus and can reduce practitioner and patient chair time to achieve a final acceptable fit in keratoconus. The algorithm reduces the differences between keratoconus GP fitting (KAKC design) and standard GP (BIAS design) lens fitting in healthy eyes. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  15. Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber

    NASA Astrophysics Data System (ADS)

    Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit

    2007-10-01

This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated due to background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
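    The iterative weighted least squares idea can be sketched for a straight track with one contaminated hit. Here the per-iteration weights come from a generic robust reweighting of the residuals rather than from the paper's cluster classification, and all detector numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
z = np.linspace(0.0, 1.0, 10)              # detector layer positions
true_slope, true_offset = 0.3, 0.1
y = true_slope * z + true_offset + rng.normal(0, 0.002, z.size)
y[3] += 0.05                               # a "dirty" cluster: contaminated hit

A = np.vstack([z, np.ones_like(z)]).T
w = np.ones(z.size)
for _ in range(10):                         # iterative weighted least squares
    W = A * w[:, None]
    coef = np.linalg.solve(W.T @ A, W.T @ y)    # weighted normal equations
    r = y - A @ coef
    s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust residual scale (MAD)
    w = 1.0 / (1.0 + (r / (3.0 * s)) ** 2)      # down-weight large residuals

print([round(c, 3) for c in coef])  # ≈ [0.3, 0.1] despite the outlier
```

    An ordinary unweighted fit over the same hits would be pulled noticeably toward the contaminated measurement; down-weighting it each iteration restores the near-optimal estimate.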

  16. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of the chi-square (χ²) statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function so that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
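    A minimal sketch of this kind of χ²-minimizing nonlinear fit, using Gauss-Newton steps on statistically weighted residuals. NLINEAR itself is FORTRAN 77 and its exact update rule is not reproduced here; the exponential model and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 2.0, 30)
a_true, b_true, sigma = 2.0, -1.3, 0.02
y = a_true * np.exp(b_true * x) + rng.normal(0.0, sigma, x.size)

def model(p):
    return p[0] * np.exp(p[1] * x)

def jacobian(p):
    e = np.exp(p[1] * x)
    return np.vstack([e, p[0] * x * e]).T

# initial guess from a log-linear fit (possible here because y > 0)
b0, loga0 = np.polyfit(x, np.log(y), 1)
p = np.array([np.exp(loga0), b0])

for _ in range(20):                       # Gauss-Newton on the chi-square
    r = (y - model(p)) / sigma            # statistically weighted residuals
    J = jacobian(p) / sigma
    p = p + np.linalg.solve(J.T @ J, J.T @ r)

chi2 = float(np.sum(((y - model(p)) / sigma) ** 2))
print([round(v, 2) for v in p])           # ≈ [2.0, -1.3]
```

    The final χ² per degree of freedom near 1 is the goodness-of-fit indicator such programs report back to the user.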

  17. Motion control of nonlinear gantry crane system via priority-based fitness scheme in firefly algorithm

    NASA Astrophysics Data System (ADS)

    Jaafar, Hazriq Izzuan; Latif, Norfaneysa Abd; Kassim, Anuar Mohamed; Abidin, Amar Faiz Zainal; Hussien, Sharifah Yuslinda Syed; Aras, Mohd Shahrieel Mohd

    2015-05-01

Advanced manufacturing technology has made the Gantry Crane System (GCS) one of the most suitable heavy machinery transporters, and it is frequently employed in handling huge materials. The interconnection of trolley movement and payload oscillation has a technical impact which needs to be considered. Once the trolley moves to the desired position at high speed, it will induce undesirable payload oscillation. This frequent, unavoidable load swing causes an efficiency drop, load damage and even accidents. In this paper, a new control strategy based on the Firefly Algorithm (FA) is developed to obtain five optimal controller parameters (PID and PD) via a Priority-based Fitness Scheme (PFS). Combinations of these five parameters are utilized for controlling trolley movement and minimizing the angle of payload oscillation. The PFS prioritizes steady-state error (SSE), overshoot (OS) and settling time (Ts) according to the needs and circumstances. The Lagrange equation is chosen for modeling, and simulation is conducted using related software. Simulation results show that the proposed control strategy efficiently controls the trolley movement to the desired position and minimizes the angle of payload oscillation.

  18. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and, if that is impossible, it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
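    The fit-then-subdivide structure can be sketched with non-rational cubic Bezier segments. The paper uses rational B-splines and a convex-hull tolerance test; here those are replaced by a simple pointwise error check, endpoints are pinned to the data, and subdivision happens at the worst-fit point:

```python
import numpy as np

def bernstein(t):
    t = np.asarray(t)[:, None]
    return np.hstack([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3])

def fit_segment(pts):
    # chord-length parameterisation
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    B = bernstein(t)
    P0, P3 = pts[0], pts[-1]
    # move the fixed-endpoint terms to the right-hand side, solve for P1, P2
    rhs = pts - np.outer(B[:, 0], P0) - np.outer(B[:, 3], P3)
    inner, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)
    ctrl = np.vstack([P0, inner, P3])
    err = np.linalg.norm(B @ ctrl - pts, axis=1)
    return ctrl, err

def fit_curve(pts, tol):
    ctrl, err = fit_segment(pts)
    if err.max() <= tol or len(pts) <= 4:
        return [ctrl]
    k = int(np.argmax(err))
    k = min(max(k, 2), len(pts) - 3)     # keep both halves fittable
    return fit_curve(pts[:k+1], tol) + fit_curve(pts[k:], tol)

x = np.linspace(0, 2*np.pi, 80)
pts = np.column_stack([x, np.sin(x)])
segments = fit_curve(pts, tol=1e-3)
print(len(segments))   # several cubic segments are needed for a full sine wave
```

    Adjacent segments share their boundary data point, so the piecewise curve is continuous; the rational/B-spline machinery of the paper additionally controls derivative continuity across joins.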

  19. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures is briefly outlined. Implications of these MdD requirements upon advanced CFD codes are somewhat different than those imposed by a single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  20. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of signal, as opposed to least squares fitting signal or square-root signal, was shown to produce the least random error and minimize bias error in the fitted parameters.

  1. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
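    The tiled-compression convention is easy to illustrate in miniature: divide the image into a grid of tiles, compress each tile independently, and decompress only the tile you need. The sketch below uses zlib on a plain array rather than the actual FITS binary-table machinery:

```python
import zlib
import numpy as np

rng = np.random.default_rng(7)
image = rng.integers(0, 1000, size=(64, 64), dtype=np.int32)

TILE = 16  # user-configurable tile size (fpack defaults to row-by-row tiles)

def compress_tiled(img):
    tiles = {}
    for r in range(0, img.shape[0], TILE):
        for c in range(0, img.shape[1], TILE):
            # each tile is compressed independently and stored separately
            tiles[(r, c)] = zlib.compress(img[r:r+TILE, c:c+TILE].tobytes())
    return tiles

def decompress_tile(tiles, r, c):
    raw = zlib.decompress(tiles[(r, c)])
    return np.frombuffer(raw, dtype=np.int32).reshape(TILE, TILE)

tiles = compress_tiled(image)
restored = decompress_tile(tiles, 16, 32)   # random access: one tile only
print(bool(np.array_equal(restored, image[16:32, 32:48])))  # True (lossless)
```

    With the real tools, `fpack -r image.fits` selects Rice compression (consult fpack's built-in help for the full option list); the key benefit shown above is that a reader can decompress a single tile without touching the rest of the file.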

  2. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data is conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise they lead to fitting artifacts. For known fit models, and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure which treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model in hand. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and does the fitting by searching for solutions via global sampling. It is flexible and at the same time computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with triplet state. The statistical study and the goodness of fit criterion for PGSL are also presented. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).

  3. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
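    The least-squares cubic spline fit can be sketched with a truncated-power basis, which builds in the continuous first and second derivatives the abstract highlights (the knot positions here play the role of the junction values; the data are synthetic and the interactive-graphics part is omitted):

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.1, x.size)    # noisy data to smooth

knots = np.linspace(1, 9, 7)                  # interior junction positions

def basis(x):
    # truncated-power basis: a cubic polynomial plus one (x-k)^3_+ term per
    # knot, which guarantees continuous first and second derivatives there
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
smooth = basis(x) @ coef
rms = float(np.sqrt(np.mean((smooth - np.sin(x)) ** 2)))
print(round(rms, 3))   # the smoothed curve tracks the true signal closely
```

    Moving a knot and refitting is the non-interactive analogue of manipulating the junction values on a graphics terminal to improve the fit.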

  4. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ɛ), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  5. Contact angle measurement with a smartphone

    NASA Astrophysics Data System (ADS)

    Chen, H.; Muros-Cobos, Jesus L.; Amirfazli, A.

    2018-03-01

In this study, a smartphone-based contact angle measurement instrument was developed. Compared with traditional measurement instruments, this instrument has the advantages of simplicity, compact size, and portability. An automatic contact point detection algorithm was developed to allow the instrument to correctly detect the drop contact points. Two different contact angle calculation methods, the Young-Laplace and polynomial fitting methods, were implemented in this instrument. The performance of this instrument was tested first with ideal synthetic drop profiles. It was shown that the accuracy of the new system with ideal synthetic drop profiles can reach 0.01% with both the Young-Laplace and polynomial fitting methods. Conducting experiments to measure both static and dynamic (advancing and receding) contact angles with the developed instrument, we found that the smartphone-based instrument can provide measurement results as accurate and practical as those of traditional commercial instruments. The successful demonstration of the use of a smartphone (mobile phone) to conduct contact angle measurement is a significant advancement in the field, as it breaks the dominant mold of using a computer and a bench-bound setup for such systems, in place since their appearance in the 1980s.
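    The polynomial-fitting route to a contact angle can be sketched on a synthetic circular-cap profile: fit a low-order polynomial to the profile near the contact point and take the tangent slope there. The Young-Laplace method and the drop-image processing are omitted, and the radius and angle below are arbitrary:

```python
import numpy as np

R, theta_true = 1.0, np.radians(40.0)         # drop radius and contact angle
x0 = R * np.sin(theta_true)                   # contact-point position

# synthetic sessile-drop profile: circular cap touching y = 0 at x = ±x0
x = np.linspace(0.75 * x0, x0, 60)
y = np.sqrt(R**2 - x**2) - R * np.cos(theta_true)

# polynomial-fitting method: fit near the contact point, take the tangent slope
p = np.polyfit(x, y, 3)
slope = np.polyval(np.polyder(p), x0)
theta_est = np.degrees(np.arctan(-slope))     # angle between tangent and baseline
print(round(theta_est, 1))                    # ≈ 40.0
```

    On a real image, the profile points would come from edge detection around the automatically detected contact points, and noise makes the choice of fitting window and polynomial order the main accuracy trade-off.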

  6. Contact angle measurement with a smartphone.

    PubMed

    Chen, H; Muros-Cobos, Jesus L; Amirfazli, A

    2018-03-01

In this study, a smartphone-based contact angle measurement instrument was developed. Compared with traditional measurement instruments, this instrument has the advantages of simplicity, compact size, and portability. An automatic contact point detection algorithm was developed to allow the instrument to correctly detect the drop contact points. Two different contact angle calculation methods, the Young-Laplace and polynomial fitting methods, were implemented in this instrument. The performance of this instrument was tested first with ideal synthetic drop profiles. It was shown that the accuracy of the new system with ideal synthetic drop profiles can reach 0.01% with both the Young-Laplace and polynomial fitting methods. Conducting experiments to measure both static and dynamic (advancing and receding) contact angles with the developed instrument, we found that the smartphone-based instrument can provide measurement results as accurate and practical as those of traditional commercial instruments. The successful demonstration of the use of a smartphone (mobile phone) to conduct contact angle measurement is a significant advancement in the field, as it breaks the dominant mold of using a computer and a bench-bound setup for such systems, in place since their appearance in the 1980s.

  7. SEOM's Sentinel-3/OLCI' project CAWA: advanced GRASP aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Dubovik, Oleg; litvinov, Pavel; Huang, Xin; Aspetsberger, Michael; Fuertes, David; Brockmann, Carsten; Fischer, Jürgen; Bojkov, Bojan

    2016-04-01

The CAWA "Advanced Clouds, Aerosols and WAter vapour products for Sentinel-3/OLCI" ESA-SEOM project aims at the development of advanced atmospheric retrieval algorithms for the Sentinel-3/OLCI mission, and is prepared using Envisat/MERIS and Aqua/MODIS datasets. This presentation discusses mainly the CAWA aerosol product developments and results. The CAWA aerosol retrieval uses the recently developed GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm described by Dubovik et al. (2014). GRASP derives an extended set of atmospheric parameters using the multi-pixel concept - a simultaneous fitting of a large group of pixels under additional a priori constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. Over land, GRASP simultaneously retrieves the properties of both aerosol and the underlying surface, even over bright surfaces. GRASP doesn't use traditional look-up tables and performs the retrieval as a search in a continuous space of solutions. All radiative transfer calculations are performed as part of the retrieval. The results of comprehensive sensitivity tests, as well as results obtained from real Envisat/MERIS data, will be presented. The tests analyze various aspects of aerosol and surface reflectance retrieval accuracy. In addition, the possibilities of retrieval improvement by means of implementing synergetic inversion of a combination of OLCI data with observations by SLSTR are explored. Both the results of numerical tests and the results of processing several years of Envisat/MERIS data demonstrate reliable retrieval of AOD (Aerosol Optical Depth) and surface BRDF. Observed retrieval issues and advancements will be discussed. For example, for some situations we illustrate the possibility of retrieving aerosol absorption - a property that is hardly accessible from satellite observations without multi-angular and polarimetric capabilities.

  8. Nonlinear Aerodynamic Modeling From Flight Data Using Advanced Piloted Maneuvers and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2012-01-01

    Results of the Aeronautics Research Mission Directorate Seedling Project Phase I research project entitled "Nonlinear Aerodynamics Modeling using Fuzzy Logic" are presented. Efficient and rapid flight test capabilities were developed for estimating highly nonlinear models of airplane aerodynamics over a large flight envelope. Results showed that the flight maneuvers developed, used in conjunction with the fuzzy-logic system identification algorithms, produced very good model fits of the data, with no model structure inputs required, for flight conditions ranging from cruise to departure and spin conditions.

  9. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, James C., E-mail: jross@bwh.harvard.edu; Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215; Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and amore » novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particles data to discriminate between fissure and nonfissure candidates. The resulting set of particle points are used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. 
Conclusions: The proposed algorithm is effective for lung lobe segmentation in the absence of auxiliary structures such as vessels and airways. The most challenging cases are those with mostly incomplete, absent, or near-absent fissures and cases with poorly revealed fissures due to high image noise. However, the authors observe good performance even in the majority of these cases.
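    The PCA-based shape discrimination step can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' implementation: it builds a PCA shape model from synthetic boundary profiles and scores a candidate by its reconstruction error under the leading modes, which is one simple way to separate model-consistent candidates from outliers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training shapes": each row is a flattened boundary-surface
# height profile sampled on a fixed grid (an invented stand-in for
# lobe-boundary training data).
n_shapes, n_points = 20, 50
base = np.sin(np.linspace(0, np.pi, n_points))
shapes = base + 0.1 * rng.standard_normal((n_shapes, n_points))

# PCA via SVD of the mean-centered data matrix.
mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)

def project(shape, n_modes):
    """Project a shape onto the leading PCA modes and reconstruct it."""
    modes = Vt[:n_modes]                   # principal shape variations
    coeffs = modes @ (shape - mean_shape)  # modal coordinates
    return mean_shape + modes.T @ coeffs

# A candidate that the shape model explains well has a small
# reconstruction error under a few leading modes; that error can be
# thresholded to discriminate fissure from non-fissure candidates.
candidate = shapes[0]
err = np.linalg.norm(candidate - project(candidate, 5))
```

    The reconstruction error decreases monotonically as more modes are kept; with all modes retained, a training shape is reproduced exactly.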

  10. Genetic algorithm dynamics on a rugged landscape

    NASA Astrophysics Data System (ADS)

    Bornholdt, Stefan

    1998-04-01

    The genetic algorithm is an optimization procedure motivated by biological evolution and is successfully applied to optimization problems in different areas. A statistical mechanics model for its dynamics is proposed based on the parent-child fitness correlation of the genetic operators, making it applicable to general fitness landscapes. It is compared to a recent model based on a maximum entropy ansatz. Finally it is applied to modeling the dynamics of a genetic algorithm on the rugged fitness landscape of the NK model.
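    A minimal genetic algorithm on a rugged, NK-style fitness landscape can be sketched as follows. The neighbor-coupled random fitness table, population size, and operator rates below are illustrative choices, not parameters from the paper.

```python
import random

random.seed(1)

N = 16  # genome length (bit string)

# A rugged landscape (illustrative stand-in for the NK model): each bit's
# contribution depends on itself and its right neighbor via a random table.
table = {(i, a, b): random.random()
         for i in range(N) for a in (0, 1) for b in (0, 1)}

def fitness(g):
    return sum(table[(i, g[i], g[(i + 1) % N])] for i in range(N)) / N

def mutate(g, rate=1.0 / N):
    return [b ^ (random.random() < rate) for b in g]

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(40)]
start_best = max(fitness(g) for g in pop)
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                      # truncation selection (elitist)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = max(fitness(g) for g in pop)
```

    Because the top half of the population survives unchanged each generation, the best fitness found never decreases.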

  11. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1997-01-01

    A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on iteratively minimizing a function that majorizes the WLS loss function. (Author/SLD)
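    The iteration can be sketched for the special case of a linear model. The surrogate-data update below follows the standard majorization argument (each OLS step on adjusted data cannot increase the WLS loss); the model, weights, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear model y ~ X b with observation weights w: a simple setting in
# which the majorization idea can be demonstrated.
n, p = 60, 3
X = rng.standard_normal((n, p))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.1 * rng.standard_normal(n)
w = rng.uniform(0.2, 1.0, n)

# Direct WLS solution for reference: minimize sum_i w_i (y_i - x_i b)^2.
b_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Majorization: repeatedly form surrogate targets z and solve an
# ORDINARY least-squares problem; each step cannot increase the WLS loss.
wmax = w.max()
b = np.zeros(p)
for _ in range(500):
    yhat = X @ b
    z = yhat + (w / wmax) * (y - yhat)   # surrogate targets
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
```

    The fixed point of the iteration satisfies the WLS normal equations, so the OLS iterates converge to the direct WLS solution.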

  12. 3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.

    PubMed

    Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef

    2016-11-01

    In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and relies on a much smaller number of data points, which are in addition biased toward the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
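    The core of spherical-cap fitting can be written as a linear least-squares problem. The sketch below uses invented geometry (it is not the 3D-SCFP pipeline): it fits a sphere to synthetic cap points and derives the contact angle at the substrate plane z = 0.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic AFM-like data: points on a spherical cap of radius R whose
# center sits at height zc below the substrate plane z = 0.
R, zc = 2.0, -1.0                      # true contact angle: cos(theta) = -zc/R
phi = rng.uniform(0, 2 * np.pi, 400)
mu = rng.uniform(-zc / R, 1.0, 400)    # cos(polar angle), cap above z = 0
x = R * np.sqrt(1 - mu**2) * np.cos(phi)
y = R * np.sqrt(1 - mu**2) * np.sin(phi)
z = zc + R * mu

# Algebraic sphere fit: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d is LINEAR
# in (a, b, c, d), so a single lstsq call recovers center and radius.
A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
rhs = x**2 + y**2 + z**2
(a, b, c), d = np.linalg.lstsq(A, rhs, rcond=None)[0][:3], np.linalg.lstsq(A, rhs, rcond=None)[0][3]
R_fit = np.sqrt(d + a**2 + b**2 + c**2)
theta = np.degrees(np.arccos(-c / R_fit))   # contact angle at z = 0
```

    With zc = -R/2 the true contact angle is 60 degrees; a hemisphere (zc = 0) gives 90 degrees and a nearly full sphere approaches 180 degrees, matching the usual sessile-droplet convention.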

  13. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
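    The TPS surface-fitting step can be illustrated with a small self-contained interpolator. The kernel and affine augmentation below follow the textbook thin plate spline formulation; the synthetic "particle" positions and heights are invented stand-ins for fissure points.

```python
import numpy as np

rng = np.random.default_rng(4)

def tps_kernel(r):
    # Thin plate spline radial basis U(r) = r^2 log r (defined as 0 at r = 0).
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(pts, vals):
    """Solve for TPS weights interpolating vals at 2-D points pts."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.column_stack([np.ones(n), pts])          # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([vals, np.zeros(3)])
    return np.linalg.solve(A, b)

def tps_eval(pts, coef, q):
    d = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
    return tps_kernel(d) @ coef[:len(pts)] + np.column_stack(
        [np.ones(len(q)), q]) @ coef[len(pts):]

# "Fissure particle" positions (x, y) with boundary heights z.
pts = rng.uniform(0, 1, (30, 2))
vals = np.sin(3 * pts[:, 0]) + pts[:, 1] ** 2
coef = tps_fit(pts, vals)
z = tps_eval(pts, coef, pts)   # the surface passes through the particles
```

    Because TPS is an interpolant, the fitted surface reproduces the particle heights exactly; in the segmentation setting it then defines the lobe boundary between them.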

  14. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  15. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because it requires no image segmentation and its computational cost is low, image correlation matching is a basic method of target tracking. This paper studies a grey-scale image matching algorithm whose precision reaches the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. These fitting algorithms are too complex for real-time systems, yet target tracking often demands high real-time performance. We therefore put forward a paraboloidal fitting algorithm that is simple and easily realized in real-time systems. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To investigate the influence of target rotation on matching precision, a camera rotation experiment was carried out. The CMOS detector of the camera was fixed to an arc pendulum table, and pictures were taken at different rotation angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm described above. The results show that the matching error grows approximately linearly with the target rotation angle. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by mean and median filters, and image matching was then performed. The results show that when the noise level is low, mean and median filtering achieve good results; but when the salt-and-pepper noise density exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result will be wrong.
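    The SAD search with a separable parabolic sub-pixel refinement can be sketched as follows. The image, template size, and interior-minimum handling are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(5)

image = rng.uniform(0, 255, (64, 64))
template = image[20:36, 24:40]          # 16x16 patch, true offset (20, 24)

# Exhaustive SAD search over all template placements.
H, W = image.shape
h, w = template.shape
sad = np.empty((H - h + 1, W - w + 1))
for i in range(sad.shape[0]):
    for j in range(sad.shape[1]):
        sad[i, j] = np.abs(image[i:i + h, j:j + w] - template).sum()

i0, j0 = np.unravel_index(np.argmin(sad), sad.shape)

def subpixel(c, l, r):
    """Vertex offset of the parabola through (-1, l), (0, c), (1, r)."""
    denom = l - 2 * c + r
    return 0.0 if denom == 0 else 0.5 * (l - r) / denom

# Separable paraboloid fit around the integer minimum (interior case only).
di = subpixel(sad[i0, j0], sad[i0 - 1, j0], sad[i0 + 1, j0]) \
    if 0 < i0 < sad.shape[0] - 1 else 0.0
dj = subpixel(sad[i0, j0], sad[i0, j0 - 1], sad[i0, j0 + 1]) \
    if 0 < j0 < sad.shape[1] - 1 else 0.0
match = (i0 + di, j0 + dj)
```

    Fitting one parabola per axis keeps the refinement cheap enough for real-time use; the sub-pixel correction is always bounded by half a pixel.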

  16. Genetic algorithm for nuclear data evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer Ann

    These are slides on genetic algorithm for nuclear data evaluation. The following is covered: initial population, fitness (outer loop), calculate fitness, selection (first part of inner loop), reproduction (second part of inner loop), solution, and examples.

  17. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
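    One fitting interval of such a scheme can be sketched with NumPy's Chebyshev utilities; the block length, polynomial degree, and signal below are invented for illustration, not the flight parameters.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# One "fitting interval": a block of 256 time-series samples, mapped to
# the canonical Chebyshev domain [-1, 1].
t = np.linspace(-1, 1, 256)
block = np.sin(4 * t) + 0.3 * np.cos(9 * t)

# Keep only 20 Chebyshev coefficients instead of 256 samples (~13x smaller).
coeffs = C.chebfit(t, block, deg=19)
reconstructed = C.chebval(t, coeffs)
max_err = np.abs(block - reconstructed).max()
```

    The decompressor only needs the coefficients and the block length; evaluating the series reproduces the signal with a nearly uniform (min-max style) error across the interval.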

  18. Spinoff 2011

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics include: Bioreactors Drive Advances in Tissue Engineering; Tooling Techniques Enhance Medical Imaging; Ventilator Technologies Sustain Critically Injured Patients; Protein Innovations Advance Drug Treatments, Skin Care; Mass Analyzers Facilitate Research on Addiction; Frameworks Coordinate Scientific Data Management; Cameras Improve Navigation for Pilots, Drivers; Integrated Design Tools Reduce Risk, Cost; Advisory Systems Save Time, Fuel for Airlines; Modeling Programs Increase Aircraft Design Safety; Fly-by-Wire Systems Enable Safer, More Efficient Flight; Modified Fittings Enhance Industrial Safety; Simulation Tools Model Icing for Aircraft Design; Information Systems Coordinate Emergency Management; Imaging Systems Provide Maps for U.S. Soldiers; High-Pressure Systems Suppress Fires in Seconds; Alloy-Enhanced Fans Maintain Fresh Air in Tunnels; Control Algorithms Charge Batteries Faster; Software Programs Derive Measurements from Photographs; Retrofits Convert Gas Vehicles into Hybrids; NASA Missions Inspire Online Video Games; Monitors Track Vital Signs for Fitness and Safety; Thermal Components Boost Performance of HVAC Systems; World Wind Tools Reveal Environmental Change; Analyzers Measure Greenhouse Gasses, Airborne Pollutants; Remediation Technologies Eliminate Contaminants; Receivers Gather Data for Climate, Weather Prediction; Coating Processes Boost Performance of Solar Cells; Analyzers Provide Water Security in Space and on Earth; Catalyst Substrates Remove Contaminants, Produce Fuel; Rocket Engine Innovations Advance Clean Energy; Technologies Render Views of Earth for Virtual Navigation; Content Platforms Meet Data Storage, Retrieval Needs; Tools Ensure Reliability of Critical Software; Electronic Handbooks Simplify Process Management; Software Innovations Speed Scientific Computing; Controller Chips Preserve Microprocessor Function; Nanotube Production Devices Expand Research Capabilities; Custom Machines Advance Composite Manufacturing; 
Polyimide Foams Offer Superior Insulation; Beam Steering Devices Reduce Payload Weight; Models Support Energy-Saving Microwave Technologies; Materials Advance Chemical Propulsion Technology; and High-Temperature Coatings Offer Energy Savings.

  19. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1995-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.

  20. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.

  1. Statistical Models for Averaging of the Pump–Probe Traces: Example of Denoising in Terahertz Time-Domain Spectroscopy

    NASA Astrophysics Data System (ADS)

    Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem

    2018-05-01

    In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.

  2. Co-evolution for Problem Simplification

    NASA Technical Reports Server (NTRS)

    Haith, Gary L.; Lohn, Jason D.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    1999-01-01

    This paper explores a co-evolutionary approach applicable to difficult problems with limited failure/success performance feedback. Like familiar "predator-prey" frameworks, this algorithm evolves two populations of individuals: the solutions (predators) and the problems (prey). The approach extends previous work by rewarding only the problems that match their difficulty to the level of solution competence. In complex problem domains with limited feedback, this "tractability constraint" helps provide an adaptive fitness gradient that effectively differentiates the candidate solutions. The algorithm generates selective pressure toward the evolution of increasingly competent solutions by rewarding solution generality and uniqueness and problem tractability and difficulty. Relative (inverse-fitness) and absolute (static objective function) approaches to evaluating problem difficulty are explored and discussed. On a simple control task, this co-evolutionary algorithm was found to have significant advantages over a genetic algorithm with either a static fitness function or a fitness function that changes on a hand-tuned schedule.

  3. Practical training framework for fitting a function and its derivatives.

    PubMed

    Pukrittayakamee, Arjpolson; Hagan, Martin; Raff, Lionel; Bukkapatnam, Satish T S; Komanduri, Ranga

    2011-06-01

    This paper describes a practical framework for using multilayer feedforward neural networks to simultaneously fit both a function and its first derivatives. This framework involves two steps. The first step is to train the network to optimize a performance index, which includes both the error in fitting the function and the error in fitting the derivatives. The second step is to prune the network by removing neurons that cause overfitting and then to retrain it. This paper describes two novel types of overfitting that are only observed when simultaneously fitting both a function and its first derivatives. A new pruning algorithm is proposed to eliminate these types of overfitting. Experimental results show that the pruning algorithm successfully eliminates the overfitting and produces the smoothest responses and the best generalization among all the training algorithms that we have tested.
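    The combined performance index can be illustrated for a tiny one-hidden-layer tanh network with an analytic input derivative. The weighting rho, network size, and target function below are invented, and no training loop is shown; this is only the index from the first step of the framework.

```python
import numpy as np

rng = np.random.default_rng(7)

# A tiny 1-hidden-layer tanh network y(x) = w2 . tanh(w1 x + b1) + b2,
# together with its analytic input derivative dy/dx.
w1 = rng.standard_normal(8)
b1 = rng.standard_normal(8)
w2 = rng.standard_normal(8)
b2 = 0.0

def net(x):
    h = np.tanh(np.outer(x, w1) + b1)
    return h @ w2 + b2

def net_deriv(x):
    h = np.tanh(np.outer(x, w1) + b1)
    return (1 - h**2) @ (w1 * w2)      # chain rule through tanh

# Combined performance index: function error plus weighted derivative
# error, the quantity the first training step would minimize.
x = np.linspace(-1, 1, 50)
target = np.sin(np.pi * x)
target_d = np.pi * np.cos(np.pi * x)
rho = 0.5
index = (np.mean((net(x) - target) ** 2)
         + rho * np.mean((net_deriv(x) - target_d) ** 2))
```

    Having the derivative in closed form is what makes the combined index cheap to evaluate; it can be checked against finite differences of the network output.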

  4. Flexible Space-Filling Designs for Complex System Simulations

    DTIC Science & Technology

    2013-06-01

    interior of the experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with minimal correlations. Keywords: Computer Experiments, Design of Experiments, Genetic Algorithm, Latin Hypercube, Response Surface Methodology, Nearly Orthogonal.

  5. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  6. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  7. Decomposition of mineral absorption bands using nonlinear least squares curve fitting: Application to Martian meteorites and CRISM data

    NASA Astrophysics Data System (ADS)

    Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.

    2011-04-01

    This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
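    A bare-bones Levenberg-Marquardt fit of a sum of Gaussians, in the spirit of (though far simpler than) the modeling described here, can be sketched as follows. The band parameters, damping schedule, and noiseless synthetic spectrum are all assumptions for illustration.

```python
import numpy as np

def gaussians(x, p):
    p = p.reshape(-1, 3)                    # rows: amplitude, center, width
    return sum(A * np.exp(-(x - m) ** 2 / (2 * s ** 2)) for A, m, s in p)

def jacobian(x, p):
    cols = []
    for A, m, s in p.reshape(-1, 3):
        e = np.exp(-(x - m) ** 2 / (2 * s ** 2))
        cols += [e, A * e * (x - m) / s ** 2, A * e * (x - m) ** 2 / s ** 3]
    return np.column_stack(cols)

def levenberg_marquardt(x, y, p, iters=200, lam=1e-3):
    for _ in range(iters):
        r = y - gaussians(x, p)
        J = jacobian(x, p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p_new = p + step
        if np.sum((y - gaussians(x, p_new)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.7       # accept: toward Gauss-Newton
        else:
            lam *= 2.0                      # reject: toward gradient descent
    return p

# Synthetic continuum-removed spectrum with two overlapping bands
# (wavelength axis in microns, purely illustrative).
x = np.linspace(0.8, 2.6, 300)
true = np.array([0.4, 1.05, 0.12, 0.25, 2.0, 0.2])
y = gaussians(x, true)

p0 = np.array([0.3, 1.1, 0.15, 0.2, 1.9, 0.25])  # rough initial guesses
fit = levenberg_marquardt(x, y, p0)
```

    The adaptive damping interpolates between Gauss-Newton (small lambda) and gradient descent (large lambda); as the abstract notes, good initial guesses for band centers matter for convergence.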

  8. The performance of the SEPT9 gene methylation assay and a comparison with other CRC screening tests: A meta-analysis.

    PubMed

    Song, Lele; Jia, Jia; Peng, Xiumei; Xiao, Wenhua; Li, Yuemin

    2017-06-08

    The SEPT9 gene methylation assay is the first FDA-approved blood assay for colorectal cancer (CRC) screening. The fecal immunochemical test (FIT), the FIT-DNA test and the CEA assay are also in vitro diagnostic (IVD) tests used in CRC screening. This meta-analysis aims to review the SEPT9 assay performance and compare it with other IVD CRC screening tests. By searching the Ovid MEDLINE, EMBASE, CBMdisc and CJFD databases, 25 out of 180 studies were identified that report the SEPT9 assay performance. 2613 CRC cases and 6030 controls were included, and sensitivity and specificity were used to evaluate its performance under various algorithms. The 1/3 algorithm exhibited the best sensitivity, while the 2/3 and 1/1 algorithms exhibited the best balance between sensitivity and specificity. The performance of the blood SEPT9 assay is superior to that of the serum protein markers and the FIT test in the symptomatic population, while it appeared to be less potent than the FIT and FIT-DNA tests in the asymptomatic population. In conclusion, the 1/3 algorithm is recommended for CRC screening, and the 2/3 or 1/1 algorithms are suitable for early detection for diagnostic purposes. The SEPT9 assay exhibited better performance in the symptomatic population than in the asymptomatic population.

  9. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1994-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.

  10. Analysis of convergence of an evolutionary algorithm with self-adaptation using a stochastic Lyapunov function.

    PubMed

    Semenov, Mikhail A; Terkel, Dmitri A

    2003-01-01

    This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within the martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte-Carlo simulations.

  11. Using Ant Colony Optimization for Routing in VLSI Chips

    NASA Astrophysics Data System (ADS)

    Arora, Tamanna; Moses, Melanie

    2009-04-01

    Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high performance and high density VLSI layouts is that of routing wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length, and minimizing total capacitance induced in the chip, both of which serve to minimize power consumed by the chip. Ant Colony Optimization algorithms (ACO) provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decision and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by ants as heuristics which guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with total wire length 5.5% less than other established routing algorithms.
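    A toy ACO route search over a small weighted graph can be sketched as follows; the graph, evaporation rate, and pheromone update rule are illustrative choices rather than the authors' parameterization, and edge cost stands in for wire length.

```python
import random

random.seed(8)

# Toy routing instance: directed graph with edge costs (wire lengths).
graph = {
    'S': {'A': 2.0, 'B': 1.0},
    'A': {'C': 1.0, 'T': 4.0},
    'B': {'A': 2.0, 'C': 3.0},
    'C': {'T': 1.0},
    'T': {},
}

pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk():
    """One ant's stochastic walk from source 'S' to target 'T'."""
    path, node = ['S'], 'S'
    while node != 'T':
        nxt = list(graph[node])
        if not nxt:
            return None, float('inf')
        # Transition preference: pheromone times heuristic (1 / edge cost).
        weights = [pheromone[(node, v)] / graph[node][v] for v in nxt]
        node = random.choices(nxt, weights)[0]
        if node in path:
            return None, float('inf')       # discard looping tours
        path.append(node)
    cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
    return path, cost

best_path, best_cost = None, float('inf')
for _ in range(100):                        # iterations
    tours = [walk() for _ in range(20)]     # 20 ants per iteration
    for key in pheromone:
        pheromone[key] *= 0.9               # evaporation
    for path, cost in tours:
        if path is None:
            continue
        for edge in zip(path, path[1:]):
            pheromone[edge] += 1.0 / cost   # reinforce short tours
        if cost < best_cost:
            best_path, best_cost = path, cost
```

    Evaporation keeps early (possibly poor) choices from dominating, while the 1/cost deposit concentrates pheromone on the cheapest route; here the optimum S-A-C-T has cost 4.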

  12. Towards an optimal treatment algorithm for metastatic pancreatic ductal adenocarcinoma (PDA)

    PubMed Central

    Uccello, M.; Moschetta, M.; Mak, G.; Alam, T.; Henriquez, C. Murias; Arkenau, H.-T.

    2018-01-01

    Chemotherapy remains the mainstay of treatment for advanced pancreatic ductal adenocarcinoma (pda). Two randomized trials have demonstrated superiority of the combination regimens folfirinox (5-fluorouracil, leucovorin, oxaliplatin, and irinotecan) and gemcitabine plus nab-paclitaxel over gemcitabine monotherapy as a first-line treatment in adequately fit subjects. Selected pda patients progressing on first-line therapy can receive second-line treatment with moderate clinical benefit. Nevertheless, the optimal algorithm and the role of combination therapy in second-line are still unclear. Published second-line pda clinical trials enrolled patients progressing on gemcitabine-based therapies in use before the approval of nab-paclitaxel and folfirinox. The evolving scenario in second-line may affect the choice of the first-line treatment. For example, nanoliposomal irinotecan plus 5-fluorouracil and leucovorin is a novel second-line option which will be suitable only for patients progressing on gemcitabine-based therapy. Therefore, clinical judgement and appropriate patient selection remain key elements in treatment decision. In this review, we aim to illustrate currently available options and define a possible algorithm to guide treatment choice. Future clinical trials taking into account sequential treatment as a new paradigm in pda will help define a standard algorithm. PMID:29507500

  13. State estimation with incomplete nonlinear constraint

    NASA Astrophysics Data System (ADS)

    Huang, Yuan; Wang, Xueying; An, Wei

    2017-10-01

    A problem of state estimation with a new type of constraint, termed an incomplete nonlinear constraint, is considered. Targets often move along curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the positions of sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory; the fitting problem is transformed into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, the nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.
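    Estimating the trajectory radius from noisy track positions reduces, in the circular special case, to an algebraic fit that is linear in its parameters. The Kasa-style sketch below uses invented road geometry and noise levels; the full ellipse case adds more linear terms but follows the same pattern.

```python
import numpy as np

rng = np.random.default_rng(9)

# Noisy target positions along a curved road segment (an arc of a circle
# with invented center and radius).
cx, cy, R = 50.0, -20.0, 100.0
ang = rng.uniform(0.0, 2.0, 80)
x = cx + R * np.cos(ang) + 0.5 * rng.standard_normal(80)
y = cy + R * np.sin(ang) + 0.5 * rng.standard_normal(80)

# Kasa algebraic fit: x^2 + y^2 = 2 a x + 2 b y + c is LINEAR in (a, b, c),
# so the center (a, b) and radius follow from one least-squares solve.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
rhs = x**2 + y**2
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
R_est = np.sqrt(c + a**2 + b**2)
```

    The recovered circle can then serve as the approximate nonlinear constraint that the tracking filter projects the state estimate onto.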

  14. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision: approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period); the cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
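    For context, the conventional three-parameter model mentioned above fits S(TI) = A - B exp(-TI/T1*) and applies the standard Look-Locker correction T1 = T1* (B/A - 1). The sketch below illustrates that baseline fit (not the paper's IG fitting); the inversion times and tissue values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def molli_model(ti, a, b, t1_star):
    # Conventional three-parameter model: S(TI) = A - B * exp(-TI / T1*)
    return a - b * np.exp(-ti / t1_star)

# Simulated signed inversion-recovery samples for T1* = 800 ms, A = 1.0,
# B = 1.9 (illustrative values only; times in milliseconds).
ti = np.array([100., 180., 260., 1100., 1180., 1260., 2100., 2180., 3100., 4100.])
signal = molli_model(ti, 1.0, 1.9, 800.0)

(a, b, t1_star), _ = curve_fit(molli_model, ti, signal, p0=(1.0, 2.0, 1000.0))
t1 = t1_star * (b / a - 1.0)   # standard Look-Locker correction
```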

  15. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    PubMed

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.

  16. A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.

    PubMed

    Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie

    2018-06-04

    Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with, or outperform, existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may prevent genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of the evolved rules. Moreover, the evolved rules are also significantly smaller and contain more relevant attributes.

  17. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    PubMed Central

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of the spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and can be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
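    The core of a function-fitting quantification approach can be sketched as below: fit a single axis-aligned 2D Gaussian to a synthetic spot and compute the volume analytically from the fitted parameters. This is a simplified stand-in for the paper's compound fitting algorithm, and the image and parameters are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    # Axis-aligned 2D Gaussian spot model, returned flattened for curve_fit.
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) +
                       (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

# Synthetic 64x64 "gel" patch with one spot plus mild pixel noise.
yy, xx = np.mgrid[0:64, 0:64]
rng = np.random.default_rng(1)
image = gauss2d((xx, yy), 200.0, 30.0, 25.0, 4.0, 6.0, 10.0).reshape(64, 64)
image = image + rng.normal(0.0, 1.0, image.shape)

p0 = (150.0, 32.0, 32.0, 5.0, 5.0, 0.0)
(amp, x0, y0, sx, sy, offset), _ = curve_fit(gauss2d, (xx, yy), image.ravel(), p0=p0)
# The spot "volume" follows analytically from the fitted parameters:
volume = 2.0 * np.pi * amp * abs(sx) * abs(sy)
```

    Computing the volume from the fitted function, rather than summing pixels in an assigned area, is what makes such methods robust to overlapping spots.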

  18. Extensions and applications of ensemble-of-trees methods in machine learning

    NASA Astrophysics Data System (ADS)

    Bleich, Justin

    Ensemble-of-trees algorithms have emerged at the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) rely on an underlying Bayesian probability model to generate the fits. These new probability-model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques, with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART, as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First, we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of violence during probation hearings in court systems.

  19. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  20. Fast decision algorithms in low-power embedded processors for quality-of-service based connectivity of mobile sensors in heterogeneous wireless sensor networks.

    PubMed

    Jaraíz-Simón, María D; Gómez-Pulido, Juan A; Vega-Rodríguez, Miguel A; Sánchez-Pérez, Juan M

    2012-01-01

    When a mobile wireless sensor moves across heterogeneous wireless sensor networks, it may be under the coverage of more than one network at a time. In these situations, the vertical handoff process can take place, in which the mobile sensor decides to switch its connection to the best network among those available according to their quality-of-service characteristics. A fitness function is used for the handoff decision, and it is desirable to minimize it. This is an optimization problem which consists of the adjustment of a set of weights for the quality of service. Solving this problem efficiently is relevant to heterogeneous wireless sensor networks in many advanced applications. Numerous works in the literature deal with the vertical handoff decision, although they all suffer from the same shortfall: their efficiencies are not directly comparable. Therefore, the aim of this work is twofold: first, to develop a fast decision algorithm that explores the entire space of possible combinations of weights, searching for the one that minimizes the fitness function; and second, to design and implement a system-on-chip architecture based on reconfigurable hardware and embedded processors that achieves several goals necessary for competitive mobile terminals: good performance, low power consumption, low economic cost, and small area integration.
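    The weighted-sum decision described above can be sketched as follows: score each candidate network by a weighted sum of its quality-of-service attributes and exhaustively search a coarse grid of weight vectors for the one minimizing the fitness. The attribute values, the four-attribute layout, and the grid resolution are all invented for the example, not taken from the paper.

```python
import itertools
import numpy as np

# Hypothetical normalized QoS attributes (lower is better) for three candidate
# networks; columns: [cost, delay, jitter, packet loss].
networks = np.array([
    [0.8, 0.2, 0.3, 0.1],
    [0.3, 0.6, 0.4, 0.2],
    [0.5, 0.4, 0.2, 0.6],
])

def fitness(weights, qos):
    # Weighted-sum handoff fitness of one network; smaller is better.
    return float(np.dot(weights, qos))

# Exhaustive search over a coarse grid of weight vectors summing to 1.
levels = np.round(np.arange(0.0, 1.01, 0.1), 1)
best_weights, best_network, best_value = None, None, np.inf
for w in itertools.product(levels, repeat=4):
    if abs(sum(w) - 1.0) > 1e-9:
        continue
    scores = [fitness(np.array(w), q) for q in networks]
    k = int(np.argmin(scores))
    if scores[k] < best_value:
        best_weights, best_network, best_value = w, k, scores[k]
```

    A hardware implementation can parallelize this exhaustive sweep, which is what makes it feasible on an embedded system-on-chip.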

  1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always exhibits variations in its profile due to random load changes or environmental effects, such as device switching, that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm, allowing it to model not only frequency-domain responses but all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.

  2. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always exhibits variations in its profile due to random load changes or environmental effects, such as device switching, that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm, allowing it to model not only frequency-domain responses but all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation. PMID:29351554

  3. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noise sources, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. First, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using windows whose step lengths are evenly spaced on a logarithmic scale. The data polluted by electromagnetic noise are identified within each window based on an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted values, effectively removing the non-stationary electromagnetic noise. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
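    The replace-with-fit step described above can be sketched in a minimal form: fit a decaying exponential y = a e^{bt} to the clean portion of a transient by linear least squares on log(y), then substitute the fitted values inside a window polluted by a noise burst. The curve, window position and burst amplitude are invented for the example.

```python
import numpy as np

def fit_exponential(t, y):
    """Fit y = a * exp(b * t) by linear least squares on log(y) (requires y > 0)."""
    b, log_a = np.polyfit(t, np.log(y), 1)
    return np.exp(log_a), b

# Synthetic decay curve with one window polluted by a noise burst.
t = np.linspace(0.0, 1.0, 200)
clean = 5.0 * np.exp(-3.0 * t)
noisy = clean.copy()
noisy[120:140] += 2.0                      # simulated non-stationary disturbance

# Fit the attenuation outside the polluted window, then replace the polluted
# samples with the fitted values (one step of a window-by-window clean-up).
mask = np.ones(t.size, dtype=bool)
mask[120:140] = False
a, b = fit_exponential(t[mask], noisy[mask])
repaired = noisy.copy()
repaired[~mask] = a * np.exp(b * t[~mask])
```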

  4. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results in this task. Based on grid search, however, the MKL-SVM algorithm requires a long optimization time in the course of parameter optimization, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to realize rapid global optimization of parameters. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the results of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertia weight. In addition, a better nonlinear inertia weight is identified and verified. PMID:29853983

  5. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results in this task. Based on grid search, however, the MKL-SVM algorithm requires a long optimization time in the course of parameter optimization, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to realize rapid global optimization of parameters. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the results of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertia weight. In addition, a better nonlinear inertia weight is identified and verified.
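    The inertia-weight mechanism compared in the abstract can be sketched with a minimal global-best PSO whose inertia weight decreases linearly over the run, here minimizing a sphere function as a stand-in for the SVM cross-validation fitness. The bounds, coefficients and schedule are illustrative defaults, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=150, w_max=0.9, w_min=0.4,
        c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters      # linear inertia schedule
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# Sphere function as a stand-in for the cross-validation fitness of the SVM.
best_x, best_val = pso(lambda p: float(np.sum(p * p)), dim=3)
```

    A nonlinear schedule only changes the line computing w; everything else in the loop stays the same, which is why inertia-weight variants are cheap to compare.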

  6. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms, or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase-shift error is corrected and a general ellipse form is derived. The background-intensity error and the residual correction error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms subject to environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
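    The Gram-Schmidt half of such a scheme can be sketched for the two-frame case: remove the DC term, orthonormalize the second fringe pattern against the first, and take the pointwise arctangent. The ellipse-fitting correction step of the paper is omitted, and the fringe signals below are simulated for illustration only.

```python
import numpy as np

def gs_phase(i1, i2):
    """Phase from two fringe patterns via Gram-Schmidt orthonormalization
    (the ellipse-fitting correction step of the paper is omitted here)."""
    u1 = i1 - i1.mean()
    u2 = i2 - i2.mean()
    u2 = u2 - (np.dot(u2, u1) / np.dot(u1, u1)) * u1   # orthogonalize
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(-u2, u1)

# Two simulated 1D interferograms with an (unknown) phase shift of 1.0 rad.
x = np.linspace(0.0, 20.0 * np.pi, 4000)
phi = x + np.sin(x / 5.0)                 # ground-truth phase
i1 = 100.0 + 50.0 * np.cos(phi)
i2 = 100.0 + 50.0 * np.cos(phi + 1.0)
phase_est = gs_phase(i1, i2)
```

    The recovered phase is wrapped to (-pi, pi] and, for a positive phase shift, matches the true phase up to this wrapping.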

  7. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper presents an approach to finding optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were applied through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study was carried out across the algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
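    An ITAE-style fitness function of the kind tuned above can be sketched as follows: simulate a unit-step response of a simple plant under PID control and accumulate the time-weighted absolute error. The first-order plant and the gain values are illustrative, not the paper's benchmark models.

```python
import numpy as np

def itae_fitness(kp, ki, kd, t_end=10.0, dt=0.01):
    """ITAE (integral of time-weighted absolute error) for a unit step on an
    illustrative first-order plant G(s) = 1/(5s + 1) under PID control,
    simulated with forward-Euler integration."""
    n = int(t_end / dt)
    y, integ, prev_e, itae = 0.0, 0.0, 1.0, 0.0
    for k in range(n):
        t = k * dt
        e = 1.0 - y                        # unit step reference
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / 5.0           # plant: 5*dy/dt = -y + u
        prev_e = e
        itae += t * abs(e) * dt
    return itae

# A reasonably tuned controller should score better (lower) than a detuned one.
good = itae_fitness(8.0, 2.0, 1.0)
bad = itae_fitness(0.5, 0.05, 0.0)
```

    A metaheuristic such as PSO, GA or SA then simply searches (kp, ki, kd) space for the minimum of this function.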

  8. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases, and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior to the gradient following methods in this particular application. This is likely to be the case in many other complex systems, as are often found in neuroscience.

  9. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.

  10. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
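    The core of iterative proportional fitting can be sketched on a small table: alternately rescale the rows and columns of a positive probability table until its marginals match the imposed targets. The table size and marginal values are invented for the example; the paper applies the same idea to much larger multivariate tables via sparse matrices.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=50):
    """Iterative proportional fitting: alternately rescale a positive table so
    its row and column sums match the target marginal probabilities."""
    p = seed.astype(float).copy()
    for _ in range(iters):
        p *= (row_targets / p.sum(axis=1))[:, None]    # enforce row marginals
        p *= (col_targets / p.sum(axis=0))[None, :]    # enforce column marginals
    return p

# Initial bivariate facies probability estimate and the drill-hole marginals
# it must honour (values invented for the example).
seed = np.array([[0.25, 0.25],
                 [0.25, 0.25]])
rows = np.array([0.6, 0.4])
cols = np.array([0.7, 0.3])
fitted = ipf(seed, rows, cols)
```

    Starting from a uniform seed, the fitted table is exactly the product of the marginals; a non-uniform seed preserves the seed's interaction structure while honouring the marginals.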

  11. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    NASA Astrophysics Data System (ADS)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), have demonstrated good performance, although not without the drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) to the peak detection problem encountered in MRS quantification using the Voigt line shape model has already been reported in the literature by the authors. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function, which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experiments on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.

  12. Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz

    NASA Astrophysics Data System (ADS)

    Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao

    2018-05-01

    In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent-circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives better fitting accuracy than with the preliminary parameters obtained without optimization. In particular, the fitting accuracy of the Q value achieves a significant improvement after the optimization.

  13. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm.

    PubMed

    Cui, Lizhi; Poon, Josiah; Poon, Simon K; Chen, Hao; Gao, Junbin; Kwan, Paul; Fan, Kei; Ling, Zhihao

    2014-01-01

    The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which can be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds directly from the 3D chromatogram is therefore needed. In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) is proposed to transform the separation problem into a multi-parameter optimization issue, in which it is not necessary to know the number of compounds. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) is proposed, where multiple areas of candidate solutions are constructed according to the fitness and the distances among the chromosomes. Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can successfully separate a 3D chromatogram into chromatographic peaks and spectra even when they overlap severely. The experiments further show that our method is effective on a real HPLC-DAD data set. Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective.

  14. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caillet, V; Colvill, E; Royal North Shore Hospital, St Leonards, Sydney

    2016-06-15

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf-fitting approach, respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high- and low-modulation VMAT plans for lung and prostate cancer cases were used, along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Varian Trilogy linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between the ideal and fitted MLC. To compare the algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant difference between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison between the two leaf-fitting algorithms demonstrated no significant differences in tracking errors, neither in a clinically realistic environment nor in silico. The similarities between the two independent algorithms give confidence in the use of either algorithm for clinical implementation.

  15. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

This study seeks a numerical algorithm that optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K, to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
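The autoregressive idea the study favors can be sketched in a few lines: a damped sinusoid satisfies an order-2 linear-prediction recurrence whose coefficients encode the frequency and damping. This is a minimal illustration under an assumed signal model, not the paper's algorithm:

```python
import math

def ar2_fit(y):
    # Least-squares fit of y[n] = c1*y[n-1] + c2*y[n-2] (order-2 linear prediction),
    # solving the 2x2 normal equations in closed form.
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n in range(2, len(y)):
        x1, x2, t = y[n-1], y[n-2], y[n]
        s11 += x1*x1; s12 += x1*x2; s22 += x2*x2
        b1 += x1*t;  b2 += x2*t
    det = s11*s22 - s12*s12
    c1 = (b1*s22 - b2*s12) / det
    c2 = (b2*s11 - b1*s12) / det
    # For a damped sinusoid A*exp(-d*n)*cos(w*n):
    #   c1 = 2*exp(-d)*cos(w)  and  c2 = -exp(-2*d)
    d = -0.5 * math.log(-c2)
    w = math.acos(c1 / (2.0 * math.exp(-d)))
    return w, d

# Synthetic damped sinusoid with known frequency and damping (illustrative values).
w_true, d_true = 0.3, 0.01
y = [math.exp(-d_true*n) * math.cos(w_true*n) for n in range(200)]
w_est, d_est = ar2_fit(y)
print(round(w_est, 4), round(d_est, 4))
```

On noiseless data the recurrence is exact, so the least-squares fit recovers the frequency and damping to machine precision; with noise, the precision of this estimate is what the five methods in the study compete on.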

  16. EM in high-dimensional spaces.

    PubMed

    Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim

    2005-06-01

    This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
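A minimal one-dimensional sketch of the EM procedure for a two-component Gaussian mixture may help fix ideas; the paper's contribution is the high-dimensional (N - 1)-dimensional variant, while this illustration, with assumed synthetic data and initialization, only shows the basic E- and M-steps being iterated:

```python
import math, random

def em_gmm_1d(x, iters=200):
    # Two-component 1D Gaussian mixture fitted by Expectation-Maximization.
    mu = [min(x), max(x)]          # crude but effective initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        r = []
        for xi in x:
            p = [pi[k] / math.sqrt(2*math.pi*var[k])
                 * math.exp(-(xi - mu[k])**2 / (2*var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            r.append([p[0]/s, p[1]/s])
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / len(x)
            mu[k] = sum(ri[k]*xi for ri, xi in zip(r, x)) / nk
            var[k] = sum(ri[k]*(xi - mu[k])**2 for ri, xi in zip(r, x)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
x = [random.gauss(-2, 0.5) for _ in range(300)] + [random.gauss(3, 0.8) for _ in range(300)]
mu, var, pi = em_gmm_1d(x)
print(sorted(round(m, 1) for m in mu))
```

The issue the paper addresses arises when the covariance estimate in the M-step becomes singular because there are fewer samples than dimensions; in 1D, the small variance floor plays that stabilizing role.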

  17. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
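The quadratic-expansion idea behind NLINEAR can be sketched as a Gauss-Newton iteration: linearize the model about the current parameters, solve the resulting simultaneous linear equations for a step, and repeat from meaningful initial estimates. A hedged sketch for a two-parameter exponential model (not the NLINEAR code; the model, data, and initial values are assumed):

```python
import math

def gauss_newton_exp(t, y, a, b, iters=50):
    # Fit y ~ a*exp(b*t) by iterating the linearized (quadratic chi-square) step:
    # solve (J^T J) delta = J^T r for the parameter update, where r holds the
    # residuals and J the Jacobian of the model.
    for _ in range(iters):
        s11 = s12 = s22 = g1 = g2 = 0.0
        for ti, yi in zip(t, y):
            e = math.exp(b*ti)
            j1, j2 = e, a*ti*e          # partial derivatives wrt a and b
            r = yi - a*e
            s11 += j1*j1; s12 += j1*j2; s22 += j2*j2
            g1 += j1*r;  g2 += j2*r
        det = s11*s22 - s12*s12
        a += (g1*s22 - g2*s12) / det
        b += (g2*s11 - g1*s12) / det
    return a, b

t = [i*0.1 for i in range(20)]
y = [2.5*math.exp(-1.3*ti) for ti in t]       # noiseless synthetic data
a, b = gauss_newton_exp(t, y, a=2.0, b=-1.0)  # meaningful initial estimates
print(round(a, 3), round(b, 3))
```

As the abstract notes, the iteration converges only from reasonable starting values; a poor initial guess can send the linearized step in the wrong direction.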

  18. Registration of free-hand OCT daughter endoscopy to 3D organ reconstruction

    PubMed Central

    Lurie, Kristen L.; Angst, Roland; Seibel, Eric J.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2016-01-01

    Despite the trend to pair white light endoscopy with secondary image modalities for in vivo characterization of suspicious lesions, challenges remain to co-register such data. We present an algorithm to co-register two different optical imaging modalities as a mother-daughter endoscopy pair. Using white light cystoscopy (mother) and optical coherence tomography (OCT) (daughter) as an example, we developed the first forward-viewing OCT endoscope that fits in the working channel of flexible cystoscopes and demonstrated our algorithm’s performance with optical phantom and clinical imaging data. The ability to register multimodal data opens opportunities for advanced analysis in cancer imaging applications. PMID:28018720

  19. Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses

    ERIC Educational Resources Information Center

    Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu

    2011-01-01

    Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…

  20. Application Mail Tracking Using RSA Algorithm As Security Data and HOT-Fit a Model for Evaluation System

    NASA Astrophysics Data System (ADS)

    Permadi, Ginanjar Setyo; Adi, Kusworo; Gernowo, Rahmad

    2018-02-01

The RSA algorithm provides security in the sending of messages or data by using two keys, a private key and a public key. In this research, to verify directly that the system meets its goals, the comprehensive HOT-Fit evaluation method is used. The purpose of this research is to build a mail-sending information system that applies the RSA security algorithm, and to evaluate it using the HOT-Fit method, producing a system suited to the physics faculty. The security of the RSA algorithm lies in the difficulty of factoring large numbers into their prime factors; the prime factorization must be carried out to obtain the private key. HOT-Fit has three assessment aspects: the technology aspect, judged from system status, system quality, and service quality; the human aspect, judged from system use and user satisfaction; and the organization aspect, judged from structure and environment. The result is a message-tracking system for sending mail, assessed according to the evaluation obtained.
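The two-key scheme described above can be illustrated with textbook-sized numbers (a toy sketch only; the primes, exponent, and message are illustrative, and practical RSA uses very large primes plus padding):

```python
# Toy RSA key generation, encryption, and decryption with small primes.
p, q = 61, 53
n = p * q                      # modulus, part of both keys
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 65
cipher = pow(message, e, n)    # encrypt with the public key (e, n)
plain = pow(cipher, d, n)      # decrypt with the private key (d, n)
print(cipher, plain)
```

Recovering d without knowing p and q requires factoring n, which is exactly the hard problem the abstract points to.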

  1. FitSearch: a robust way to interpret a yeast fitness profile in terms of drug's mode-of-action.

    PubMed

    Lee, Minho; Han, Sangjo; Chang, Hyeshik; Kwak, Youn-Sig; Weller, David M; Kim, Dongsup

    2013-01-01

Yeast deletion-mutant collections have been successfully used to infer the mode-of-action of drugs, especially by profiling chemical-genetic and genetic-genetic interactions on a genome-wide scale. Although tens of thousands of those profiles are publicly available, the lack of an accurate method for mining such data has been a major bottleneck to more widespread use of these useful resources. For general usage of those public resources, we designed FitRankDB as a general repository of fitness profiles, and developed a new search algorithm, FitSearch, for identifying the profiles that have a high similarity score with statistical significance for a given fitness profile. We demonstrated that our new repository and algorithm are highly beneficial to researchers attempting to form hypotheses about the unknown modes-of-action of bioactive compounds, regardless of the types of experiments that have been performed with yeast deletion-mutant collections on various measurement platforms, especially non-chip-based platforms. We showed that our new database and algorithm are useful when attempting to construct a hypothesis regarding the unknown function of a bioactive compound through small-scale experiments with a yeast deletion collection in a platform-independent manner. FitRankDB and FitSearch enhance the ease of searching public yeast fitness profiles and of obtaining insights into unknown mechanisms of action of drugs. FitSearch is freely available at http://fitsearch.kaist.ac.kr.

  2. FitSearch: a robust way to interpret a yeast fitness profile in terms of drug's mode-of-action

    PubMed Central

    2013-01-01

Background Yeast deletion-mutant collections have been successfully used to infer the mode-of-action of drugs, especially by profiling chemical-genetic and genetic-genetic interactions on a genome-wide scale. Although tens of thousands of those profiles are publicly available, the lack of an accurate method for mining such data has been a major bottleneck to more widespread use of these useful resources. Results For general usage of those public resources, we designed FitRankDB as a general repository of fitness profiles, and developed a new search algorithm, FitSearch, for identifying the profiles that have a high similarity score with statistical significance for a given fitness profile. We demonstrated that our new repository and algorithm are highly beneficial to researchers attempting to form hypotheses about the unknown modes-of-action of bioactive compounds, regardless of the types of experiments that have been performed with yeast deletion-mutant collections on various measurement platforms, especially non-chip-based platforms. Conclusions We showed that our new database and algorithm are useful when attempting to construct a hypothesis regarding the unknown function of a bioactive compound through small-scale experiments with a yeast deletion collection in a platform-independent manner. FitRankDB and FitSearch enhance the ease of searching public yeast fitness profiles and of obtaining insights into unknown mechanisms of action of drugs. FitSearch is freely available at http://fitsearch.kaist.ac.kr. PMID:23368702

  3. Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting

    NASA Astrophysics Data System (ADS)

    Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen

    2017-06-01

A two-dimensional wavefront reconstruction method based on double-shearing and least squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, corresponding to the two shears and the two orthogonal directions, can be calculated from the differential phase, which solves the problem of the missing spectrum; the two-dimensional wavefront reconstruction can then be done using the least squares method. Numerical simulations of the proposed algorithm are carried out to verify the feasibility of this method. The influence of noise arising from different shear amounts and different intensities on the accuracy of the reconstruction is studied and compared with the results from the algorithm based on single-shearing and least squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on double-shearing and least squares fitting.
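The least-squares step can be illustrated in one dimension: stack the phase-difference equations for two shear amounts into one linear system and solve it, recovering the wavefront up to an unobservable constant. This is a sketch under assumed shear amounts and a synthetic wavefront, not the authors' 2D implementation:

```python
import numpy as np

# 1D sketch: recover a phase profile (up to a constant) from its differences
# at two shear amounts, by least squares.
N = 32
x = np.arange(N)
phi = 0.02*x**2 - 0.3*x           # synthetic "true" wavefront (1D slice)

rows, rhs = [], []
for s in (1, 2):                  # two assumed shear amounts
    for i in range(N - s):
        row = np.zeros(N)
        row[i + s], row[i] = 1.0, -1.0   # phi[i+s] - phi[i] = measured difference
        rows.append(row)
        rhs.append(phi[i + s] - phi[i])
A, b = np.array(rows), np.array(rhs)
est, *_ = np.linalg.lstsq(A, b, rcond=None)

est -= est[0] - phi[0]            # remove the unobservable constant offset
print(np.allclose(est, phi, atol=1e-8))
```

Combining two shears fills in spatial frequencies that a single shear leaves unconstrained, which is the "missing spectrum" problem the abstract refers to.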

4. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using Powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method, the slit method, and so on. Among them, the knife-edge method is quite efficient and easy to use, and is recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. So in this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function model is the most popular ESF fitting model, yet it is vulnerable to the initial values of the parameters. Considering its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters accurately and adaptively, with insensitivity to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and edge tilt angle conditions. Experimental results using images of the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.

  5. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. In addition, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
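Sigma scaling, in its common "sigma truncation" form, is a one-line transformation of raw fitness that adapts selection pressure to the population's spread; the paper's exact variant may differ. A sketch with illustrative fitness values:

```python
import statistics

def sigma_scale(fitness, c=2.0):
    # Sigma truncation: shift raw fitness by (mean - c*stdev) and floor at 0,
    # so a single dominant individual cannot immediately take over selection.
    mean = statistics.fmean(fitness)
    sd = statistics.pstdev(fitness)
    if sd == 0:
        return [1.0] * len(fitness)      # uniform population: equal weights
    return [max(f - (mean - c*sd), 0.0) for f in fitness]

raw = [10.0, 10.5, 11.0, 30.0]           # one dominant individual
print([round(v, 2) for v in sigma_scale(raw)])
```

After scaling, the dominant individual's selection advantage is proportional to its distance from the mean in standard deviations rather than to its raw fitness ratio, which is what guards against premature convergence.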

  6. Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Douglas, E-mail: douglas.moore@utsouthwestern.edu; Sawant, Amit; Ruan, Dan

    2016-01-15

Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average, as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al. approach, while performing comparably to that of Ruan and Keall. Conclusions: This work improves upon the quality of the Sawant et al. approach, but does so without sacrificing run-time performance. In addition, using this framework allows for complex leaf-fitting strategies that can be used to account for the PTV/OAR trade-off during real-time MLC tracking.
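The underdose/overdose trade-off at the heart of the leaf-fitting problem can be sketched for a single leaf pair: choose quantized leaf positions that minimize a weighted sum of target length lost (underdose) and opening outside the target (overdose). This is an illustrative toy by exhaustive search, not the paper's optimization framework; the step size and weights are assumed:

```python
import math

def fit_leaf_pair(a, b, step=0.5, w_under=1.0, w_over=1.0):
    # Choose quantized left/right leaf positions (L, R) for a desired opening
    # [a, b], minimizing w_under*(target length not covered)
    #                  + w_over*(opening outside the target).
    candidates = [i*step for i in range(int(math.floor(a/step)) - 2,
                                        int(math.ceil(b/step)) + 3)]
    best = None
    for L in candidates:
        for R in candidates:
            if R < L:
                continue
            covered = max(0.0, min(b, R) - max(a, L))
            underdose = (b - a) - covered
            overdose = (R - L) - covered
            cost = w_under*underdose + w_over*overdose
            if best is None or cost < best[0]:
                best = (cost, L, R)
    return best[1], best[2]

print(fit_leaf_pair(1.3, 4.2))                            # equal weights
print(fit_leaf_pair(1.3, 4.2, w_under=5.0, w_over=1.0))   # favor target coverage
```

Raising the underdose weight widens the fitted opening to protect PTV coverage at the cost of extra exposure, which mirrors the PTV/OAR trade-off the framework formalizes.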

  7. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.

  8. Enhancing the performance of MOEAs: an experimental presentation of a new fitness guided mutation operator

    NASA Astrophysics Data System (ADS)

    Liagkouras, K.; Metaxiotis, K.

    2017-01-01

    Multi-objective evolutionary algorithms (MOEAs) are currently a dynamic field of research that has attracted considerable attention. Mutation operators have been utilized by MOEAs as variation mechanisms. In particular, polynomial mutation (PLM) is one of the most popular variation mechanisms and has been utilized by many well-known MOEAs. In this paper, we revisit the PLM operator and we propose a fitness-guided version of the PLM. Experimental results obtained by non-dominated sorting genetic algorithm II and strength Pareto evolutionary algorithm 2 show that the proposed fitness-guided mutation operator outperforms the classical PLM operator, based on different performance metrics that evaluate both the proximity of the solutions to the Pareto front and their dispersion on it.
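For reference, the classical (non-fitness-guided) polynomial mutation operator that the paper revisits can be written compactly; the distribution index and bounds below are assumed for illustration:

```python
import random

def polynomial_mutation(x, lo, hi, eta=20.0, rng=random):
    # Standard polynomial mutation (PLM): perturb the parent value x by a
    # draw from a polynomial distribution centred on x; larger eta gives
    # smaller typical perturbations.
    u = rng.random()
    if u < 0.5:
        delta = (2.0*u) ** (1.0/(eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0*(1.0 - u)) ** (1.0/(eta + 1.0))
    child = x + delta * (hi - lo)
    return min(max(child, lo), hi)   # clip to the variable bounds

random.seed(1)
samples = [polynomial_mutation(0.5, 0.0, 1.0) for _ in range(10000)]
mean = sum(samples) / len(samples)
print(abs(mean - 0.5) < 0.02, min(samples) >= 0.0, max(samples) <= 1.0)
```

The proposed fitness-guided variant modifies how this perturbation is directed; the sketch above only shows the baseline operator it is compared against.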

  9. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    NASA Astrophysics Data System (ADS)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
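A minimal global-best PSO of the kind adapted here can be sketched as follows; this is an illustration with standard inertia/acceleration constants and a sphere test function, not the modified AUTOFIT code or its spectroscopic fitness function:

```python
import random

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    # Global-best Particle Swarm Optimization: each particle is pulled toward
    # its own best-seen position and the swarm's best-seen position.
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0]*dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49        # common constriction-style constants
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w*vel[i][d]
                             + c1*rng.random()*(pbest[i][d] - pos[i][d])
                             + c2*rng.random()*(gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda p: sum(x*x for x in p)   # one of the standard test functions
best, val = pso(sphere, dim=3)
print(val < 1e-3)
```

The heuristic nature noted in the abstract shows up here: PSO only seeks an approximate minimum, which is acceptable for spectral assignment where an exact match is unattainable without distortion constants.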

  10. Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello

    2004-01-01

    This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, the design problem is subject to several goals or operating constraints, thus, designing a suitable fitness function catching all requirements becomes an issue. Such a problem is amenable for multi-objective optimization, however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint handling technique. Several design problems targeting a FPTA show the potential of our approach.

  11. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies of the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.

  12. VDLLA: A virtual daddy-long legs optimization

    NASA Astrophysics Data System (ADS)

    Yaakub, Abdul Razak; Ghathwan, Khalil I.

    2016-08-01

Swarm intelligence is a strong optimization paradigm based on the biological behavior of insects or animals. The success of any optimization algorithm depends on the balance between exploration and exploitation. In this paper, we present a new swarm intelligence algorithm based on the virtual behavior of the daddy-long-legs spider (VDLLA). In VDLLA, each agent (spider) has nine positions, which represent the legs of the spider, and each position represents one solution. The proposed VDLLA is tested on four standard functions using average fitness, median fitness, and standard deviation. The results of the proposed VDLLA are compared against Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Bat-Inspired Algorithm (BA). Additionally, a t-test is conducted to show the significance of the differences between the proposed algorithm and the others. VDLLA showed very promising results on benchmark test functions for unconstrained optimization problems and significantly improved on the original swarm algorithms.

  13. Roadmap of Advanced GNC and Image Processing Algorithms for Fully Autonomous MSR-Like Rendezvous Missions

    NASA Astrophysics Data System (ADS)

    Strippoli, L. S.; Gonzalez-Arjona, D. G.

    2018-04-01

GMV has worked extensively on activities aimed at developing, validating, and verifying up to TRL-6 advanced GNC and IP algorithms for Mars Sample Return rendezvous, working under different ESA contracts on the development of advanced algorithms for the VBN sensor.

  14. Inference of gene regulatory networks from genome-wide knockout fitness data

    PubMed Central

    Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.

    2013-01-01

Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with the results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify the underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only the structural, but also the dynamical and non-linear nature of the biomolecular systems involved. A state-space model with a non-linear basis is used to dynamically describe gene regulatory networks. The network structure is then elucidated by estimating the unknown model parameters. An unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data and empirical measurements of the GAL network in the yeast Saccharomyces cerevisiae and the TyrR-LiuR network in the bacterium Shewanella oneidensis.
Availability: MATLAB code and datasets are available for download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/. Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23271269

  15. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    PubMed

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
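One way such a fitting-free anisotropy measure can be built is from the second-moment matrix of the FT magnitude; this is an illustrative definition on synthetic images, and the paper's exact R may be computed differently:

```python
import numpy as np

def alignment_anisotropy(img):
    # Treat the centred FFT magnitude as a 2D weight distribution and compare
    # the eigenvalues of its second-moment matrix: the result is ~0 for an
    # isotropic spectrum (random fibers) and approaches 1 for a strongly
    # oriented one (aligned fibers). No model fitting is involved.
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = F.shape
    y, x = np.mgrid[0:h, 0:w]
    y = y - h/2.0
    x = x - w/2.0
    m = F.sum()
    cxx = (F*x*x).sum()/m
    cyy = (F*y*y).sum()/m
    cxy = (F*x*y).sum()/m
    evals = np.linalg.eigvalsh(np.array([[cxx, cxy], [cxy, cyy]]))
    return (evals[1] - evals[0]) / (evals[1] + evals[0])

yy, xx = np.mgrid[0:64, 0:64]
stripes = np.sin(2*np.pi*xx/8.0)                    # perfectly aligned "fibers"
noise = np.random.default_rng(0).random((64, 64))   # randomized texture
print(alignment_anisotropy(stripes) > alignment_anisotropy(noise))
```

Because the computation is a fixed sequence of sums and a 2x2 eigenproblem, its cost is constant per image, in contrast with iterative Gaussian fitting that can fail to converge.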

  16. Assessment of the improvements in accuracy of aerosol characterization resulted from additions of polarimetric measurements to intensity-only observations using GRASP algorithm (Invited)

    NASA Astrophysics Data System (ADS)

    Dubovik, O.; Litvinov, P.; Lapyonok, T.; Herman, M.; Fedorenko, A.; Lopatin, A.; Goloub, P.; Ducos, F.; Aspetsberger, M.; Planer, W.; Federspiel, C.

    2013-12-01

During the last few years we have been developing the GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm, designed for the enhanced characterization of aerosol properties from spectral, multi-angular polarimetric remote sensing observations. The concept of GRASP essentially relies on the accumulated positive research heritage from previous remote sensing aerosol retrieval developments, in particular those from the AERONET and POLDER retrieval activities. The details of the algorithm are described by Dubovik et al. (Atmos. Meas. Tech., 4, 975-1018, 2011). GRASP retrieves properties of both aerosol and land surface reflectance in cloud-free environments. It is based on highly advanced statistically optimized fitting and deduces nearly 50 unknowns for each observed site. The algorithm derives a similar set of aerosol parameters as AERONET, including the detailed particle size distribution, the spectrally dependent complex index of refraction, and the fraction of non-spherical particles. The algorithm uses detailed aerosol and surface models and fully accounts for all multiple interactions of scattered solar light with aerosol, gases, and the underlying surface. All calculations are done on-line without using traditional look-up tables. In addition, the algorithm uses a new multi-pixel retrieval concept: the simultaneous fitting of a large group of pixels with additional constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. This principle is expected to result in higher consistency and accuracy of aerosol products compared to conventional approaches, especially over bright surfaces where the information content of satellite observations with respect to aerosol properties is limited. GRASP is a highly versatile algorithm that allows input from both satellite and ground-based measurements. It also has essential flexibility in measurement processing.
For example, if observation data set includes spectral measurements of both total intensity and polarization, the algorithm can be easily set to use either total intensity or polarization, as well as both of them in the same retrieval. Using this feature of the algorithm design we have studied the relative importance of total intensity and polarization measurements for retrieving different parameters of aerosol. In this presentation, we present the quantitative assessment of the improvements in aerosol retrievals associated with additions of polarimetric measurements to the intensity-only observations. The study has been performed using satellite measurements by POLDER/PARASOL polarimeter and ground-based measurements by new generation AERONET sun/sky-radiometers implementing measurements of polarization at each spectral channel.

  17. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
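The analytical-solution approach for linear (first-order) schemes can be sketched via eigendecomposition of the rate matrix; the A <-> B -> C scheme and rate constants below are illustrative, not VisKin code:

```python
import numpy as np

# Analytic solution of a linear kinetic scheme A <-> B -> C: concentrations
# evolve as c(t) = V diag(exp(lambda_i t)) V^{-1} c(0), where (lambda_i, V)
# come from the eigendecomposition of the rate matrix K.
k1, km1, k2 = 2.0, 1.0, 0.5               # assumed first-order rate constants
K = np.array([[-k1,  km1,       0.0],
              [ k1, -(km1+k2),  0.0],
              [0.0,  k2,        0.0]])

evals, V = np.linalg.eig(K)
c0 = np.array([1.0, 0.0, 0.0])            # all material starts as species A
coeff = np.linalg.solve(V, c0)            # expansion of c0 in eigenvectors

def conc(t):
    # Sum of exponentials: each eigenvalue contributes one observable rate.
    return (V * np.exp(evals*t)) @ coeff

c = conc(10.0)
print(round(float(c.real.sum()), 3), bool(c.real[2] > 0.8))
```

Each column of K sums to zero, so total concentration is conserved; the nonzero eigenvalues are the first-order rates and amplitudes that such software compares against experimental data.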

  18. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed on real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle-fitting algorithm, the ellipse-fitting algorithm, and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A large number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influence of these three factors on the accuracy of the three algorithms is systematically investigated. The results reveal that the three algorithms are complementary: the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small and medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, the ADSA-P algorithm introduces significant errors for small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through extensive computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it retains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
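
As a concrete illustration of the circle-fitting branch, the following sketch fits a circular arc to synthetic drop-profile points with the Kasa algebraic method and reads the contact angle off the fitted circle at an assumed baseline y = 0. Real drop-shape software adds noise handling plus the ellipse and ADSA-P branches.

```python
import math

def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

def fit_circle(pts):
    """Kasa fit: minimize the sum of (x^2 + y^2 + D*x + E*y + F)^2."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    D, E, F = solve3(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

def contact_angle_deg(cy, r):
    """Contact angle at the baseline y = 0 from the fitted circle geometry."""
    return math.degrees(math.acos(-cy / r))

# Synthetic arc: center (0, 0.5), radius 1 -> expected contact angle 120 deg.
pts = [(math.cos(t), 0.5 + math.sin(t)) for t in [i * 0.05 for i in range(-10, 63)]]
cx, cy, r = fit_circle(pts)
theta = contact_angle_deg(cy, r)
print(round(theta, 1))  # 120.0
```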

  19. Turbulence profiling for adaptive optics tomographic reconstructors

    NASA Astrophysics Data System (ADS)

    Laidlaw, Douglas J.; Osborn, James; Wilson, Richard W.; Morris, Timothy J.; Butterley, Timothy; Reeves, Andrew P.; Townson, Matthew J.; Gendron, Éric; Vidal, Fabrice; Morel, Carine

    2016-07-01

    To approach optimal performance, advanced Adaptive Optics (AO) systems deployed on ground-based telescopes must have accurate knowledge of atmospheric turbulence as a function of altitude. Stereo-SCIDAR is a high-resolution stereoscopic instrument dedicated to this measurement. Here, its profiles are directly compared to internal AO telemetry atmospheric profiling techniques for CANARY (Vidal et al. 2014), a Multi-Object AO (MOAO) pathfinder on the William Herschel Telescope (WHT), La Palma. In total, twenty datasets are analysed across July and October of 2014. Levenberg-Marquardt fitting algorithms dubbed Direct Fitting and Learn 2 Step (L2S; Martin 2014) are used to recover profile information via covariance matrices, attaining average Pearson product-moment correlation coefficients with stereo-SCIDAR of 0.2 and 0.74, respectively. By excluding the measure of covariance between orthogonal Wavefront Sensor (WFS) slopes, these results have revised values of 0.65 and 0.2. A data analysis technique that combines L2S and SLODAR is subsequently introduced that achieves a correlation coefficient of 0.76.

  20. Comparing alkaline and thermal disintegration characteristics for mechanically dewatered sludge.

    PubMed

    Tunçal, Tolga

    2011-10-01

    Thermal drying is one of the advanced technologies ultimately providing an alternative method of sludge disposal. In this study, the drying kinetics of mechanically dewatered sludge (MDS) after alkaline and thermal disintegration were studied. In addition, the effects of total organic carbon (TOC) on specific resistance to filtration and sludge bound water content were investigated on freshly collected sludge samples. The combined effect of pH and TOC on the thermal drying rate of MDS was modelled using the two-factorial experimental design method. Statistical assessment of the results indicated that the sludge drying potential increased exponentially with both temperature and lime dosage. Curve fitting also showed that the drying profiles of raw and alkaline-disintegrated sludge were well fitted by the Henderson and Pabis model. The activation energy of MDS decreased from 28.716 to 11.390 kJ mol(-1) after disintegration. Consequently, the unit power requirement for thermal drying decreased remarkably, from 706 to 281 W g(-1) H2O.
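
The Henderson and Pabis thin-layer model mentioned in the abstract, MR(t) = a*exp(-k*t), can be fitted by linear least squares after taking logarithms, ln MR = ln a - k*t. The drying curve below is synthetic, with hypothetical a and k, purely to illustrate the fit.

```python
import math

def fit_henderson_pabis(times, mr):
    """Fit MR = a*exp(-k*t) via ordinary least squares on ln(MR)."""
    n = len(times)
    ys = [math.log(m) for m in mr]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    a = math.exp(ybar - slope * tbar)   # intercept gives ln(a)
    k = -slope                          # slope gives -k
    return a, k

# Synthetic moisture-ratio data with a = 1.02 and k = 0.15 per minute.
times = [0, 10, 20, 30, 40, 60, 90]
mr = [1.02 * math.exp(-0.15 * t) for t in times]
a, k = fit_henderson_pabis(times, mr)
print(round(a, 3), round(k, 3))  # 1.02 0.15
```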

  1. Fitting membrane resistance along with action potential shape in cardiac myocytes improves convergence: application of a multi-objective parallel genetic algorithm.

    PubMed

    Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J

    2014-01-01

    Fitting parameter sets of non-linear equations in cardiac single cell ionic models to reproduce experimental behavior is a time consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the single cell case, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fit parameters was also greatly reduced, with many parameters showing an order of magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserves tissue level behavior, and should be incorporated.
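
The extra-objective idea can be sketched as a combined cost function. The waveforms, Rm values, and weight below are synthetic illustrations, not the authors' model or data; they only show how an Rm penalty discriminates between parameter sets that match the AP equally well.

```python
def ap_mse(ap_model, ap_target):
    """Mean square error between simulated and target action potentials."""
    return sum((m - t) ** 2 for m, t in zip(ap_model, ap_target)) / len(ap_target)

def combined_objective(ap_model, ap_target, rm_model, rm_target, w_rm=1.0):
    """AP-morphology error plus a membrane-resistance penalty at a few voltages."""
    rm_err = sum((m - t) ** 2 for m, t in zip(rm_model, rm_target)) / len(rm_target)
    return ap_mse(ap_model, ap_target) + w_rm * rm_err

# Two candidate parameter sets reproduce the target AP exactly, but only one
# also reproduces Rm; the combined objective penalizes the other.
target_ap = [0.0, 1.0, 0.8, 0.4, 0.1]
target_rm = [10.0, 25.0]
good = combined_objective(target_ap, target_ap, [10.1, 24.8], target_rm)
ap_only = combined_objective(target_ap, target_ap, [20.0, 5.0], target_rm)
print(good < ap_only)  # True
```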

  2. Dynamics in the Fitness-Income plane: Brazilian states vs World countries

    PubMed Central

    Operti, Felipe G.; Pugliese, Emanuele; Andrade, José S.; Pietronero, Luciano

    2018-01-01

    In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview about Brazil as a whole and the comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the ranking obtained through two other techniques, namely Endogenous Fitness and Economic Complexity Index. PMID:29874265
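
The Fitness algorithm of Tacchella et al. (2012), on which Exogenous Fitness builds, admits a compact sketch: Fitness and product Complexity are iterated jointly over a binary export matrix. The toy matrix below is illustrative; the Exogenous variant would instead couple subnational entities to the world product space.

```python
def fitness_complexity(M, n_iter=50):
    """Iterate the Fitness-Complexity map on binary matrix M[c][p]."""
    C, P = len(M), len(M[0])
    F = [1.0] * C                       # entity Fitness
    Q = [1.0] * P                       # product Complexity
    for _ in range(n_iter):
        F_new = [sum(M[c][p] * Q[p] for p in range(P)) for c in range(C)]
        Q_new = [1.0 / sum(M[c][p] / F[c] for c in range(C)) for p in range(P)]
        # normalize both vectors to unit mean, as in the original algorithm
        fbar = sum(F_new) / C
        qbar = sum(Q_new) / P
        F = [f / fbar for f in F_new]
        Q = [q / qbar for q in Q_new]
    return F, Q

# Toy matrix: entity 0 exports all products, entity 1 only the ubiquitous one.
M = [[1, 1, 1],
     [1, 0, 0]]
F, Q = fitness_complexity(M)
print(F[0] > F[1])  # the diversified entity ranks fitter: True
```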

  3. Dynamics in the Fitness-Income plane: Brazilian states vs World countries.

    PubMed

    Operti, Felipe G; Pugliese, Emanuele; Andrade, José S; Pietronero, Luciano; Gabrielli, Andrea

    2018-01-01

    In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview about Brazil as a whole and the comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the ranking obtained through two other techniques, namely Endogenous Fitness and Economic Complexity Index.

  4. Focusing of light through turbid media by curve fitting optimization

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi

    2016-12-01

    The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
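
For a single modulator segment, feedback intensity as a function of the segment phase is a shifted cosine, which is why a fitted curve needs very few samples. The sketch below assumes the standard three-step phase-shifting relation (measurements at 0, 120, and 240 degrees), which is one way to realize a three-measurement fit and may differ in detail from the authors' CFA.

```python
import math

def optimal_phase(i0, i1, i2):
    """Recover phi0 of I(phi) = a + b*cos(phi - phi0) from intensities
    measured at phases 0, 2*pi/3 and 4*pi/3."""
    num = math.sqrt(3.0) * (i1 - i2)    # proportional to sin(phi0)
    den = 2.0 * i0 - i1 - i2            # proportional to cos(phi0)
    return math.atan2(num, den) % (2.0 * math.pi)

# Synthetic feedback signals for a segment whose optimal phase is 1.0 rad.
phis = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
meas = [2.0 + math.cos(p - 1.0) for p in phis]
print(round(optimal_phase(*meas), 6))  # 1.0
```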

  5. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most methods used for separating a 3D chromatogram need to know the number of compounds in advance, which can be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds directly from the 3D chromatogram is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue; it is not necessary to know the number of compounds for the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, in which multiple areas of candidate solutions are constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can successfully separate a 3D chromatogram into chromatogram peaks and spectra, even when they severely overlap. The experiments also show that our method is effective on real HPLC-DAD data. Conclusions Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective. PMID:25474487

  6. Cost-Effectiveness between Double and Single Fecal Immunochemical Test(s) in a Mass Colorectal Cancer Screening.

    PubMed

    Cai, Shan-Rong; Zhu, Hong-Hong; Huang, Yan-Qin; Li, Qi-Long; Ma, Xin-Yuan; Zhang, Su-Zhan; Zheng, Shu

    2016-01-01

    This study investigated the cost-effectiveness between double and single Fecal Immunochemical Test(s) (FIT) in a mass CRC screening. A two-stage sequential screening was conducted. FIT was used as the primary screening test and recommended twice, at an interval of one week, at the first screening stage. We defined the first-time FIT as FIT1 and the second-time FIT as FIT2. If either FIT1 or FIT2 was positive (+), then a colonoscopy was recommended at the second stage. Costs were recorded and analyzed. A total of 24,419 participants completed either FIT1 or FIT2. The detection rate of advanced neoplasm was 19.2% among both FIT1+ and FIT2+, and especially high among men aged ≥55 (27.4%). About 15.4% of CRC, 18.9% of advanced neoplasm, and 29.9% of adenoma missed by FIT1 were detected by FIT2 alone. The average cost was $2,935 for double FITs and $2,121 for FIT1 to detect each CRC, and $901 for double FITs and $680 for FIT1 to detect each advanced neoplasm. Overall, double FITs are more cost-effective than a single FIT, having significantly higher positive and detection rates at an acceptably higher cost. Double FITs should be encouraged for the first screening in a mass CRC screening, especially in economically and medically underserved populations/areas/countries.

  7. VizieR Online Data Catalog: Second ROSAT all-sky survey (2RXS) source catalog (Boller+, 2016)

    NASA Astrophysics Data System (ADS)

    Boller, T.; Freyberg, M. J.; Truemper, J.; Haberl, F.; Voges, W.; Nandra, K.

    2016-03-01

    We have re-analysed the photon event files from the ROSAT all-sky survey. The main goal was to create a catalogue of point-like sources, which is referred to as the 2RXS source catalogue. We improved the reliability of detections by an advanced detection algorithm and a complete screening process. New data products were created to allow timing and spectral analysis. Photon event files with corrected astrometry and Moon rejection (RASS-3.1 processing) were made available in FITS format. The 2RXS catalogue will serve as the basic X-ray all-sky survey catalogue until eROSITA data become available. (2 data files).

  8. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections, while maintaining the measurement resolution available from the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least square fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.

  9. Performance of a quantitative fecal immunochemical test for detecting advanced colorectal neoplasia: a prospective cohort study.

    PubMed

    Liles, Elizabeth G; Perrin, Nancy; Rosales, Ana G; Smith, David H; Feldstein, Adrianne C; Mosen, David M; Levin, Theodore R

    2018-05-02

    The fecal immunochemical test (FIT) is easier to use and more sensitive than the guaiac fecal occult blood test, but it is unclear how to optimize FIT performance. We compared the sensitivity and specificity for detecting advanced colorectal neoplasia between single-sample (1-FIT) and two-sample (2-FIT) FIT protocols at a range of hemoglobin concentration cutoffs for a positive test. We recruited 2,761 average-risk men and women ages 49-75 referred for colonoscopy within a large nonprofit, group-model health maintenance organization (HMO), and asked them to complete two separate single-sample FITs. We generated receiver-operating characteristic (ROC) curves to compare sensitivity and specificity estimates for 1-FIT and 2-FIT protocols among those who completed both FIT kits and colonoscopy. We similarly compared sensitivity and specificity between hemoglobin concentration cutoffs for a single-sample FIT. Differences in sensitivity and specificity between the 1-FIT and 2-FIT protocols were not statistically significant at any of the pre-specified hemoglobin concentration cutoffs (10, 15, 20, 25, and 30 μg/g). There was a significant difference in test performance of the one-sample FIT between 50 ng/ml (10 μg/g) and each of the higher pre-specified cutoffs. Disease prevalence was low. A two-sample FIT is not superior to a one-sample FIT in detection of advanced adenomas; the one-sample FIT at a hemoglobin concentration cutoff of 50 ng/ml (10 μg/g) is significantly more sensitive for advanced adenomas than at higher cutoffs. These findings apply to a population of younger, average-risk patients in a U.S. integrated care system with high rates of prior screening.

  10. Single element ultrasonic imaging of limb geometry: an in-vivo study with comparison to MRI

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Fincke, Jonathan R.; Anthony, Brian W.

    2016-04-01

    Despite advancements in medical imaging, current prosthetic fitting methods remain subjective, operator dependent, and non-repeatable. The standard plaster casting method relies on prosthetist experience and tactile feel of the limb to design the prosthetic socket. Often, many fitting iterations are required to achieve an acceptable fit. Improper socket fittings can lead to painful pathologies including neuromas, inflammation, soft tissue calcification, and pressure sores, often forcing the wearer into a wheelchair and reducing mobility and quality of life. Computer software along with MRI/CT imaging has already been explored to aid the socket design process. In this paper, we explore the use of ultrasound instead of MRI/CT to accurately obtain the underlying limb geometry to assist the prosthetic socket design process. Using a single element ultrasound system, multiple subjects' proximal limbs were imaged with 1, 2.25, and 5 MHz single element transducers. Each ultrasound transducer was calibrated to ensure acoustic exposure within the limits defined by the FDA. To validate image quality, each patient was also imaged in an MRI. Fiducial markers visible in both MRI and ultrasound were used to compare the same limb cross-sectional image for each patient. After applying a migration algorithm, B-mode ultrasound cross-sections showed sufficiently high image resolution to characterize the skin and bone boundaries along with the underlying tissue structures.

  11. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  12. On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending sequence by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population but probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized also to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
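
The rectangular strategy described above can be sketched directly: uniform probability over the k fittest individuals and zero for the rest, where k (the size of the elite group) is the strategy parameter the analysis optimizes.

```python
import random

def rectangular_select(population, fitness, k, rng=random):
    """Pick a crossover parent uniformly from the k best individuals;
    individuals ranked below k have selection probability zero."""
    ranked = sorted(population, key=fitness, reverse=True)
    return rng.choice(ranked[:k])

pop = [3, 9, 1, 7, 5]
parent = rectangular_select(pop, fitness=lambda x: x, k=2)
print(parent in (9, 7))  # only the two fittest can be chosen: True
```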

  13. Broadband spectral fitting of blazars using XSPEC

    NASA Astrophysics Data System (ADS)

    Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev

    2018-03-01

    The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has an added advantage of identifying the degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well sampled SED of blazar 3C 279 during its gamma ray flare in 2014.

  14. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    To optimize cloud computing task scheduling, an improved differential evolution algorithm for task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm reduces task execution time and saves user cost, achieving near-optimal scheduling of cloud computing tasks.
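
Differential evolution with a generation-dependent mutation factor can be sketched as follows. This is a minimal illustration assuming a DE/rand/1/bin variant, a linearly decreasing mutation factor, and a sphere test function; the paper's actual scheduling model, fitness function, and mutation schedule are not reproduced here.

```python
import random

def de_minimize(obj, dim, bounds, pop_size=20, gens=100, cr=0.9, seed=1):
    """DE/rand/1/bin with a dynamic (linearly decreasing) mutation factor."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for g in range(gens):
        f = 0.9 - 0.5 * g / gens          # mutation factor decays 0.9 -> 0.4
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jr = rng.randrange(dim)       # force at least one mutated gene
            trial = [a[d] + f * (b[d] - c[d])
                     if (rng.random() < cr or d == jr) else pop[i][d]
                     for d in range(dim)]
            if obj(trial) <= obj(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=obj)

best = de_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
print(sum(v * v for v in best))
```

The early, large mutation factor favors global exploration; the late, small one favors local refinement, which is the balance the abstract describes.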

  15. Cultural Artifact Detection in Long Wave Infrared Imagery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dylan Zachary; Craven, Julia M.; Ramon, Eric

    2017-01-01

    Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts, and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.
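
The second, background-model approach can be caricatured in one dimension: model the background radiance as Gaussian and flag pixels that do not fit it. The scene values below are synthetic, and real LWIR detectors use much richer spatial and spectral background models.

```python
import math

def background_anomalies(pixels, z_thresh=3.0):
    """Flag pixels whose z-score against the scene statistics exceeds a threshold."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    return [i for i, p in enumerate(pixels) if abs(p - mean) / std > z_thresh]

# Near-uniform background with one hot artifact at index 7.
scene = [10.0 + 0.1 * ((i * 7) % 5) for i in range(20)]
scene[7] = 25.0
print(background_anomalies(scene))  # [7]
```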

  16. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring the individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated; then, several carefully designed algorithm validation experiments are implemented and the corresponding data analysis results are shown. Finally, fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm, or <1%, algorithm accuracy is demonstrated.

  17. Extension of the MIRS computer package for the modeling of molecular spectra: From effective to full ab initio ro-vibrational Hamiltonians in irreducible tensor form

    NASA Astrophysics Data System (ADS)

    Nikitin, A. V.; Rey, M.; Champion, J. P.; Tyuterev, Vl. G.

    2012-07-01

    The MIRS software for the modeling of ro-vibrational spectra of polyatomic molecules was considerably extended and improved. The original version [Nikitin AV, Champion JP, Tyuterev VlG. The MIRS computer package for modeling the rovibrational spectra of polyatomic molecules. J Quant Spectrosc Radiat Transf 2003;82:239-49.] was especially designed for separate or simultaneous treatments of complex band systems of polyatomic molecules. It was set up in the frame of effective polyad models, using algorithms based on advanced group theory algebra to take full account of symmetry properties. It has been successfully used for predictions and data fitting (positions and intensities) of numerous spectra of symmetric and spherical top molecules within the vibration extrapolation scheme. The new version offers more advanced possibilities for spectrum calculation and modeling by removing several previous limitations, particularly on the size of polyads and the number of tensors involved. It allows dealing with overlapping polyads and includes more efficient and faster algorithms for the calculation of coefficients related to molecular symmetry properties (6C, 9C and 12C symbols for the C3v, Td, and Oh point groups), as well as for better convergence of least-square-fit iterations. The new version is not limited to polyad effective models: it also allows direct predictions using full ab initio ro-vibrational normal mode Hamiltonians converted into the irreducible tensor form. Illustrative examples on CH3D, CH4, CH3Cl, CH3F and PH3 are reported, reflecting the present status of available data. It is written in C++ for standard PC computers operating under Windows. The full package, including on-line documentation and recent data, is freely available at http://www.iao.ru/mirs/mirs.htm or http://xeon.univ-reims.fr/Mirs/ or http://icb.u-bourgogne.fr/OMR/SMA/SHTDS/MIRS.html and as supplementary data from the online version of the article.

  18. A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation.

    PubMed

    Tkach, Itshak; Jevtić, Aleksandar; Nof, Shimon Y; Edan, Yael

    2018-03-02

    Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks that are unknown a priori in their locations and priorities is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA) that is developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach to minimize the task detection times. Sensors are allocated to tasks based on sensors' performance, tasks' priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to a Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based algorithms, which were fitted for the specific task. Simulation analyses revealed that MDBA achieved statistically significant improved performance by 7% with respect to DBA as the second-best algorithm, and by 19% with respect to Greedy algorithm, which was the worst, thus indicating its fitness to provide solutions for heterogeneous multi-sensor systems.
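
The decentralized allocation rule described above (sensor performance, task priority, and distance combined into a stochastic choice) can be sketched as follows; the weight formula and the numbers are illustrative, not the published MDBA tuning.

```python
import random

def choose_task(performance, priorities, distances, rng=random):
    """Roulette-wheel choice: each task weighted by performance * priority / distance."""
    weights = [performance * pr / d for pr, d in zip(priorities, distances)]
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

# Two equidistant tasks; the higher-priority one attracts most allocations.
counts = [0, 0]
rng = random.Random(0)
for _ in range(10000):
    counts[choose_task(1.0, [5.0, 1.0], [2.0, 2.0], rng)] += 1
print(counts[0] > counts[1])  # True
```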

  19. A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation †

    PubMed Central

    Nof, Shimon Y.; Edan, Yael

    2018-01-01

    Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks that are unknown a priori in their locations and priorities is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA) that is developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach to minimize the task detection times. Sensors are allocated to tasks based on sensors’ performance, tasks’ priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to a Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based algorithms, which were fitted for the specific task. Simulation analyses revealed that MDBA achieved statistically significant improved performance by 7% with respect to DBA as the second-best algorithm, and by 19% with respect to Greedy algorithm, which was the worst, thus indicating its fitness to provide solutions for heterogeneous multi-sensor systems. PMID:29498683

  20. Accuracy in breast shape alignment with 3D surface fitting algorithms.

    PubMed

    Riboldi, Marco; Gierga, David P; Chen, George T Y; Baroni, Guido

    2009-04-01

    Surface imaging is in use in radiotherapy clinical practice for patient setup optimization and monitoring. Breast alignment is accomplished by searching for a tentative spatial correspondence between the reference and daily surface shape models. In this study, the authors quantify whole breast shape alignment by relying on texture features digitized on 3D surface models. Texture feature localization was validated through repeated measurements in a silicone breast phantom, mounted on a high precision mechanical stage. Clinical investigations on breast shape alignment included 133 fractions in 18 patients treated with accelerated partial breast irradiation. The breast shape was detected with a 3D video-based surface imaging system so that breathing motion was compensated. An in-house algorithm for breast alignment, based on surface fitting constrained by nipple matching (constrained surface fitting), was applied. Results were compared with commercial software in which no constraints are utilized (unconstrained surface fitting). Texture feature localization was validated within 2 mm in each anatomical direction. Clinical data show that unconstrained surface fitting achieves adequate accuracy in most cases, though nipple mismatch is considerably higher than residual surface distances (3.9 mm vs 0.6 mm on average). Outliers beyond 1 cm can occur as the result of a degenerate surface fit, where unconstrained surface fitting is not sufficient to establish spatial correspondence. In the constrained surface fitting algorithm, average surface mismatch within 1 mm was obtained when nipple position was forced to match in the [1.5; 5] mm range. In conclusion, optimal results can be obtained by trading off the desired overall surface congruence vs matching of selected landmarks (constraint). Constrained surface fitting is put forward as an improvement in setup accuracy for those applications where whole breast positional reproducibility is an issue.
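
    The trade-off between overall surface congruence and landmark matching can be illustrated with a minimal one-dimensional sketch (all names and the 1-D setting are hypothetical simplifications, not the paper's 3D method): a translation t minimises the sum of squared surface residuals plus a weighted landmark (nipple) term.

    ```python
    def constrained_shift(ref_pts, day_pts, ref_lm, day_lm, weight):
        """Closed-form minimiser of
        sum((day_i + t - ref_i)^2) + weight * (day_lm + t - ref_lm)^2.

        Setting the derivative with respect to t to zero gives
        t = [sum(ref_i - day_i) + weight * (ref_lm - day_lm)] / (n + weight).
        """
        n = len(ref_pts)
        num = sum(r - d for r, d in zip(ref_pts, day_pts)) + weight * (ref_lm - day_lm)
        return num / (n + weight)

    # surfaces agree after a shift of 1, but the landmark asks for a shift of 2;
    # with weight 2 the compromise lands at 1.5
    t = constrained_shift([1.0, 1.0], [0.0, 0.0], ref_lm=2.0, day_lm=0.0, weight=2.0)
    ```

    Increasing `weight` pulls the solution toward exact landmark matching, which is the constraint knob the abstract describes.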

  1. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it has not been clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effect of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
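
    Fitness sharing itself is a standard niching technique: each individual's raw fitness is divided by a niche count that grows with the number of nearby individuals, so crowded regions of the landscape are penalised. A minimal one-dimensional sketch with the usual triangular sharing function (parameter names are illustrative, not the paper's):

    ```python
    def shared_fitness(raw, positions, sigma):
        """Divide each raw fitness by its niche count.

        sh(d) = 1 - d/sigma for d < sigma, else 0; the niche count of
        individual i is the sum of sh over its distances to all individuals
        (including itself, contributing 1).
        """
        out = []
        for i, fi in enumerate(raw):
            niche = sum(max(0.0, 1.0 - abs(positions[i] - positions[j]) / sigma)
                        for j in range(len(raw)))
            out.append(fi / niche)
        return out

    # two crowded individuals near 0 get penalised; the isolated one at 5 does not
    shared = shared_fitness([1.0, 1.0, 1.0], [0.0, 0.1, 5.0], sigma=1.0)
    ```

    In the paper's setting this pressure spreads the population across the code landscape, which is what makes the depth of the canonical code's basin measurable.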

  2. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm for the evaluation of the integral line intensity for inferring the correct value for the temperature of a hot zone in the diagnostics of combustion by absorption spectroscopy with diode lasers is proposed. The algorithm is based not on the fitting of the baseline (BL) but on the expansion of the experimental and simulated spectra in a series of orthogonal polynomials, subtraction of the first three components of the expansion from both the experimental and simulated spectra, and fitting of the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database and adding white noise and a parabolic BL. The absorption spectra constructed in this way are treated as experimental data in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to the parameters used for simulation of the experimental data. Then, both spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from each. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus modified experimental and simulated spectra. The dependence of the mean and standard deviation of the evaluated integral line intensity on the linewidth and the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm⁻¹. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting of the usually unknown BL.
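
    The key step, removing the first few orthogonal-polynomial components rather than fitting a baseline, can be illustrated with a generic sketch (not the authors' code): build an orthonormal polynomial basis over the sample grid by Gram-Schmidt and project out the first three components. A purely parabolic baseline is then removed exactly.

    ```python
    def remove_low_order(signal, xs, n_terms=3):
        """Subtract the projection onto polynomials of degree < n_terms,
        orthonormalised over the discrete sample grid xs."""
        basis = []
        for k in range(n_terms):
            v = [float(x) ** k for x in xs]
            for b in basis:  # Gram-Schmidt: remove components along earlier basis vectors
                c = sum(vi * bi for vi, bi in zip(v, b))
                v = [vi - c * bi for vi, bi in zip(v, b)]
            norm = sum(vi * vi for vi in v) ** 0.5
            basis.append([vi / norm for vi in v])
        out = list(signal)
        for b in basis:  # project the signal out of the low-order subspace
            c = sum(si * bi for si, bi in zip(out, b))
            out = [si - c * bi for si, bi in zip(out, b)]
        return out

    xs = list(range(10))
    parabola = [2.0 + 0.5 * x - 0.1 * x * x for x in xs]  # stand-in parabolic BL
    detrended = remove_low_order(parabola, xs)
    ```

    Because the parabola lies entirely in the span of the first three basis polynomials, the output is zero to machine precision; an absorption line riding on such a baseline would survive the subtraction largely intact.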

  3. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    PubMed

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.
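
    The Gauss-Newton scheme the paper builds on is standard: for residuals e(r) with Jacobian J, iterate r ← r − (JᵀJ)⁻¹Jᵀe. A self-contained toy example on a one-parameter exponential model (this is generic Gauss-Newton, not AAM code; the model and names are illustrative):

    ```python
    import math

    def gauss_newton_rate(ts, ys, r0, iters=20):
        """Fit y = exp(-r t) to (ts, ys) by Gauss-Newton on the decay rate r."""
        r = r0
        for _ in range(iters):
            res = [math.exp(-r * t) - y for t, y in zip(ts, ys)]  # residuals e(r)
            jac = [-t * math.exp(-r * t) for t in ts]             # de/dr per sample
            num = sum(j * e for j, e in zip(jac, res))            # J^T e
            den = sum(j * j for j in jac)                         # J^T J (scalar here)
            r -= num / den
        return r

    ts = [0.5 * i for i in range(1, 9)]
    ys = [math.exp(-0.7 * t) for t in ts]       # noise-free data with r = 0.7
    r_fit = gauss_newton_rate(ts, ys, r0=1.0)
    ```

    On noise-free data the iteration recovers the true rate; the paper's contribution is replacing this first-order scheme with bidirectional and exact Newton variants in the much higher-dimensional AAM parameter space.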

  4. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because of the low signal-to-noise ratio of the serial images. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
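
    The adaptive part of the regularization, weighting each neighbouring pixel by how similar its signal decay curve is to the centre pixel's, can be sketched as follows. This is a hypothetical reconstruction of the idea only; the Gaussian form and the smoothing parameter `h` are assumptions, not the paper's exact formula.

    ```python
    import math

    def adaptive_weights(center_signal, neighbor_signals, h):
        """Weight each neighbour by exp(-d2 / h^2), where d2 is the mean
        squared difference between its decay curve and the centre's.
        Similar neighbours get weight near 1; dissimilar ones near 0."""
        weights = []
        for s in neighbor_signals:
            d2 = sum((a - b) ** 2 for a, b in zip(center_signal, s)) / len(s)
            weights.append(math.exp(-d2 / (h * h)))
        return weights

    w = adaptive_weights([1.0, 0.5, 0.25],
                         [[1.0, 0.5, 0.25],    # identical decay curve
                          [5.0, 4.0, 3.0]],    # very different curve
                         h=1.0)
    ```

    Down-weighting dissimilar neighbours is what lets the regularizer smooth noise without blurring the tiny structures the abstract mentions.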

  5. Dynamic data driven bidirectional reflectance distribution function measurement system

    NASA Astrophysics Data System (ADS)

    Nauyoks, Stephen E.; Freda, Sam; Marciniak, Michael A.

    2014-09-01

    The bidirectional reflectance distribution function (BRDF) is a fitted distribution function that defines the scatter of light off of a surface. The BRDF is dependent on the directions of both the incident and scattered light. Because of the vastness of the measurement space of all possible incident and reflected directions, the calculation of BRDF is usually performed using a minimal amount of measured data. This may lead to poor fits and uncertainty in certain regions of incidence or reflection. A dynamic data driven application system (DDDAS) is a concept that uses an algorithm on collected data to influence the collection space of future data acquisition. The authors propose a DDD-BRDF algorithm that fits BRDF data as it is being acquired and uses on-the-fly fittings of various BRDF models to adjust the potential measurement space. In doing so, it is hoped to find the best model to fit a surface and the best global fit of the BRDF with a minimum amount of collection space.
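
    The steering idea behind the DDD-BRDF loop, directing the next measurement to where the on-the-fly model fits disagree most, can be sketched as follows (an illustrative reconstruction; the function, the scalar-angle setting, and the stand-in models are hypothetical, not the authors' system):

    ```python
    def next_angle(candidate_angles, fitted_models, measured_angles):
        """Return the unmeasured angle where the candidate models disagree most.

        fitted_models: callables mapping an angle to a predicted BRDF value.
        The disagreement at an angle is the spread of the model predictions.
        """
        best, best_gap = None, -1.0
        for a in candidate_angles:
            if a in measured_angles:
                continue  # already sampled; no new information expected
            preds = [m(a) for m in fitted_models]
            gap = max(preds) - min(preds)
            if gap > best_gap:
                best, best_gap = a, gap
        return best

    models = [lambda a: a, lambda a: a * a]   # two stand-in BRDF model fits
    choice = next_angle([0.0, 0.5, 2.0], models, measured_angles={0.5})
    ```

    Each new measurement then re-fits the models, shrinking the region where they disagree, which is the data-driven feedback loop the abstract describes.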

  6. A multi-emitter fitting algorithm for potential live cell super-resolution imaging over a wide range of molecular densities.

    PubMed

    Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S

    2018-05-25

    Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density from sparse to ultrahigh density with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with a high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is a computationally fast multi-emitter fitting algorithm that can analyse data over a wide range of molecular densities. The algorithm is available through the website. https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
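
    Template matching at its core means sliding a PSF-like template across the data and keeping the best-scoring offset. A minimal one-dimensional sketch (the signal and template shapes are hypothetical; WTM's wedged templates and 2D machinery are far beyond this):

    ```python
    def best_offset(signal, template):
        """Return the offset where the sliding correlation score is maximal."""
        best, best_score = 0, float("-inf")
        for off in range(len(signal) - len(template) + 1):
            score = sum(signal[off + i] * template[i] for i in range(len(template)))
            if score > best_score:
                best, best_score = off, score
        return best

    signal = [0, 0, 0, 1, 2, 1, 0, 0]    # one emitter-like bump at index 3
    offset = best_offset(signal, [1, 2, 1])
    ```

    Because scoring a template is cheap compared with iterative multi-emitter least-squares fitting, this is the source of the speed advantage the abstract emphasises.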

  7. Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner

    PubMed Central

    Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars

    2012-01-01

    Because the application of surface fitting algorithms exerts a considerable fuzzy influence on the mathematical features of kinetic energy distribution, their relation mechanism under different external conditional parameters must be quantitatively analyzed. After determining the kinetic energy value of each selected representative position coordinate point by calculating kinetic energy parameters, several typical algorithms of complicated surface fitting are applied to construct microkinetic energy distribution surface models in the objective turbulence runner from the obtained kinetic energy values. On the basis of the newly proposed mathematical features, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method; the value change tendencies of kinetic energy distribution surface features can then be clearly quantified, and the fuzzy performance mechanism between the results of the surface fitting algorithms, the spatial features of the turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be quantitatively analyzed in detail, leading to final conclusions concerning the inherent turbulence kinetic energy distribution mechanism and its mathematical relation. This provides a basis for further quantitative study of turbulence energy. PMID:23213287

  8. Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin

    2017-02-01

    In this paper, we develop an angles-centroids fitting (ACF) system and the centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroids computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of RHT is classified as a circle of confusion (CoC), and the fitting of a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of centroids computation demonstrate that the Gaussian fitting method has a smaller centroid shift, or the shift grows at a slower pace, when the quality of the image is reduced. In ACF tests, the optical instrumental alignments reach an overall accuracy of 0.1 pixel with the application of a laser spot centroids tracking program. Locating the crystal at different positions, the feasibility and accuracy of ACF calibration are further validated, with a root-mean-square error of the calibration differences of 10⁻⁶ to 10⁻⁴ rad.
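
    For context, the baseline that the 2D Gaussian-profile fit is measured against is the plain intensity-weighted centroid, which can be computed directly (a generic illustration, not the paper's code):

    ```python
    def intensity_centroid(image):
        """Intensity-weighted centroid (x, y) of a 2D list of pixel values."""
        total = sx = sy = 0.0
        for y, row in enumerate(image):
            for x, v in enumerate(row):
                total += v
                sx += x * v
                sy += y * v
        return sx / total, sy / total

    cx, cy = intensity_centroid([[0, 0, 0, 0],
                                 [0, 1, 2, 1],   # symmetric spot centred at x=2
                                 [0, 0, 0, 0]])
    ```

    The plain centroid is exact for a clean symmetric spot, but it degrades quickly with noise and truncation, which is why the paper's Gaussian fit shows a smaller centroid shift on degraded images.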

  9. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    PubMed

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.
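
    A memetic algorithm is a genetic loop in which individuals also take local search steps each generation. The toy below minimises a one-dimensional function this way; it is a hedged sketch of the general technique, not the paper's NCSTAR fitter, and all parameters are illustrative.

    ```python
    import random

    def memetic_minimise(f, pop_size=8, gens=80, seed=1):
        """Minimise f over the reals with a tiny memetic algorithm:
        per generation, every individual takes one hill-climbing step
        (local search), then the better half survives and is mutated
        (global, genetic search)."""
        rng = random.Random(seed)
        pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
        for _ in range(gens):
            # local search: keep the best of x, x+0.1, x-0.1
            pop = [min((x, x + 0.1, x - 0.1), key=f) for x in pop]
            # selection: keep the better half, refill with mutated copies
            pop.sort(key=f)
            half = pop[: pop_size // 2]
            pop = half + [x + rng.gauss(0.0, 0.5) for x in half]
        return min(pop, key=f)

    best = memetic_minimise(lambda x: (x - 2.0) ** 2)
    ```

    The local step steadily refines good solutions while mutation keeps exploring, the same division of labour the paper exploits between its global and local components.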

  10. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  11. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

    The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. Firstly, the first order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained, which are treated as the positions of the fiducial points. Then the original ECG is fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation curve fitting is then applied to the fiducial points, and the fitting curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitting curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the case of clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
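
    The baseline-estimation step, interpolating the fiducial-point amplitudes into a drift curve that is then subtracted, can be sketched as follows. The paper fits a cubic spline through the fiducial points; a piecewise-linear interpolant is used here only to keep the sketch short, and the structure (fiducial positions and amplitudes in, baseline curve out) is the same.

    ```python
    import bisect

    def baseline_estimate(xs, fid_x, fid_y):
        """Interpolate the fiducial amplitudes (fid_x, fid_y) at sample points xs,
        extrapolating with the end segments outside the fiducial range."""
        out = []
        for x in xs:
            i = min(max(bisect.bisect_right(fid_x, x), 1), len(fid_x) - 1)
            x0, x1 = fid_x[i - 1], fid_x[i]
            y0, y1 = fid_y[i - 1], fid_y[i]
            out.append(y0 + (x - x0) * (y1 - y0) / (x1 - x0))
        return out

    # drift rises linearly from 0 to 1 over the record
    drift = baseline_estimate([0.0, 5.0, 10.0], fid_x=[0.0, 10.0], fid_y=[0.0, 1.0])
    ```

    Subtracting the estimated curve from the raw ECG removes the drift while leaving the beat morphology between fiducial points untouched.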

  12. An accurate surface topography restoration algorithm for white light interferometry

    NASA Astrophysics Data System (ADS)

    Yuan, He; Zhang, Xiangchao; Xu, Min

    2017-10-01

    As an important measuring technique, white light interferometry can realize fast and non-contact measurement, thus it is now widely used in the field of ultra-precision engineering. However, the traditional recovery algorithms of surface topographies have flaws and limits. In this paper, we propose a new algorithm to solve these problems. It is a combination of Fourier transform and improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with different scanning positions. The interference signal is processed first by Fourier transform, then the positive frequency part is selected and moved back to the center of the amplitude-frequency curve. In order to restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after inverse Fourier transform and obtain the corresponding topography information. The new method is then compared to the traditional algorithms. It is proved that the aforementioned drawbacks can be effectively overcome. The relative error is less than 0.8%.
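
    The central operation, recovering the Gaussian envelope of the amplitude-modulated cosine and taking its maximum as the surface height, can be sketched in a simplified form. This is a hedged stand-in: it demodulates at the carrier frequency and low-passes with a moving average rather than following the paper's Fourier-plus-polynomial-fitting pipeline, and all parameters are illustrative.

    ```python
    import cmath
    import math

    def envelope_peak(signal, zs, carrier, half_win=10):
        """Demodulate the correlogram at the carrier frequency, low-pass the
        result with a moving average, and return the z of the envelope maximum."""
        analytic = [s * cmath.exp(-2j * math.pi * carrier * z)
                    for s, z in zip(signal, zs)]
        env = []
        for i in range(len(analytic)):
            lo, hi = max(0, i - half_win), min(len(analytic), i + half_win + 1)
            env.append(abs(sum(analytic[lo:hi]) / (hi - lo)))
        return zs[env.index(max(env))]

    # synthetic white-light correlogram: Gaussian envelope centred at z = 2.0
    zs = [i * 0.01 for i in range(401)]
    sig = [math.exp(-((z - 2.0) / 0.5) ** 2) * math.cos(2 * math.pi * 5.0 * z)
           for z in zs]
    height = envelope_peak(sig, zs, carrier=5.0)
    ```

    Demodulation shifts the envelope to baseband; averaging suppresses the remaining double-frequency term, so the envelope maximum lands on the simulated surface position.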

  13. SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Wang, L.

    1994-01-01

    SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. 
This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. The standard distribution media for the Macintosh version is a set of three 3.5 inch Macintosh format diskettes. The standard distribution medium for the UNIX version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.
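
    The kernel/representation/fitness split SPLICER describes can be illustrated with a minimal binary-string GA (a generic sketch, not SPLICER code): the loop below plays the role of the kernel, the bit-list is the representation, and the fitness function is supplied by the caller, exactly the seam a fitness module would fill.

    ```python
    import random

    def run_ga(fitness, length=12, pop_size=20, gens=40, seed=3):
        """Elitist GA over fixed-length bit lists: sort by fitness, keep the
        better half, refill via one-point crossover plus point mutation."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, length)
                child = a[:cut] + b[cut:]          # one-point crossover
                if rng.random() < 0.2:             # occasional point mutation
                    i = rng.randrange(length)
                    child[i] = 1 - child[i]
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = run_ga(sum)   # "one-max" problem: fitness is the number of 1 bits
    ```

    Swapping `sum` for any other scoring function changes the problem without touching the loop, which is the portability property the SPLICER architecture is built around.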

  14. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
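
    The first algorithm in the list, linear predictive coding, is simple enough to show concretely: predict each sample from the previous one and keep only the prediction residual (an order-1 textbook example, not from the article):

    ```python
    def lpc_order1(x):
        """Least-squares order-1 linear predictor x[n] ~ a * x[n-1].

        Returns the coefficient a and the residual (prediction error) sequence,
        which is what a predictive coder would transmit instead of x itself.
        """
        num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
        den = sum(x[n - 1] ** 2 for n in range(1, len(x)))
        a = num / den
        residual = [x[n] - a * x[n - 1] for n in range(1, len(x))]
        return a, residual

    # a geometric signal is perfectly predictable: a = 0.9, residual ~ 0
    a, res = lpc_order1([0.9 ** n for n in range(20)])
    ```

    The residual carrying only what the generative model failed to predict is the common thread the review identifies across all five predictive coding algorithms.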

  15. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool in calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  16. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
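
    The kernel-method side of the model, deriving spatial basis functions from the MR image so that each PET pixel is expressed through its most MR-similar neighbours, can be sketched in a one-dimensional toy form. This is a hypothetical illustration of the general kernel construction (normalised Gaussian weights on nearest neighbours in MR-intensity space), not the authors' implementation; all parameters are assumptions.

    ```python
    import math

    def kernel_rows(mr_values, n_neighbors=2, sigma=1.0):
        """Build one kernel-matrix row per pixel: normalised Gaussian weights
        on the pixel itself and its n_neighbors most MR-similar pixels."""
        rows = []
        for i, vi in enumerate(mr_values):
            order = sorted(range(len(mr_values)), key=lambda j: abs(vi - mr_values[j]))
            row = [0.0] * len(mr_values)
            for j in order[: n_neighbors + 1]:   # self plus nearest neighbours
                row[j] = math.exp(-((vi - mr_values[j]) ** 2) / (2 * sigma ** 2))
            s = sum(row)
            rows.append([r / s for r in row])
        return rows

    # three MR-similar pixels share weight; the outlier keeps its own
    K = kernel_rows([0.0, 0.1, 0.2, 5.0])
    ```

    Reconstructing coefficient images in this basis ties the PET estimate to MR-defined anatomy, which is why the method helps in grey/white matter yet degrades gracefully when a tumour is invisible in the MR image.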

  17. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [(11)C]SCH23390 data, showing promising results.

  18. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms.

    PubMed

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias; Kechagias, Stergios

    2016-01-01

    Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. These variables were then combined with platelet count in a way that exaggerates their opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operating characteristic curve (AUROC). Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: the AUROC in the total cohort was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts.
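    As a sketch of the model-building and comparison steps, the snippet below computes a rank-based AUROC and evaluates a logistic score over the four LINKI-1 predictors; the weights are invented placeholders, not the published model:

```python
import math

def auroc(scores, labels):
    """Rank-based AUROC: probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def logistic_score(age, glucose, hyaluronic, ast,
                   w=(-8.0, 0.05, 0.3, 0.02, 0.03)):
    """Toy logistic model over the four LINKI-1 predictors.
    The weights w are hypothetical placeholders, not the published ones."""
    z = w[0] + w[1] * age + w[2] * glucose + w[3] * hyaluronic + w[4] * ast
    return 1.0 / (1.0 + math.exp(-z))

# Comparing two score vectors on the same labels mirrors the AUROC comparison.
labels = [1, 1, 0, 0]
good = [0.9, 0.6, 0.4, 0.3]    # separates cases perfectly -> AUROC 1.0
weak = [0.9, 0.35, 0.4, 0.3]   # one positive mis-ranked   -> AUROC 0.75
```

    In practice one would fit the weights by logistic regression on the training group and report the AUROC on the validation group, as the study does.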

  19. Anthropometric body measurements based on multi-view stereo image reconstruction.

    PubMed

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg, and waist, the waist-to-hip ratio, and the body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.

  20. Anthropometric Body Measurements Based on Multi-View Stereo Image Reconstruction*

    PubMed Central

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg, and waist, the waist-to-hip ratio, and the body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting automatic anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system. PMID:24109700

  1. Breakthrough in current-in-plane tunneling measurement precision by application of multi-variable fitting algorithm.

    PubMed

    Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H

    2017-09-01

    We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.

  2. Pilot Assessment of Brain Metabolism in Perinatally HIV-Infected Youths Using Accelerated 5D Echo Planar J-Resolved Spectroscopic Imaging.

    PubMed

    Iqbal, Zohaib; Wilson, Neil E; Keller, Margaret A; Michalik, David E; Church, Joseph A; Nielsen-Saines, Karin; Deville, Jaime; Souza, Raissa; Brecht, Mary-Lynn; Thomas, M Albert

    2016-01-01

    To measure cerebral metabolite levels in perinatally HIV-infected youths and healthy controls using the accelerated five-dimensional (5D) echo planar J-resolved spectroscopic imaging (EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) J-resolved spectra from three spatial dimensions (3D). After acquisition and reconstruction of the 5D EP-JRESI data, T1-weighted MRIs were used to classify brain regions of interest for HIV patients and healthy controls: right frontal white (FW), medial frontal gray (FG), right basal ganglia (BG), right occipital white (OW), and medial occipital gray (OG). From these locations, the respective J-resolved and TE-averaged spectra were extracted and fit using two different quantitation methods. The J-resolved spectra were fit using prior knowledge fitting (ProFit), while the TE-averaged spectra were fit using the advanced method for accurate, robust, and efficient spectral fitting (AMARES). Quantitation of the 5D EP-JRESI data using the ProFit algorithm yielded significant metabolic differences in two spatial locations of the perinatally HIV-infected youths compared to controls: elevated NAA/(Cr+Ch) in the FW and elevated Asp/(Cr+Ch) in the BG. Using the TE-averaged data quantified by AMARES, an increase of Glu/(Cr+Ch) was shown in the FW region. A strong negative correlation (r < -0.6) was shown between tCh/(Cr+Ch) quantified using ProFit in the FW and CD4 counts. Also, strong positive correlations (r > 0.6) were shown between Asp/(Cr+Ch) and CD4 counts in the FG and BG. The complementary results using ProFit fitting of J-resolved spectra and AMARES fitting of TE-averaged spectra, which are a subset of the 5D EP-JRESI acquisition, demonstrate an abnormal energy metabolism in the brains of perinatally HIV-infected youths. This may be a result of the HIV pathology and long-term combination antiretroviral therapy (cART). Further studies of larger perinatally HIV-infected cohorts are necessary to confirm these findings.

  3. Retrieval of aerosol properties and water leaving radiance from multi-angle spectro-polarimetric measurement over coastal waters

    NASA Astrophysics Data System (ADS)

    Gao, M.; Zhai, P.; Franz, B. A.; Hu, Y.; Knobelspiesse, K. D.; Xu, F.; Ibrahim, A.

    2017-12-01

    Ocean color remote sensing in coastal waters remains a challenging task due to the complex optical properties of both the aerosols and the water itself. It is highly desirable to develop an advanced ocean color and aerosol retrieval algorithm for coastal waters, to advance our capabilities in monitoring water quality, improve our understanding of coastal carbon cycle dynamics, and allow for the development of more accurate circulation models. However, distinguishing the dissolved and suspended material from absorbing aerosols over coastal waters is challenging, as they share similar absorption spectra in the deep blue to UV range. In this paper we report a research algorithm for aerosol and ocean color retrieval with emphasis on coastal waters. The main features of our algorithm are: 1) combining co-located measurements from a hyperspectral ocean color instrument (OCI) and a multi-angle polarimeter (MAP); 2) using the radiative transfer model for the coupled atmosphere and ocean system (CAOS), which is based on the highly accurate and efficient successive order of scattering method; and 3) incorporating a generalized bio-optical model that directly accounts for the total absorption of phytoplankton, CDOM, and non-algal particles (NAP), and the total scattering of phytoplankton and NAP, for an improved description of ocean light scattering. A non-linear least-squares fitting algorithm is used to optimize the bio-optical model parameters and the aerosol optical and microphysical properties, including refractive indices and size distributions for both fine and coarse modes. The retrieved aerosol information is used to calculate the atmospheric path radiance, which is then subtracted from the OCI observations to obtain the water-leaving radiance contribution. Our work aims to maximize the use of available information from the co-located dataset and conduct the atmospheric correction with minimal assumptions. The algorithm will contribute to the success of current MAP instruments, such as the Research Scanning Polarimeter (RSP), and future ocean color missions, such as the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, by enabling retrieval of ocean biogeochemical properties under optically complex atmospheric and oceanic conditions.
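    The non-linear least-squares optimization could be sketched with a miniature Gauss-Newton solver; the one-exponential "forward model" below is an invented stand-in for the paper's coupled bio-optical and aerosol model:

```python
import math

def gauss_newton_exp(xs, ys, a, b, iters=50):
    """Fit y = a * exp(-b * x) by Gauss-Newton iteration (toy forward model)."""
    for _ in range(iters):
        J, r = [], []
        for x, y in zip(xs, ys):
            e = math.exp(-b * x)
            J.append((e, -a * x * e))     # d(model)/da, d(model)/db
            r.append(a * e - y)           # residual
        # normal equations (J^T J) d = J^T r, solved directly for the 2x2 case
        g00 = sum(j0 * j0 for j0, _ in J)
        g01 = sum(j0 * j1 for j0, j1 in J)
        g11 = sum(j1 * j1 for _, j1 in J)
        h0 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        h1 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g00 * g11 - g01 * g01
        a -= (g11 * h0 - g01 * h1) / det
        b -= (g00 * h1 - g01 * h0) / det
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]    # synthetic noise-free "spectrum"
a, b = gauss_newton_exp(xs, ys, a=1.8, b=0.4)  # converges to (2.0, 0.5)
```

    The real retrieval iterates the same linearize-and-solve step, but over many more parameters and a radiative-transfer forward model instead of an exponential.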

  4. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting.

    PubMed

    Waithe, Dominic; Rennert, Peter; Brostow, Gabriel; Piper, Matthew D W

    2015-01-01

    We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. 
Given that the average time per vial for manual counting is approximately 40 seconds, our software introduces a time-saving advantage for experiments starting with as few as 20 vials. We also describe an optional acrylic box, used as a digital camera mount and to provide controlled lighting during image acquisition, guaranteeing the conditions used in this study.

  5. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting

    PubMed Central

    Waithe, Dominic; Rennert, Peter; Brostow, Gabriel; Piper, Matthew D. W.

    2015-01-01

    We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. 
Given that the average time per vial for manual counting is approximately 40 seconds, our software introduces a time-saving advantage for experiments starting with as few as 20 vials. We also describe an optional acrylic box, used as a digital camera mount and to provide controlled lighting during image acquisition, guaranteeing the conditions used in this study. PMID:25992957

  6. Mapping minerals, amorphous materials, environmental materials, vegetation, water, ice and snow, and other materials: The USGS tricorder algorithm

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; Swayze, Gregg A.

    1995-01-01

    One of the challenges of imaging spectroscopy is the identification, mapping, and abundance determination of materials, whether mineral, vegetable, or liquid, given enough spectral range, spectral resolution, signal-to-noise ratio, and spatial resolution. Many materials show diagnostic absorption features in the visible and near-infrared region (0.4 to 2.5 micrometers) of the spectrum. This region is covered by modern imaging spectrometers such as AVIRIS. The challenge is to identify the materials from absorption bands in their spectra and to determine what specific analyses must be done to derive particular parameters of interest, ranging from simply identifying a material's presence to deriving its abundance or determining its specific chemistry. Recently, a new analysis algorithm was developed that uses a digital spectral library of known materials and a fast, modified-least-squares method of determining if a single spectral feature for a given material is present. Clark et al. made another advance in the mapping algorithm: simultaneously mapping multiple minerals using multiple spectral features. This was done by a modified-least-squares fit of spectral features, from data in a digital spectral library, to corresponding spectral features in the image data. This version has now been superseded by a more comprehensive spectral analysis system called Tricorder.
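    The modified-least-squares feature fit can be sketched as a linear fit of a continuum-removed library feature to the observed feature, scored by goodness of fit; the spectra below are invented, and the real Tricorder system is far more elaborate:

```python
def fit_feature(lib, obs):
    """Least-squares fit obs ~ k0 + k1 * lib of a library absorption feature
    to an observed (continuum-removed) feature. Returns the band-depth scale
    k1 and the goodness of fit R^2 used to rank candidate materials."""
    n = len(lib)
    ml = sum(lib) / n
    mo = sum(obs) / n
    cov = sum((l - ml) * (o - mo) for l, o in zip(lib, obs))
    var = sum((l - ml) ** 2 for l in lib)
    k1 = cov / var
    k0 = mo - k1 * ml
    ss_res = sum((o - (k0 + k1 * l)) ** 2 for l, o in zip(lib, obs))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    return k1, 1.0 - ss_res / ss_tot

# toy continuum-removed feature (1.0 = continuum, dip = absorption band)
library = [1.0, 0.95, 0.80, 0.70, 0.80, 0.95, 1.0]
observed = [1.0 + 0.5 * (l - 1.0) for l in library]   # same feature, half depth
scale, r2 = fit_feature(library, observed)            # scale = 0.5, r2 = 1.0
```

    Mapping then amounts to running such fits for every library material over every pixel and keeping the material with the best fit.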

  7. Statistically Optimized Inversion Algorithm for Enhanced Retrieval of Aerosol Properties from Spectral Multi-Angle Polarimetric Satellite Observations

    NASA Technical Reports Server (NTRS)

    Dubovik, O; Herman, M.; Holdak, A.; Lapyonok, T.; Taure, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.

    2011-01-01

    The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels, where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, so the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.
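    The statistical-optimization idea amounts to weighting each observation by its assumed error variance in the fit. A linear sketch with invented numbers (the actual POLDER/PARASOL inversion is non-linear and much larger):

```python
def weighted_least_squares(A, y, sigma):
    """Solve min sum_i ((A x - y)_i / sigma_i)^2 for a 2-parameter model:
    measurements with smaller assumed error get proportionally more weight."""
    w = [1.0 / s ** 2 for s in sigma]
    # normal equations for the 2-column case, solved directly
    g00 = sum(wi * r[0] * r[0] for wi, r in zip(w, A))
    g01 = sum(wi * r[0] * r[1] for wi, r in zip(w, A))
    g11 = sum(wi * r[1] * r[1] for wi, r in zip(w, A))
    b0 = sum(wi * r[0] * yi for wi, r, yi in zip(w, A, y))
    b1 = sum(wi * r[1] * yi for wi, r, yi in zip(w, A, y))
    det = g00 * g11 - g01 * g01
    return ((g11 * b0 - g01 * b1) / det, (g00 * b1 - g01 * b0) / det)

# five redundant "angular observations" of a 2-parameter state (toy example)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 2.0], [2.0, 1.0]]
y = [1.0, 2.0, 3.0, 5.0, 4.0]            # consistent with x = (1, 2)
sigma = [0.1, 0.1, 0.5, 0.5, 0.5]        # per-measurement error estimates
x = weighted_least_squares(A, y, sigma)  # recovers approximately (1.0, 2.0)
```

    The redundancy (five equations, two unknowns) is what lets the error weighting actually improve the solution, mirroring the paper's argument about PARASOL's hundred-plus observations per pixel.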

  8. Test performance of immunologic fecal occult blood testing and sigmoidoscopy compared with primary colonoscopy screening for colorectal advanced adenomas.

    PubMed

    Khalid-de Bakker, Carolina A J; Jonkers, Daisy M A E; Sanduleanu, Silvia; de Bruïne, Adriaan P; Meijer, Gerrit A; Janssen, Jan B M J; van Engeland, Manon; Stockbrügger, Reinhold W; Masclee, Ad A M

    2011-10-01

    Given the current increase in colorectal cancer screening, information on performance of screening tests is needed, especially in groups with a presumed lower test performance. We compared test performance of immunologic fecal occult blood testing (FIT) and pseudosigmoidoscopy with colonoscopy for detection of advanced adenomas in an average risk screening population. In addition, we explored the influence of gender, age, and location on test performance. FIT was collected prior to colonoscopy with a 50 ng/mL cutoff point. FIT results and complete colonoscopy findings were available from 329 subjects (mean age: 54.6 ± 3.7 years, 58.4% women). Advanced adenomas were detected in 38 (11.6%) of 329 subjects. Sensitivity for advanced adenomas of FIT and sigmoidoscopy were 15.8% (95% CI: 6.0-31.3) and 73.7% (95% CI: 56.9-86.6), respectively. No sensitivity improvement was obtained using the combination of sigmoidoscopy and FIT. Mean fecal hemoglobin in FIT positives was significantly lower for participants with only proximal adenomas versus those with distal ones (P = 0.008), for women versus men (P = 0.023), and for younger (<55 years) versus older (≥55 years) subjects (P = 0.029). Sensitivities of FIT were 0.0% (95% CI: 0.0-30.9) in subjects with only proximal versus 21.4% (95% CI: 8.3-41.0) in those with distal nonadvanced adenomas; 5.3% (95% CI: 0.0-26.0) in women versus 26.3% (95% CI: 9.2-51.2) in men; 9.5% (95% CI: 1.2-30.4) in younger versus 23.5% (95% CI: 6.8-49.9) in older subjects. Sigmoidoscopy had a significantly higher sensitivity for advanced adenomas than FIT. A single FIT showed very low sensitivity, especially in subjects with only proximal nonadvanced adenomas, in women, and in younger subjects. This points to the existence of "low" FIT performance in subgroups and the need for more tailored screening strategies.
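    The reported sensitivities are consistent with 6 of 38 FIT-positive and 28 of 38 sigmoidoscopy-detected subjects among those with advanced adenomas. A sketch of the computation, using a Wilson score interval (an approximation; the paper's intervals appear to be exact binomial, so the bounds differ slightly):

```python
import math

def sensitivity_ci(tp, positives, z=1.96):
    """Sensitivity (tp / positives) with a Wilson score 95% confidence
    interval; an approximation to the exact binomial interval."""
    p = tp / positives
    n = positives
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, (centre - half, centre + half)

p_fit, ci_fit = sensitivity_ci(6, 38)     # 0.158, CI roughly (0.07, 0.30)
p_sig, ci_sig = sensitivity_ci(28, 38)    # 0.737, CI roughly (0.58, 0.85)
```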

  9. Improved OMI Nitrogen Dioxide Retrievals Aided by NASA's A-Train High-Resolution Data

    NASA Astrophysics Data System (ADS)

    Lamsal, L. N.; Krotkov, N. A.; Vasilkov, A. P.; Marchenko, S. V.; Qin, W.; Yang, E. S.; Fasnacht, Z.; Haffner, D. P.; Swartz, W. H.; Spurr, R. J. D.; Joiner, J.

    2017-12-01

    Space-based global observation of nitrogen dioxide (NO2) is among the main objectives of the NASA Aura Ozone Monitoring Instrument (OMI) mission, aimed at advancing our understanding of the sources and trends of nitrogen oxides (NOx). These applications benefit from improved retrieval techniques and enhancement in data quality. Here, we describe our recent and planned updates to the NASA OMI standard NO2 products. The products and documentation are publicly available from the NASA Goddard Earth Sciences Data and Information Services Center (https://disc.gsfc.nasa.gov/datasets/OMNO2_V003/summary/). The major changes include (1) improvements in spectral fitting algorithms for NO2 and cloud, (2) improved information in the vertical distribution of NO2, and (3) use of geometry-dependent surface reflectivity information derived from NASA's Aqua MODIS over land and the Cox-Munk slope distribution over ocean with a contribution from water-leaving radiance. These algorithm updates, which lead to more accurate tropospheric NO2 retrievals from OMI, are relevant for other past, contemporary, and future satellite instruments.

  10. Advanced methods in NDE using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. The goal of ML, to build new algorithms or leverage existing ones that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits Non-Destructive Evaluation (NDE) perfectly. The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or the automated processing of images from X-ray, ultrasonic, or optical methods. Fraunhofer IKTS uses machine learning algorithms in acoustic signal analysis, and the approach has been applied to a wide variety of quality-assessment tasks. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system that creates model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify the data structure. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Sensor signals from unknown samples can then be recognized and classified automatically by the previously trained algorithms. Recently, the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB): the algorithms are still trained on an ordinary PC, but the trained algorithms run on the board's digital signal processor and FPGA chip. The identical approach will be used for pattern recognition in the image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components. The automated testing can then be carried out by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
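    The two-stage scheme described above, unsupervised simplification (e.g. PCA) followed by a trained classifier, can be sketched in plain Python; the power-iteration PCA, nearest-centroid rule, and the tiny "acoustic feature" vectors are all invented stand-ins for the IKTS pipeline:

```python
import math

def first_principal_component(X, iters=200):
    """Power-iteration PCA: dominant eigenvector of the covariance matrix."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mu[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        scores = [sum(r[j] * v[j] for j in range(d)) for r in Xc]
        w = [sum(s * r[j] for s, r in zip(scores, Xc)) / n for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mu, v

def project(mu, v, row):
    return sum((row[j] - mu[j]) * v[j] for j in range(len(v)))

# invented "acoustic feature" vectors for good parts and defective parts
good = [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]
bad = [[5.0, 1.0], [5.2, 0.9], [4.9, 1.1]]
mu, v = first_principal_component(good + bad)
c_good = sum(project(mu, v, r) for r in good) / len(good)
c_bad = sum(project(mu, v, r) for r in bad) / len(bad)

def classify(row):
    """Nearest-centroid rule in the 1-D PCA-reduced space."""
    s = project(mu, v, row)
    return "good" if abs(s - c_good) < abs(s - c_bad) else "bad"
```

    The dimensionality reduction happens before classification, so the classifier that ends up on the embedded hardware only has to handle the reduced representation.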

  11. The Double Star Orbit Initial Value Problem

    NASA Astrophysics Data System (ADS)

    Hensley, Hagan

    2018-04-01

    Many precise algorithms exist to find a best-fit orbital solution for a double star system, given a good enough initial value. Desmos is an online graphing calculator with extensive capabilities for animations and user-defined functions. It can provide a useful visual means of analyzing double star data to arrive at a best-guess approximation of the orbital solution, which is a necessary prerequisite for using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.
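    Once a visual best-guess is in hand, the refinement step could look like the sketch below: gradient descent with a numerically estimated gradient on a least-squares loss. The two-parameter sinusoid (amplitude and phase, known period) is an invented stand-in for the full Keplerian orbit model:

```python
import math

def loss(params, ts, ys):
    """Sum of squared residuals of a toy 'orbit' model with period 0.7."""
    a, phi = params
    return sum((a * math.sin(2 * math.pi * t / 0.7 + phi) - y) ** 2
               for t, y in zip(ts, ys))

def gradient_descent(params, ts, ys, lr=0.01, iters=2000, h=1e-6):
    """Refine params by descending a forward-difference numerical gradient."""
    p = list(params)
    for _ in range(iters):
        base = loss(p, ts, ys)
        grad = []
        for k in range(len(p)):
            q = list(p)
            q[k] += h
            grad.append((loss(q, ts, ys) - base) / h)
        p = [pi - lr * g for pi, g in zip(p, grad)]
    return p

ts = [0.05 * i for i in range(40)]
ys = [1.0 * math.sin(2 * math.pi * t / 0.7 + 0.3) for t in ts]  # a=1.0, phi=0.3
guess = [0.8, 0.0]      # "eyeballed" initial value, as read off a Desmos plot
refined = gradient_descent(guess, ts, ys)
```

    The point of the abstract is that descent only works from inside the right basin of attraction, which is exactly what the visual initial-value step provides.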

  12. Equal Area Logistic Estimation for Item Response Theory

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li

    2009-08-01

    Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to those of fitting the data with the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
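    One crude way to see the equal-area idea is to pick the logistic parameters whose areas over sub-intervals of the ability scale match those of the empirical response curve; the brute-force grid search below is only illustrative and is not the paper's closed-form algorithm:

```python
import math

def irf(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def area(f, lo, hi, n=200):
    """Trapezoidal area under f over [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

def equal_area_fit(empirical, lo=-3.0, hi=3.0):
    """Pick (a, b) on a grid so the areas over [lo, 0] and [0, hi] match the
    empirical curve's areas: an equal-area criterion, brute-force version."""
    t1 = area(empirical, lo, 0.0)
    t2 = area(empirical, 0.0, hi)
    best, best_err = None, float("inf")
    for ai in range(1, 31):           # a in 0.1 .. 3.0
        for bi in range(-20, 21):     # b in -2.0 .. 2.0
            a, b = ai / 10.0, bi / 10.0
            err = (abs(area(lambda t: irf(t, a, b), lo, 0.0) - t1) +
                   abs(area(lambda t: irf(t, a, b), 0.0, hi) - t2))
            if err < best_err:
                best, best_err = (a, b), err
    return best

a_hat, b_hat = equal_area_fit(lambda t: irf(t, 1.2, 0.5))  # recovers (1.2, 0.5)
```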

  13. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    PubMed

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models, and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each of which co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.

  14. Fitness landscape complexity and the emergence of modularity in neural networks

    NASA Astrophysics Data System (ADS)

    Lowell, Jessica

    Previous research has shown that the shape of the fitness landscape can affect the evolution of modularity. We evolved neural networks to solve different tasks with different fitness landscapes, using NEAT, a popular neuroevolution algorithm that quantifies similarity between genomes in order to divide them into species. We used this speciation mechanism as a means to examine fitness landscape complexity, and to examine connections between fitness landscape complexity and the emergence of modularity.

  15. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    PubMed

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that are unable to jump out of a local performance maximum, like the hill climbing algorithm, often end up in a local maximum.
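    The local-maximum failure mode mentioned above is easy to demonstrate: a hill climber started near the lower of two peaks stops there, while a crude restart strategy escapes. A deterministic toy sketch (a one-dimensional invented "performance" curve, not the segmentation pipeline):

```python
def performance(x):
    """Toy 1-D 'segmentation quality' with two peaks: a local maximum at
    x = 2 (height 1.0) and the global maximum at x = 8 (height 2.0)."""
    return max(1.0 - 0.5 * (x - 2.0) ** 2, 2.0 - 0.5 * (x - 8.0) ** 2, 0.0)

def hill_climb(x, step=0.1, iters=1000):
    """Greedy local search: move to the best neighbour until none improves."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=performance)
        if best == x:
            break
        x = best
    return x

local = hill_climb(1.0)                 # stuck at the x = 2 peak (height 1.0)
starts = [0.0, 2.5, 5.0, 7.5, 10.0]     # crude restart strategy
best = max((hill_climb(s) for s in starts), key=performance)  # reaches x = 8
```

    This is why the study pairs the optimizers with a visual exploration of the parameter space: knowing where the bad basins are lets one avoid starting the search inside them.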

  16. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that are unable to jump out of a local performance maximum, like the hill climbing algorithm, often end up in a local maximum. PMID:23766941

  17. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
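    The GA loop described above can be sketched generically; the bit-string "design", the OneMax fitness, and all parameters below are invented stand-ins for NEVOT's subsystem encodings:

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point crossover,
    bit-flip mutation. Low-fitness designs die out across generations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)       # tournament of two
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]     # one-point crossover
            if rng.random() < 0.1:          # occasional bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# stand-in fitness: count of satisfied "requirements" (OneMax)
best = evolve(fitness=sum)
```

    In NEVOT the fitness function would instead score an integrated vehicle design against mission constraints, but the select-recombine-mutate loop is the same.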

  18. A-Track: Detecting Moving Objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
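
    A minimal sketch of the idea behind line-based moving-object detection (illustrative only, not A-Track's implementation): a genuine mover's per-frame detections lie on a straight line, while spurious detections do not.

```python
# Hedged sketch: flag a candidate as a moving object if its per-frame
# (x, y) detections lie close to a straight line (least-squares residual).
import numpy as np

def is_linear_track(points, tol=1.0):
    """points: (n, 2) detections in frame order; True if nearly collinear."""
    pts = np.asarray(points, dtype=float)
    t = np.arange(len(pts))
    # Fit x(t) and y(t) as straight lines; a mover advances linearly in time.
    resid = 0.0
    for coord in (pts[:, 0], pts[:, 1]):
        slope, intercept = np.polyfit(t, coord, 1)
        resid = max(resid, np.abs(coord - (slope * t + intercept)).max())
    return bool(resid < tol)

mover = [(10, 5), (12, 6), (14, 7), (16, 8)]   # uniform motion across frames
noise = [(10, 5), (30, 2), (11, 40), (16, 8)]  # scattered artifacts
```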

  19. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to the high interaction rate of particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by analog and digital electronics. However, to decrease exposure times in medical imaging applications, it is important to retain the pulses and extract their true information with pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach that recovers the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The results demonstrate the effectiveness of the proposed method as a pile-up processing scheme for spectroscopic and medical radiation detection applications.
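
    For illustration, the bi-exponential pulse model mentioned above can be fitted to a noisy synthetic pulse with a generic iterative least-squares routine; the article's contribution is precisely a non-iterative, integration-based alternative to this step (all numbers below are made up):

```python
# Fit the bi-exponential pulse model p(t) = A * (exp(-t/tau_d) - exp(-t/tau_r))
# to synthetic data with a generic iterative solver. The article replaces this
# iterative step with a fast successive-integration estimator.
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, A, tau_r, tau_d):
    # Fast rise governed by tau_r, slow decay governed by tau_d.
    return A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

t = np.linspace(0.0, 5.0, 1000)
true = (100.0, 0.05, 1.0)                     # amplitude, rise, decay (a.u.)
rng = np.random.default_rng(0)
y = pulse(t, *true) + rng.normal(0.0, 0.5, t.size)

popt, _ = curve_fit(pulse, t, y, p0=(50.0, 0.1, 0.5))
```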

  20. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on inclined surfaces, a large number of numerical drop profiles with different inclination angles, drop volumes, and contact angles were generated using the finite difference method, and a least-squares ellipse-fitting algorithm was used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After extensive computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on these critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it keeps the dynamic contact angle error of the inclined plane method below a given value, even for different types of liquids.
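
    A hedged sketch of the least-squares ellipse-fitting step such a method rests on: fit the general conic a x^2 + b xy + c y^2 + d x + e y + f = 0 by minimizing the algebraic error over the data points (the study's numerical drop profiles are more elaborate than the synthetic points used here):

```python
# Generic algebraic least-squares ellipse (conic) fit via SVD.
import numpy as np

def fit_conic(x, y):
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The smallest right singular vector minimizes ||D w|| subject to ||w|| = 1.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Synthetic points on the ellipse (x/3)^2 + (y/2)^2 = 1.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = 3 * np.cos(theta), 2 * np.sin(theta)
w = fit_conic(x, y)
```

    From the fitted conic, the contact angle follows from the tangent slope at the contact point; the sign of the discriminant b^2 - 4ac confirms the fit is an ellipse.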

  1. 1/f Noise in the Simple Genetic Algorithm Applied to a Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Yamada, Mitsuhiro

    Complex dynamical systems are observed in physics, biology, and even economics. Such systems in balance are considered to be in a critical state, and 1/f noise is considered a footprint of that criticality. Complex dynamical systems have also been investigated in the field of evolutionary algorithms inspired by biological evolution. The genetic algorithm (GA) is a well-known evolutionary algorithm in which many individuals interact, and the simplest GA is referred to as the simple GA (SGA). However, the GA has not been examined from the viewpoint of the emergence of 1/f noise. In the present paper, the SGA is applied to a traveling salesman problem in order to investigate the SGA from this viewpoint. The time courses of the fitness of the candidate solution were examined. When the mutation and crossover probabilities were optimal, the system evolved toward a critical state in which the average maximum fitness over all trial runs was greatest. In this situation, the fluctuation of the fitness of the candidate solution exhibited a 1/f power spectrum, and the dynamics of the system had no intrinsic time or length scale.
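
    A minimal SGA for a toy TSP, recording the best-fitness time course each generation, sketches the kind of fluctuation data examined in the paper (problem size and probabilities here are illustrative, not the paper's settings):

```python
# Minimal simple-GA sketch for a small TSP: roulette-wheel selection,
# order crossover, swap mutation; the best fitness is logged per generation.
import random

CITIES = [(0, 0), (1, 5), (4, 1), (6, 6), (3, 3), (7, 2)]

def length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child]
    for k in range(len(p1)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def sga(pop_size=30, gens=60, pc=0.9, pm=0.2, seed=1):
    random.seed(seed)
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    history = []                                  # best-fitness time course
    for _ in range(gens):
        fits = [1.0 / length(t) for t in pop]
        history.append(max(fits))
        total = sum(fits)
        def pick():                               # fitness-proportional selection
            r, acc = random.uniform(0, total), 0.0
            for t, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return t
            return pop[-1]
        nxt = []
        while len(nxt) < pop_size:
            child = order_crossover(pick(), pick()) if random.random() < pc else pick()[:]
            if random.random() < pm:              # swap mutation
                a, b = random.sample(range(len(child)), 2)
                child[a], child[b] = child[b], child[a]
            nxt.append(child)
        pop = nxt
    return history

hist = sga()
```

    The paper's analysis would then take the power spectrum of such a fitness time series and look for a 1/f slope at the optimal mutation and crossover probabilities.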

  2. Advanced Mathematical Tools in Metrology III

    NASA Astrophysics Data System (ADS)

    Ciarlini, P.

    The Table of Contents for the book is as follows: * Foreword * Invited Papers * The ISO Guide to the Expression of Uncertainty in Measurement: A Bridge between Statistics and Metrology * Bootstrap Algorithms and Applications * The TTRSs: 13 Oriented Constraints for Dimensioning, Tolerancing & Inspection * Graded Reference Data Sets and Performance Profiles for Testing Software Used in Metrology * Uncertainty in Chemical Measurement * Mathematical Methods for Data Analysis in Medical Applications * High-Dimensional Empirical Linear Prediction * Wavelet Methods in Signal Processing * Software Problems in Calibration Services: A Case Study * Robust Alternatives to Least Squares * Gaining Information from Biomagnetic Measurements * Full Papers * Increase of Information in the Course of Measurement * A Framework for Model Validation and Software Testing in Regression * Certification of Algorithms for Determination of Signal Extreme Values during Measurement * A Method for Evaluating Trends in Ozone-Concentration Data and Its Application to Data from the UK Rural Ozone Monitoring Network * Identification of Signal Components by Stochastic Modelling in Measurements of Evoked Magnetic Fields from Peripheral Nerves * High Precision 3D-Calibration of Cylindrical Standards * Magnetic Dipole Estimations for MCG-Data * Transfer Functions of Discrete Spline Filters * An Approximation Method for the Linearization of Tridimensional Metrology Problems * Regularization Algorithms for Image Reconstruction from Projections * Quality of Experimental Data in Hydrodynamic Research * Stochastic Drift Models for the Determination of Calibration Intervals * Short Communications * Projection Method for Lidar Measurement * Photon Flux Measurements by Regularised Solution of Integral Equations * Correct Solutions of Fit Problems in Different Experimental Situations * An Algorithm for the Nonlinear TLS Problem in Polynomial Fitting * Designing Axially Symmetric Electromechanical Systems of 
Superconducting Magnetic Levitation in Matlab Environment * Data Flow Evaluation in Metrology * A Generalized Data Model for Integrating Clinical Data and Biosignal Records of Patients * Assessment of Three-Dimensional Structures in Clinical Dentistry * Maximum Entropy and Bayesian Approaches to Parameter Estimation in Mass Metrology * Amplitude and Phase Determination of Sinusoidal Vibration in the Nanometer Range using Quadrature Signals * A Class of Symmetric Compactly Supported Wavelets and Associated Dual Bases * Analysis of Surface Topography by Maximum Entropy Power Spectrum Estimation * Influence of Different Kinds of Errors on Imaging Results in Optical Tomography * Application of the Laser Interferometry for Automatic Calibration of Height Setting Micrometer * Author Index

  3. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    PubMed

    Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with Argos satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from the KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  4. Assessing Performance of Bayesian State-Space Models Fit to Argos Satellite Telemetry Locations Processed with Kalman Filtering

    PubMed Central

    Silva, Mónica A.; Jonsen, Ian; Russell, Deborah J. F.; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F.

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with Argos satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to “true” GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from the KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6±5.6 km) was nearly half that of LS estimates (11.6±8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales’ behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates. PMID:24651252

  5. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters, so that minimization can be performed through linear algebra. The model incorporates two effects: (1) correlated intensities between neighboring features on the chip, and (2) sequence-dependent affinities for non-specific hybridization, fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments, as well as between the free-energy parameters and their counterparts in aqueous solution, indicate that the model captures a significant part of the underlying physical chemistry.
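
    The central computational point, that a cost quadratic in the fitting parameters reduces minimization to linear algebra, can be illustrated generically; the basis features below merely mimic the two modeled effects and are not the authors' design:

```python
# A quadratic cost ||X w - bg||^2 has a closed-form least-squares minimizer:
# no iterative optimization is needed, only one linear solve.
import numpy as np

rng = np.random.default_rng(2)
n = 500
neighbor = rng.uniform(0, 1, n)        # neighboring-feature intensity (stand-in)
affinity = rng.uniform(0, 1, n)        # sequence-affinity term (stand-in)
true_w = np.array([0.7, 1.3, 0.2])     # two weights plus a constant offset

X = np.column_stack([neighbor, affinity, np.ones(n)])
bg = X @ true_w + rng.normal(0, 0.01, n)

w, *_ = np.linalg.lstsq(X, bg, rcond=None)
```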

  6. Statistical comparison of various interpolation algorithms for reconstructing regional grid ionospheric maps over China

    NASA Astrophysics Data System (ADS)

    Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao

    2018-07-01

    This paper presents a quantitative comparison of several widely used interpolation algorithms, i.e., Ordinary Kriging (OrK), Universal Kriging (UnK), planar fit, and Inverse Distance Weighting (IDW), based on a grid-based single-shell ionosphere model over China. The experimental data were collected from the Crustal Movement Observation Network of China (CMONOC) and the International GNSS Service (IGS), covering days of year 60-90 in 2015. The quality of these interpolation algorithms was assessed by cross-validation in terms of both the ionospheric correction performance and Single-Frequency (SF) Precise Point Positioning (PPP) accuracy on an epoch-by-epoch basis. The results indicate that the interpolation models perform better at mid-latitudes than at low latitudes. For the China region, OrK and UnK perform somewhat better than the planar fit and IDW models for estimating ionospheric delay and positioning. In addition, the computational efficiencies of the IDW and planar fit models are better than those of OrK and UnK.
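
    Of the compared methods, IDW is the simplest to sketch; a minimal implementation (illustrative, not the study's gridded ionospheric setup) looks like:

```python
# Minimal inverse-distance-weighting interpolator: each query value is a
# weighted mean of known samples, with weights 1 / distance^power.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = []
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < 1e-12:                    # query coincides with a sample
            out.append(z_known[d.argmin()])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * z_known) / np.sum(w))
    return np.array(out)

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [10.0, 20.0, 30.0, 40.0]
z = idw(pts, vals, [(0.0, 0.0), (0.5, 0.5)])
```

    Its cheap, purely local weighting is also why the study finds IDW (and the planar fit) computationally more efficient than the Kriging variants, which require solving a linear system per query.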

  7. Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping

    NASA Astrophysics Data System (ADS)

    Ignakov, Dmitri

    A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.

  8. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
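
    Although the study works in R, the sample L-moments underlying MLM fitting are easy to compute directly; a plain-Python illustration of the first four via probability-weighted moments:

```python
# Sample L-moments l1..l4 from the unbiased probability-weighted moments b_r
# (direct estimator over the ordered sample; illustrative, not the study's R code).
def sample_lmoments(data):
    x = sorted(data)
    n = len(x)
    b = [0.0] * 4
    for r in range(4):
        s = 0.0
        for i in range(r, n):              # i is the 0-based rank
            num, den = 1.0, 1.0
            for k in range(r):
                num *= (i - k)
                den *= (n - 1 - k)
            s += (num / den) * x[i]
        b[r] = s / n
    l1 = b[0]                              # L-location (the mean)
    l2 = 2 * b[1] - b[0]                   # L-scale
    l3 = 6 * b[2] - 6 * b[1] + b[0]        # for L-skew t3 = l3 / l2
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]  # for L-kurtosis t4 = l4 / l2
    return l1, l2, l3, l4

l1, l2, l3, l4 = sample_lmoments([1.0, 2.0, 3.0, 4.0, 5.0])
```

    The ratios t3 = l3/l2 and t4 = l4/l2 span the L-skew and L-kurtosis domains referred to in the proposed mixed-distribution framework.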

  9. Infrared measurement and composite tracking algorithm for air-breathing hypersonic vehicles

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao; Gao, Changsheng; Jing, Wuxing

    2018-03-01

    Air-breathing hypersonic vehicles are capable of hypersonic speed and strong maneuvering, and thus pose a significant challenge to conventional tracking methodologies. To achieve desirable tracking performance for hypersonic targets, this paper investigates the problems of measurement model design and tracking model mismatch. First, owing to the severe aerothermal effects of hypersonic motion, an infrared measurement model in near space is designed and analyzed based on target infrared radiation and an atmospheric model. Second, using information from infrared sensors, a composite tracking algorithm is proposed that combines the interacting multiple model (IMM) algorithm, a fitted dynamics model, and a strong tracking filter. During the procedure, the IMM algorithm generates tracking data to establish a fitted dynamics model of the target. Then, the strong tracking unscented Kalman filter is employed to estimate the target states, suppressing the impact of target maneuvers. Simulations are performed to verify the feasibility of the presented composite tracking algorithm. The results demonstrate that the designed infrared measurement model effectively and continuously observes hypersonic vehicles, and that the proposed composite tracking algorithm accurately and stably tracks these targets.
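
    The "strong tracking" ingredient can be sketched in one dimension: a fading factor lambda >= 1 inflates the predicted covariance so the filter gain stays responsive during maneuvers. (The article's filter is an unscented KF inside an IMM framework; this scalar version only illustrates the fading mechanism.)

```python
# Minimal scalar Kalman filter with a fading factor on the predicted
# covariance -- the core idea behind strong tracking filters.
def fading_kf(measurements, q=0.01, r=1.0, lambda_=1.5):
    x, p = measurements[0], 1.0
    estimates = [x]
    for z in measurements[1:]:
        p_pred = lambda_ * p + q        # fading factor inflates the prediction
        k = p_pred / (p_pred + r)       # gain stays larger than a standard KF's
        x = x + k * (z - x)
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates

# Track a target that suddenly jumps (a crude stand-in for a maneuver).
zs = [0.0] * 20 + [5.0] * 20
est = fading_kf(zs)
```

    With lambda_ = 1 this reduces to an ordinary Kalman filter, which reacts to the jump more sluggishly; the inflated covariance trades some steady-state smoothness for maneuver responsiveness.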

  10. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern in manufacturing today; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation, an assembly feature needed to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization in fitting a non-linear function to the three geometries. The results show that, with this combination, a higher quality of fit, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', a situation where the point cloud does not cover a complete feature (e.g. only half of the total part surface), is also investigated. Finally, a case study of fitting a hemisphere is presented.
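
    A hedged sketch of the non-linear circle fit with multi-start initialization: random restarts stand in here for the chaos-optimization seed search the article proposes (a CO sequence would replace the random number generator):

```python
# Levenberg-Marquardt circle fit to a half-arc point cloud, with several
# randomized initial guesses; the lowest-cost solution is kept.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts):
    cx, cy, r = params
    return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

rng = np.random.default_rng(3)
theta = rng.uniform(0, np.pi, 60)            # an incomplete (half) point cloud
pts = np.column_stack([2 + 5 * np.cos(theta), -1 + 5 * np.sin(theta)])
pts += rng.normal(0, 0.01, pts.shape)

best = None
for _ in range(5):                           # multi-start initialization
    x0 = rng.uniform([-10, -10, 0.1], [10, 10, 10])
    sol = least_squares(residuals, x0, args=(pts,), method='lm')
    if best is None or sol.cost < best.cost:
        best = sol
```

    The premise of the article is that a better-chosen starting point (from CO rather than blind restarts) reaches a smaller residual norm without increasing the total computational cost.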

  11. Application of artificial neural networks and genetic algorithms to modeling molecular electronic spectra in solution

    NASA Astrophysics Data System (ADS)

    Lilichenko, Mark; Kelley, Anne Myers

    2001-04-01

    A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.

  12. OMI Global Tropospheric Bromine Oxide (BrO) Column Densities: Algorithm, Retrieval and Initial Validation

    NASA Astrophysics Data System (ADS)

    Suleiman, R. M.; Chance, K.; Liu, X.; Kurosu, T. P.; Gonzalez Abad, G.

    2014-12-01

    We present and discuss a detailed description of the retrieval algorithms for the OMI BrO product. The BrO algorithms are based on direct fitting of radiances from 319.0-347.5 nm. Radiances are modeled from the solar irradiance, attenuated and adjusted by contributions from the target gas and interfering gases, rotational Raman scattering, undersampling, additive and multiplicative closure polynomials, and a common mode spectrum. The version of the algorithm used for BrO includes relevant changes with respect to the operational code: the fit of the O2-O2 collisional complex, updates to the high-resolution solar reference spectrum and to the spectroscopy, an updated Air Mass Factor (AMF) calculation scheme, and the inclusion of scattering weights and vertical profiles in the level 2 products. Further updates include more accurate scattering weight and air mass factor calculations and newly available cross sections. We optimize the retrieval parameters and fitting window to reduce the interference from O3, HCHO, O2-O2 and SO2, improve fitting accuracy and uncertainty, reduce striping, and improve long-term stability. We validate OMI BrO against ground-based measurements from Harestua and against chemical transport model simulations. We analyze the global distribution and seasonal variation of BrO and investigate BrO emissions from volcanoes and salt lakes.
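
    As a toy analogue of such spectral fitting, a linearized version fits synthetic cross-section structures plus a closure polynomial to an optical-depth spectrum. (The operational OMI algorithm fits radiances directly and non-linearly; the "cross sections" below are fabricated shapes, not real BrO or O3 spectra.)

```python
# Linearized toy fit: optical depth = sum of cross-sections times slant
# columns + a low-order closure polynomial, solved in one least-squares step.
import numpy as np

rng = np.random.default_rng(4)
wl = np.linspace(319.0, 347.5, 300)                        # fitting window, nm
sigma_bro = np.exp(-((wl - 330) / 4) ** 2) * np.sin(wl)    # fake target structure
sigma_o3 = np.exp(-((wl - 325) / 8) ** 2) * np.cos(wl / 2) # fake interferer

true_cols = np.array([3.0, 50.0])                          # arbitrary units
poly = 0.1 + 0.001 * (wl - wl.mean())                      # smooth closure term
tau = sigma_bro * true_cols[0] + sigma_o3 * true_cols[1] + poly
tau += rng.normal(0, 1e-3, wl.size)

# Design matrix: target gas, interfering gas, polynomial closure terms.
A = np.column_stack([sigma_bro, sigma_o3, np.ones_like(wl), wl - wl.mean()])
cols, *_ = np.linalg.lstsq(A, tau, rcond=None)
```

    The high-frequency differential structure of the cross sections is what lets the fit separate the gas columns from the smooth closure polynomial.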

  13. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many researchers have avoided them due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through the incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorn adaptive penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable-temperature variable-field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The hybrid GA is shown to improve the probability of detecting the global optimum, and it provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality of and confidence in the final solution obtained, and can be applied to other complex systems such as the fitting of other spectroscopic or kinetics data.
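
    The role of an adaptive penalty can be sketched schematically: the penalty coefficient is rescaled each generation from population statistics so that constraint-violating individuals stop dominating selection. (This is a simplified stand-in, not the exact adaptive penalty function used in the study.)

```python
# Schematic adaptive penalty for a maximization GA: scale the coefficient so
# that no penalized infeasible individual can outrank the best feasible one.
def adaptive_penalized_fitness(raw_fitness, violations):
    feasible = [f for f, v in zip(raw_fitness, violations) if v == 0]
    infeasible = [f for f, v in zip(raw_fitness, violations) if v > 0]
    if not infeasible:
        return list(raw_fitness)
    best_feasible = max(feasible) if feasible else 0.0  # 0 as fallback reference
    worst_gap = max(f - best_feasible for f in infeasible)
    coeff = max(worst_gap, 0.0) / min(v for v in violations if v > 0) + 1.0
    return [f - coeff * v for f, v in zip(raw_fitness, violations)]

raw = [5.0, 9.0, 7.0]          # individual 1 is feasible, 2 and 3 are not
viol = [0.0, 2.0, 1.0]         # constraint-violation magnitudes
pen = adaptive_penalized_fitness(raw, viol)
```

    Because the coefficient is derived from the current population rather than fixed in advance, the penalty automatically tightens in generations where infeasible individuals would otherwise take over.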

  14. Gamma ray spectroscopy employing divalent europium-doped alkaline earth halides and digital readout for accurate histogramming

    DOEpatents

    Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B; Sturm, Benjamin W

    2014-11-11

    A scintillator radiation detector system according to one embodiment includes a scintillator; and a processing device for processing pulse traces corresponding to light pulses from the scintillator, wherein pulse digitization is used to improve energy resolution of the system. A scintillator radiation detector system according to another embodiment includes a processing device for fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times and performing a direct integration of fit parameters. A method according to yet another embodiment includes processing pulse traces corresponding to light pulses from a scintillator, wherein pulse digitization is used to improve energy resolution of the system. A method in a further embodiment includes fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times; and performing a direct integration of fit parameters. Additional systems and methods are also presented.
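
    The fit-then-integrate idea can be sketched with a generic rise/decay pulse model: fit the digitized waveform, then obtain the pulse area in closed form from the fitted parameters instead of summing samples. (The model and the numbers are illustrative, not taken from the patent.)

```python
# Fit a rise/decay pulse model to a digitized waveform, then integrate the
# fitted parameters directly: the model's area is A * (tau_d - tau_r).
import numpy as np
from scipy.optimize import curve_fit

def scint_pulse(t, A, tau_r, tau_d):
    return A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

t = np.linspace(0.0, 10.0, 2000)              # e.g. microseconds
y = scint_pulse(t, 50.0, 0.1, 2.0)
y += np.random.default_rng(5).normal(0, 0.2, t.size)

(A, tau_r, tau_d), _ = curve_fit(scint_pulse, t, y, p0=(30.0, 0.2, 1.0))
area = A * (tau_d - tau_r)                    # closed-form integral of the model
```

    Using the analytic integral of the fitted model, rather than a sample sum, is one way noise and baseline contributions can be kept out of the energy estimate.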

  15. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200

  16. General advancing front packing algorithm for the discrete element method

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos A. Recarey; Pérez Morales, Irvin Pablo; de Farias, Márcio Muniz; de Navarra, Eugenio Oñate Ibañez; Valera, Roberto Roselló; Casañas, Harold Díaz-Guzmán

    2018-01-01

    A generic formulation of a new method for packing particles is presented. It is based on a constructive advancing front method and uses Monte Carlo techniques for the generation of particle dimensions. The method can be used to obtain virtual dense packings of particles with several geometrical shapes, employing continuous, discrete, and empirical statistical distributions to generate the dimensions of particles. The packing algorithm is very flexible and allows alternatives for: (1) the direction of the advancing front (inwards or outwards), (2) the selection of the local advancing front, (3) the method for placing a mobile particle in contact with others, and (4) the overlap checks. Slightly modified, the algorithm can also produce highly porous media. The use of the algorithm to generate real particle packings from grain size distribution curves for engineering applications is illustrated. Finally, basic applications of the algorithm, which prove its effectiveness in generating large numbers of particles, are carried out.
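
    A toy 2D advancing-front disk packing conveys the constructive idea (far simpler than the paper's generic formulation): each new disk is placed tangent to a pair of existing disks and kept only if it overlaps nothing, with radii drawn Monte Carlo-style from a distribution:

```python
# Toy advancing-front disk packing: new disks are seated tangent to a pair
# of existing disks (circle-circle intersection), overlap-checked, and kept.
import math
import random

def tangent_center(c1, r1, c2, r2, r):
    """Center of a disk of radius r tangent to disks (c1, r1) and (c2, r2)."""
    d = math.dist(c1, c2)
    R1, R2 = r1 + r, r2 + r
    if d == 0 or d > R1 + R2 or d < abs(R1 - R2):
        return None                       # no tangent position exists
    a = (R1 * R1 - R2 * R2 + d * d) / (2 * d)
    h2 = R1 * R1 - a * a
    if h2 < 0:
        return None
    h = math.sqrt(h2)
    ux, uy = (c2[0] - c1[0]) / d, (c2[1] - c1[1]) / d
    return (c1[0] + a * ux - h * uy, c1[1] + a * uy + h * ux)

def pack(n=40, max_attempts=100000, seed=7):
    random.seed(seed)
    disks = [((0.0, 0.0), 1.0), ((2.0, 0.0), 1.0)]   # tangent seed pair
    attempts = 0
    while len(disks) < n and attempts < max_attempts:
        attempts += 1
        r = random.uniform(0.5, 1.0)                  # Monte Carlo radius
        (c1, r1), (c2, r2) = random.sample(disks, 2)  # crude front selection
        c = tangent_center(c1, r1, c2, r2, r)
        if c is None:
            continue
        if all(math.dist(c, cc) >= rr + r - 1e-9 for cc, rr in disks):
            disks.append((c, r))
    return disks

disks = pack()
```

    The paper's formulation generalizes each of these choices: the front direction, how the local front pair is selected, the contact-placement rule, and the overlap check.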

  17. Pre-processing by data augmentation for improved ellipse fitting.

    PubMed

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation, we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application: 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
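
    The replication idea can be sketched with a least-squares conic fit plus a crude replication rule (Python/NumPy; the "farthest from centroid" criterion below is only a stand-in for the paper's data-point eccentricity, which is not reproduced here):

```python
import numpy as np

def design(pts):
    """Design matrix for the conic a x^2 + b xy + c y^2 + d x + e y + f = 0."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])

def fit_conic(pts):
    """Least-squares algebraic conic fit: null vector of the design matrix."""
    _, _, Vt = np.linalg.svd(design(pts))
    return Vt[-1]                      # coefficients (a, b, c, d, e, f)

def augment_extremes(pts, frac=0.25, copies=3):
    """Crude stand-in for eccentricity-based pre-processing: replicate the
    fraction of points farthest from the centroid (the sharp end of an
    elongated arc)."""
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    k = max(1, int(frac * len(pts)))
    idx = np.argsort(d)[-k:]
    return np.vstack([pts] + [pts[idx]] * copies)

# Noise-free points on an arc of the ellipse x^2/16 + y^2 = 1.
t = np.linspace(0.1, 2.0, 30)
pts = np.column_stack([4 * np.cos(t), np.sin(t)])
coef = fit_conic(augment_extremes(pts))
res = np.abs(design(pts) @ coef)       # algebraic residuals on original points
```

    With noise-free data the augmented fit still recovers the generating conic exactly; the paper's point is that with noisy, eccentric data the replication measurably lowers the RMSE.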

  18. Miniature Biosensor with Health Risk Assessment Feedback

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Downs, Meghan; Kalogera, Kent; Buxton, Roxanne; Cooper, Tommy; Cooper, Alan; Cooper, Ross

    2016-01-01

    Heart rate (HR) monitoring is a medical requirement during exercise on the International Space Station (ISS), fitness tests, and extravehicular activity (EVA); however, NASA does not currently have the technology to consistently and accurately monitor HR and other physiological data during these activities. The performance of currently available HR monitor technologies depends on uninterrupted contact with the torso, and these devices are prone to data drop-out and motion artifact. Here, we seek an alternative to the chest-strap and electrode-based sensors currently in use on the ISS. This project aims to develop a high-performance, robust earbud-based biosensor, with focused efforts on improved HR data quality during exercise or EVA. A health risk assessment algorithm will further advance the goals of autonomous crew health care for exploration missions.

  19. GAMBIT: the global and modular beyond-the-standard-model inference tool

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian

    2017-11-01

    We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.

  20. Generalized and efficient algorithm for computing multipole energies and gradients based on Cartesian tensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Dejun, E-mail: dejun.lin@gmail.com

    2015-09-21

    Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.

  1. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
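
    The fitting step described above, weighted least-squares linear regression of energies against bispectrum descriptors, reduces to a standard linear algebra problem. A minimal sketch with synthetic descriptors (not actual bispectrum components) is:

```python
import numpy as np

# SNAP-style fitting step: linear coefficients beta are found by weighted
# least squares, E_i ~ B_i . beta, where the rows B_i would be bispectrum
# descriptors of atomic environments. Descriptors, energies, and weights
# here are synthetic placeholders.
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 5))             # stand-in descriptor matrix
beta_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
E = B @ beta_true                        # synthetic "QM" training energies
w = rng.uniform(0.5, 2.0, size=50)       # per-configuration weights

# Weighted least squares: minimize sum_i w_i * (E_i - B_i . beta)^2 by
# scaling both sides with sqrt(w) and solving the ordinary problem.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * B, sw * E, rcond=None)
```

    Because the model is linear in its coefficients, the fit is a single solve, which is what makes the automated, large-data-set regression described above tractable.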

  2. Genetic Algorithms to Optimize Lecturer Assessment Criteria

    NASA Astrophysics Data System (ADS)

    Jollyta, Deny; Johan; Hajjah, Alyauma

    2017-12-01

    The lecturer assessment criteria are used as a measurement of a lecturer's performance in a college environment. Determining the value for a criterion is complicated and often leads to doubt. The absence of a standard value for each assessment criterion will affect the final results of the assessment and yield less representative data for college leadership when making policies related to reward and punishment. The Genetic Algorithm is capable of solving such non-linear problems. Starting from chromosomes in a random initial population, one representation being binary, it evaluates the fitness function and applies the crossover and mutation genetic operators to obtain the desired offspring. It aims to obtain the optimal criteria values in terms of the fitness function of each chromosome. The training results show that the Genetic Algorithm is able to produce optimal values of the lecturer assessment criteria, which the college can use as standard values for lecturer assessment.
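
    A minimal binary-chromosome GA of the kind described, with tournament selection, one-point crossover, and bit-flip mutation, can be sketched as follows (Python; the OneMax objective is a stand-in for a real criterion-scoring fitness, which the abstract does not specify):

```python
import random

def run_ga(bits=20, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Minimal binary-chromosome genetic algorithm. The fitness below simply
    counts 1-bits (OneMax), a placeholder for an actual criterion-scoring
    function."""
    rng = random.Random(seed)
    fitness = lambda c: sum(c)                     # stand-in objective

    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                                # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = run_ga()
```

    In the paper's setting, the chromosome would encode candidate criterion values and the fitness would score how well they reproduce expert assessments; the evolutionary loop itself is unchanged.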

  3. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, finding considerable speedup over competing methods.
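
    The elastic-net coordinate-descent update with warm starts can be sketched on squared-error loss (Python/NumPy; the Cox partial likelihood is replaced here by ordinary least squares, so this is an analogy to the paper's algorithm, not an implementation of it):

```python
import numpy as np

def soft(z, g):
    """Soft-thresholding operator, the closed-form lasso coordinate update."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def enet_cd(X, y, lam, alpha=0.5, beta0=None, iters=200):
    """Cyclical coordinate descent for
       min_b 1/(2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||^2)."""
    n, p = X.shape
    b = np.zeros(p) if beta0 is None else beta0.copy()
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual
            z = X[:, j] @ r / n
            b[j] = soft(z, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = X @ np.array([2.0, -1.0, 0, 0, 0, 0, 0.5, 0]) + 0.01 * rng.normal(size=100)

path, b = [], None
for lam in [1.0, 0.3, 0.1, 0.03]:   # decreasing lambda; warm-start each fit
    b = enet_cd(X, y, lam, beta0=b)
    path.append(b.copy())
```

    Warm starts make each new point on the path cheap because the previous solution is already close; the same device carries over to the Cox partial likelihood in the paper.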

  4. Integrating the Levels of Person-Environment Fit: The Roles of Vocational Fit and Group Fit

    ERIC Educational Resources Information Center

    Vogel, Ryan M.; Feldman, Daniel C.

    2009-01-01

    Previous research on fit has largely focused on person-organization (P-O) fit and person-job (P-J) fit. However, little research has examined the interplay of person-vocation (P-V) fit and person-group (P-G) fit with P-O fit and P-J fit in the same study. This article advances the fit literature by examining these relationships with data collected…

  5. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    NASA Astrophysics Data System (ADS)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of an SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
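
    A minimal Random Walk Metropolis loop for a one-parameter fit looks like the following sketch (Python/NumPy; the linear model, noise level, and step size are illustrative assumptions, not the SMA model of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * rng.normal(size=50)    # synthetic data, true slope 2.0

def log_post(a, sigma=0.1):
    """Log-posterior for slope a: flat prior + Gaussian likelihood."""
    return -0.5 * np.sum((y - a * x) ** 2) / sigma ** 2

# Random Walk Metropolis: propose a' = a + step * N(0, 1), accept with
# probability min(1, exp(log_post(a') - log_post(a))).
a, lp = 0.0, log_post(0.0)
samples = []
for _ in range(5000):
    prop = a + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop                # accept the move
    samples.append(a)                        # (rejected moves repeat a)

post = np.array(samples[1000:])              # discard burn-in
```

    The spread of `post` is exactly the kind of parameter bound the paper feeds into robust control design.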

  6. Enhanced ultrasound for advanced diagnostics, ultrasound tomography for volume limb imaging and prosthetic fitting

    NASA Astrophysics Data System (ADS)

    Anthony, Brian W.

    2016-04-01

    Ultrasound imaging methods hold the potential to deliver low-cost, high-resolution, operator-independent, and nonionizing imaging systems; such systems couple appropriate algorithms with imaging devices and techniques. The increasing demands on general practitioners motivate us to develop more usable and productive diagnostic imaging equipment. Ultrasound, specifically freehand ultrasound, is a low-cost and safe medical imaging technique that does not expose a patient to ionizing radiation. Its safety and versatility make it very well suited to the increasing demands on general practitioners, and to providing improved medical care in rural regions or the developing world. However, it typically suffers from sonographer variability; we discuss techniques to address user variability. We also discuss our work combining cylindrical scanning systems with state-of-the-art inversion algorithms to deliver ultrasound systems for imaging and quantifying limbs in 3-D in vivo. Such systems have the potential to track the progression of limb health at low cost and without radiation exposure, as well as to improve prosthetic socket fitting. Current methods of prosthetic socket fabrication remain subjective and ineffective at creating an interface to the human body that is both comfortable and functional. Though there has been recent success using methods such as magnetic resonance imaging and biomechanical modeling, a low-cost, streamlined, and quantitative process for prosthetic cup design and fabrication has not been fully demonstrated. Medical ultrasonography may inform the design process of prosthetic sockets in a more objective manner. This keynote talk presents the results of progress in this area.

  7. Hearing Aid Fitting in Infants.

    ERIC Educational Resources Information Center

    Hoover, Brenda M.

    2000-01-01

    This article examines the latest technological advances in hearing aids and explores the available research to help families and professionals make informed decisions when fitting amplification devices on infants and young children. Diagnostic procedures, evaluation techniques, hearing aid selection, circuit and advanced technology options, and…

  8. Tmax Determined Using a Bayesian Estimation Deconvolution Algorithm Applied to Bolus Tracking Perfusion Imaging: A Digital Phantom Validation Study.

    PubMed

    Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio

    2017-01-10

    The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
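
    The calibration step, expressing Tmax as an affine function of TD and MTT with constants from a linear least-squares fit against SVD-derived Tmax, can be sketched as follows (Python/NumPy; all numbers below are synthetic, not the study's values):

```python
import numpy as np

# Sketch of the calibration: Tmax ~ c0 + c1*TD + c2*MTT, with the constants
# found by linear least squares against reference (SVD-derived) Tmax values.
rng = np.random.default_rng(0)
TD = rng.uniform(0, 5, 40)                   # tracer arrival delays (s)
MTT = rng.uniform(2, 12, 40)                 # mean transit times (s)
tmax_svd = 0.3 + 1.0 * TD + 0.5 * MTT        # pretend SVD measurements

A = np.column_stack([np.ones_like(TD), TD, MTT])
coef, *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)

tmax_bayes = A @ coef                        # calibrated Bayesian Tmax
r = np.corrcoef(tmax_bayes, tmax_svd)[0, 1]  # Pearson correlation of the fit
```

    In the study the same construction, with real phantom data, yields the reported correlation and agreement coefficients of 0.99.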

  9. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  10. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
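
    The iterative proportional fitting step mentioned above can be sketched on a small contingency table (Python/NumPy; the seed table and target margins are illustrative):

```python
import numpy as np

def ipf(table, row_targets, col_targets, iters=100):
    """Iterative proportional fitting: alternately rescale rows and columns
    of a contingency table until its margins match the targets."""
    T = table.astype(float).copy()
    for _ in range(iters):
        T *= (row_targets / T.sum(axis=1))[:, None]   # match row sums
        T *= (col_targets / T.sum(axis=0))[None, :]   # match column sums
    return T

seed = np.array([[1.0, 2.0], [3.0, 4.0]])
T = ipf(seed, row_targets=np.array([30.0, 70.0]),
        col_targets=np.array([40.0, 60.0]))
```

    The fitted table keeps the seed's interaction structure (odds ratios) while reproducing the target margins, which is why IPF yields maximum likelihood estimates for loglinear models.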

  11. Image reconstruction through thin scattering media by simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua

    2018-07-01

    A method for reconstructing the image of an object behind thin scattering media via phase modulation is proposed. The optimized phase mask is obtained by modulating the scattered light using a simulated annealing algorithm. The correlation coefficient is used as a fitness function to evaluate the quality of the reconstructed image. The images reconstructed by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that the proposed method achieves better definition and higher speed than the genetic algorithm.
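
    A toy simulated-annealing loop in this spirit, optimizing a binary phase mask against a correlation-like fitness, can be sketched as follows (Python; the "hidden mask" objective stands in for the actual image-correlation fitness, and the cooling schedule is an assumption):

```python
import math, random

# Search over a binary phase mask (0 / pi elements) to maximize a fitness.
# Here the fitness counts agreement with a hidden optimal mask, a stand-in
# for the correlation coefficient of the reconstructed image.
rng = random.Random(0)
n = 64
target = [rng.randint(0, 1) for _ in range(n)]   # hidden optimal mask

def fitness(mask):
    return sum(m == t for m, t in zip(mask, target)) / n

mask = [rng.randint(0, 1) for _ in range(n)]
f = fitness(mask)
T = 0.1                                          # initial temperature
for step in range(4000):
    i = rng.randrange(n)
    mask[i] ^= 1                                 # flip one phase element
    f_new = fitness(mask)
    if f_new >= f or rng.random() < math.exp((f_new - f) / T):
        f = f_new                                # accept the move
    else:
        mask[i] ^= 1                             # reject: undo the flip
    T *= 0.998                                   # geometric cooling
```

    Early on, the high temperature lets the search escape local optima; as T decays only improving flips survive, and the mask converges.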

  12. SCOUSE: Semi-automated multi-COmponent Universal Spectral-line fitting Engine

    NASA Astrophysics Data System (ADS)

    Henshaw, J. D.; Longmore, S. N.; Kruijssen, J. M. D.; Davies, B.; Bally, J.; Barnes, A.; Battersby, C.; Burton, M.; Cunningham, M. R.; Dale, J. E.; Ginsburg, A.; Immer, K.; Jones, P. A.; Kendrew, S.; Mills, E. A. C.; Molinari, S.; Moore, T. J. T.; Ott, J.; Pillai, T.; Rathborne, J.; Schilke, P.; Schmiedeke, A.; Testi, L.; Walker, D.; Walsh, A.; Zhang, Q.

    2016-01-01

    The Semi-automated multi-COmponent Universal Spectral-line fitting Engine (SCOUSE) is a spectral line fitting algorithm that fits Gaussian profiles to spectral line emission. It identifies the spatial area over which to fit the data and generates a grid of spectral averaging areas (SAAs). The spatially averaged spectra are fitted according to user-provided tolerance levels, and the best fit is selected using the Akaike Information Criterion, which weights the chi-squared value of a best-fitting solution according to the number of free parameters. A more detailed inspection of the spectra can be performed to improve the fit through an iterative process, after which SCOUSE integrates the new solutions into the solution file.
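
    The AIC-based selection step can be sketched by fitting one- and two-component Gaussian models and keeping the lower score (Python with NumPy/SciPy; this illustrates the selection criterion only, it is not SCOUSE itself):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_sum(x, *p):
    """Sum of Gaussians; p holds (amplitude, centre, width) per component."""
    y = np.zeros_like(x)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return y

def aic(y, model, k):
    """AIC for least-squares fits: 2k + n*ln(RSS/n)."""
    n = len(y)
    rss = np.sum((y - model) ** 2)
    return 2 * k + n * np.log(rss / n)

# Synthetic two-component spectrum with noise.
x = np.linspace(-5, 5, 200)
rng = np.random.default_rng(0)
y = gauss_sum(x, 1.0, -1.5, 0.6, 0.7, 1.8, 0.8) + 0.02 * rng.normal(size=200)

best = None
for ncomp, p0 in [(1, [1, 0, 1]), (2, [1, -1, 1, 1, 2, 1])]:
    popt, _ = curve_fit(gauss_sum, x, y, p0=p0, maxfev=5000)
    score = aic(y, gauss_sum(x, *popt), k=3 * ncomp)
    if best is None or score < best[1]:
        best = (ncomp, score)
```

    The extra parameters of the two-component model are only accepted because they reduce the residual enough to offset the 2k penalty, which is exactly how SCOUSE guards against over-fitting multi-component spectra.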

  13. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1992-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.

  14. The environmental control and life support system advanced automation project. Phase 1: Application evaluation

    NASA Technical Reports Server (NTRS)

    Dewberry, Brandon S.

    1990-01-01

    The Environmental Control and Life Support System (ECLSS) is a Freedom Station distributed system with inherent applicability to advanced automation primarily due to the comparatively large reaction times of its subsystem processes. This allows longer contemplation times in which to form a more intelligent control strategy and to detect or prevent faults. The objective of the ECLSS Advanced Automation Project is to reduce the flight and ground manpower needed to support the initial and evolutionary ECLS system. The approach is to search out and make apparent those processes in the baseline system which are in need of more automatic control and fault detection strategies, to influence the ECLSS design by suggesting software hooks and hardware scars which will allow easy adaptation to advanced algorithms, and to develop complex software prototypes which fit into the ECLSS software architecture and will be shown in an ECLSS hardware testbed to increase the autonomy of the system. Covered here are the preliminary investigation and evaluation process, aimed at searching the ECLSS for candidate functions for automation and providing a software hooks and hardware scars analysis. This analysis shows changes needed in the baselined system for easy accommodation of knowledge-based or other complex implementations which, when integrated in flight or ground sustaining engineering architectures, will produce a more autonomous and fault tolerant Environmental Control and Life Support System.

  15. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

    The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is, to our knowledge, the first use of an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a "truth model" of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian "measurement noise." Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima, but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906

  16. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric, with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
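
    The core fitting idea, a low-pass Fourier fit to a noisy Monte Carlo estimate scored by Pearson's r, can be sketched in 1-D (Python/NumPy; the profile shape, noise level, and frequency cutoff are illustrative, not the paper's 2-D projection data):

```python
import numpy as np

# A noisy "Monte Carlo scatter profile" is fit by keeping only low-frequency
# Fourier terms, i.e., a limited sum of sines and cosines, and Pearson's r
# serves as the goodness-of-fit metric.
rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 1, n, endpoint=False)
truth = 1.0 + 0.5 * np.sin(2 * np.pi * x) + 0.2 * np.cos(6 * np.pi * x)
noisy = truth + 0.3 * rng.normal(size=n)      # MC statistical noise

F = np.fft.rfft(noisy)
F[6:] = 0.0                                   # low-pass: keep the first modes
fit = np.fft.irfft(F, n)

r = np.corrcoef(fit, truth)[0, 1]             # Pearson r against ground truth
```

    Because scatter varies slowly across the detector, discarding the high-frequency modes removes mostly noise, which is why so few photon histories suffice once the fit is in place.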

  17. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses are in turn analyzed as the spectral data become available and, which is new, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, automatically tuning to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  18. Best-Fit Conic Approximation of Spacecraft Trajectory

    NASA Technical Reports Server (NTRS)

    Singh, Gurkipal

    2005-01-01

    A computer program calculates the best conic fit of a given spacecraft trajectory. Spacecraft trajectories are often propagated onboard as conics. The conic-section parameters resulting from the best conic fit are uplinked to computers aboard the spacecraft for use in updating predictions of the spacecraft trajectory for operational purposes. In the initial application for which this program was written, there is a requirement to fit a single conic section (necessitated by onboard memory constraints), accurate to within 200 microradians, to a sequence of positions measured over a 4.7-hour interval. The present program supplants a prior one that could not cover the interval with fewer than four successive conic sections. The present program is based on formulating the best-fit conic problem as a parameter-optimization problem and solving it numerically, on the ground, by use of a modified steepest-descent algorithm. For the purpose of this algorithm, optimization is defined as minimization of the maximum directional propagation error across the fit interval. In the specific initial application, the program generates a single 4.7-hour conic whose directional propagation is accurate to within 34 microradians, beating the 200-microradian requirement by a wide margin.

  19. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    PubMed

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system: 3D GAIT, followed by how the studies in the field of gait biomechanics fit the quantities in the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.

  20. Exploring the Pareto frontier using multisexual evolutionary algorithms: an application to a flexible manufacturing problem

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.; Subbu, Raj

    2002-12-01

    In multi-objective optimization (MOO) problems we need to optimize many, possibly conflicting, objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction and pass their genetic material to the next generation; weak solutions are removed from the population. In MOO problems, this fitness function is vector-valued, i.e., it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (the one yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
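    The non-dominated (Pareto) frontier the abstract targets can be extracted from a candidate set with a simple dominance filter. A minimal sketch, assuming both objectives are minimized; the (cost, time) tuples are invented illustrative data:

```python
def dominates(u, v):
    """u dominates v if u is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# invented (cost, time) pairs for candidate manufacturing plans
plans = [(4, 9), (5, 7), (6, 4), (7, 8), (8, 3)]
front = pareto_front(plans)  # (7, 8) is dominated by (5, 7) and drops out
```

    A multi-sexual EA evolves the population toward this frontier rather than enumerating it, but the dominance test it relies on is the same.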

  1. A real-time simulation evaluation of an advanced detection, isolation, and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  2. Fitting an MSD (mini scleral design) rigid contact lens in advanced keratoconus with INTACS.

    PubMed

    Dalton, Kristine; Sorbara, Luigina

    2011-12-01

    Keratoconus is a bilateral degenerative disease characterized by a non-inflammatory, progressive central corneal ectasia (typically asymmetric) and decreased vision. In its early stages it may be managed with spectacles and soft contact lenses, but more commonly it is managed with rigid contact lenses. In advanced stages, when contact lenses can no longer be fit, have become intolerable, or corneal damage is severe, a penetrating keratoplasty is commonly performed. Alternative surgical techniques, such as the use of intra-stromal corneal ring segments (INTACS), have been developed to try to improve the fit of rigid contact lenses in keratoconic patients and avoid penetrating keratoplasties. This case report follows the fitting of rigid contact lenses on an advanced keratoconic cornea after an INTACS procedure and discusses clinical findings, treatment options, and the use of mini-scleral and scleral lens designs as they relate to the challenges encountered in managing such a patient. Mini-scleral and scleral lenses are relatively easy to fit, and can be of benefit to many patients, including advanced keratoconic patients, post-INTACS patients, and post-penetrating keratoplasty patients. 2011 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  3. Circuit Design Optimization Using Genetic Algorithm with Parameterized Uniform Crossover

    NASA Astrophysics Data System (ADS)

    Bao, Zhiguo; Watanabe, Takahiro

    Evolvable hardware (EHW) is a new research field concerning the use of Evolutionary Algorithms (EAs) to construct electronic systems. EHW refers, in a narrow sense, to the use of evolutionary mechanisms as the algorithmic drivers for system design and, in a general sense, to the capability of a hardware system to develop and improve itself. The Genetic Algorithm (GA) is a typical EA. We propose optimal circuit design using GA with parameterized uniform crossover (GApuc) and with a fitness function composed of circuit complexity, power, and signal delay. Parameterized uniform crossover is much more likely to distribute its disruptive trials in an unbiased manner over larger portions of the space, and thus has more exploratory power than one- and two-point crossover, giving more chances of finding better solutions. Its effectiveness is shown by experiments. From the results, we can see that the best elite fitness, the average fitness of the correct circuits, and the number of correct circuits of GApuc are better than those of GA with one-point or two-point crossover. The best optimal circuit generated by GApuc is 10.18% and 6.08% better in evaluation value than that by GA with one-point and two-point crossover, respectively.
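    Parameterized uniform crossover itself is compact enough to sketch. A hedged illustration: the bit-string encoding and swap probability below are illustrative, not the circuit genomes of the paper.

```python
import random

def parameterized_uniform_crossover(parent_a, parent_b, p_swap=0.3, rng=random):
    """Swap each gene independently with probability p_swap; p_swap=0.5
    recovers plain uniform crossover, smaller values are less disruptive."""
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if rng.random() < p_swap:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

rng = random.Random(1)
a, b = parameterized_uniform_crossover([0] * 8, [1] * 8, p_swap=0.3, rng=rng)
```

    Because each gene is considered independently, the operator has no positional bias, which is the source of the extra exploratory power mentioned above.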

  4. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
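    Once a threshold matrix exists, applying it is simple ordered dithering: tile the matrix over the image and compare each pixel against its local threshold. The sketch below uses a classic 2x2 Bayer matrix as a stand-in; the paper's GA instead evolves larger matrices with blue-noise statistics.

```python
def halftone(image, threshold_matrix):
    """Ordered dithering: tile the threshold matrix over the image and
    place a dot wherever the gray value exceeds the local threshold."""
    th, tw = len(threshold_matrix), len(threshold_matrix[0])
    return [[1 if image[y][x] > threshold_matrix[y % th][x % tw] else 0
             for x in range(len(image[0]))]
            for y in range(len(image))]

# classic 2x2 Bayer matrix scaled to 0..255 (a stand-in; the paper's GA
# evolves larger matrices with blue-noise statistics)
bayer2 = [[0, 128], [192, 64]]
flat_gray = [[100] * 4 for _ in range(4)]  # uniform 100/255 gray patch
dots = halftone(flat_gray, bayer2)
```

    A uniform gray renders as a regular dot pattern whose density matches the gray level; an evolved blue-noise matrix would break up this regularity.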

  5. Evolutionary search for new high-k dielectric materials: methodology and applications to hafnia-based oxides.

    PubMed

    Zeng, Qingfeng; Oganov, Artem R; Lyakhov, Andriy O; Xie, Congwei; Zhang, Xiaodong; Zhang, Jin; Zhu, Qiang; Wei, Bingqing; Grigorenko, Ilya; Zhang, Litong; Cheng, Laifei

    2014-02-01

    High-k dielectric materials are important as gate oxides in microelectronics and as potential dielectrics for capacitors. In order to enable computational discovery of novel high-k dielectric materials, we propose a fitness model (energy storage density) that includes the dielectric constant, bandgap, and intrinsic breakdown field. This model, used as a fitness function in conjunction with first-principles calculations and the global optimization evolutionary algorithm USPEX, efficiently leads to practically important results. We found a number of high-fitness structures of SiO2 and HfO2, some of which correspond to known phases and some of which are new. The results allow us to propose characteristics (genes) common to high-fitness structures--these are the coordination polyhedra and their degree of distortion. Our variable-composition searches in the HfO2-SiO2 system uncovered several high-fitness states. This hybrid algorithm opens up a new avenue for discovering novel high-k dielectrics with both fixed and variable compositions, and will speed up the process of materials discovery.
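    A simplified stand-in for the fitness model can be written down from the textbook energy density of a linear dielectric, U = (1/2) * eps0 * eps_r * E_bd^2. This is only a sketch: the paper's actual fitness also folds in the bandgap (which bounds the intrinsic breakdown field), and the candidate phases below are hypothetical.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def energy_density(eps_r, breakdown_field):
    """Simplified energy-storage-density proxy for a linear dielectric,
    U = (1/2) * eps0 * eps_r * E_bd^2, in J/m^3. The paper's fitness
    additionally involves the bandgap, which bounds E_bd."""
    return 0.5 * EPS0 * eps_r * breakdown_field ** 2

# hypothetical candidate phases: (relative permittivity, E_bd in V/m)
candidates = {"phase_A": (25.0, 4e8), "phase_B": (12.0, 9e8)}
best = max(candidates, key=lambda name: energy_density(*candidates[name]))
```

    Note the quadratic dependence on breakdown field: a modest-permittivity phase with a high E_bd can out-score a high-k phase, which is exactly the trade-off a fitness-driven search must navigate.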

  6. Privacy-Preserving Data Exploration in Genome-Wide Association Studies.

    PubMed

    Johnson, Aaron; Shmatikov, Vitaly

    2013-08-01

    Genome-wide association studies (GWAS) have become a popular method for analyzing sets of DNA sequences in order to discover the genetic basis of disease. Unfortunately, statistics published as the result of GWAS can be used to identify individuals participating in the study. To prevent privacy breaches, even previously published results have been removed from public databases, impeding researchers' access to the data and hindering collaborative research. Existing techniques for privacy-preserving GWAS focus on answering specific questions, such as correlations between a given pair of SNPs (DNA sequence variations). This does not fit the typical GWAS process, where the analyst may not know in advance which SNPs to consider and which statistical tests to use, how many SNPs are significant for a given dataset, etc. We present a set of practical, privacy-preserving data mining algorithms for GWAS datasets. Our framework supports exploratory data analysis, where the analyst does not know a priori how many and which SNPs to consider. We develop privacy-preserving algorithms for computing the number and location of SNPs that are significantly associated with the disease, the significance of any statistical test between a given SNP and the disease, any measure of correlation between SNPs, and the block structure of correlations. We evaluate our algorithms on real-world datasets and demonstrate that they produce significantly more accurate results than prior techniques while guaranteeing differential privacy.
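    The differential privacy guarantee the paper builds on is commonly obtained with the Laplace mechanism: add noise scaled to the query's sensitivity. A minimal sketch for a count query (sensitivity 1); this is the standard mechanism, not the paper's specific GWAS algorithms.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise by inverse-transform sampling."""
    u = rng.random() - 0.5
    u = max(u, -0.5 + 1e-12)  # guard against log(0)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, rng):
    """epsilon-differentially-private count release: a counting query
    changes by at most 1 when one participant is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
noisy = [private_count(100, 1.0, rng) for _ in range(5000)]
```

    Smaller epsilon means stronger privacy but noisier releases; the paper's contribution is keeping that noise small enough for exploratory GWAS statistics.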

  7. Leveraging search and content exploration by exploiting context in folksonomy systems

    NASA Astrophysics Data System (ADS)

    Abel, Fabian; Baldoni, Matteo; Baroglio, Cristina; Henze, Nicola; Kawase, Ricardo; Krause, Daniel; Patti, Viviana

    2010-04-01

    With the advent of Web 2.0, tagging became a popular feature in social media systems. People tag diverse kinds of content, e.g., products at Amazon, music at Last.fm, images at Flickr, etc. In recent years several researchers have analyzed the impact of tags on information retrieval. Most works focused on tags only and ignored context information. In this article we present context-aware approaches for learning semantics and improving personalized information retrieval in tagging systems. We investigate how explorative search, initiated by clicking on tags, can be enhanced with automatically produced context information so that search results better fit the actual information needs of the users. We introduce the SocialHITS algorithm and present an experiment where we compare different algorithms for ranking users, tags, and resources in a contextualized way. We showcase our approaches in the domain of images and present the TagMe! system that enables users to explore and tag Flickr pictures. In TagMe! we further demonstrate how advanced context information can easily be generated: TagMe! allows users to attach tag assignments to a specific area within an image and to categorize tag assignments. In our corresponding evaluation we show that those additional facets of tag assignments carry valuable semantics, which can be applied to significantly improve existing search and ranking algorithms.
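    SocialHITS extends the classic HITS link-analysis algorithm, whose core iteration is easy to sketch: hub and authority scores reinforce each other over a graph. The user/resource edge list below is invented, and the contextualized scoring of the paper is not reproduced here.

```python
def hits(edges, nodes, iters=50):
    """Classic HITS power iteration: an authority is pointed to by good
    hubs, a hub points to good authorities; renormalize each round."""
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        for scores in (auth, hub):
            norm = sum(scores.values()) or 1.0
            for n in scores:
                scores[n] /= norm
    return hub, auth

# invented tagging graph: users u1..u3 "endorse" resources t and r
edges = [("u1", "t"), ("u2", "t"), ("u3", "t"), ("u3", "r")]
hub, auth = hits(edges, ["u1", "u2", "u3", "t", "r"])
```

    In a folksonomy the same iteration can rank users, tags, and resources jointly; the paper's contribution is restricting and weighting the graph by context.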

  8. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
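    The AUC metric used above has a convenient rank-sum interpretation: it is the probability that a randomly chosen positive (e.g., a true type Ia) is scored higher than a randomly chosen negative. A minimal sketch with invented classifier scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the
    probability that a random positive outscores a random negative
    (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# invented classifier scores: the first three objects are the positives
labels = [1, 1, 1, 0, 0]
perfect = auc([0.9, 0.8, 0.7, 0.2, 0.1], labels)
```

    An AUC of 0.98, as reported for the SALT2 and wavelet feature sets with BDTs, means a positive outranks a negative 98% of the time.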

  9. Optimization Methods in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.

    2009-09-01

    Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g., Cash), is minimized in the fitting process to obtain a set of best-fit model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g., an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific classes of functions. Sherpa, however, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their use cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
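    The global-search method mentioned last follows differential evolution. A minimal DE/rand/1/bin sketch in the spirit of Storn and Price (1997), run on an invented two-parameter statistic surface rather than a real Chandra fit:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=200, seed=0):
    """Minimal DE/rand/1/bin in the spirit of Storn & Price (1997):
    mutate with a scaled difference of two random members, apply
    binomial crossover, keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # force at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            trial_cost = f(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# invented two-parameter "statistic surface" with its minimum at (1, -2)
x, c = differential_evolution(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                              [(-5.0, 5.0), (-5.0, 5.0)])
```

    Because the population explores many basins in parallel, this class of method is far less likely than a local minimizer to stall in a secondary mode of a multimodal statistic.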

  10. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact, and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for fitting large amounts of data. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
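    The idea of fitting in a Legendre basis can be sketched with the three-term recurrence plus ordinary normal equations. This is a generic illustration, not the paper's exact scheme (which derives explicit expressions and avoids a full SVD):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term
    recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p_cur = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_cur = p_cur, ((2 * k + 1) * x * p_cur - k * p_prev) / (k + 1)
    return p_cur

def fit_legendre(xs, ys, degree):
    """Least-squares coefficients in the Legendre basis via the normal
    equations, solved with Gaussian elimination (illustrative only)."""
    m = degree + 1
    A = [[sum(legendre(i, x) * legendre(j, x) for x in xs) for j in range(m)]
         for i in range(m)]
    b = [sum(legendre(i, x) * y for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination w/ pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for cc in range(col, m):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    c = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, m))) / A[r][r]
    return c

xs = [-1.0 + 0.1 * i for i in range(21)]
ys = [0.5 + 2.0 * x for x in xs]              # exactly 0.5*P_0 + 2*P_1
coeffs = fit_legendre(xs, ys, 3)
```

    On [-1, 1] the Legendre polynomials are nearly orthogonal even over a discrete grid, which keeps the normal equations well conditioned; the paper's scheme exploits this structure to beat the SVD in time and memory.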

  11. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
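    The Kronecker product at the heart of the algorithm combines small per-mode matrices into the large direct-product operator. A minimal sketch of the product itself, on tiny invented matrices; the paper's contribution is the repeated, memory-efficient use of this operation inside linear least squares:

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists:
    each entry a_ij of A scales a full copy of B."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
K = kron(A, B)  # a 4x4 matrix
```

    For a PES in direct-product form, the design matrix factorizes as a Kronecker product of small one-mode matrices, so the normal equations can be solved mode by mode instead of assembling the full matrix.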

  12. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  13. Obtaining, Maintaining, and Advancing Your Fitness Certification

    ERIC Educational Resources Information Center

    Pierce, Patricia; Herman, Susan

    2004-01-01

    Public awareness of health, fitness, and exercise has increased and the fitness industry has expanded in recent years. Yet, ironically, the health of our nation continues to deteriorate. Now more than ever there is the need for qualified fitness professionals to help individuals to improve or maintain health and fitness. Since fitness…

  14. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
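    The original Lord-Wingersky recursion is short enough to sketch for dichotomous items: starting from a certain score of zero, each item splits every partial-score probability into an "incorrect" and a "correct" branch. The item probabilities below are illustrative values at a fixed latent-trait level:

```python
def summed_score_dist(p_correct):
    """Lord-Wingersky recursion for dichotomous items: the summed-score
    distribution at a fixed latent-trait value, built one item at a time."""
    dist = [1.0]                      # before any item, score 0 w.p. 1
    for p in p_correct:
        new = [0.0] * (len(dist) + 1)
        for s, pr in enumerate(dist):
            new[s] += pr * (1.0 - p)  # item answered incorrectly
            new[s + 1] += pr * p      # item answered correctly
        dist = new
    return dist

dist = summed_score_dist([0.5, 0.5])  # two coin-flip items
```

    Version 2.0 extends this same recursion to multidimensional models by carrying it out over a fixed quadrature grid of latent-trait values.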

  15. EFFICIENT MODEL-FITTING AND MODEL-COMPARISON FOR HIGH-DIMENSIONAL BAYESIAN GEOSTATISTICAL MODELS. (R826887)

    EPA Science Inventory

    Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...

  16. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    PubMed

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiments on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to the other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.
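    The RMV fitness itself is a one-liner: the mean of a sample divided by its variance. A hedged sketch; how the paper aggregates RMV over object features is not reproduced, and the sample values are invented:

```python
def rmv_fitness(values):
    """Ratio of Mean to Variance: rewards responses that are large on
    average yet tightly clustered (larger is taken as better here)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean / var if var > 0 else float("inf")

compact = rmv_fitness([10.0, 10.1, 9.9, 10.0])  # same mean, small spread
spread = rmv_fitness([10.0, 14.0, 6.0, 10.0])   # same mean, large spread
```

    A feature subset whose responses are consistent (low variance at a given mean) scores higher, which is the property the GPSO search is driving toward.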

  17. Using Fitness Trackers and Smartwatches to Measure Physical Activity in Research: Analysis of Consumer Wrist-Worn Wearables

    PubMed Central

    Haugen Mikalsen, Martin; Woldaregay, Ashenafi Zebene; Muzny, Miroslav; Hartvigsen, Gunnar; Hopstock, Laila Arnesdatter; Grimsgaard, Sameline

    2018-01-01

    Background New fitness trackers and smartwatches are released to the consumer market every year. These devices are equipped with different sensors, algorithms, and accompanying mobile apps. With recent advances in mobile sensor technology, privately collected physical activity data can be used as an addition to existing methods for health data collection in research. Furthermore, data collected from these devices have possible applications in patient diagnostics and treatment. With an increasing number of diverse brands, there is a need for an overview of device sensor support, as well as device applicability in research projects. Objective The objective of this study was to examine the availability of wrist-worn fitness wearables and analyze availability of relevant fitness sensors from 2011 to 2017. Furthermore, the study was designed to assess brand usage in research projects, compare common brands in terms of developer access to collected health data, and features to consider when deciding which brand to use in future research. Methods We searched for devices and brand names in six wearable device databases. For each brand, we identified additional devices on official brand websites. The search was limited to wrist-worn fitness wearables with accelerometers, for which we mapped brand, release year, and supported sensors relevant for fitness tracking. In addition, we conducted a Medical Literature Analysis and Retrieval System Online (MEDLINE) and ClinicalTrials search to determine brand usage in research projects. Finally, we investigated developer accessibility to the health data collected by identified brands. Results We identified 423 unique devices from 132 different brands. Forty-seven percent of brands released only one device. Introduction of new brands peaked in 2014, and the highest number of new devices was introduced in 2015. 
Sensor support increased every year, and in addition to the accelerometer, a photoplethysmograph, for estimating heart rate, was the most common sensor. Of the brands currently available, the five most often used in research projects are Fitbit, Garmin, Misfit, Apple, and Polar. Fitbit is used in twice as many validation studies as any other brand and is registered in ClinicalTrials studies 10 times as often as other brands. Conclusions The wearable landscape is in constant change. New devices and brands are released every year, promising improved measurements and user experience. At the same time, other brands disappear from the consumer market for various reasons. Advances in device quality offer new opportunities for research. However, only a few well-established brands are frequently used in research projects, and even fewer are thoroughly validated. PMID:29567635

  18. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has no effective solution in the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into a foveal active vision model, MARR. It is supposed that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.

  19. Brain segmentation and the generation of cortical surfaces

    NASA Technical Reports Server (NTRS)

    Joshi, M.; Cui, J.; Doolittle, K.; Joshi, S.; Van Essen, D.; Wang, L.; Miller, M. I.

    1999-01-01

    This paper describes methods for white matter segmentation in brain images and the generation of cortical surfaces from the segmentations. We have developed a system that allows a user to start with a brain volume, obtained by modalities such as MRI or cryosection, and constructs a complete digital representation of the cortical surface. The methodology consists of three basic components: local parametric modeling and Bayesian segmentation; surface generation and local quadratic coordinate fitting; and surface editing. Segmentations are computed by parametrically fitting known density functions to the histogram of the image using the expectation maximization algorithm [DLR77]. The parametric fits are obtained locally rather than globally over the whole volume to overcome local variations in gray levels. To represent the boundary of the gray and white matter we use triangulated meshes generated using isosurface generation algorithms [GH95]. A complete system of local parametric quadratic charts [JWM+95] is superimposed on the triangulated graph to facilitate smoothing and geodesic curve tracking. Algorithms for surface editing include extraction of the largest closed surface. Results for several macaque brains are presented comparing automated and hand surface generation. Copyright 1999 Academic Press.
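    The Bayesian segmentation step, fitting known density functions to the intensity histogram with expectation maximization, can be sketched for a two-component 1D Gaussian mixture. The intensity samples below are invented stand-ins for gray-matter and white-matter voxel values:

```python
import math

def em_two_gaussians(data, iters=100):
    """EM for a two-component 1D Gaussian mixture, mirroring the
    histogram-based parametric segmentation step (illustrative only)."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:                      # E-step: responsibilities
            dens = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                    for k in (0, 1)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        for k in (0, 1):                    # M-step: reestimate parameters
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, w

# invented gray-matter-like (~60) and white-matter-like (~120) intensities
data = [58.0, 59.0, 60.0, 61.0, 62.0, 118.0, 119.0, 120.0, 121.0, 122.0]
mu, var, w = em_two_gaussians(data)
```

    The paper fits such models locally rather than over the whole volume, precisely to cope with spatial variations in gray level that a single global fit would blur over.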

  20. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    NASA Astrophysics Data System (ADS)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major producer countries for fishery production due to its location in the equatorial environment. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand, and high price. However, the demand for grouper fish still cannot be met from the wild catch alone. Therefore, there is a need to farm grouper fish to cater to the market demand. In order to farm grouper fish, prior knowledge of the proper nutrients needed is required, because no exact data are available. Therefore, in this study, primary and secondary data are collected, despite the limited number of related papers, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. This study would thus unlock frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: a feasible, low-fitness solution can be obtained quickly, and the fitted formulation can be further used to minimize the cost of farming grouper fish.

  1. Modeling multilayer x-ray reflectivity using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.

    2000-06-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum, which is a difficult task when the topology of the variable space is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual, and the population is a collection of individuals. Each generation is built from the parent generation by applying operators (selection, crossover, mutation, etc.) to the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
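    The GA loop described above can be sketched generically: tournament selection, one-point crossover, Gaussian mutation. The chi-square-like misfit and "layer parameters" below are invented placeholders, not a reflectivity model:

```python
import random

def genetic_minimize(f, dim, bounds, pop_size=40, gens=150,
                     mut_sigma=0.05, seed=2):
    """Bare-bones real-coded GA: binary tournament selection, one-point
    crossover, Gaussian mutation with clamping to the search box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)                   # binary tournament
            return a if f(a) < f(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, dim)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [min(hi, max(lo, g + rng.gauss(0.0, mut_sigma)))
                     for g in child]                    # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# invented "misfit" standing in for the reflectivity chi-square
target = [0.3, -0.7, 0.5]
best = genetic_minimize(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                        3, (-1.0, 1.0))
```

    Because no gradient or good starting point is needed, this kind of search copes with the highly structured parameter landscapes the abstract describes, at the cost of many forward simulations.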

  2. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.

  3. NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, A. L.; Morsink, S. M.; Fiege, J. D.

    The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer's line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best determined are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.

  4. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired for implementing genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.

  5. An Improved Heuristic Method for Subgraph Isomorphism Problem

    NASA Astrophysics Data System (ADS)

    Xiang, Yingzhuo; Han, Jiesi; Xu, Haijiang; Guo, Xin

    2017-09-01

    This paper focuses on the subgraph isomorphism (SI) problem. We present an improved genetic algorithm, a heuristic method to search for the optimal solution. The contribution of this paper is a dedicated crossover algorithm and a new fitness function to measure the evolution process. Experiments show that our improved genetic algorithm performs better than other heuristic methods. For a large graph, such as a subgraph of 40 nodes, our algorithm outperforms traditional tree search algorithms. We find that the performance of our improved genetic algorithm does not decrease as the number of nodes in the prototype graphs increases.

  6. Modeling and measurements of XRD spectra of extended solids under high pressure

    NASA Astrophysics Data System (ADS)

    Batyrev, I. G.; Coleman, S. P.; Stavrou, E.; Zaug, J. M.; Ciezak-Jenkins, J. A.

    2017-06-01

    We present results of evolutionary simulations based on density functional calculations of various extended solids (N-Si and N-H) using the variable- and fixed-concentration methods of USPEX. Structures predicted by the evolutionary simulations were analyzed in terms of thermodynamic stability and agreement with experimental X-ray diffraction spectra. Stability of the predicted systems was estimated from convex-hull plots. X-ray diffraction spectra were calculated using a virtual diffraction algorithm which computes kinematic diffraction intensity in three-dimensional reciprocal space before reducing it to a two-theta line profile. Thousands of calculated XRD spectra were used to search for the structure of an extended solid at a given pressure that best fits the experimental data according to XRD peak position, peak intensity, and theoretically calculated enthalpy. Comparison of Raman and IR spectra calculated for the best-fitted structures with available experimental data shows reasonable agreement for certain vibration modes. Part of this work was performed by LLNL, Contract DE-AC52-07NA27344. We thank the Joint DoD / DOE Munitions Technology Development Program, the HE C-II research program at LLNL and Advanced Light Source, supported by BES DOE, Contract No. DE-AC02-05CH112.

  7. Measurements of hip-bone distortions caused by the stress of inserted prosthesis by means of the speckle photography method

    NASA Astrophysics Data System (ADS)

    Gajda, Jerzy K.; Niesterowicz, Andrzej; Mazurkiewicz, Henryk

    1995-03-01

    A high number of osseous diseases, particularly of the backbone and hip-joint regions, results in a need for their overall treatment and prevention. Two basic treatment methods are used: physical exercises at an early stage of the illness, and surgical treatment at an advanced stage. Recently, in the operational treatment of coxarthrosis, the elements of the joint (acetabulum and capitellum) have been replaced by artificial counterparts, despite some drawbacks and unknowns related to this kind of treatment. In order to check the effectiveness of this treatment and to eliminate its drawbacks, we have tested the joint by means of the speckle photography method. The objective of this paper is to evaluate stress and displacement distributions in a system consisting of an artificial acetabulum and capitellum and a natural bone, in order to determine an optimum fitting of the artificial elements that guarantees a uniform distribution of stresses corresponding to the anatomical and physiological parameters of the hip-joint. Speckle photographs have been analyzed point by point with the help of an algorithm for striped-image processing.

  8. INFOS: spectrum fitting software for NMR analysis.

    PubMed

    Smith, Albert A

    2017-02-01

    Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.
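
    INFOS fits Fourier-transformed lineshapes derived from the acquisition parameters; as a much-simplified stand-in for that idea, the sketch below fits a single Lorentzian peak by grid-searching the linewidth, with the amplitude obtained in closed form by linear least squares at each candidate width. The function names, sampling grid, and peak parameters are illustrative assumptions, not the software's interface:

```python
def lorentzian(f, f0, w, a):
    """Lorentzian lineshape: amplitude a, centre f0, half-width w."""
    return a * w ** 2 / ((f - f0) ** 2 + w ** 2)

def fit_width(freqs, spectrum, f0, widths):
    """Grid-search the linewidth; for each candidate width the best
    amplitude is linear least squares in a, so it has a closed form."""
    best = None
    for w in widths:
        basis = [w ** 2 / ((f - f0) ** 2 + w ** 2) for f in freqs]
        a = (sum(b * s for b, s in zip(basis, spectrum))
             / sum(b * b for b in basis))
        sse = sum((a * b - s) ** 2 for b, s in zip(basis, spectrum))
        if best is None or sse < best[0]:
            best = (sse, w, a)
    return best[1], best[2]

freqs = [i * 0.5 for i in range(-100, 101)]            # Hz offsets
spectrum = [lorentzian(f, 0.0, 3.0, 10.0) for f in freqs]
w_fit, a_fit = fit_width(freqs, spectrum, 0.0,
                         [1.0 + 0.5 * k for k in range(10)])
```

    On this noiseless synthetic peak the grid contains the true width, so the fit recovers it exactly; real spectra would need a finer search or a proper nonlinear optimizer.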

  9. Online spectral fit tool (OSFT) for analyzing reflectance spectra

    NASA Astrophysics Data System (ADS)

    Penttilä, A.; Kohout, T.; Muinonen, K.

    2015-10-01

    We present an algorithm and its implementation for fitting continuum and absorption bands to UV/VIS/NIR reflectance spectra. The implementation is done completely in JavaScript and HTML, and will run in any modern web browser without requiring external libraries to be installed.

  10. Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment

    NASA Technical Reports Server (NTRS)

    Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A genetic algorithm procedure is developed and implemented for fitting parameters of many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.

  11. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional, while maintaining sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  12. Application of particle swarm optimization in path planning of mobile robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    In order to realize optimal path planning for a mobile robot in an unknown environment, a particle swarm optimization algorithm using path length as the fitness function is proposed. The location of the global optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.
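
    The standard particle swarm update the paper builds on can be sketched as follows. This is a generic PSO that minimizes straight-line distance to a goal point as a stand-in for the path-length fitness; the swarm size, inertia weight, and acceleration coefficients are common textbook values, not the paper's settings, and obstacle handling is omitted:

```python
import random

random.seed(7)

GOAL = (8.0, 8.0)          # target position in a toy 2-D workspace

def fitness(p):
    """Path-length style fitness: straight-line distance to the goal."""
    return ((p[0] - GOAL[0]) ** 2 + (p[1] - GOAL[1]) ** 2) ** 0.5

def pso(n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=fitness)[:]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```

    On this convex toy fitness the swarm contracts onto the goal; a path-planning variant would evaluate candidate waypoint sequences and penalize paths that intersect obstacles.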

  13. The Full Monte Carlo: A Live Performance with Stars

    NASA Astrophysics Data System (ADS)

    Meng, Xiao-Li

    2014-06-01

    Markov chain Monte Carlo (MCMC) is being applied increasingly often in modern Astrostatistics. It is indeed incredibly powerful, but also very dangerous. It is popular because of its apparent generality (from simple to highly complex problems) and simplicity (the availability of out-of-the-box recipes). It is dangerous because it always produces something, but there is no surefire way to verify or even diagnose that the “something” is remotely close to what MCMC theory predicts or one hopes. Using very simple models (e.g., conditionally Gaussian), this talk starts with a tutorial on the two most popular MCMC algorithms, namely the Gibbs Sampler and the Metropolis-Hastings Algorithm, and illustrates their good, bad, and ugly implementations via live demonstration. The talk ends with a story of how a recent advance, the Ancillary-Sufficient Interweaving Strategy (ASIS) (Yu and Meng, 2011, http://www.stat.harvard.edu/Faculty_Content/meng/jcgs.2011-article.pdf), reduces the danger. It was discovered almost by accident during a Ph.D. student's (Yaming Yu) struggle with fitting a Cox process model for detecting changes in source intensity of photon counts observed by the Chandra X-ray telescope from a (candidate) neutron/quark star.
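
    For readers unfamiliar with the Metropolis-Hastings algorithm mentioned above, a minimal random-walk sampler targeting a standard normal looks like this. It is a generic textbook sketch, not code from the talk; the step size and sample count are arbitrary:

```python
import math
import random

random.seed(0)

def log_target(x):
    """Log-density of a standard normal, up to an additive constant."""
    return -0.5 * x * x

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept
    with probability min(1, pi(x') / pi(x))."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            x = proposal
        samples.append(x)   # rejected proposals repeat the current state
    return samples

chain = metropolis_hastings(20000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

    The "danger" the talk warns about is visible even here: the chain always returns numbers, and only diagnostics (trace plots, comparing moments against the known target, multiple chains) reveal whether it has mixed.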

  14. Grand Views of Evolution.

    PubMed

    de Vladar, Harold P; Santos, Mauro; Szathmáry, Eörs

    2017-05-01

    Despite major advances in evolutionary theories, some aspects of evolution remain neglected: whether evolution would come to a halt without abiotic change; whether it is unbounded and open-ended; or whether it is progressive and something beyond fitness is maximized. Here, we discuss some models of ecology and evolution and argue that ecological change, resulting in Red Queen dynamics, facilitates (but does not ensure) innovation. We distinguish three forms of open-endedness. In weak open-endedness, novel phenotypes can occur indefinitely. Strong open-endedness requires the continual appearance of evolutionary novelties and/or innovations. Ultimate open-endedness entails an indefinite increase in complexity, which requires unlimited heredity. Open-ended innovation needs exaptations that generate novel niches. This can result in new traits and new rules as the dynamics unfolds, suggesting that evolution is not fully algorithmic. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Inclusion of temporal priors for automated neonatal EEG classification

    NASA Astrophysics Data System (ADS)

    Temko, Andriy; Stevenson, Nathan; Marnane, William; Boylan, Geraldine; Lightbody, Gordon

    2012-08-01

    The aim of this paper is to use recent advances in the clinical understanding of the temporal evolution of seizure burden in neonates with hypoxic ischemic encephalopathy to improve the performance of automated detection algorithms. Probabilistic weights are designed from the temporal locations of neonatal seizure events relative to the time of birth. These weights are obtained by fitting a skew-normal distribution to the temporal seizure density and are introduced into the probabilistic framework of the previously developed neonatal seizure detector. The results are validated on the largest available clinical dataset, comprising 816.7 h of recordings. By exploiting these priors, the receiver operating characteristic area is increased by 23% (relative), reaching 96.74%. The number of false detections per hour is decreased from 0.45 to 0.25, while maintaining correct detection of seizure burden at 70%.
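
    The skew-normal density used here to weight seizure probability by time since birth has a simple closed form, 2/ω · φ(z) · Φ(αz) with z = (x − ξ)/ω. The sketch below evaluates it directly with standard-library special functions; the location, scale, and shape values are illustrative, not the fitted parameters from the paper:

```python
import math

def skew_normal_pdf(x, loc=0.0, scale=1.0, alpha=0.0):
    """Skew-normal density: (2/scale) * phi(z) * Phi(alpha * z),
    where phi/Phi are the standard normal pdf/cdf and z = (x-loc)/scale."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2)))
    return 2.0 / scale * phi * Phi

# With alpha > 0 the density is right-skewed: probability mass rises
# quickly after the location parameter and tails off slowly, the shape
# one would want for a seizure burden that peaks hours after birth.
weights = [skew_normal_pdf(t, loc=10.0, scale=8.0, alpha=4.0)
           for t in range(48)]   # hypothetical hourly grid after birth
```

    Setting alpha = 0 recovers the ordinary normal density, which is a convenient sanity check on the implementation.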

  16. Utilization of volume correlation filters for underwater mine identification in LIDAR imagery

    NASA Astrophysics Data System (ADS)

    Walls, Bradley

    2008-04-01

    Underwater mine identification persists as a critical technology pursued aggressively by the Navy for fleet protection. As such, new and improved techniques must continue to be developed in order to provide measurable increases in mine identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in the Volume Correlation Filter (VCF) developed for ground based LIDAR systems can be adapted to identify targets in underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification employ spatial based three-dimensional (3D) shape fitting of models to LIDAR data to identify common mine shapes consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative to these spatial techniques by correlating 3D models against the 3D rendered LIDAR data.

  17. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving

    PubMed Central

    Elfring, Jos; Appeldoorn, Rein; van den Dries, Sjoerd; Kwakkernaat, Maurice

    2016-01-01

    The number of perception sensors on automated vehicles increases due to the increasing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, thereby increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to effectively come up with an implementation to build a consistent model of a vehicle’s surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture. PMID:27727171

  18. Integrating Advanced Physical Training Programs into the Marine Corps

    DTIC Science & Technology

    2009-02-20

    all of which are available to the public for use. However, the most popular training program amongst Marines is CrossFit. While CrossFit is a...the CrossFit program and consequently a fee is required to participate in the CrossFit... P90X, Extreme Body Workout, (unknown... CrossFit: Forging Elite Fitness, (unknown, CrossFit: Forging Elite Fitness n.d.), CrossFit, as advertised on its website, is a principal strength and

  19. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.

    PubMed

    Reid, Stephen; Tibshirani, Rob

    2014-07-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.
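
    The cyclic coordinate descent idea underlying clogitL1 is easiest to see in the plain lasso linear-regression setting, where each coefficient update is a closed-form soft-threshold step. The sketch below is that simpler case, not the conditional logistic likelihood the package actually fits, and the data and penalty value are invented for illustration:

```python
import random

random.seed(1)

def soft_threshold(z, g):
    """Soft-thresholding operator: shrink z toward 0 by g."""
    return (z - g) if z > g else (z + g) if z < -g else 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for lasso linear regression: each pass
    updates one coefficient at a time with a soft-threshold step."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            xj2 = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(zj, lam) / xj2
    return beta

# Toy data: y depends on feature 0 only; feature 1 is pure noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [3.0 * x0 + random.gauss(0, 0.1) for x0, _ in X]
beta = lasso_cd(X, y, lam=0.5)
```

    The penalty zeroes the noise coefficient exactly while shrinking the true coefficient by roughly lam, which is the variable-selection behaviour the paper exploits; warm starts reuse the previous beta when lam is decreased along the regularization path.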

  20. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package

    PubMed Central

    Reid, Stephen; Tibshirani, Rob

    2014-01-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by. PMID:26257587

  1. A Method for the Interpretation of Flow Cytometry Data Using Genetic Algorithms.

    PubMed

    Angeletti, Cesar

    2018-01-01

    Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed by introducing FITS metafiles from 4 normal and 4 acute myeloid leukemia subjects into the artificial intelligence system. Two mathematical algorithms, termed 018330 and 025886, were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, the two algorithms combined showed high discriminatory power, with an area under the receiver operating characteristic (ROC) curve of 0.912. The present results suggest that machine learning systems hold great promise in the interpretation of hematological flow cytometry data.

  2. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering was first used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser facula image was carried out to extract the target image from the background. Then morphological filtering was performed to eliminate noise points inside and outside the spot. Finally, the edge of the pretreated facula image was extracted and the laser spot center was obtained using the circle fitting method. Building on the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves detection accuracy.
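
    The final circle-fitting step can be done with the classical Kåsa algebraic fit, in which the circle equation becomes linear in its parameters; the paper does not specify which circle-fit variant it uses, so this is a generic sketch on synthetic edge points:

```python
import math

def fit_circle(points):
    """Kasa algebraic circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c is
    linear in (a, b, c); solve the 3x3 normal equations directly."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = [2 * x, 2 * y, 1.0]
        t = x * x + y * y
        for i in range(3):
            rhs[i] += row[i] * t
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (rhs[r] - sum(A[r][j] * sol[j]
                               for j in range(r + 1, 3))) / A[r][r]
    a, b, c = sol
    # c = r^2 - a^2 - b^2, so the radius follows from the solution
    return a, b, math.sqrt(c + a * a + b * b)

# Synthetic edge points on a circle centred at (5, -2) with radius 4
pts = [(5 + 4 * math.cos(k * 0.1), -2 + 4 * math.sin(k * 0.1))
       for k in range(63)]
cx, cy, r = fit_circle(pts)
```

    On noiseless edge points the fit is exact; in the pipeline above, the filtering and morphological steps matter precisely because algebraic fits like this are sensitive to outlier edge pixels.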

  3. Digital Terrain from a Two-Step Segmentation and Outlier-Based Algorithm

    NASA Astrophysics Data System (ADS)

    Hingee, Kassel; Caccetta, Peter; Caccetta, Louis; Wu, Xiaoliang; Devereaux, Drew

    2016-06-01

    We present a novel ground filter for remotely sensed height data. Our filter has two phases: the first phase segments the DSM with a slope threshold and uses gradient direction to identify candidate ground segments; the second phase fits surfaces to the candidate ground points and removes outliers. Digital terrain is obtained by a surface fit to the final set of ground points. We tested the new algorithm on digital surface models (DSMs) for a 9600 km² region around Perth, Australia. This region contains a large mix of land uses (urban, grassland, native forest and plantation forest) and includes both a sandy coastal plain and a hillier region (elevations up to 0.5 km). The DSMs are captured annually at 0.2 m resolution using aerial stereo photography, resulting in 1.2 TB of input data per annum. Overall accuracy of the filter was estimated to be 89.6%, and on a small semi-rural subset our algorithm was found to have 40% fewer errors compared to Inpho's Match-T algorithm.

  4. Advanced ensemble modelling of flexible macromolecules using X-ray solution scattering.

    PubMed

    Tria, Giancarlo; Mertens, Haydyn D T; Kachala, Michael; Svergun, Dmitri I

    2015-03-01

    Dynamic ensembles of macromolecules mediate essential processes in biology. Understanding the mechanisms driving the function and molecular interactions of 'unstructured' and flexible molecules requires alternative approaches to those traditionally employed in structural biology. Small-angle X-ray scattering (SAXS) is an established method for structural characterization of biological macromolecules in solution, and is directly applicable to the study of flexible systems such as intrinsically disordered proteins and multi-domain proteins with unstructured regions. The Ensemble Optimization Method (EOM) [Bernadó et al. (2007). J. Am. Chem. Soc. 129, 5656-5664] was the first approach introducing the concept of ensemble fitting of the SAXS data from flexible systems. In this approach, a large pool of macromolecules covering the available conformational space is generated and a sub-ensemble of conformers coexisting in solution is selected guided by the fit to the experimental SAXS data. This paper presents a series of new developments and advancements to the method, including significantly enhanced functionality and also quantitative metrics for the characterization of the results. Building on the original concept of ensemble optimization, the algorithms for pool generation have been redesigned to allow for the construction of partially or completely symmetric oligomeric models, and the selection procedure was improved to refine the size of the ensemble. Quantitative measures of the flexibility of the system studied, based on the characteristic integral parameters of the selected ensemble, are introduced. These improvements are implemented in the new EOM version 2.0, and the capabilities as well as inherent limitations of the ensemble approach in SAXS, and of EOM 2.0 in particular, are discussed.

  5. Spring green-up date derived from GIMMS3g and SPOT-VGT NDVI of winter wheat cropland in the North China Plain

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjia; Wu, Chaoyang; Liu, Yansui; Wang, Xiaoyue; Fang, Bin; Yuan, Wenping; Ge, Quansheng

    2017-08-01

    Satellite temporal resolution affects the fitting accuracy of vegetation growth curves. However, few studies have evaluated the impact of different satellite data (including temporal resolution and time series change) on spring green-up date (GUD) extraction. In this study, four GUD algorithms and two satellite datasets of different temporal resolution (GIMMS3g during 1982-2013 and SPOT-VGT during 1999-2013) were used to investigate winter wheat GUD in the North China Plain. The four GUD algorithms were logistic-NDVI (normalized difference vegetation index), logistic-cumNDVI (cumulative NDVI), polynomial-NDVI and polynomial-cumNDVI. All algorithms and data were first regrouped into eight controlled cases. At the site scale, we evaluated the performance of each case using the correlation coefficient (r), bias and root mean square error (RMSE). We further compared spatial patterns and inter-annual trends of GUD inferred from the different algorithms, and then analyzed the difference between GIMMS3g-based and SPOT-VGT-based GUD. Our results showed that all satellite-based GUDs were correlated with observations, with r ranging from 0.32 to 0.57 (p < 0.01). SPOT-VGT-based GUD generally correlated better with observed GUD than GIMMS3g-based GUD. Spatially, SPOT-VGT-based GUD showed more reasonable spatial distributions. Regionally averaged satellite-based GUD showed overall advancing trends during 1982-2013 (0.3-2.0 days/decade), while delaying trends were observed during 1999-2013 (1.7-7.4 days/decade for GIMMS3g and 3.8-7.4 days/decade for SPOT-VGT). However, the significance levels were highly dependent on the data and algorithms used. Our findings suggest caution regarding previous results on the inter-annual variability of phenology derived from a single dataset or method.
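
    A logistic-NDVI style green-up extraction can be sketched as fitting a logistic curve to an NDVI time series and taking the inflection day as the GUD. The example below uses a coarse grid search on noiseless synthetic data; the curve parameters, grid, and function names are illustrative assumptions, not the paper's fitting procedure:

```python
import math

def logistic_ndvi(t, base, amp, t0, k):
    """Logistic growth curve: NDVI rises from 'base' by 'amp' around
    the inflection day t0 with rate k."""
    return base + amp / (1.0 + math.exp(-k * (t - t0)))

def green_up_date(days, ndvi, t0_grid, base=0.2, amp=0.5, k=0.15):
    """Grid search over the inflection day t0 (a common GUD proxy),
    minimising sum-of-squares error against the observed series."""
    def sse(t0):
        return sum((logistic_ndvi(t, base, amp, t0, k) - v) ** 2
                   for t, v in zip(days, ndvi))
    return min(t0_grid, key=sse)

days = list(range(30, 150, 10))    # day-of-year samples (10-day composites)
ndvi = [logistic_ndvi(t, 0.2, 0.5, 95.0, 0.15) for t in days]
gud = green_up_date(days, ndvi, t0_grid=range(60, 131))
```

    In practice all four logistic/polynomial parameters would be fitted jointly, and the sensitivity of the recovered t0 to the compositing interval (10-day here versus 15-day GIMMS3g) is exactly the temporal-resolution effect the paper studies.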

  6. Recognition of Banknote Fitness Based on a Fuzzy System Using Visible Light Reflection and Near-infrared Light Transmission Images.

    PubMed

    Kwon, Seung Yong; Pham, Tuyen Danh; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo

    2016-06-11

    Fitness classification is a technique to assess the quality of banknotes in order to determine whether they are usable. Banknote fitness classification techniques are useful in preventing problems that arise from the circulation of substandard banknotes (such as recognition failures, or bill jams in automated teller machines (ATMs) or bank counting machines). By and large, fitness classification continues to be carried out by humans, which is time-consuming and can yield different fitness classifications for the same bill from different evaluators. To address these problems, this study proposes a fuzzy system-based method that reduces the processing time needed for fitness classification and determines the fitness of banknotes through an objective, systematic method rather than subjective judgment. Our algorithm was implemented in an actual banknote counting machine. Based on the results of tests on 3856 banknotes in United States currency (USD), 3956 in Korean currency (KRW), and 2300 banknotes in Indian currency (INR) using visible light reflection (VR) and near-infrared light transmission (NIRT) imaging, the proposed method was found to yield higher accuracy than prevalent banknote fitness classification methods. Moreover, it was confirmed that the proposed algorithm can operate in real time, not only in a normal PC environment, but also in the embedded system environment of a banknote counting machine.
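
    A fuzzy fitness classifier of the general kind described combines membership functions for image-derived features through IF-THEN rules and defuzzifies to a score. The toy two-input sketch below is purely illustrative; the features, membership shapes, and rules are invented, not those of the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def banknote_fitness(brightness, soil):
    """Toy two-input fuzzy classifier: rule strengths are combined by
    min (fuzzy AND) and defuzzified to a score in [0, 1]."""
    bright_ok = tri(brightness, 0.4, 0.7, 1.0)
    bright_bad = tri(brightness, 0.0, 0.3, 0.6)
    soil_low = tri(soil, 0.0, 0.2, 0.5)
    soil_high = tri(soil, 0.4, 0.8, 1.2)
    fit = min(bright_ok, soil_low)              # IF bright AND clean THEN fit
    unfit = max(min(bright_bad, soil_high),     # IF dark AND soiled THEN unfit
                min(bright_bad, soil_low) * 0.5)
    total = fit + unfit
    return fit / total if total > 0 else 0.5

score_clean = banknote_fitness(0.8, 0.1)   # bright, unsoiled note
score_worn = banknote_fitness(0.2, 0.9)    # dark, heavily soiled note
```

    A real system would derive many more inputs from the VR and NIRT images and tune the membership functions against labelled notes, but the rule-combination and defuzzification structure is the same.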

  7. Recognition of Banknote Fitness Based on a Fuzzy System Using Visible Light Reflection and Near-infrared Light Transmission Images

    PubMed Central

    Kwon, Seung Yong; Pham, Tuyen Danh; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo

    2016-01-01

    Fitness classification is a technique to assess the quality of banknotes in order to determine whether they are usable. Banknote fitness classification techniques are useful in preventing problems that arise from the circulation of substandard banknotes (such as recognition failures, or bill jams in automated teller machines (ATMs) or bank counting machines). By and large, fitness classification continues to be carried out by humans, which is time-consuming and can yield different fitness classifications for the same bill from different evaluators. To address these problems, this study proposes a fuzzy system-based method that reduces the processing time needed for fitness classification and determines the fitness of banknotes through an objective, systematic method rather than subjective judgment. Our algorithm was implemented in an actual banknote counting machine. Based on the results of tests on 3856 banknotes in United States currency (USD), 3956 in Korean currency (KRW), and 2300 banknotes in Indian currency (INR) using visible light reflection (VR) and near-infrared light transmission (NIRT) imaging, the proposed method was found to yield higher accuracy than prevalent banknote fitness classification methods. Moreover, it was confirmed that the proposed algorithm can operate in real time, not only in a normal PC environment, but also in the embedded system environment of a banknote counting machine. PMID:27294940

  8. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization (SAEM) algorithm is proposed, and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
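    The Van der Pol oscillator used as the benchmark model is easy to simulate with a classical fourth-order Runge-Kutta integrator. The sketch below shows only this forward model (the SAEM estimation machinery itself is far more involved); the step size and initial conditions are arbitrary choices:

```python
import math

def vdp_rk4(mu, x0, v0, dt, steps):
    """Integrate the Van der Pol oscillator x'' = mu*(1 - x^2)*x' - x
    with classical 4th-order Runge-Kutta; returns the (x, v) trajectory."""
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x
    x, v = x0, v0
    traj = [(x, v)]
    for _ in range(steps):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        traj.append((x, v))
    return traj
```

    With mu = 0 the system reduces to a harmonic oscillator (a convenient correctness check); with mu > 0 trajectories converge to the familiar limit cycle whose nonlinearity is what makes the estimation problem hard.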

  9. FITTING NONLINEAR ORDINARY DIFFERENTIAL EQUATION MODELS WITH RANDOM EFFECTS AND UNKNOWN INITIAL CONDITIONS USING THE STOCHASTIC APPROXIMATION EXPECTATION–MAXIMIZATION (SAEM) ALGORITHM

    PubMed Central

    Chow, Sy- Miin; Lu, Zhaohua; Zhu, Hongtu; Sherwood, Andrew

    2014-01-01

    The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation–maximization (SAEM) algorithm is proposed, and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456

  10. The comparative cost-effectiveness of colorectal cancer screening using faecal immunochemical test vs. colonoscopy.

    PubMed

    Wong, Martin C S; Ching, Jessica Y L; Chan, Victor C W; Sung, Joseph J Y

    2015-09-04

    Faecal immunochemical tests (FITs) and colonoscopy are two common screening tools for colorectal cancer (CRC). Most cost-effectiveness studies focused on survival as the outcome, and were based on modeling techniques instead of real-world observational data. This study evaluated the cost-effectiveness of these two tests to detect colorectal neoplastic lesions based on data from a 5-year community screening service. The incremental cost-effectiveness ratio (ICER) was assessed based on the detection rates of neoplastic lesions, and costs including screening compliance, polypectomy, colonoscopy complications, and staging of CRC detected. A total of 5,863 patients received yearly FIT and 4,869 received colonoscopy. Compared with FIT, colonoscopy detected notably more adenomas (23.6% vs. 1.6%) and advanced lesions or cancer (4.2% vs. 1.2%). Using FIT as control, the ICER of screening colonoscopy in detecting adenoma, advanced adenoma, CRC and a composite endpoint of either advanced adenoma or stage I CRC was US$3,489, US$27,962, US$922,762 and US$23,981 respectively. The respective ICER was US$3,597, US$439,513, -US$2,765,876 and US$32,297 among lower-risk subjects; whilst the corresponding figure was US$3,153, US$14,852, US$184,162 and US$13,919 among higher-risk subjects. When compared to FIT, colonoscopy is considered cost-effective for screening adenoma, advanced neoplasia, and a composite endpoint of advanced neoplasia or stage I CRC.
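    The ICER figures above follow the standard definition: incremental cost divided by incremental effect, relative to the comparator. A minimal sketch with made-up illustrative numbers (not the study's per-participant costs):

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    of the new strategy relative to the reference strategy."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return d_cost / d_effect

# Hypothetical numbers: cost per participant and adenoma detection rate
# for colonoscopy vs. annual FIT (the rates echo the 23.6% vs. 1.6% above,
# but the costs are invented for illustration).
example = icer(cost_new=800.0, effect_new=0.236, cost_ref=50.0, effect_ref=0.016)
```

    Here the result reads as "dollars per additional adenoma detected"; a negative ICER (as in the study's lower-risk CRC figure) arises when one strategy both costs less and detects more, and must be interpreted with care rather than ranked numerically.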

  11. The comparative cost-effectiveness of colorectal cancer screening using faecal immunochemical test vs. colonoscopy

    PubMed Central

    Wong, Martin CS; Ching, Jessica YL; Chan, Victor CW; Sung, Joseph JY

    2015-01-01

    Faecal immunochemical tests (FITs) and colonoscopy are two common screening tools for colorectal cancer (CRC). Most cost-effectiveness studies focused on survival as the outcome, and were based on modeling techniques instead of real-world observational data. This study evaluated the cost-effectiveness of these two tests to detect colorectal neoplastic lesions based on data from a 5-year community screening service. The incremental cost-effectiveness ratio (ICER) was assessed based on the detection rates of neoplastic lesions, and costs including screening compliance, polypectomy, colonoscopy complications, and staging of CRC detected. A total of 5,863 patients received yearly FIT and 4,869 received colonoscopy. Compared with FIT, colonoscopy detected notably more adenomas (23.6% vs. 1.6%) and advanced lesions or cancer (4.2% vs. 1.2%). Using FIT as control, the ICER of screening colonoscopy in detecting adenoma, advanced adenoma, CRC and a composite endpoint of either advanced adenoma or stage I CRC was US$3,489, US$27,962, US$922,762 and US$23,981 respectively. The respective ICER was US$3,597, US$439,513, -US$2,765,876 and US$32,297 among lower-risk subjects; whilst the corresponding figure was US$3,153, US$14,852, US$184,162 and US$13,919 among higher-risk subjects. When compared to FIT, colonoscopy is considered cost-effective for screening adenoma, advanced neoplasia, and a composite endpoint of advanced neoplasia or stage I CRC. PMID:26338314

  12. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data

    DOE PAGES

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...

    2017-02-23

    Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.

  13. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.

    Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.

  14. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
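    The five methods compared above are not specified in detail here. As a general illustration of the task of recovering a stem diameter from cross-section points, this is the classic algebraic (Kasa) least-squares circle fit, a common starting point for DBH estimation from terrestrial laser scanning (pure-Python sketch, not one of the paper's algorithms):

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_circle(pts):
    """Algebraic (Kasa) circle fit: solve x^2 + y^2 = a*x + b*y + c in the
    least-squares sense; returns (center_x, center_y, radius)."""
    Sxx = sum(x * x for x, y in pts)
    Syy = sum(y * y for x, y in pts)
    Sxy = sum(x * y for x, y in pts)
    Sx = sum(x for x, y in pts)
    Sy = sum(y for x, y in pts)
    z = [x * x + y * y for x, y in pts]
    Szx = sum(zi * x for zi, (x, y) in zip(z, pts))
    Szy = sum(zi * y for zi, (x, y) in zip(z, pts))
    Sz = sum(z)
    n = float(len(pts))
    a, b, c = solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]],
                     [Szx, Szy, Sz])
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, math.sqrt(c + cx * cx + cy * cy)
```

    DBH is then simply twice the fitted radius. The single-scan difficulty the study highlights comes from the points covering only a partial arc of the stem, which makes the fit far more sensitive to noise.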

  15. Systematic wavelength selection for improved multivariate spectral analysis

    DOEpatents

    Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.

    1995-01-01

    Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g., the concentration of an analyte such as glucose in blood, or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the fitness of the model's determination of the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.
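    As a toy illustration of genetic-algorithm wavelength selection with a fitness of the form F = f(cost, performance), the sketch below evolves bitstring wavelength subsets. The "informative" indices, the fitness weights, and all GA settings are invented for the example and are not from the patent:

```python
import random

# Hypothetical setup: 20 candidate wavelengths, of which a known subset is
# predictive. Fitness rewards performance and penalizes cost (subset size).
INFORMATIVE = {2, 5, 11, 17}
N_WAVELENGTHS = 20

def fitness(genome):
    """F = performance - cost: count of informative wavelengths selected,
    minus a per-wavelength cost penalty."""
    selected = {i for i, bit in enumerate(genome) if bit}
    performance = len(selected & INFORMATIVE)
    cost = 0.1 * len(selected)
    return performance - cost

def evolve(pop_size=30, generations=100, seed=1):
    """Simple GA: elitism, truncation selection, one-point crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_WAVELENGTHS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                        # keep the two best (elitism)
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)      # select among the top 10
            cut = rng.randrange(1, N_WAVELENGTHS)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.2:                # occasional single-bit flip
                i = rng.randrange(N_WAVELENGTHS)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
```

    Counting how often each wavelength appears in good genomes across repeated GA runs is what produces the "count spectrum" the patent then smooths and thresholds.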

  16. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.

  17. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  18. Interface of the general fitting tool GENFIT2 in PandaRoot

    NASA Astrophysics Data System (ADS)

    Prencipe, Elisabetta; Spataro, Stefano; Stockmanns, Tobias; PANDA Collaboration

    2017-10-01

    P̄ANDA is a planned experiment at FAIR (Darmstadt) with a cooled antiproton beam in the range [1.5; 15] GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide that combines a solenoid field (B = 2 T) and a dipole field (B = 2 Tm) in a spectrometer with a fixed-target topology in that energy regime. The tracking system of P̄ANDA involves the presence of a high-performance silicon vertex detector, a GEM detector, a straw-tubes central tracker, a forward tracking system, and a luminosity monitor. The offline tracking algorithm is developed within the PandaRoot framework, which is a part of the FairRoot project. The tool presented here is based on algorithms containing the Kalman filter equations and a deterministic annealing filter. This general fitting tool (GENFIT2) also offers users a Runge-Kutta track representation, and interfaces with Millepede II (useful for alignment) and RAVE (vertex finder). It is independent of the detector geometry and the magnetic field map, and is written in C++ object-oriented modular code. Several fitting algorithms are available in GENFIT2, with user-adjustable parameters, making the tool user-friendly. A check on the fit convergence is done by GENFIT2 as well. The Kalman-filter-based algorithms have a wide range of applications; among those in particle physics, they can perform extrapolations of track parameters and covariance matrices. The adaptations of the PandaRoot framework to connect to GENFIT2 are described, and the impact of GENFIT2 on the physics simulations of P̄ANDA is shown: significant improvement is reported for those channels where good low-momentum tracking is required (pT < 400 MeV/c).

  19. A Simplified Algorithm for Statistical Investigation of Damage Spreading

    NASA Astrophysics Data System (ADS)

    Gecow, Andrzej

    2009-04-01

    On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which make it difficult to define a fitness. The main statistical effects of the adaptive condition are the result of the small-change tendency, and to appear they only need a statistically correct size of the damage initiated by an evolutionary change of the system. This observation allows us to cut loops of feedbacks and, in effect, to obtain a particular statistically correct state instead of a long circular attractor, which in the quenched model is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e., statistical connections of structural parameters of the initial change with the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted in comparison to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.

  20. Spectral identification of minerals using imaging spectrometry data: Evaluating the effects of signal to noise and spectral resolution using the tricorder algorithm

    NASA Technical Reports Server (NTRS)

    Swayze, Gregg A.; Clark, Roger N.

    1995-01-01

    The rapid development of sophisticated imaging spectrometers and resulting flood of imaging spectrometry data has prompted a rapid parallel development of spectral-information extraction technology. Even though these extraction techniques have evolved along different lines (band-shape fitting, endmember unmixing, near-infrared analysis, neural-network fitting, and expert systems to name a few), all are limited by the spectrometer's signal to noise (S/N) and spectral resolution in producing useful information. This study grew from a need to quantitatively determine what effects these parameters have on our ability to differentiate between mineral absorption features using a band-shape fitting algorithm. We chose to evaluate the AVIRIS, HYDICE, MIVIS, GERIS, VIMS, NIMS, and ASTER instruments because they collect data over wide S/N and spectral-resolution ranges. The study evaluates the performance of the Tricorder algorithm, in differentiating between mineral spectra in the 0.4-2.5 micrometer spectral region. The strength of the Tricorder algorithm is in its ability to produce an easily understood comparison of band shape that can concentrate on small relevant portions of the spectra, giving it an advantage over most unmixing schemes, and in that it need not spend large amounts of time reoptimizing each time a new mineral component is added to its reference library, as is the case with neural-network schemes. We believe the flexibility of the Tricorder algorithm is unparalleled among spectral-extraction techniques and that the results from this study, although dealing with minerals, will have direct applications to spectral identification in other disciplines.
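    Band-shape fitting of the kind Tricorder performs can be illustrated, in a much simplified form, as least-squares scaling of a reference absorption band to an observed continuum-removed spectrum, with the goodness of the shape match used to discriminate between candidate minerals (a sketch of the general idea, not the Tricorder algorithm itself):

```python
def band_fit(observed, reference):
    """Least-squares fit of a scaled reference band shape to an observed
    continuum-removed spectrum: observed ~ a * reference.
    Returns (a, r2), where r2 scores how well the shapes match."""
    num = sum(o * r for o, r in zip(observed, reference))
    den = sum(r * r for r in reference)
    a = num / den                                   # closed-form LS scale
    ss_res = sum((o - a * r) ** 2 for o, r in zip(observed, reference))
    mean_o = sum(observed) / len(observed)
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return a, 1.0 - ss_res / ss_tot
```

    Running this against every entry in a reference library and keeping the best-scoring mineral mirrors the comparison-of-band-shape step; lower S/N or coarser spectral resolution degrades exactly this score, which is what the study quantifies instrument by instrument.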

  1. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
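    The AUC metric quoted above equals the probability that a randomly chosen positive example outscores a randomly chosen negative one (the Mann-Whitney interpretation). A direct, if quadratic-time, sketch:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs the classifier ranks
    correctly, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.98, as reported for the SALT2 and wavelet feature sets with BDTs, means 98% of such pairs are ranked correctly; 0.5 corresponds to random guessing.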

  2. A gender perspective on Person-Manager fit and managerial advancement.

    PubMed

    Marongiu, S; Ekehammar, B

    2000-06-01

    This article presents two studies examining (1) the relationship between Person-Manager (P-M) fit and the managerial advancement of women and men with and without managerial aspirations, and (2) the P-M fit as related to managerial and non-managerial women. The P-M fit was assessed by computing the congruence between participants' self-rated personality profile and the perceived personality profile of a manager. Sex (men show a higher P-M fit than women), gender (the higher the individual's masculine gender-role, the higher the P-M fit), and group (managers and managerial aspirants show a higher P-M fit than non-managerial aspirants and non-managers) hypotheses were tested. There was no support for the sex difference hypothesis. However, the group and gender hypotheses were confirmed, showing that managers and managerial aspirants had a higher P-M fit than non-managers and non-aspirants. Further, analyses revealed that the higher the participants' masculinity scores, the higher the P-M fit. Implications of these findings are discussed in relation to the gendered image of the managerial role and adaptation theory.
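    Congruence between a self-rated profile and a perceived manager profile is commonly quantified as a profile correlation; the study's exact index is not stated in this abstract, so the Pearson-correlation version below is only one plausible reading of "congruence":

```python
import math

def profile_congruence(self_profile, manager_profile):
    """Person-Manager fit as the Pearson correlation between a self-rated
    trait profile and the perceived manager profile. Both profiles are
    equal-length lists of trait ratings; +1 means identical shape."""
    n = len(self_profile)
    mx = sum(self_profile) / n
    my = sum(manager_profile) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(self_profile, manager_profile))
    sx = math.sqrt(sum((x - mx) ** 2 for x in self_profile))
    sy = math.sqrt(sum((y - my) ** 2 for y in manager_profile))
    return cov / (sx * sy)
```

    Shape-based indices like this ignore mean-level differences between the two profiles; difference-score indices (another common choice) would not, which is one reason congruence results can depend on the index chosen.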

  3. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
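    The centroid algorithm benchmarked above fits in a few lines: the peak wavelength is taken as the intensity-weighted mean of the spectral samples above a threshold (a generic sketch; the benchmark's exact thresholding choices are not given here):

```python
def centroid_peak(wavelengths, intensities, threshold=0.0):
    """Centroid peak detector for an FBG reflection spectrum: returns the
    intensity-weighted mean wavelength of samples above the threshold."""
    num = den = 0.0
    for w, i in zip(wavelengths, intensities):
        if i > threshold:
            num += w * i
            den += i
    return num / den
```

    The systematic error the paper reports arises because an asymmetric reflection band pulls this weighted mean away from the true Bragg wavelength, which is why the spectral shape (the refractive index modulation profile) matters.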

  4. Genetic Particle Swarm Optimization–Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection

    PubMed Central

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on WorldView-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm. PMID:27483285
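    The Ratio of Mean to Variance (RMV) fitness can be illustrated with a toy function; the paper's exact formulation over object change features may differ, so treat this as the generic statistic only:

```python
def rmv_fitness(feature_values):
    """Ratio of Mean to Variance (RMV) of a candidate feature's values:
    a large ratio indicates a strong mean signal relative to its spread,
    which is the property rewarded when used as a selection fitness."""
    n = len(feature_values)
    mean = sum(feature_values) / n
    var = sum((v - mean) ** 2 for v in feature_values) / n  # population variance
    if var == 0.0:
        raise ValueError("zero variance; RMV is undefined")
    return mean / var
```

    In a GPSO wrapper, each particle encodes a feature subset and such a fitness value drives the swarm toward subsets whose features separate changed from unchanged objects most cleanly.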

  5. A Novel Method of Aircraft Detection Based on High-Resolution Panchromatic Optical Remote Sensing Images.

    PubMed

    Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu

    2017-05-06

    In target detection of optical remote sensing images, two main obstacles for aircraft target detection are how to extract candidates in complex, multi-gray-level backgrounds, and how to confirm targets when the target shapes are deformed, irregular, or asymmetric, such as that caused by natural conditions (low signal-to-noise ratio, illumination conditions, or swaying during photographing) and occlusion by surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-cluster variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of target extraction in complex, multi-gray-level backgrounds. Secondly, based on inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structure features, which describe the proportion of the target part in a fragment to the whole fragment and the proportion of a fragment to the whole hull, are identified to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds and is faster than some traditional active contours algorithms, and that CCHS is effective in suppressing the detection difficulties caused by irregular shapes.

  6. 2006 Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Ireland, Michael; Monnier, John D.; Thiebaut, Eric; Rengaswamy, Sridharan; Baron, Fabien; Young, John S.; Kraus, Stefan; hide

    2006-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Five different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the OI-FITS format. The data are calibrated power spectra and bispectra measured with an array intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  7. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting.
The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain approximately 10,000 references; this doesn't include those who use it but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, the downward gradient methods have a much wider domain of convergence, but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on.
Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
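    The MLE merit function for Poisson deviates has the simple closed form the report alludes to: the Poisson deviance, 2*sum[m_i - d_i + d_i*ln(d_i/m_i)]. The sketch below evaluates it for a toy exponential-decay histogram and locates its minimum by a coarse scan rather than by the L-M iteration the report describes:

```python
import math

def poisson_deviance(data, model):
    """Twice the negative Poisson log-likelihood ratio: the MLE merit
    function that replaces chi-squared when fitting counted events.
    Terms with a zero count contribute only 2*(m - d)."""
    dev = 0.0
    for d, m in zip(data, model):
        dev += 2.0 * (m - d)
        if d > 0:
            dev += 2.0 * d * math.log(d / m)
    return dev

# Toy exponential-decay histogram (noiseless, so the minimum sits at tau = 2.0).
t = [0.5 * i for i in range(20)]
counts = [100.0 * math.exp(-ti / 2.0) for ti in t]

def merit(tau):
    """Deviance of a single-exponential model with decay constant tau."""
    return poisson_deviance(counts, [100.0 * math.exp(-ti / tau) for ti in t])

# Coarse grid scan standing in for the Levenberg-Marquardt minimization.
best_tau = min((0.5 + 0.01 * k for k in range(400)), key=merit)
```

    Minimizing this deviance instead of the least-squares sum is the whole substance of the proposed change; the L-M machinery (gradient, approximate Hessian, regularization parameter) is reused unchanged on the new merit function.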

  8. Automatic page layout using genetic algorithms for electronic albuming

    NASA Astrophysics Data System (ADS)

    Geigel, Joe; Loui, Alexander C. P.

    2000-12-01

In this paper, we describe a flexible system for automatic page layout that makes use of genetic algorithms for albuming applications. The system is divided into two modules, a page creator module which is responsible for distributing images amongst various album pages, and an image placement module which positions images on individual pages. Final page layouts are specified in a textual form using XML for printing or viewing over the Internet. The system makes use of genetic algorithms, a class of search and optimization algorithms that are based on the concepts of biological evolution, for generating solutions with fitness based on graphic design preferences supplied by the user. The genetic page layout algorithm has been incorporated into a web-based prototype system for interactive page layout over the Internet. The prototype system is built using a client-server architecture and is implemented in Java. The system described in this paper has demonstrated the feasibility of using genetic algorithms for automated page layout in albuming and web-based imaging applications. We believe that the system adequately proves the validity of the concept, providing creative layouts in a reasonable number of iterations. By optimizing the layout parameters of the fitness function, we hope to further improve the quality of the final layout in terms of user preference and computation speed.

  9. Optimised analytical models of the dielectric properties of biological tissue.

    PubMed

Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O'Halloran, Martin

    2017-05-01

    The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
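The multi-pole Debye model fitted in this record has a standard closed form: ε(ω) = ε∞ + Σk Δεk/(1 + jωτk) + σs/(jωε0). A minimal sketch of evaluating a three-pole model follows; the parameter values are illustrative assumptions, not the tissue values reported in the paper:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_inf, deltas, taus, sigma_s):
    """Complex relative permittivity of an N-pole Debye model:
    eps(w) = eps_inf + sum_k d_k/(1 + j*w*tau_k) + sigma_s/(j*w*EPS0)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    eps = np.full(w.shape, eps_inf, dtype=complex)
    for d, tau in zip(deltas, taus):
        eps += d / (1.0 + 1j * w * tau)       # relaxation poles
    eps += sigma_s / (1j * w * EPS0)           # static conductivity term
    return eps

# Illustrative (hypothetical) three-pole parameters, not values from the paper;
# frequency range matches the 500 MHz - 20 GHz span discussed in the abstract.
f = np.logspace(np.log10(500e6), np.log10(20e9), 5)
eps = debye_permittivity(f, eps_inf=4.0,
                         deltas=(40.0, 10.0, 5.0),
                         taus=(8e-12, 50e-12, 1e-9),
                         sigma_s=0.7)
print(eps.real[0], -eps.imag[0])  # relative permittivity and loss at 500 MHz
```

An optimiser such as the paper's two-stage genetic algorithm would then search over (ε∞, Δεk, τk, σs) to minimise the misfit between this model and measured dielectric data.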

  10. Fecal immunochemical tests in combination with blood tests for colorectal cancer and advanced adenoma detection—systematic review

    PubMed Central

    Niedermaier, Tobias; Weigl, Korbinian; Hoffmeister, Michael; Brenner, Hermann

    2017-01-01

    Background Colorectal cancer (CRC) is a common but largely preventable cancer. Although fecal immunochemical tests (FITs) detect the majority of CRCs, they miss some of the cancers and most advanced adenomas (AAs). The potential of blood tests in complementing FITs for the detection of CRC or AA has not yet been systematically investigated. Methods We conducted a systematic review of performance of FIT combined with an additional blood test for CRC and AA detection versus FIT alone. PubMed and Web of Science were searched until June 9, 2017. Results Some markers substantially increased sensitivity for CRC when combined with FIT, albeit typically at a major loss of specificity. For AA, no relevant increase in sensitivity could be achieved. Conclusion Combining FIT and blood tests might be a promising approach to enhance sensitivity of CRC screening, but comprehensive evaluation of promising marker combinations in screening populations is needed. PMID:29435309

  11. PyRhO: A Multiscale Optogenetics Simulation Platform

    PubMed Central

    Evans, Benjamin D.; Jarvis, Sarah; Schultz, Simon R.; Nikolic, Konstantin

    2016-01-01

Optogenetics has become a key tool for understanding the function of neural circuits and controlling their behavior. An array of directly light-driven opsins has been genetically isolated from several families of organisms, with a wide range of temporal and spectral properties. In order to characterize, understand and apply these opsins, we present an integrated suite of open-source, multi-scale computational tools called PyRhO. The purpose of developing PyRhO is three-fold: (i) to characterize new (and existing) opsins by automatically fitting a minimal set of experimental data to three-, four-, or six-state kinetic models, (ii) to simulate these models at the channel, neuron and network levels, and (iii) to provide functional insights through model selection and virtual experiments in silico. The module is written in Python with an additional IPython/Jupyter notebook-based GUI, allowing models to be fit, simulations to be run and results to be shared through simply interacting with a webpage. The seamless integration of model fitting algorithms with simulation environments (including NEURON and Brian2) for these virtual opsins will enable neuroscientists to gain a comprehensive understanding of their behavior and rapidly identify the most suitable variant for application in a particular biological system. This process may thereby guide not only experimental design and opsin choice but also alterations of the opsin genetic code in a neuro-engineering feedback loop. In this way, we expect PyRhO will help to significantly advance optogenetics as a tool for transforming biological sciences. PMID:27148037

  12. PyRhO: A Multiscale Optogenetics Simulation Platform.

    PubMed

    Evans, Benjamin D; Jarvis, Sarah; Schultz, Simon R; Nikolic, Konstantin

    2016-01-01

Optogenetics has become a key tool for understanding the function of neural circuits and controlling their behavior. An array of directly light-driven opsins has been genetically isolated from several families of organisms, with a wide range of temporal and spectral properties. In order to characterize, understand and apply these opsins, we present an integrated suite of open-source, multi-scale computational tools called PyRhO. The purpose of developing PyRhO is three-fold: (i) to characterize new (and existing) opsins by automatically fitting a minimal set of experimental data to three-, four-, or six-state kinetic models, (ii) to simulate these models at the channel, neuron and network levels, and (iii) to provide functional insights through model selection and virtual experiments in silico. The module is written in Python with an additional IPython/Jupyter notebook-based GUI, allowing models to be fit, simulations to be run and results to be shared through simply interacting with a webpage. The seamless integration of model fitting algorithms with simulation environments (including NEURON and Brian2) for these virtual opsins will enable neuroscientists to gain a comprehensive understanding of their behavior and rapidly identify the most suitable variant for application in a particular biological system. This process may thereby guide not only experimental design and opsin choice but also alterations of the opsin genetic code in a neuro-engineering feedback loop. In this way, we expect PyRhO will help to significantly advance optogenetics as a tool for transforming biological sciences.

  13. Constrained VPH+: a local path planning algorithm for a bio-inspired crawling robot with customized ultrasonic scanning sensor.

    PubMed

    Rao, Akshay; Elara, Mohan Rajesh; Elangovan, Karthikeyan

    This paper aims to develop a local path planning algorithm for a bio-inspired, reconfigurable crawling robot. A detailed description of the robotic platform is first provided, and the suitability for deployment of each of the current state-of-the-art local path planners is analyzed after an extensive literature review. The Enhanced Vector Polar Histogram algorithm is described and reformulated to better fit the requirements of the platform. The algorithm is deployed on the robotic platform in crawling configuration and favorably compared with other state-of-the-art local path planning algorithms.

  14. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data.

    PubMed

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu

    2017-03-27

A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open-source program available for free on GitHub ( https://github.com/fzahari/ParFit ).

  15. A universal deep learning approach for modeling the flow of patients under different severities.

    PubMed

    Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L

    2018-02-01

The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, relative A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in A&ED. Knowing the fluctuation of patient arrival volume in advance is an important prerequisite for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data is collected from an actual A&ED and categorized into five groups based on different triage levels. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of traditional point-based crossover. A deep neural network (DNN) is employed as the prediction model, owing to its universal adaptability and high flexibility. In the model-training process, the learning algorithm is well-configured based on a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid-search in one pass. As for feature selection, our improved GA-based feature selection algorithm has outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal-ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of the DNN is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to acquire an underlying relationship between patient flows and input features. Predictive values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Improved algorithm for estimating optical properties of food and biological materials using spatially-resolved diffuse reflectance

    USDA-ARS?s Scientific Manuscript database

    In this research, the inverse algorithm for estimating optical properties of food and biological materials from spatially-resolved diffuse reflectance was optimized in terms of data smoothing, normalization and spatial region of reflectance profile for curve fitting. Monte Carlo simulation was used ...

  17. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida (Published Proceedings)

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  18. Constructing the Exact Significance Level for a Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Liou, Michelle; Chang, Chih-Hsin

    1992-01-01

    An extension is proposed for the network algorithm introduced by C.R. Mehta and N.R. Patel to construct exact tail probabilities for testing the general hypothesis that item responses are distributed according to the Rasch model. A simulation study indicates the efficiency of the algorithm. (SLD)

  19. High precision wavefront control in point spread function engineering for single emitter localization

    NASA Astrophysics Data System (ADS)

    Siemons, M.; Hulleman, C. N.; Thorsen, R. Ø.; Smith, C. S.; Stallinga, S.

    2018-04-01

Point Spread Function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 mλ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 μm (x/y) and 100 μm (z) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead z-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+λ' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in x, y, and λ, and below 20 nm in z over an axial range of 1 μm with 2000 signal photons and 12 background photons.

  20. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, and results in irregular objective functions. We investigate here the performance and the application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform cross-over and a low mutation rate are examined. The optimum solution parameters and performance were determined from the convergence of the testing error with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability is 0.9-0.95, and mutation was tested at a probability of 0.01. The application of such a genetic algorithm to synthetic data shows that the inversion recovers the acoustic impedance section efficiently. Keywords: Seismic, Inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
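The GA ingredients named in this record (elitism, uniform cross-over at probability ~0.9, mutation at 0.01, L2-norm fitness) can be sketched on a toy trace-matching problem. The population size, generation count, and target trace below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(candidate, reference):
    # L2 norm of the sample-to-sample difference, as in the abstract.
    return np.linalg.norm(candidate - reference)

def evolve(reference, pop_size=60, generations=200,
           p_cross=0.9, p_mut=0.01, sigma=0.1):
    n = reference.size
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, reference) for ind in pop])
        pop = pop[np.argsort(scores)]                 # best individuals first
        elite = pop[: pop_size // 10].copy()          # elitism strategy
        children = []
        while len(children) < pop_size - elite.shape[0]:
            a, b = pop[rng.integers(0, pop_size // 2, size=2)]  # fit parents
            if rng.random() < p_cross:                # uniform cross-over
                mask = rng.random(n) < 0.5
                child = np.where(mask, a, b)
            else:
                child = a.copy()
            mutate = rng.random(n) < p_mut            # low mutation rate
            child = child + mutate * rng.normal(0.0, sigma, n)
            children.append(child)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind, reference) for ind in pop])
    return pop[np.argmin(scores)]

# Hypothetical "reference trace" standing in for a seismic impedance trace.
reference = np.sin(np.linspace(0, 4 * np.pi, 32))
best = evolve(reference)
print(fitness(best, reference))
```

In the actual inversion, the candidate vector would parameterize the acoustic impedance section, and the reference would be the observed seismic trace after forward modeling.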

  1. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms

    PubMed Central

    Vázquez, Roberto A.

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
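The core of the methodology above — treating a network's synaptic weights as a particle's position and minimizing an MSE-based fitness — can be sketched with basic PSO on a tiny fixed-architecture network. Everything here (the XOR task, the architecture, and the PSO constants) is an illustrative assumption, not the paper's setup, and the architecture and transfer functions are fixed rather than evolved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny fixed-architecture network: 2 inputs, 2 hidden tanh units, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets
DIM = 2 * 2 + 2 + 2 + 1             # weights + biases = 9 parameters

def forward(params, x):
    w1 = params[:4].reshape(2, 2)
    b1 = params[4:6]
    w2 = params[6:8]
    b2 = params[8]
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def mse(params):
    # Fitness: mean square error over the training set, as in the abstract.
    return float(np.mean((forward(params, X) - y) ** 2))

def pso(n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-1, 1, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        # Classic velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, mse(gbest)

best, err = pso()
print(err)
```

The paper's variants (SGPSO, NMPSO) and its eight fitness functions extend this basic loop; the overtraining and connection-count penalties described in the abstract would be added to `mse` as extra fitness terms.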

  2. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants, accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that enhance the genetic algorithm can also be used in other population-based optimization methods such as the particle swarm algorithm.

  3. Identifying High-redshift Gamma-Ray Bursts with RATIR

    NASA Astrophysics Data System (ADS)

    Littlejohns, O. M.; Butler, N. R.; Cucchiara, A.; Watson, A. M.; Kutyrev, A. S.; Lee, W. H.; Richer, M. G.; Klein, C. R.; Fox, O. D.; Prochaska, J. X.; Bloom, J. S.; Troja, E.; Ramirez-Ruiz, E.; de Diego, J. A.; Georgiev, L.; González, J.; Román-Zúñiga, C. G.; Gehrels, N.; Moseley, H.

    2014-07-01

We present a template-fitting algorithm for determining photometric redshifts, z_phot, of candidate high-redshift gamma-ray bursts (GRBs). Using afterglow photometry obtained by the Reionization and Transients InfraRed (RATIR) camera, this algorithm accounts for the intrinsic GRB afterglow spectral energy distribution, host dust extinction, and the effect of neutral hydrogen (local and cosmological) along the line of sight. We present the results obtained by this algorithm and the RATIR photometry of GRB 130606A, finding a range of best-fit solutions, 5.6 < z_phot < 6.0, for models of several host dust extinction laws (none, Milky Way, Large Magellanic Cloud, and Small Magellanic Cloud), consistent with spectroscopic measurements of the redshift of this GRB. Using simulated RATIR photometry, we find that our algorithm provides precise measures of z_phot in the ranges 4 < z_phot ≲ 8 and 9 < z_phot < 10 and can robustly determine when z_phot > 4. Further testing highlights the required caution in cases of highly dust-extincted host galaxies. These tests also show that our algorithm does not erroneously find z_phot < 4 when z ≳ 4, thereby minimizing false negatives and allowing us to rapidly identify all potential high-redshift events.
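The dropout logic behind such a template fit can be illustrated with a heavily simplified toy: a power-law afterglow spectrum with complete absorption blueward of redshifted Lyman-alpha, fit by chi-square over a redshift grid. This is not the RATIR pipeline (no dust extinction laws, no realistic IGM model); the band wavelengths and spectral slope are illustrative assumptions:

```python
import numpy as np

# Hypothetical filter effective wavelengths (microns), loosely optical/NIR.
bands = np.array([0.64, 0.79, 1.02, 1.25, 1.64, 2.15])

def model_flux(z, beta, lyman_alpha_um=0.1216):
    """Power-law afterglow spectrum F ~ lambda**beta with complete
    absorption blueward of redshifted Lyman-alpha (a crude IGM model)."""
    flux = bands ** beta
    flux[bands < lyman_alpha_um * (1 + z)] = 0.0
    return flux

def fit_zphot(obs, err, z_grid=np.linspace(0, 10, 501), beta=-0.5):
    chi2 = []
    for z in z_grid:
        m = model_flux(z, beta)
        # Analytic best-fit normalization for this template at this redshift.
        norm = np.sum(obs * m / err**2) / max(np.sum(m**2 / err**2), 1e-30)
        chi2.append(np.sum(((obs - norm * m) / err) ** 2))
    return z_grid[int(np.argmin(chi2))]

# Simulated dropout burst at z = 6: the two bluest bands are suppressed.
obs = model_flux(6.0, -0.5)          # noise-free for the sketch
err = np.full(bands.size, 0.02)
zhat = fit_zphot(obs, err)
print(zhat)
```

With only six broad bands the solution is degenerate over the redshift interval that places the Lyman-alpha break between the same two filters, which is why real pipelines report a range of best-fit solutions rather than a single value.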

  4. Protein-ligand docking using fitness learning-based artificial bee colony with proximity stimuli.

    PubMed

    Uehara, Shota; Fujimoto, Kazuhiro J; Tanaka, Shigenori

    2015-07-07

    Protein-ligand docking is an optimization problem, which aims to identify the binding pose of a ligand with the lowest energy in the active site of a target protein. In this study, we employed a novel optimization algorithm called fitness learning-based artificial bee colony with proximity stimuli (FlABCps) for docking. Simulation results revealed that FlABCps improved the success rate of docking, compared to four state-of-the-art algorithms. The present results also showed superior docking performance of FlABCps, in particular for dealing with highly flexible ligands and proteins with a wide and shallow binding pocket.

  5. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE PAGES

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg; ...

    2018-05-03

This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.
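The QUBO formulation mentioned in the abstract can be made concrete on a toy graph. The sketch below builds the standard penalty-form maximum-clique QUBO (reward each selected vertex, penalize each selected non-edge with B > A) and solves it by brute force; the constants and the solver are illustrative, not the paper's setup:

```python
import itertools
import numpy as np

def max_clique_qubo(n, edges, A=1.0, B=2.0):
    """QUBO matrix Q for maximum clique: minimize x^T Q x where
    H = -A*sum_i x_i + B*sum_{non-edges (i,j)} x_i x_j, with B > A so that
    including a non-adjacent pair always costs more than it gains."""
    Q = np.zeros((n, n))
    np.fill_diagonal(Q, -A)
    edge_set = {frozenset(e) for e in edges}
    for i in range(n):
        for j in range(i + 1, n):
            if frozenset((i, j)) not in edge_set:
                Q[i, j] = B
    return Q

def brute_force(Q):
    # Exhaustive search over all bit vectors; fine for tiny n.
    n = Q.shape[0]
    best, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best, best_e = x, e
    return best

# 5-vertex graph whose maximum clique is {0, 1, 2}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
Q = max_clique_qubo(5, edges)
x = brute_force(Q)
print(np.flatnonzero(x))  # → [0 1 2]
```

On the annealer, the same Q would be embedded onto the qubit interconnection network and minimized by quantum annealing instead of exhaustive search.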

  6. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg

This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.

  7. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle's driving characteristics, the distance between the vehicle and obstacles, and the distance between the vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function reduce perturbations of the planned path and improve its real-time performance and reliability.

  8. Scheduling Diet for Diabetes Mellitus Patients using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Felicia, V.; Rahmat, R. F.; Budiarto, R.

    2017-01-01

Diabetes Mellitus (DM) is one of the metabolic diseases which affects productivity and lowers the quality of human resources. This disease can be controlled by maintaining and regulating a balanced and healthy lifestyle, especially the daily diet. However, no system is currently available to help DM patients obtain proper diet information. Therefore, an approach is required to provide a diet schedule for every day of the week, with appropriate nutrition, to help DM patients regulate their daily diet and manage this disease. In this research, we calculate caloric needs using the Harris-Benedict equation and propose a genetic algorithm for scheduling the diet of DM patients. The results show that the greater the number of individuals, the greater the possibility that the fitness score approaches the best fitness score. Moreover, the more generations created, the greater the opportunity to obtain the best individual, with a fitness score approaching or equal to 0.
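The caloric-needs step mentioned above uses the Harris-Benedict equation. A minimal sketch follows, using the commonly cited original (1919) coefficients in their rounded form; the paper may use a revised variant, and the sedentary activity multiplier is a common convention rather than a value from the abstract:

```python
def harris_benedict_bmr(sex, weight_kg, height_cm, age_years):
    """Basal metabolic rate (kcal/day) from the original Harris-Benedict
    equations (1919 coefficients, commonly rounded)."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

def daily_calories(bmr, activity_factor=1.2):
    # Sedentary multiplier of 1.2 is a common convention, not from the paper.
    return bmr * activity_factor

bmr = harris_benedict_bmr("male", 70.0, 175.0, 40.0)
print(round(bmr))  # → 1634 kcal/day
```

The GA described in the abstract would then search for weekly meal combinations whose total calories and nutrients best match this target, with fitness measuring the deviation from it.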

  9. A modified active appearance model based on an adaptive artificial bee colony.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can resolve this problem, although applying optimization introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique which is capable of resolving the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We have used three datasets in our experiments: the CASIA dataset, a property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy.

  10. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    PubMed

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  11. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle's driving characteristics, the distance between the vehicle and obstacles, and the distance between the vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function reduce perturbations of the planned path and improve its real-time performance and reliability. PMID:26880881

  12. Wavefront tilt feedforward for the formation interferometer testbed (FIT)

    NASA Technical Reports Server (NTRS)

    Shields, J. F.; Liewer, K.; Wehmeier, U.

    2002-01-01

    Separated spacecraft interferometry is a candidate architecture for several future NASA missions. The Formation Interferometer Testbed (FIT) is a ground based testbed dedicated to the validation of this key technology for a formation of two spacecraft. In separated spacecraft interferometry, the residual relative motion of the component spacecraft must be compensated for by articulation of the optical components. In this paper, the design of the FIT interferometer pointing control system is described. This control system is composed of a metrology pointing loop that maintains an optical link between the two spacecraft and two stellar pointing loops for stabilizing the stellar wavefront at both the right and left apertures of the instrument. A novel feedforward algorithm is used to decouple the metrology loop from the left side stellar loop. Experimental results from the testbed are presented that verify this approach and that fully demonstrate the performance of the algorithm.

  13. A curve fitting method for solving the flutter equation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cooper, J. L.

    1972-01-01

    A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
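
    The curve-fitting idea admits a simple sketch: fit a low-order polynomial to sampled damping-versus-velocity data and take its zero crossing as the flutter-speed estimate. The thesis fits psi-versus-nu curves using first and second derivatives; the plain least-squares fit and the variable names here are illustrative only:

```python
import numpy as np

def flutter_crossing(v, g, deg=3):
    """Fit a degree-`deg` polynomial to damping-vs-velocity samples and return
    the smallest real root inside the sampled range, i.e. the velocity at
    which damping crosses zero (the flutter-speed estimate)."""
    coeffs = np.polyfit(v, g, deg)
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    real = real[(real >= v.min()) & (real <= v.max())]  # keep in-range roots
    return real.min() if real.size else None
```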

  14. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
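
    The parallel peak-detection step can be mimicked on the CPU with NumPy: an element-wise local-maximum test followed by a compaction of the surviving indices, the same compare-then-compact pattern a GPU kernel would use. This is a sketch of the pattern, not the toolbox's API:

```python
import numpy as np

def detect_peaks(x, thresh):
    """Branch-free local-maximum test over a 1D signal, written as the
    element-wise compare + stream-compaction pattern that maps naturally
    onto a GPU kernel (each sample tested independently)."""
    is_peak = np.zeros(len(x), dtype=bool)
    is_peak[1:-1] = (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:]) & (x[1:-1] > thresh)
    return np.flatnonzero(is_peak)  # the "compact" step
```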

  15. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    NASA Astrophysics Data System (ADS)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of the parametric correlations on the predictive capacities of the theoretical modelling, keeping in mind nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose to work with the phenomenological spherically symmetric Woods–Saxon mean-field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin–orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.

  16. Dark Energy Camera for Blanco

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  17. Prediction of Aerodynamic Coefficients for Wind Tunnel Data using a Genetic Algorithm Optimized Neural Network

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy

    2002-01-01

    A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g. lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well-fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.

  18. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
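
    The dynamic program being constrained is the standard Viterbi recursion, which in log space can be sketched as follows; the k-segment variants of the paper add bookkeeping over the number of segments on top of this same recursion:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Standard Viterbi MAP decoding in log space: log_pi[k] initial,
    log_A[i, j] transition i->j, log_B[k, o] emission. The paper's k-segment
    recursions constrain the number of segments within this same DP."""
    T, K = len(obs), len(log_pi)
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # K x K: from-state x to-state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):               # backtrace
        path[t] = back[t + 1, path[t + 1]]
    return path
```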

  19. Autism Diagnostic Interview-Revised (ADI-R) Algorithms for Toddlers and Young Preschoolers: Application in a Non-US Sample of 1,104 Children

    ERIC Educational Resources Information Center

    de Bildt, Annelies; Sytema, Sjoerd; Zander, Eric; Bölte, Sven; Sturm, Harald; Yirmiya, Nurit; Yaari, Maya; Charman, Tony; Salomone, Erica; LeCouteur, Ann; Green, Jonathan; Bedia, Ricardo Canal; Primo, Patricia García; van Daalen, Emma; de Jonge, Maretha V.; Guðmundsdóttir, Emilía; Jóhannsdóttir, Sigurrós; Raleva, Marija; Boskovska, Meri; Rogé, Bernadette; Baduel, Sophie; Moilanen, Irma; Yliherva, Anneli; Buitelaar, Jan; Oosterling, Iris J.

    2015-01-01

    The current study aimed to investigate the Autism Diagnostic Interview-Revised (ADI-R) algorithms for toddlers and young preschoolers (Kim and Lord, "J Autism Dev Disord" 42(1):82-93, 2012) in a non-US sample from ten sites in nine countries (n = 1,104). The construct validity indicated a good fit of the algorithms. The diagnostic…

  20. Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Bruton, William M.

    1987-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.

  1. QPO observations related to neutron star equations of state

    NASA Astrophysics Data System (ADS)

    Stuchlik, Zdenek; Urbanec, Martin; Török, Gabriel; Bakala, Pavel; Cermak, Petr

    We apply a genetic algorithm method for the selection of neutron star models, relating them to the resonant models of the twin peak quasiperiodic oscillations observed in the X-ray neutron star binary systems. It was suggested that pairs of kilohertz peaks in the X-ray Fourier power density spectra of some neutron stars reflect a non-linear resonance between two modes of accretion disk oscillations. We investigate this concept for a specific neutron star source. Each neutron star model is characterized by the equation of state (EOS), rotation frequency Ω and central energy density ρc. These determine the spacetime structure governing geodesic motion and position-dependent radial and vertical epicyclic oscillations related to the stable circular geodesics. Particular kinds of resonances (KR) between the oscillations with epicyclic frequencies, or the frequencies derived from them, can take place at special positions assigned ambiguously to the spacetime structure. The pairs of resonant eigenfrequencies relevant to those positions are therefore fully given by KR, ρc, Ω, EOS and can be compared to the observationally determined pairs of eigenfrequencies in order to eliminate the unsatisfactory sets (KR, ρc, Ω, EOS). For the elimination we use an advanced genetic algorithm. The genetic algorithm is modelled on natural selection, in which the subjects best adapted to the assigned conditions have the greatest chance of survival. The chosen genetic algorithm with sexual reproduction contains one chromosome with restricted lifetime, uniform crossing and genes of type 3/3/5. For encoding the physical description (KR, ρc, Ω, EOS) into the chromosome we used the Gray code. As the fitness function we use the correspondence between the observed and calculated pairs of eigenfrequencies.

  2. Neutron star equation of state and QPO observations

    NASA Astrophysics Data System (ADS)

    Urbanec, Martin; Stuchlík, Zdeněk; Török, Gabriel; Bakala, Pavel; Čermák, Petr

    2007-12-01

    Assuming a resonant origin of the twin peak quasiperiodic oscillations observed in the X-ray neutron star binary systems, we apply a genetic algorithm method for the selection of neutron star models. It was suggested that pairs of kilohertz peaks in the X-ray Fourier power density spectra of some neutron stars reflect a non-linear resonance between two modes of accretion disk oscillations. We investigate this concept for a specific neutron star source. Each neutron star model is characterized by the equation of state (EOS), rotation frequency Ω and central energy density rho_{c}. These determine the spacetime structure governing geodesic motion and position-dependent radial and vertical epicyclic oscillations related to the stable circular geodesics. Particular kinds of resonances (KR) between the oscillations with epicyclic frequencies, or the frequencies derived from them, can take place at special positions assigned ambiguously to the spacetime structure. The pairs of resonant eigenfrequencies relevant to those positions are therefore fully given by KR, rho_{c}, Ω, EOS and can be compared to the observationally determined pairs of eigenfrequencies in order to eliminate the unsatisfactory sets (KR, rho_{c}, Ω, EOS). For the elimination we use an advanced genetic algorithm. The genetic algorithm is modelled on natural selection, in which the subjects best adapted to the assigned conditions have the greatest chance of survival. The chosen genetic algorithm with sexual reproduction contains one chromosome with restricted lifetime, uniform crossing and genes of type 3/3/5. For encoding the physical description (KR, rho_{c}, Ω, EOS) into the chromosome we use the Gray code. As the fitness function we use the correspondence between the observed and calculated pairs of eigenfrequencies.
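
    The Gray encoding mentioned in both records above is the binary-reflected Gray code, under which adjacent integers differ in exactly one bit, so small mutations produce small parameter changes. A minimal encoder/decoder pair:

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: adjacent integers map to codewords that
    differ in exactly one bit, which softens mutation/crossover jumps."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the encoding by XOR-folding successive right shifts of the
    codeword into the result."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```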

  3. Advances in multi-sensor data fusion: algorithms and applications.

    PubMed

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering-target tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are then offered, including: (1) Improvements of fusion algorithms; (2) Development of "algorithm fusion" methods; (3) Establishment of an automatic quality assessment scheme.

  4. Cardiorespiratory fitness in urban adolescent girls: associations with race and pubertal status.

    PubMed

    Gammon, Catherine; Pfeiffer, Karin A; Kazanis, Anamaria; Ling, Jiying; Robbins, Lorraine B

    2017-01-01

    Cardiorespiratory fitness affords health benefits to youth. Among females, weight-relative fitness declines during puberty and is lower among African American (AA) than Caucasian girls. Data indicate racial differences in pubertal timing and tempo, yet the interactive influence of puberty and race on fitness, and the role of physical activity (PA) in these associations, have not been examined. Thus, independent and interactive associations of race and pubertal development with fitness in adolescent girls, controlling for PA, were examined. Girls in grades 5-8 (n = 1011; Caucasian = 25.2%, AA = 52.3%, Other Race group = 22.5%) completed the Pubertal Development Scale (pubertal stage assessment) and Fitnessgram® Progressive Aerobic Cardiovascular Endurance Run (PACER) test (cardiorespiratory fitness assessment). PA was assessed by accelerometry. Bivariate and multivariate analyses were used to examine associations among race, pubertal stage and fitness, controlling for vigorous PA. AA and pubertally advanced girls demonstrated lower fitness than their Caucasian and less mature counterparts. Puberty and race remained significantly associated with fitness after controlling for vigorous PA. The interaction effect of race and puberty on fitness was non-significant. The pubertal influence on fitness is observed among AA adolescents. Associations between fitness and race/puberty appear to be independent of each other and vigorous PA. Pubertally advanced AA girls represent a priority group for fitness interventions.

  5. Remote health monitoring system for detecting cardiac disorders.

    PubMed

    Bansal, Ayush; Kumar, Sunil; Bajpai, Anurag; Tiwari, Vijay N; Nayak, Mithun; Venkatesan, Shankar; Narayanan, Rangavittal

    2015-12-01

    Remote health monitoring system with clinical decision support system as a key component could potentially quicken the response of medical specialists to critical health emergencies experienced by their patients. A monitoring system, specifically designed for cardiac care with electrocardiogram (ECG) signal analysis as the core diagnostic technique, could play a vital role in early detection of a wide range of cardiac ailments, from a simple arrhythmia to life threatening conditions such as myocardial infarction. The system that the authors have developed consists of three major components, namely, (a) mobile gateway, deployed on patient's mobile device, that receives 12-lead ECG signals from any ECG sensor, (b) remote server component that hosts algorithms for accurate annotation and analysis of the ECG signal and (c) point of care device of the doctor to receive a diagnostic report from the server based on the analysis of ECG signals. In the present study, their focus has been toward developing a system capable of detecting critical cardiac events well in advance using an advanced remote monitoring system. A system of this kind is expected to have applications ranging from tracking wellness/fitness to detection of symptoms leading to fatal cardiac events.

  6. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculation; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.

  7. Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)

    PubMed Central

    Meyer, Karin

    2008-01-01

    Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112
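
    As a loose analogue (not the REML machinery or parameter expansion of the paper), an EM-style iteration that fits only the leading-k principal subspace of a data matrix can be written in a few lines, following the well-known EM-for-PCA scheme; the dimensions, seed and iteration count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def em_pca(Y, k, iters=200):
    """EM-for-PCA (Roweis-style) for the leading-k principal subspace of the
    d x n data matrix Y: E-step solves for the latent scores, M-step updates
    the loadings. A minimal analogue of reduced-rank (leading principal
    components) estimation; returns an orthonormal basis of the subspace."""
    d = Y.shape[0]
    W = rng.standard_normal((d, k))                # random initial loadings
    for _ in range(iters):
        X = np.linalg.solve(W.T @ W, W.T @ Y)      # E-step: latent scores
        W = Y @ X.T @ np.linalg.inv(X @ X.T)       # M-step: loadings
    Q, _ = np.linalg.qr(W)
    return Q
```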

  8. A Self Adaptive Differential Evolution Algorithm for Global Optimization

    NASA Astrophysics Data System (ADS)

    Kumar, Pravesh; Pant, Millie

    This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE, named ADE, in which the control parameters F and Cr are not fixed at constant values but are sampled anew at each iteration. The proposed algorithm is further modified by applying trigonometric mutation, and the resulting algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions, and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results demonstrate the competence of the proposed algorithm.
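
    A compact sketch of the two ingredients: DE/rand/1/bin in which F and Cr are re-sampled each generation (the ADE idea), plus an occasional trigonometric mutation in the published Fan-Lampinen form (the ATDE idea). The sampling ranges, mutation rate and test function below are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

def atde(fit, dim=5, n=30, iters=300, mt=0.1):
    """DE/rand/1/bin with per-generation resampled F, Cr and an occasional
    trigonometric mutation (probability mt), after Fan & Lampinen."""
    pop = rng.uniform(-5.0, 5.0, (n, dim))
    val = np.array([fit(p) for p in pop])
    for _ in range(iters):
        F = rng.uniform(0.4, 0.9)    # control parameters sampled each generation
        Cr = rng.uniform(0.1, 0.9)
        for i in range(n):
            r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            x1, x2, x3 = pop[r1], pop[r2], pop[r3]
            if rng.uniform() < mt:   # trigonometric mutation
                f1, f2, f3 = abs(val[r1]), abs(val[r2]), abs(val[r3])
                s = f1 + f2 + f3 + 1e-12
                p1, p2, p3 = f1 / s, f2 / s, f3 / s
                v = (x1 + x2 + x3) / 3.0 + (p2 - p1) * (x1 - x2) \
                    + (p3 - p2) * (x2 - x3) + (p1 - p3) * (x3 - x1)
            else:                    # classic DE/rand/1 mutation
                v = x1 + F * (x2 - x3)
            cross = rng.uniform(size=dim) < Cr
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, v, pop[i])
            tv = fit(trial)
            if tv <= val[i]:                  # greedy selection
                pop[i], val[i] = trial, tv
    return pop[val.argmin()], val.min()
```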

  9. Mobile robot dynamic path planning based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Zhou, Heng; Wang, Ying

    2017-08-01

    In a dynamic unknown environment, the dynamic path planning of mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed: a reward value model is designed to estimate the probability of dynamic obstacles on the path, and the reward value function is applied in the genetic algorithm. Unique coding techniques reduce the computational complexity of the algorithm. The fitness function of the genetic algorithm fully considers three factors: the safety of the path, the shortest distance of the path and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in all kinds of complex dynamic environments.
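
    The three-factor fitness can be illustrated with a lower-is-better cost over waypoint paths: distance-based safety, total length, and a summed per-waypoint risk standing in for the reward value model. The weights and the waypoint-only obstacle distance are invented for the sketch:

```python
import numpy as np

def path_cost(path, obstacles, risk, w_safe=1.0, w_len=0.5, w_risk=2.0, eps=1e-6):
    """Illustrative three-term path cost (lower is better): closeness to the
    nearest static obstacle (safety), total path length, and summed
    dynamic-obstacle risk per waypoint. Weights are arbitrary stand-ins;
    distances are measured at waypoints only, not along segments."""
    pts = np.asarray(path, dtype=float)
    obs = np.asarray(obstacles, dtype=float)
    seg = np.diff(pts, axis=0)
    length = np.sqrt((seg ** 2).sum(axis=1)).sum()
    d = np.sqrt(((pts[:, None, :] - obs[None, :, :]) ** 2).sum(axis=-1))
    closeness = 1.0 / (d.min() + eps)   # blows up when grazing an obstacle
    return w_safe * closeness + w_len * length + w_risk * float(np.sum(risk))
```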

  10. Exercise Sensing and Pose Recovery Inference Tool (ESPRIT) - A Compact Stereo-based Motion Capture Solution For Exercise Monitoring

    NASA Technical Reports Server (NTRS)

    Lee, Mun Wai

    2015-01-01

    Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.

  11. Technology Readiness Level (TRL) Advancement of the MSPI On-Board Processing Platform for the ACE Decadal Survey Mission

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.; Wilson, Thor O.

    2011-01-01

    The Xilinx Virtex-5QV is a new Single-event Immune Reconfigurable FPGA (SIRF) device that is targeted as the spaceborne processor for the NASA Decadal Survey Aerosol-Cloud-Ecosystem (ACE) mission's Multiangle SpectroPolarimetric Imager (MSPI) instrument, currently under development at JPL. A key technology needed for MSPI is on-board processing (OBP) to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's ESTO AIST Program, JPL is demonstrating how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multi-angle cameras can be reduced to 0.45 Mbytes/sec, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information. This is done via a least-squares fitting algorithm implemented on the Virtex-5 FPGA operating in real-time on the raw video data stream.
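
    The data-reduction step is, at heart, a linear least-squares fit of each pixel's modulated sample stream to a small basis, so only the fitted coefficients need to be downlinked. The sinusoidal basis below is a stand-in for MSPI's actual (more involved) basis functions:

```python
import numpy as np

def lsq_reduce(samples, t, omega):
    """Reduce a raw modulated sample stream to three coefficients by fitting
    d(t) ~ a0 + a1*cos(omega*t) + a2*sin(omega*t). An FPGA pipeline can
    accumulate the normal-equation sums for such a fit in real time; the
    basis here is a sketch, not MSPI's actual model."""
    A = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return coeffs
```

    Fitting three coefficients to a few hundred raw samples per frame gives a data-volume reduction of the same flavor as the 95 to 0.45 Mbytes/sec figure quoted above.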

  12. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using grating in a diverging beam produced by a coherent point source.
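
    The core zoom-in step of such a method is a discrete Fourier transform evaluated directly on a narrow, user-chosen frequency grid, which can then be iterated around the strongest component. A sketch (grid sizes, shrink factor and iteration count are arbitrary choices, not the ilFT's published parameters):

```python
import numpy as np

def local_dft(x, freqs, fs):
    """Direct DFT evaluated only on a chosen frequency grid (in Hz), the
    'local' transform a zoom-in iteration keeps re-applying."""
    n = np.arange(len(x))
    E = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return E @ x

def refine_peak(x, fs, iters=6, span=None):
    """Locate a spectral peak with a coarse FFT, then iteratively shrink the
    search band and re-evaluate the local DFT around the strongest bin."""
    f = np.fft.rfftfreq(len(x), 1 / fs)
    center = f[np.abs(np.fft.rfft(x)).argmax()]
    span = span or fs / len(x)                 # start one FFT bin wide
    for _ in range(iters):
        grid = np.linspace(center - span, center + span, 41)
        mag = np.abs(local_dft(x, grid, fs))
        center, span = grid[mag.argmax()], span / 10.0
    return center
```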

  13. Statistical ecology comes of age.

    PubMed

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  14. An implicit solution of the three-dimensional Navier-Stokes equations for an airfoil spanning a wind tunnel. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Moitra, A.

    1982-01-01

    An implicit finite-difference algorithm is developed for the numerical solution of the incompressible three dimensional Navier-Stokes equations in the non-conservative primitive-variable formulation. The flow field about an airfoil spanning a wind-tunnel is computed. The coordinate system is generated by an extension of the two dimensional body-fitted coordinate generation techniques of Thompson, as well as that of Sorenson, into three dimensions. Two dimensional grids are stacked along a spanwise coordinate defined by a simple analytical function. A Poisson pressure equation for advancing the pressure in time is arrived at by performing a divergence operation on the momentum equations. The pressure at each time-step is calculated on the assumption that continuity be unconditionally satisfied. An eddy viscosity coefficient, computed according to the algebraic turbulence formulation of Baldwin and Lomax, simulates the effects of turbulence.

  15. Compact CH₄ sensor system based on a continuous-wave, low power consumption, room temperature interband cascade laser

    DOE PAGES

    Dong, Lei; Li, Chunguang; Sanchez, Nancy P.; ...

    2016-01-05

    A tunable diode laser absorption spectroscopy-based methane sensor, employing a dense-pattern multi-pass gas cell and a 3.3 µm, CW, DFB, room temperature interband cascade laser (ICL), is reported. The optical integration based on an advanced folded optical path design and an efficient ICL control system with appropriate electrical power management resulted in a CH₄ sensor with a small footprint (32 × 20 × 17 cm³) and low power consumption (6 W). Polynomial and least-squares fit algorithms are employed to remove the baseline of the spectral scan and retrieve CH₄ concentrations, respectively. An Allan-Werle deviation analysis shows that the measurement precision can reach 1.4 ppb for a 60 s averaging time. Continuous measurements covering a seven-day period were performed to demonstrate the stability and robustness of the reported CH₄ sensor system.
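
    The two post-processing steps named above, polynomial baseline removal and Allan deviation analysis, can both be sketched in a few lines; the polynomial order and averaging lengths below are illustrative:

```python
import numpy as np

def remove_baseline(scan, deg=3):
    """Fit a low-order polynomial to a spectral scan and subtract it,
    leaving the absorption feature on a flat baseline."""
    i = np.arange(len(scan))
    return scan - np.polyval(np.polyfit(i, scan, deg), i)

def allan_deviation(y, taus):
    """Non-overlapping Allan deviation of a concentration time series for
    integer averaging lengths `taus` (in samples)."""
    y = np.asarray(y, dtype=float)
    out = []
    for m in taus:
        n = len(y) // m
        means = y[: n * m].reshape(n, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)
```

    For white measurement noise, the Allan deviation falls as the inverse square root of the averaging time, which is how a plot like the paper's identifies the optimum averaging window (60 s here).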

  16. Statistical ecology comes of age

    PubMed Central

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  17. Compact CH4 sensor system based on a continuous-wave, low-power-consumption, room-temperature interband cascade laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Lei; Li, Chunguang; Sanchez, Nancy P.

    A tunable diode laser absorption spectroscopy-based methane sensor, employing a dense-pattern multi-pass gas cell and a 3.3 µm, CW, DFB, room-temperature interband cascade laser (ICL), is reported. The optical integration based on an advanced folded optical path design and an efficient ICL control system with appropriate electrical power management resulted in a CH4 sensor with a small footprint (32 × 20 × 17 cm³) and low power consumption (6 W). Polynomial and least-squares fit algorithms are employed to remove the baseline of the spectral scan and retrieve CH4 concentrations, respectively. An Allan-Werle deviation analysis shows that the measurement precision can reach 1.4 ppb for a 60 s averaging time. Continuous measurements covering a seven-day period were performed to demonstrate the stability and robustness of the reported CH4 sensor system.

  18. Towards a 'siliconeural computer': technological successes and challenges.

    PubMed

    Hughes, Mark A; Shipston, Mike J; Murray, Alan F

    2015-07-28

    Electronic signals govern the function of both nervous systems and computers, albeit in different ways. As such, hybridizing both systems to create an iono-electric brain-computer interface is a realistic goal; and one that promises exciting advances in both heterotic computing and neuroprosthetics capable of circumventing devastating neuropathology. 'Neural networks' were, in the 1980s, viewed naively as a potential panacea for all computational problems that did not fit well with conventional computing. The field bifurcated during the 1990s into a highly successful and much more realistic machine learning community and an equally pragmatic, biologically oriented 'neuromorphic computing' community. Algorithms found in nature that use the non-synchronous, spiking nature of neuronal signals have been found to be (i) implementable efficiently in silicon and (ii) computationally useful. As a result, interest has grown in techniques that could create mixed 'siliconeural' computers. Here, we discuss potential approaches and focus on one particular platform using parylene-patterned silicon dioxide.

  19. Accuracy of methods for calculating volumetric wear from coordinate measuring machine data of retrieved metal-on-metal hip joint implants.

    PubMed

    Lu, Zhen; McKellop, Harry A

    2014-03-01

    This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.
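    The spherical-triangle idea behind Method 1 can be illustrated with a small numerical sketch: the measured bearing surface is meshed into spherical triangles, each triangle's solid angle is computed with the Van Oosterom-Strackee formula, and the wear volume is accumulated as Ω/3·(R³ − r³) relative to a reference radius. The geometry below (a hemisphere with a worn polar cap, all dimensions invented) is a toy stand-in, not the paper's implementation; the cap's closed-form volume provides a check.

```python
import numpy as np

def solid_angles(a, b, c):
    """Solid angles of spherical triangles with unit-vector vertices
    (Van Oosterom-Strackee formula)."""
    num = np.abs(np.einsum("ij,ij->i", a, np.cross(b, c)))
    den = (1.0 + np.einsum("ij,ij->i", a, b)
               + np.einsum("ij,ij->i", a, c)
               + np.einsum("ij,ij->i", b, c))
    return 2.0 * np.arctan2(num, den)

# Synthetic "measured" femoral head: nominal radius R, worn polar cap of
# depth `depth` inside polar angle theta_c (illustrative numbers, in mm/rad).
R, depth, theta_c = 14.0, 0.05, 0.5
n_t, n_p = 200, 400
theta = np.linspace(0.0, np.pi / 2, n_t)
phi = np.linspace(0.0, 2.0 * np.pi, n_p, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")
dirs = np.stack([np.sin(T) * np.cos(P),
                 np.sin(T) * np.sin(P),
                 np.cos(T)], axis=-1).reshape(-1, 3)
radii = np.where(T < theta_c, R - depth, R).ravel()

# Triangulate the (theta, phi) grid: two triangles per quad cell.
it, ip = np.meshgrid(np.arange(n_t - 1), np.arange(n_p), indexing="ij")
a = (it * n_p + ip).ravel()
b = (it * n_p + (ip + 1) % n_p).ravel()
c = ((it + 1) * n_p + ip).ravel()
d = ((it + 1) * n_p + (ip + 1) % n_p).ravel()
tri = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])

# Wear volume: sum over spherical triangles of Omega/3 * (R^3 - r^3),
# with r the mean measured radius of the triangle's vertices.
om = solid_angles(dirs[tri[:, 0]], dirs[tri[:, 1]], dirs[tri[:, 2]])
r_tri = radii[tri].mean(axis=1)
V = np.sum(om / 3.0 * (R**3 - r_tri**3))

# Closed-form volume of the worn cap, for comparison.
V_exact = (2.0 * np.pi / 3.0) * (1.0 - np.cos(theta_c)) * (R**3 - (R - depth)**3)
print(V, V_exact)
```

    The residual discrepancy comes from triangles straddling the wear boundary; refining the mesh (the abstract's smaller point spacings) shrinks it, mirroring the accuracy trend reported above.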

  20. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Computer programming is increasingly central to program-design courses in college education. However, some students plagiarize homework and disguise it with small modifications, and it is difficult for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…

  1. Wireless Intrusion Detection

    DTIC Science & Technology

    2007-03-01

    Fragmentary report extract. Recoverable content: Section 4.4, "Algorithm Pseudo-Code", and Section 4.5, "WIND Interface"; a note that temporal derivatives of xc are estimated either by finite differences or by a polynomial fit to the previous values of xc; and a glossary fragment listing DQPSK (Differential Quadrature Phase Shift Keying), EVM (Error Vector Magnitude), FFT (Fast Fourier Transform) and FPGA (Field Programmable Gate Array).

  2. A differential optical absorption spectroscopy method for retrieval from ground-based Fourier transform spectrometers measurements of the direct solar beam

    NASA Astrophysics Data System (ADS)

    Huo, Yanfeng; Duan, Minzheng; Tian, Wenshou; Min, Qilong

    2015-08-01

    A differential optical absorption spectroscopy (DOAS)-like algorithm is developed to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) from ground-based hyperspectral measurements of the direct solar beam. Unlike the spectral fitting method, which minimizes the difference between the observed and simulated spectra, the ratios of multiple channel pairs (one weak and one strong absorption channel) are used to retrieve XCO2 from measurements in the shortwave infrared (SWIR) band. Based on sensitivity tests, a super channel pair is carefully selected to reduce the effects of solar lines, water vapor, air temperature, pressure, instrument noise, and frequency shift on retrieval errors. The new algorithm reduces computational cost, and the retrievals are less sensitive to temperature and H2O uncertainty than those of the spectral fitting method. Multi-day Total Carbon Column Observing Network (TCCON) measurements under clear-sky conditions at two sites (Tsukuba and Bremen) are used to derive XCO2 for algorithm evaluation and validation. The DOAS-like results agree very well with those of the TCCON algorithm after correction of an airmass-dependent bias.
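    The channel-pair idea can be sketched with a toy Beer-Lambert model: broadband attenuation (aerosol, Rayleigh) varies slowly with wavelength, so to first order it is identical in the two channels of a pair and cancels in their ratio, leaving a log-ratio proportional to the absorber column. The cross sections, column amount and broadband factor below are invented round numbers, not values from the paper.

```python
import numpy as np

def retrieve_column(i_weak, i_strong, sigma_weak, sigma_strong):
    """Column amount from the log-ratio of one weak/strong channel pair.
    Any attenuation common to both channels cancels in the ratio."""
    return np.log(i_weak / i_strong) / (sigma_strong - sigma_weak)

N_true = 8.0e21            # absorber column, molecules/cm^2 (illustrative)
sw, ss = 1.0e-23, 5.0e-23  # weak/strong absorption cross sections, cm^2 (illustrative)
broadband = 0.7            # common broadband transmission, cancels in the ratio
i0 = 1.0

i_weak = i0 * broadband * np.exp(-sw * N_true)
i_strong = i0 * broadband * np.exp(-ss * N_true)
N_ret = retrieve_column(i_weak, i_strong, sw, ss)
print(N_ret)
```

    Because only a handful of channel ratios are evaluated, no iterative spectral fit is required, which is the computational saving the abstract describes.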

  3. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed Central

    Rabow, A. A.; Scheraga, H. A.

    1996-01-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by: (1) a rigid superposition of one parent chain on the other, to make the relation of Cartesian coordinates meaningful, then, (2) the chains of the children are formed through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than the standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
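    A minimal sketch of the Cartesian combination operator, assuming a Kabsch-style rigid superposition for step (1): parent 2 is superposed on parent 1, and the child's coordinates are a linear combination of the two aligned chains, so the child inherits its parents' topology. The toy "C-alpha chain" is a random walk, not a real protein, and the blend weight is arbitrary.

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigidly superpose P onto Q (both N x 3 coordinate arrays)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    Rm = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    return Pc @ Rm.T + Q.mean(0)

def cartesian_crossover(parent1, parent2, lam=0.5):
    """Child chain as a linear blend of the superposed parent chains."""
    aligned = kabsch_align(parent2, parent1)
    return lam * parent1 + (1.0 - lam) * aligned

rng = np.random.default_rng(0)
p1 = np.cumsum(rng.normal(size=(20, 3)), axis=0)   # toy C-alpha trace

# parent2: the same chain, arbitrarily rotated/translated plus tiny noise
ang = 1.1
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
p2 = p1 @ Rz.T + np.array([5.0, -3.0, 2.0]) + 0.01 * rng.normal(size=p1.shape)

child = cartesian_crossover(p1, p2)
print(np.abs(child - p1).max())
```

    Because the parents are superposed before blending, similar parents yield a child that stays close to both, preserving long-range contacts as the abstract describes.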

  4. Atmospheric correction over case 2 waters with an iterative fitting algorithm: relative humidity effects.

    PubMed

    Land, P E; Haigh, J D

    1997-12-20

    In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type and to the attribution to aerosol spectral variations of spectral effects actually caused by the water contents. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot), rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 Sea-viewing Wide Field-of-view Sensor data, although the retrieval of relative humidity was not successful.

  5. Advanced Targeting Cost Function Design for Evolutionary Optimization of Control of Logistic Equation

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana

    2010-06-01

    This research deals with the optimization of the control of chaos by means of evolutionary algorithms. The work explains how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) that secures very fast and precise stabilization of the desired state for any initial conditions. The one-dimensional logistic equation was used as a model of a deterministic chaotic system. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the method and the targeting CF.
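    The idea of a targeting CF can be sketched on the logistic map x(n+1) = r·x(n)·(1 − x(n)). A candidate controller perturbs r only near the unstable fixed point x* = 1 − 1/r (OGY-style), and the CF penalizes the distance from x* after a settling transient. A crude random search stands in for SOMA here purely for illustration; the gain range, control window and iteration counts are invented, not the paper's settings.

```python
import numpy as np

R0 = 4.0                 # chaotic logistic map parameter
XSTAR = 1.0 - 1.0 / R0   # unstable fixed point x* = 0.75

def cost(K, x0=0.3, n=400, settle=300, window=0.1):
    """Targeting cost: mean |x - x*| after a settling transient.
    The perturbation r -> r + K*(x - x*) is applied only inside a small
    window, so chaotic wandering can bring the state near x* first."""
    x, c = x0, 0.0
    for t in range(n):
        r = R0 + K * (x - XSTAR) if abs(x - XSTAR) < window else R0
        r = min(max(r, 0.0), 4.0)        # keep the map on [0, 1]
        x = r * x * (1.0 - x)
        if t >= settle:
            c += abs(x - XSTAR)
    return c / (n - settle)

# Crude random search over the gain K stands in for SOMA (illustration only).
rng = np.random.default_rng(1)
best_K = min(rng.uniform(0.0, 20.0, 200), key=cost)
print(best_K, cost(best_K), cost(0.0))
```

    A successful gain drives the post-transient cost to essentially zero, while the uncontrolled map (K = 0) keeps wandering chaotically; this gap is exactly what a targeting CF gives an EA to optimize.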

  6. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  7. MRI quantification of diffusion and perfusion in bone marrow by intravoxel incoherent motion (IVIM) and non-negative least square (NNLS) analysis.

    PubMed

    Marchand, A J; Hitti, E; Monge, F; Saint-Jalmes, H; Guillin, R; Duvauferrier, R; Gambarota, G

    2014-11-01

    To assess the feasibility of measuring diffusion and the perfusion fraction in vertebral bone marrow using the intravoxel incoherent motion (IVIM) approach, and to compare two fitting methods, i.e., the non-negative least squares (NNLS) algorithm and the more commonly used Levenberg-Marquardt (LM) non-linear least squares algorithm, for the analysis of IVIM data. MRI experiments were performed on fifteen healthy volunteers with a diffusion-weighted echo-planar imaging (EPI) sequence at five b-values (0, 50, 100, 200, 600 s/mm²), in combination with an STIR module to suppress the lipid signal. Diffusion signal decays in the first lumbar vertebra (L1) were fitted to a bi-exponential function using the LM algorithm and further analyzed with the NNLS algorithm to calculate the apparent diffusion coefficient (ADC), pseudo-diffusion coefficient (D*) and perfusion fraction. The NNLS analysis revealed two diffusion components in only seven of the fifteen volunteers, with ADC = 0.60±0.09 (10⁻³ mm²/s), D* = 28±9 (10⁻³ mm²/s) and perfusion fraction = 14%±6%. The values obtained by the LM bi-exponential fit were: ADC = 0.45±0.27 (10⁻³ mm²/s), D* = 63±145 (10⁻³ mm²/s) and perfusion fraction = 27%±17%. Furthermore, the LM algorithm yielded values of the perfusion fraction in cases where the decay was not bi-exponential, as assessed by NNLS analysis. The IVIM approach allows diffusion and the perfusion fraction to be measured in vertebral bone marrow; its reliability can be improved by using NNLS, which identifies the diffusion decays that display bi-exponential behavior. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Artificial evolution by viability rather than competition.

    PubMed

    Maesani, Andrea; Fernando, Pradeep Ruben; Floreano, Dario

    2014-01-01

    Evolutionary algorithms are widespread heuristic methods inspired by natural evolution to solve difficult problems for which analytical approaches are not suitable. In many domains, experimenters are not only interested in discovering optimal solutions, but also in finding the largest number of different solutions satisfying minimal requirements. However, the formulation of an effective performance measure describing these requirements, also known as a fitness function, represents a major challenge. The difficulty of combining and weighting multiple problem objectives and constraints of possibly varying nature and scale into a single fitness function often leads to unsatisfactory solutions. Furthermore, selective reproduction of the fittest solutions, which is inspired by competition-based selection in nature, leads to loss of diversity within the evolving population and premature convergence of the algorithm, hindering the discovery of many different solutions. Here we present an alternative abstraction of artificial evolution, which does not require the formulation of a composite fitness function. Inspired by viability theory in dynamical systems, natural evolution and ethology, the proposed method puts emphasis on the elimination of individuals that do not meet a set of changing criteria, which are defined on the problem objectives and constraints. Experimental results show that the proposed method maintains higher diversity in the evolving population and generates more unique solutions when compared to classical competition-based evolutionary algorithms. Our findings suggest that incorporating viability principles into evolutionary algorithms can significantly improve the applicability and effectiveness of evolutionary methods to numerous complex problems of science and engineering, ranging from protein structure prediction to aircraft wing design.
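    A toy version of the viability idea, with invented numbers: instead of ranking a population by a composite fitness, individuals violating a gradually tightening criterion are eliminated, and the surviving individuals reproduce with mutation. This is a generic sketch of viability-style selection, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)

def viability_evolution(pop_size=200, gens=60):
    """Viability-style evolution on a 2-D toy problem: the criterion
    ||x||^2 <= bound tightens each generation; non-viable individuals
    are eliminated and survivors reproduce with mutation (no ranking)."""
    pop = rng.uniform(-5.0, 5.0, (pop_size, 2))
    for g in range(gens):
        bound = 25.0 * (1.0 - g / gens) + 0.5      # tightening viability boundary
        viable = (pop ** 2).sum(axis=1) <= bound
        survivors = pop[viable]
        if survivors.shape[0] == 0:
            break
        # refill by mutating randomly chosen survivors, not the "fittest" ones
        idx = rng.integers(0, survivors.shape[0], pop_size - survivors.shape[0])
        children = survivors[idx] + 0.2 * rng.normal(size=(idx.size, 2))
        pop = np.vstack([survivors, children])
    return pop

final = viability_evolution()
inside = ((final ** 2).sum(axis=1) <= 2.0).mean()
print(inside)
```

    Because no individual is preferred over another among the viable ones, the final population spreads across the whole admissible region rather than collapsing onto a single optimum, which is the diversity-preservation effect the abstract describes.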

  9. Measuring molecular motions inside single cells with improved analysis of single-particle trajectories

    NASA Astrophysics Data System (ADS)

    Rowland, David J.; Biteen, Julie S.

    2017-04-01

    Single-molecule super-resolution imaging and tracking can measure molecular motions inside living cells on the scale of the molecules themselves. Diffusion in biological systems commonly exhibits multiple modes of motion, which can be effectively quantified by fitting the cumulative probability distribution of the squared step sizes in a two-step fitting process. Here we combine this two-step fit into a single least-squares minimization; this new method vastly reduces the total number of fitting parameters and increases the precision with which diffusion may be measured. We demonstrate this Global Fit approach on a simulated two-component system as well as on a mixture of diffusing 80 nm and 200 nm gold spheres to show improvements in fitting robustness and localization precision compared to the traditional Local Fit algorithm.
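    For two-component diffusion, the cumulative probability distribution (CPD) of squared step sizes u is a mixture 1 − α·exp(−u/m1) − (1−α)·exp(−u/m2), where m1 and m2 are the mean squared displacements of the two populations. One flavor of fitting this CPD in a single least-squares minimization (a sketch in the spirit of, but not identical to, the paper's Global Fit) can be written as:

```python
import numpy as np
from scipy.optimize import curve_fit

def cpd(u, alpha, msd1, msd2):
    """Two-component cumulative probability of squared step sizes u."""
    return 1.0 - alpha * np.exp(-u / msd1) - (1.0 - alpha) * np.exp(-u / msd2)

# Simulate squared steps of a fast/slow mixture (exponential in 2-D diffusion);
# the fractions and MSDs below are invented for illustration.
rng = np.random.default_rng(3)
n = 20000
fast = rng.random(n) < 0.3
u = np.where(fast, rng.exponential(0.5, n), rng.exponential(0.05, n))

u_sorted = np.sort(u)
ecdf = np.arange(1, n + 1) / n                 # empirical CPD

popt, _ = curve_fit(cpd, u_sorted, ecdf, p0=(0.5, 0.3, 0.02))
print(popt)
```

    Fitting the full CPD in one minimization, rather than fitting each lag or component separately, reduces the number of free parameters per data set, which is the precision gain the abstract attributes to the Global Fit.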

  10. Health and Fitness of Americans--The State of the Union.

    ERIC Educational Resources Information Center

    Day, William C.

    1984-01-01

    This article offers a perspective on the present and future status of the health and fitness of Americans. Many advances have been made in the areas of health and fitness, but heart disease and obesity are still major health problems. (DF)

  11. A Novel Method of Aircraft Detection Based on High-Resolution Panchromatic Optical Remote Sensing Images

    PubMed Central

    Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu

    2017-01-01

    In aircraft target detection from optical remote sensing images, two main obstacles are extracting candidates from complex, multi-gray-scale backgrounds and confirming targets whose shapes are deformed, irregular or asymmetric, whether due to natural conditions (low signal-to-noise ratio, illumination conditions or swaying during photographing) or occlusion by surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contours algorithm, namely the region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-class variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to overcome the difficulty of extracting targets from complex, multi-gray-scale backgrounds. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part within each fragment and the proportion of each fragment within the whole hull, are identified to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds and is faster than some traditional active contours algorithms, and that CCHS effectively suppresses the detection difficulties caused by irregular shapes. PMID:28481260
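    Otsu's algorithm, used above for candidate extraction, picks the gray-level threshold that maximizes the between-class variance of the histogram. A self-contained NumPy version, demonstrated on a synthetic bimodal image with invented intensities (not the paper's data):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Threshold maximizing Otsu's between-class variance criterion."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # background class probability
    mu = np.cumsum(p * centers)       # cumulative first moment
    mu_t = mu[-1]
    w1 = 1.0 - w0                     # foreground class probability
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# bimodal test image: dark background plus one bright "target" blob
rng = np.random.default_rng(4)
img = rng.normal(50.0, 10.0, (128, 128))
img[40:80, 40:80] = rng.normal(180.0, 10.0, (40, 40))

t = otsu_threshold(img)
mask = img > t
print(t, mask.sum())
```

    On real scenes with many gray levels a single global threshold fails, which is why the paper couples Otsu's criterion with the RSF active-contour energy.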

  12. Biomolecular Interaction Analysis Using an Optical Surface Plasmon Resonance Biosensor: The Marquardt Algorithm vs Newton Iteration Algorithm

    PubMed Central

    Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan

    2015-01-01

    Kinetic analysis of biomolecular interactions is widely used to quantify binding kinetic constants for the determination of a complex formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including the interaction processes of antigen-antibody and receptor-ligand pairs. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit well to the kinetic model. This paper presents an analysis approach for biomolecular interactions established by utilizing the Marquardt algorithm, which was implemented in a homemade bioanalyzer to perform the nonlinear curve-fitting of the association and dissociation processes of the receptor to the ligand. Compared with the results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally obtained as 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of an HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly by using the data obtained from the curve-fitting results. PMID:26147997
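    The Marquardt (Levenberg-Marquardt) iteration itself is compact: a damped Gauss-Newton step whose damping grows when a step fails and shrinks when it succeeds. The sketch below fits the association phase of a 1:1 binding model with a hand-rolled LM loop; the model, rate constants, noise and the assumption that Rmax is known (so ka and kd are identifiable from one curve) are illustrative stand-ins, not the authors' bioanalyzer code.

```python
import numpy as np

def binding(t, ka, kd, rmax, conc):
    """Association phase of a 1:1 interaction model (SPR-like response)."""
    kobs = ka * conc + kd
    return rmax * ka * conc / kobs * (1.0 - np.exp(-kobs * t))

def marquardt(f, t, y, p0, n_iter=200, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a forward-difference Jacobian."""
    p = np.asarray(p0, float)
    r = y - f(t, *p)
    for _ in range(n_iter):
        J = np.empty((t.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1e-12)
            J[:, j] = (f(t, *(p + dp)) - f(t, *p)) / dp[j]
        JTJ = J.T @ J
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ r)
        r_new = y - f(t, *(p + step))
        if r_new @ r_new < r @ r:      # success: move toward Gauss-Newton
            p, r, lam = p + step, r_new, lam * 0.3
        else:                          # failure: move toward gradient descent
            lam *= 10.0
    return p

conc = 16e-9                           # 16 ng/mL analyte, in g/mL (illustrative)
ka_true, kd_true, rmax = 7.0e5, 7.3e-4, 120.0
t = np.linspace(0.0, 300.0, 60)
rng = np.random.default_rng(5)
y = binding(t, ka_true, kd_true, rmax, conc) + 0.2 * rng.normal(size=t.size)

# Fit scaled parameters (ka/1e5, kd/1e-3) so both are O(1).
u = marquardt(lambda tt, u1, u2: binding(tt, u1 * 1e5, u2 * 1e-3, rmax, conc),
              t, y, p0=(1.0, 1.0))
ka_fit, kd_fit = u[0] * 1e5, u[1] * 1e-3
print(ka_fit, kd_fit)
```

    Scaling the parameters to order one keeps J^T J well conditioned even though ka and kd differ by many orders of magnitude, and the adaptive damping is what gives LM its reduced sensitivity to the initial guess compared with a pure Newton iteration.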

  13. A plant cell division algorithm based on cell biomechanics and ellipse-fitting.

    PubMed

    Abera, Metadel K; Verboven, Pieter; Defraeye, Thijs; Fanta, Solomon Workneh; Hertog, Maarten L A T M; Carmeliet, Jan; Nicolai, Bart M

    2014-09-01

    The importance of cell division models in cellular pattern studies has been acknowledged since the 19th century. Most of the available models developed to date are limited to symmetric cell division with isotropic growth. Often, the actual growth of the cell wall is either not considered or is updated intermittently on a separate time scale to the mechanics. This study presents a generic algorithm that accounts for both symmetrically and asymmetrically dividing cells with isotropic and anisotropic growth. Actual growth of the cell wall is simulated simultaneously with the mechanics. The cell is considered as a closed, thin-walled structure, maintained in tension by turgor pressure. The cell walls are represented as linear elastic elements that obey Hooke's law. Cell expansion is induced by turgor pressure acting on the yielding cell-wall material. A system of differential equations for the positions and velocities of the cell vertices as well as for the actual growth of the cell wall is established. Readiness to divide is determined based on cell size. An ellipse-fitting algorithm is used to determine the position and orientation of the dividing wall. The cell vertices, walls and cell connectivity are then updated and cell expansion resumes. Comparisons are made with experimental data from the literature. The generic plant cell division algorithm has been implemented successfully. It can handle both symmetrically and asymmetrically dividing cells coupled with isotropic and anisotropic growth modes. Development of the algorithm highlighted the importance of ellipse-fitting to produce randomness (biological variability) even in symmetrically dividing cells. Unlike previous models, a differential equation is formulated for the resting length of the cell wall to simulate actual biological growth and is solved simultaneously with the position and velocity of the vertices. 
The algorithm presented can produce different tissues varying in topological and geometrical properties. This flexibility to produce different tissue types gives the model great potential for use in investigations of plant cell division and growth in silico.
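    The ellipse-fitting step that orients the dividing wall can be approximated from the second moments of the cell's vertices: the eigenvectors of their covariance give the long and short axes of a best-fit ellipse, and the new wall passes through the centroid perpendicular to the long axis. This is a minimal sketch of that geometric idea (the paper's actual fitting and wall-insertion procedure may differ), on an invented elongated cell:

```python
import numpy as np

def division_axis(vertices):
    """Best-fit ellipse orientation from vertex second moments; the new wall
    runs through the centroid along the short axis."""
    v = np.asarray(vertices, float)
    centroid = v.mean(axis=0)
    cov = np.cov((v - centroid).T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    long_axis = evecs[:, 1]              # direction of largest spread
    short_axis = evecs[:, 0]             # direction of the dividing wall
    return centroid, long_axis, short_axis

# toy elongated cell polygon, longest along x
cell = [(0, 0), (4, -1), (8, 0), (8, 2), (4, 3), (0, 2)]
c, a_long, a_short = division_axis(cell)
print(c, a_long, a_short)
```

    Because the vertex positions carry small random perturbations in a growing tissue, this orientation varies slightly between otherwise identical cells, which is one way ellipse-fitting introduces the biological variability noted above.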

  14. Sensor Fusion, Prognostics, Diagnostics and Failure Mode Control for Complex Aerospace Systems

    DTIC Science & Technology

    2010-10-01

    Fragmentary report extract. Recoverable content: the approach selects a candidate algorithm and then tunes the candidates individually using known metaheuristics; the processing runs in parallel, in a form analogous to standard parallel genetic algorithms; a search algorithm then uses the hybrid of fitness data to rank the results; and the ETRAS controller is developed using pre-selection.

  15. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    PubMed

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
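    For dichotomous items the underlying Lord-Wingersky recursion is compact: starting from a degenerate score distribution, each item convolves in a correct/incorrect branch, yielding P(summed score = s | θ) at a fixed quadrature point. A sketch with 2PL probabilities and invented item parameters (this illustrates the classic recursion, not the paper's dimension-reduced Version 2.0):

```python
import numpy as np

def summed_score_likelihood(p):
    """Lord-Wingersky recursion: distribution of the summed score for
    dichotomous items with correct-response probabilities p (one theta)."""
    L = np.array([1.0])                   # P(score = 0) before any item
    for pk in p:
        new = np.zeros(L.size + 1)
        new[: L.size] += L * (1.0 - pk)   # item answered incorrectly
        new[1:] += L * pk                 # item answered correctly
        L = new
    return L

# 2PL probabilities at theta = 0 for a few illustrative items
theta = 0.0
a = np.array([1.0, 1.5, 0.8, 1.2])       # discriminations (invented)
b = np.array([-0.5, 0.0, 0.5, 1.0])      # difficulties (invented)
p = 1.0 / (1.0 + np.exp(-a * (theta - b)))

L = summed_score_likelihood(p)
print(L, L.sum())
```

    Repeating this at each quadrature point and integrating gives marginal summed-score probabilities; the paper's contribution is doing that integration over a dramatically reduced grid for hierarchical (e.g., bifactor) models.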

  16. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  17. Advanced power system protection and incipient fault detection and protection of spaceborne power systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don

    1989-01-01

    This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.

  18. [Radiance Simulation of BUV Hyperspectral Sensor on Multi Angle Observation, and Improvement to Initial Total Ozone Estimating Model of TOMS V8 Total Ozone Algorithm].

    PubMed

    Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun

    2015-11-01

    A new hyperspectral sensor to detect total ozone is expected to be carried on a geostationary orbit platform in the future, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving more and more attention. Sensors carried on geostationary satellites frequently obtain images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is mature and widely used for low-orbit ozone-detecting sensors, but it still lacks accuracy at large observation geometries, so improving the accuracy of total ozone retrieval remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm was simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles) and 26 standard profiles, and the correlations and trends between atmospheric total ozone and backscattered UV radiance were analyzed from the resulting data. From these results, a modified initial total ozone estimation model for the TOMS V8 algorithm was constructed to improve the initial total ozone estimation accuracy at large observation geometries. The analysis shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises. At small solar zenith angles (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but it increases with VZA at large SZA. Comparison of two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both achieve high fitting precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise.
    In most cases, the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential fitting model. As VZA or SZA increases, the fitting precision gradually drops, and the drop is larger at larger VZA or SZA; in addition, the precision exhibits a plateau in the small-SZA range. The modified initial total ozone estimating model (ln(I) vs. Ω) was established on the basis of the logarithmic fitting model and compared with the traditional estimating model (I vs. ln(Ω)). The RMSE of both models trends downward as total ozone rises; in the low total ozone region (175-275 DU) the RMSE is clearly higher than in the high region (425-525 DU), with an RMSE peak and trough at 225 and 475 DU, respectively. As VZA and SZA increase, the RMSE of both initial estimating models rises overall, and the rise for ln(I) vs. Ω is pronounced at large SZA and VZA. The modified model estimates better than the traditional model over the whole total ozone range (its RMSE is 0.087%-0.537% lower), especially in the low total ozone region and at large observation geometries. The traditional estimating model relies on the precision of the exponential fitting model, whereas the modified estimating model relies on the precision of the logarithmic fitting model. The improved estimation accuracy of the modified initial total ozone estimating model expands the application range of the TOMS V8 algorithm. For a sensor carried on a geostationary orbit platform, the modified estimating model can clearly help improve inversion accuracy over a wide spatial and temporal range. This modified model could support and serve as a reference for future updates of the TOMS algorithm.

  19. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, is noisy, and contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - Fast and easy but shows some artifacts in shaded relief images. (b) Simplicial interpolator - uses the plane through the three points surrounding the point where interpolation is required. Reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and does a weighted least-squares surface fit.
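
The simplicial interpolator in (b) — a plane through three surrounding samples — can be sketched with barycentric weights; the sample heights below are illustrative, not from the paper:

```python
def plane_interpolate(p1, p2, p3, x, y):
    """Height at (x, y) from the plane through three (x, y, z) samples,
    computed via barycentric coordinates (the 'simplicial' interpolator idea)."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

# Heights sampled from the plane z = 1 + 2x + 3y; the interpolator
# reproduces that plane exactly inside the triangle.
h = plane_interpolate((0, 0, 1), (1, 0, 3), (0, 1, 4), 0.25, 0.25)
```

Because the interpolant is exact for planar data and each query touches only three samples, this scheme is fast; its accuracy degrades where the true surface curves within a triangle.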

  20. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
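
One ingredient of such a maximum-likelihood calibration — that low-accuracy and negative parallaxes still carry information through the Gaussian error model and need not be censored — can be shown with a toy estimator. All numbers here are hypothetical; the paper's full model additionally handles magnitude and color selection biases.

```python
import math
import random

# Toy maximum-likelihood estimate of a mean parallax from noisy measurements.
# The noise is larger than the signal, so many observed parallaxes are
# negative, yet they enter the Gaussian likelihood like any other datum.
random.seed(1)
true_pi, sigma = 2.0, 3.0                       # mas; hypothetical values
obs = [random.gauss(true_pi, sigma) for _ in range(2000)]

def neg_log_like(mu):
    """Negative log-likelihood (up to a constant) for a common mean mu."""
    return sum((o - mu) ** 2 for o in obs) / (2 * sigma ** 2)

# Grid-search minimiser; for a pure Gaussian model the maximum-likelihood
# estimate coincides with the sample mean, negative values included.
grid = [i / 100 for i in range(0, 401)]         # candidate means, 0.00-4.00 mas
mle = min(grid, key=neg_log_like)
```

Discarding the negative parallaxes before averaging would bias the estimate upward; keeping them in the likelihood avoids that, which is the point the abstract makes.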

  1. Dominant takeover regimes for genetic algorithms

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed form, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
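
The logarithmic dependence of takeover time on population size can be illustrated with the standard expected-proportion model for proportionate selection (a textbook model used here as a stand-in, not the paper's exact derivation):

```python
def takeover_generations(n, r, frac=0.99):
    """Generations until the best individual (fitness ratio r > 1) occupies
    a fraction frac of a size-n population under proportionate selection,
    starting from a single copy (deterministic expected-proportion model)."""
    p, gens = 1.0 / n, 0
    while p < frac:
        p = r * p / (r * p + (1 - p))    # expected proportion next generation
        gens += 1
    return gens

# Growing the population 100-fold adds only a handful of generations,
# because the best solution's odds multiply by r each generation.
t_small = takeover_generations(100, 2.0)
t_big = takeover_generations(10_000, 2.0)
```

In odds form the recurrence is exactly geometric, p/(1-p) -> r * p/(1-p), which is why takeover time scales as log(n)/log(r).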

  2. A microcomputer program for analysis of nucleic acid hybridization data

    PubMed Central

    Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.

    1982-01-01

    The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
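
The `Patternsearch' idea — a derivative-free search that probes each parameter in turn and shrinks its step when no probe improves the fit — can be sketched in a modern language (the original was BASIC; the Hooke-Jeeves-style variant and the toy data below are assumptions, not the paper's program):

```python
def pattern_search(f, x, step=1.0, shrink=0.5, tol=1e-6):
    """Derivative-free pattern search: probe each coordinate in +/- step
    directions, keep any improvement, and halve the step when none helps."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink
    return x, fx

# Fit y = a*x + b to toy data by minimising the sum of squared errors.
xs = [0, 1, 2, 3]
ys = [1.0, 3.1, 4.9, 7.0]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in zip(xs, ys))
(a, b), err = pattern_search(sse, [0.0, 0.0])
```

No derivatives are needed, which is what made this style of algorithm attractive on a small microcomputer; the price is slower convergence than gradient-based fitters.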

  3. DNA methylation of phosphatase and actin regulator 3 detects colorectal cancer in stool and complements FIT.

    PubMed

    Bosch, Linda J W; Oort, Frank A; Neerincx, Maarten; Khalid-de Bakker, Carolina A J; Terhaar sive Droste, Jochim S; Melotte, Veerle; Jonkers, Daisy M A E; Masclee, Ad A M; Mongera, Sandra; Grooteclaes, Madeleine; Louwagie, Joost; van Criekinge, Wim; Coupé, Veerle M H; Mulder, Chris J; van Engeland, Manon; Carvalho, Beatriz; Meijer, Gerrit A

    2012-03-01

    Using a bioinformatics-based strategy, we set out to identify hypermethylated genes that could serve as biomarkers for early detection of colorectal cancer (CRC) in stool. In addition, the complementary value to a Fecal Immunochemical Test (FIT) was evaluated. Candidate genes were selected by applying cluster alignment and computational analysis of promoter regions to microarray-expression data of colorectal adenomas and carcinomas. DNA methylation was measured by quantitative methylation-specific PCR on 34 normal colon mucosa, 71 advanced adenoma, and 64 CRC tissues. The performance as biomarker was tested in whole stool samples from in total 193 subjects, including 19 with advanced adenoma and 66 with CRC. For a large proportion of these series, methylation data for GATA4 and OSMR were available for comparison. The complementary value to FIT was measured in stool subsamples from 92 subjects including 44 with advanced adenoma or CRC. Phosphatase and Actin Regulator 3 (PHACTR3) was identified as a novel hypermethylated gene showing more than 70-fold increased DNA methylation levels in advanced neoplasia compared with normal colon mucosa. In a stool training set, PHACTR3 methylation showed a sensitivity of 55% (95% CI: 33-75) for CRC and a specificity of 95% (95% CI: 87-98). In a stool validation set, sensitivity reached 66% (95% CI: 50-79) for CRC and 32% (95% CI: 14-57) for advanced adenomas at a specificity of 100% (95% CI: 86-100). Adding PHACTR3 methylation to FIT increased sensitivity for CRC up to 15%. PHACTR3 is a new hypermethylated gene in CRC with a good performance in stool DNA testing and has complementary value to FIT.

  4. Results of faecal immunochemical test for colorectal cancer screening, in average risk population, in a cohort of 1389 subjects.

    PubMed

    Miuţescu, Bogdan; Sporea, Ioan; Popescu, Alina; Bota, Simona; Iovănescu, Dana; Burlea, Amelia; Mos, Liana; Miuţescu, Eftimie

    2013-01-01

    The aim of this study was to evaluate the usefulness of the fecal immunochemical test (FIT) in colorectal cancer screening and in the detection of precancerous lesions and early colorectal cancer. The study evaluated asymptomatic patients at average risk (no personal or family history of polyps or colorectal cancer), aged between 50 and 74 years. The presence of occult haemorrhage was tested with the immunochemical faecal test Hem Check 1 (Veda Lab, France); subjects were not asked to observe any dietary or drug restrictions. Colonoscopy was recommended to all subjects who tested positive. A total of 1389 participants met the inclusion criteria, with a mean age of 61.2 ± 12.8 years: 565 (40.7%) men and 824 (59.3%) women. FIT was positive in 87 individuals (6.3%). Colonoscopy was performed in 57/87 subjects (65.5%) with positive FIT, while the remaining subjects refused or delayed the investigation. Five patients (8.8%) could not undergo a complete colonoscopy because of neoplastic stenosis. Relative to all study participants, colonoscopy revealed cancer in 10 cases (0.7%), advanced adenomas in 29 cases (2.1%), and non-advanced adenomas in 15 cases (1.1%). Colonoscopy revealed a greater percentage of advanced adenomas in the left colon than in the right colon, 74.1% vs. 28.6% (p<0.001). In our study, FIT had a positivity rate of 6.3%, and the detection rate for advanced neoplasia was 2.8% (0.7% for cancer, 2.1% for advanced adenomas). Adherence to colonoscopy among FIT-positive subjects was 65.5%.

  5. Spinoff 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Image-Capture Devices Extend Medicine's Reach; Medical Devices Assess, Treat Balance Disorders; NASA Bioreactors Advance Disease Treatments; Robotics Algorithms Provide Nutritional Guidelines; "Anti-Gravity" Treadmills Speed Rehabilitation; Crew Management Processes Revitalize Patient Care; Hubble Systems Optimize Hospital Schedules; Web-based Programs Assess Cognitive Fitness; Electrolyte Concentrates Treat Dehydration; Tools Lighten Designs, Maintain Structural Integrity; Insulating Foams Save Money, Increase Safety; Polyimide Resins Resist Extreme Temperatures; Sensors Locate Radio Interference; Surface Operations Systems Improve Airport Efficiency; Nontoxic Resins Advance Aerospace Manufacturing; Sensors Provide Early Warning of Biological Threats; Robot Saves Soldier's Lives Overseas (MarcBot); Apollo-Era Life Raft Saves Hundreds of Sailors; Circuits Enhance Scientific Instruments and Safety Devices; Tough Textiles Protect Payloads and Public Safety Officers; Forecasting Tools Point to Fishing Hotspots; Air Purifiers Eliminate Pathogens, Preserve Food; Fabrics Protect Sensitive Skin from UV Rays; Phase Change Fabrics Control Temperature; Tiny Devices Project Sharp, Colorful Images; Star-Mapping Tools Enable Tracking of Endangered Animals; Nanofiber Filters Eliminate Contaminants; Modeling Innovations Advance Wind Energy Industry; Thermal Insulation Strips Conserve Energy; Satellite Respondent Buoys Identify Ocean Debris; Mobile Instruments Measure Atmospheric Pollutants; Cloud Imagers Offer New Details on Earth's Health; Antennas Lower Cost of Satellite Access; Feature Detection Systems Enhance Satellite Imagery; Chlorophyll Meters Aid Plant Nutrient Management; Telemetry Boards Interpret Rocket, Airplane Engine Data; Programs Automate Complex Operations Monitoring; Software Tools Streamline Project Management; Modeling Languages Refine Vehicle Design; Radio Relays Improve Wireless Products; Advanced Sensors Boost Optical 
Communication, Imaging; Tensile Fabrics Enhance Architecture Around the World; Robust Light Filters Support Powerful Imaging Devices; Thermoelectric Devices Cool, Power Electronics; Innovative Tools Advance Revolutionary Weld Technique; Methods Reduce Cost, Enhance Quality of Nanotubes; Gauging Systems Monitor Cryogenic Liquids; Voltage Sensors Monitor Harmful Static; and Compact Instruments Measure Heat Potential.

  6. An Injury Severity-, Time Sensitivity-, and Predictability-Based Advanced Automatic Crash Notification Algorithm Improves Motor Vehicle Crash Occupant Triage.

    PubMed

    Stitzel, Joel D; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Schoell, Samantha L; Doud, Andrea N; Martin, R Shayn; Meredith, J Wayne

    2016-06-01

    Advanced Automatic Crash Notification algorithms use vehicle telemetry measurements to predict risk of serious motor vehicle crash injury. The objective of the study was to develop an Advanced Automatic Crash Notification algorithm to reduce response time, increase triage efficiency, and improve patient outcomes by minimizing undertriage (<5%) and overtriage (<50%), as recommended by the American College of Surgeons. A list of injuries associated with a patient's need for Level I/II trauma center treatment known as the Target Injury List was determined using an approach based on 3 facets of injury: severity, time sensitivity, and predictability. Multivariable logistic regression was used to predict an occupant's risk of sustaining an injury on the Target Injury List based on crash severity and restraint factors for occupants in the National Automotive Sampling System - Crashworthiness Data System 2000-2011. The Advanced Automatic Crash Notification algorithm was optimized and evaluated to minimize triage rates, per American College of Surgeons recommendations. The following rates were achieved: <50% overtriage and <5% undertriage in side impacts and 6% to 16% undertriage in other crash modes. Nationwide implementation of our algorithm is estimated to improve triage decisions for 44% of undertriaged and 38% of overtriaged occupants. Annually, this translates to more appropriate care for >2,700 seriously injured occupants and reduces unnecessary use of trauma center resources for >162,000 minimally injured occupants. The algorithm could be incorporated into vehicles to inform emergency personnel of recommended motor vehicle crash triage decisions. Lower under- and overtriage was achieved, and nationwide implementation of the algorithm would yield improved triage decision making for an estimated 165,000 occupants annually. Copyright © 2016. Published by Elsevier Inc.
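
The core idea — a multivariable logistic model mapping crash telemetry to injury risk, thresholded into a triage recommendation — can be sketched as follows. The predictors, coefficients, and threshold here are invented for illustration and are not the published model:

```python
import math

def injury_risk(delta_v_kmh, belted, side_impact):
    """Logistic model of serious-injury risk from crash telemetry.
    All coefficients are hypothetical placeholders."""
    z = -6.0 + 0.12 * delta_v_kmh - 1.0 * belted + 0.8 * side_impact
    return 1.0 / (1.0 + math.exp(-z))

def triage(risk, threshold=0.20):
    """Threshold the predicted risk into a triage recommendation."""
    return "Level I/II trauma center" if risk >= threshold else "standard care"

high = injury_risk(60, belted=False, side_impact=True)   # severe side impact
low = injury_risk(15, belted=True, side_impact=False)    # minor belted frontal
```

In the published work the threshold is what gets optimized: raising it trades overtriage (unnecessary trauma-center transports) against undertriage (missed serious injuries), targeting the American College of Surgeons' <5%/<50% recommendations.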

  7. A New Powered Lower Limb Prosthesis Control Framework Based on Adaptive Dynamic Programming.

    PubMed

    Wen, Yue; Si, Jennie; Gao, Xiang; Huang, Stephanie; Huang, He Helen

    2017-09-01

    This brief presents a novel application of adaptive dynamic programming (ADP) for optimal adaptive control of powered lower limb prostheses, a type of wearable robot that assists the motor function of limb amputees. Current control of these robotic devices typically relies on finite state impedance control (FS-IC), which lacks adaptability to the user's physical condition. As a result, joint impedance settings are often customized manually and heuristically in clinics, which greatly hinders the wide use of these advanced medical devices. This simulation study aimed to demonstrate the feasibility of ADP for automatic tuning of the twelve knee joint impedance parameters during a complete gait cycle to achieve balanced walking. Given that accurate models of human walking dynamics are difficult to obtain, model-free ADP control algorithms were considered. First, direct heuristic dynamic programming (dHDP) was applied to the control problem, and its performance was evaluated in OpenSim, an often-used dynamic walking simulator. For comparison, we selected another established ADP algorithm, neural fitted Q with continuous action (NFQCA). In both cases, the ADP controllers learned to control the right knee joint and achieved balanced walking, but dHDP outperformed NFQCA in testing over 200 gait cycles.

  8. Experiences with Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2004-01-01

    There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the…

  9. A Branch-and-Bound Algorithm for Fitting Anti-Robinson Structures to Symmetric Dissimilarity Matrices.

    ERIC Educational Resources Information Center

    Brusco, Michael J.

    2002-01-01

    Developed a branch-and-bound algorithm that can be used to seriate a symmetric dissimilarity matrix by identifying a reordering of rows and columns of the matrix optimizing an anti-Robinson criterion. Computational results suggest that with respect to computational efficiency, the approach is generally competitive with dynamic programming. (SLD)
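
The anti-Robinson criterion being optimized can be illustrated on a toy matrix. Here an exhaustive search over row/column permutations stands in for the branch-and-bound search (which prunes the same space); the dissimilarity matrix is invented from four points on a line, so a perfect ordering exists:

```python
from itertools import permutations

# Symmetric dissimilarities between 4 objects at 1-D positions 0, 3, 1, 2
# (entry [i][j] = distance between object i and object j).
D = [[0, 3, 1, 2],
     [3, 0, 2, 1],
     [1, 2, 0, 1],
     [2, 1, 1, 0]]

def violations(order):
    """Count anti-Robinson violations: for indices i < j < k in the
    reordered matrix, D[i][j] and D[j][k] should not exceed D[i][k]."""
    v, n = 0, len(order)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if D[order[i]][order[j]] > D[order[i]][order[k]]:
                    v += 1
                if D[order[j]][order[k]] > D[order[i]][order[k]]:
                    v += 1
    return v

# Exhaustive seriation: find the reordering minimising the criterion.
best = min(permutations(range(4)), key=violations)
```

Brute force is factorial in the number of objects, which is exactly why the paper's branch-and-bound pruning (and the dynamic-programming competitor it is compared against) matter for realistically sized matrices.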

  10. Explaining Disparities in Youth Aerobic Fitness and Body Mass Index: Relative Impact of Socioeconomic and Minority Status

    ERIC Educational Resources Information Center

    Bai, Yang; Saint-Maurice, Pedro F.; Welk, Gregory J.; Allums-Featherston, Kelly; Candelaria, Norma

    2016-01-01

    Background: To advance research on youth fitness promotion it is important to understand factors that may explain the disparities in fitness. Methods: We evaluated data from the FitnessGram NFL PLAY60 Partnership Project to examine school factors influencing aerobic capacity (AC) and body mass index (BMI) in schoolchildren. Individual observations…

  11. Detection of suspicious pain regions on a digital infrared thermal image using the multimodal function optimization.

    PubMed

    Lee, Junghoon; Lee, Joosung; Song, Sangha; Lee, Hyunsook; Lee, Kyoungjoung; Yoon, Youngro

    2008-01-01

    Automatic detection of suspicious pain regions is very useful in medical digital infrared thermal imaging research. To detect these regions, we use the SOFES (Survival Of the Fitness kind of the Evolution Strategy) algorithm, one of the multimodal function optimization methods. We apply this algorithm to well-known conditions such as the foot in glycosuria, degenerative arthritis, and varicose veins. The SOFES algorithm can detect hot spots and warm lines such as veins, and over a hundred trials it converged very quickly.

  12. Experience with a two-tier reflex gFOBT/FIT strategy in a national bowel screening programme.

    PubMed

    Fraser, Callum G; Digby, Jayne; McDonald, Paula J; Strachan, Judith A; Carey, Francis A; Steele, Robert J C

    2012-03-01

    To evaluate a two-tier reflex guaiac-based faecal occult blood test (gFOBT)/faecal immunochemical test (FIT) algorithm in screening for colorectal cancer. Fourth screening round in NHS Tayside (Scotland). gFOBT were sent to 50-74-year-olds. Participants with five or six windows positive were offered colonoscopy. Participants with one to four windows positive were sent a FIT and, if positive, were offered colonoscopy. Participants providing an untestable gFOBT were sent a FIT and, if positive, were offered colonoscopy. Outcomes following positive results, cancer stages and key performance indicators were assessed. Of 131,885 invited, 73,315 (55.6%) responded. There were 66,957 (91.3%) negative, 241 (0.3%) strong positive, 5230 (7.1%) weak positive and 887 (1.2%) untestable results. The 241 participants who had five or six windows positive had more cancers than those positive by other routes: only 3 of the 30 cancers (9.7%) were Dukes' A. Among the 983 positive results from the weak positive gFOBT then positive FIT route, there were fewer cancers and more normal colonoscopies, but more adenomas than in the group with a strong positive gFOBT. In those with an untestable gFOBT, 77 had a positive FIT result, with fewer true and more false positive results than in the other groups. Fewer males had cancer and stages were earlier than in females, but more had adenoma. The detection rate for cancer was 0.18% and the PPV for cancer and all adenomas was 41.3%. The algorithm and FIT following a weak positive gFOBT have advantages. FIT following an untestable gFOBT warrants review.
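
The two-tier reflex algorithm described above reduces to a small decision rule. This sketch encodes the pathway as reported; the function name and return labels are illustrative:

```python
def screening_pathway(gfobt_windows_positive, gfobt_untestable=False,
                      fit_positive=None):
    """Two-tier reflex gFOBT/FIT pathway: strong gFOBT positives (5-6
    windows) go straight to colonoscopy; weak positives (1-4 windows) and
    untestable kits reflex to FIT, with colonoscopy only if FIT is positive."""
    if not gfobt_untestable and gfobt_windows_positive >= 5:
        return "offer colonoscopy"
    if gfobt_untestable or 1 <= gfobt_windows_positive <= 4:
        if fit_positive is None:
            return "send FIT"                # second-tier test still pending
        return "offer colonoscopy" if fit_positive else "no further action"
    return "negative - routine recall"
```

The reflex tier is what keeps colonoscopy demand manageable: in the reported round, only the 983 FIT-positive weak positives (of 5230) and 77 FIT-positive untestables proceeded to colonoscopy alongside the 241 strong positives.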

  13. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using best practices and lessons learned from heritage and groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance.
This paper describes an objective, adaptive implementation roadmap that takes into account the specific maturity of each system's (sensor and algorithm) technology to provide for a program of continuous improvement that remains manageable.

  14. Global navigation satellite system receiver for weak signals under all dynamic conditions

    NASA Astrophysics Data System (ADS)

    Ziedan, Nesreen Ibrahim

    The ability of a Global Navigation Satellite System (GNSS) receiver to work under weak-signal and various dynamic conditions is required in some applications, for example to provide positioning capability in wireless devices or orbit determination for geostationary and high Earth orbit satellites. This dissertation develops Global Positioning System (GPS) receiver algorithms for such applications. Fifteen algorithms are developed for the GPS C/A signal. They cover all of the receiver's main functions, including acquisition, fine acquisition, bit synchronization, code and carrier tracking, and navigation message decoding. They are integrated together, can be used in any software GPS receiver, and can be modified to fit other GPS or GNSS signals. The algorithms have new capabilities. Processing and memory requirements are considered in the design so that the algorithms fit the limited resources of some applications, and they do not require any assisting information. Weak signals can be acquired in the presence of strong interfering signals and under high dynamic conditions. The fine acquisition, bit synchronization, and tracking algorithms are based on the Viterbi algorithm and Extended Kalman filter approaches. The tracking algorithms' capabilities increase the time to lose lock: they can adaptively change the integration length and the code delay separation, and more than one code delay separation can be used at the same time. Large tracking errors can be detected and then corrected by re-initialization and acquisition-like algorithms. Detecting the navigation message is needed to increase the coherent integration; decoding it is needed to calculate the navigation solution. The decoding algorithm utilizes the message structure to enable decoding of signals with a high Bit Error Rate. The algorithms are demonstrated using simulated GPS C/A code signals and TCXO clocks.
The results show the algorithms' ability to work reliably with 15 dB-Hz signals and accelerations over 6 g.

  15. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    PubMed

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight in 1) overall species distributions, 2) their past-present-future probability of occurrence and/or 3) to understand their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency in selection of environmental correlations all across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variables selection. Model outcomes diverged more for narrowly distributed species than for widespread species. 
We suggest that matching study aims with modelling approach is essential in Species Distribution Models, and provide suggestions how to do this for different modelling aims and species' data characteristics (i.e. sample size, spatial distribution).

  16. Evolution of Quantitative Measures in NMR: Quantum Mechanical qHNMR Advances Chemical Standardization of a Red Clover (Trifolium pratense) Extract

    PubMed Central

    2017-01-01

    Chemical standardization, along with morphological and DNA analysis, ensures the authenticity and advances the integrity evaluation of botanical preparations. Achievement of a more comprehensive, metabolomic standardization requires simultaneous quantitation of multiple marker compounds. Employing quantitative 1H NMR (qHNMR), this study determined the total isoflavone content (TIfCo; 34.5–36.5% w/w) via multimarker standardization and assessed the stability of a 10-year-old isoflavone-enriched red clover extract (RCE). Eleven markers (nine isoflavones, two flavonols) were targeted simultaneously, and outcomes were compared with LC-based standardization. Two advanced quantitative measures in qHNMR were applied to derive quantities from complex and/or overlapping resonances: a quantum mechanical (QM) method (QM-qHNMR) that employs 1H iterative full spin analysis, and a non-QM method that uses linear peak fitting algorithms (PF-qHNMR). A 10 min UHPLC-UV method provided auxiliary orthogonal quantitation. This is the first systematic evaluation of QM and non-QM deconvolution as qHNMR quantitation measures. It demonstrates that QM-qHNMR can account successfully for the complexity of 1H NMR spectra of individual analytes and how QM-qHNMR can be built for mixtures such as botanical extracts. The contents of the main bioactive markers were in good agreement with earlier HPLC-UV results, demonstrating the chemical stability of the RCE. QM-qHNMR advances chemical standardization by its inherent QM accuracy and the use of universal calibrants, avoiding the impractical need for identical reference materials. PMID:28067513

  17. Fitness extraction and the conceptual foundations of political biology.

    PubMed

    Boari, Mircea

    2005-01-01

    In well known formulations, political science, classical and neoclassical economics, and political economy have recognized as foundational a human impulse toward self-preservation. To employ this concept, modern social-sciences theorists have made simplifying assumptions about human nature and have then built elaborately upon their more incisive simplifications. Advances in biology, including advances in evolutionary theory, notably inclusive-fitness theory, have for decades now encouraged the reconsideration of such assumptions and, more ambitiously, the reconciliation of the social and life sciences. I ask if this reconciliation is feasible and test a path to the unification of politics and biology, called here "political biology." Two new notions, "fitness extraction" and "fitness exchange," are defined, then differentiated from each other, and lastly contrasted to cooperative gaming, the putative essential element of economics.

  18. Perinatal Depression Algorithm: A Home Visitor Step-by-Step Guide for Advanced Management of Perinatal Depressive Symptoms

    ERIC Educational Resources Information Center

    Laszewski, Audrey; Wichman, Christina L.; Doering, Jennifer J.; Maletta, Kristyn; Hammel, Jennifer

    2016-01-01

    Early childhood professionals do many things to support young families. This is true now more than ever, as researchers continue to discover the long-term benefits of early, healthy, nurturing relationships. This article provides an overview of the development of an advanced practice perinatal depression algorithm created as a step-by-step guide…

  19. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, R.P., E-mail: rpkelley@ufl.edu; Ray, H.; Jordan, K.A.

    An empirical investigation of the scintillation mechanism in a pressurized ⁴He gas fast neutron detector was conducted using pulse shape fitting. Scintillation signals from neutron interactions were measured and averaged to produce a single generic neutron pulse shape from both a ²⁵²Cf spontaneous fission source and a (d,d) neutron generator. An expression for light output over time was then developed by treating the decay of helium excited states in the same manner as the decay of radioactive isotopes. This pulse shape expression was fitted to the measured neutron pulse shape using a least-squares optimization algorithm, allowing an empirical analysis of the mechanism of scintillation inside the ⁴He detector. A further understanding of this mechanism will advance the use of this system as a neutron spectrometer. For ²⁵²Cf neutrons, the triplet and singlet time constants were found to be 970 ns and 686 ns, respectively; for neutrons from the (d,d) generator, they were 884 ns and 636 ns. Differences were noted in the magnitudes of these parameters compared with previously published data, but the general relationships were the same and consistent with trends expected from theory. Of the excited helium states produced by a ²⁵²Cf neutron interaction, 76% were found to be born as triplet states, similar to the 71% found with the neutron generator. The two sources yielded similar pulse shapes despite having very different neutron energy spectra, validating the robustness of the fits across various neutron energies.
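
The fitted model is a sum of singlet and triplet exponential decays. As a hedged sketch of the fitting idea (not the paper's exact procedure): with the decay constants fixed at the reported ²⁵²Cf values, the two amplitudes can be recovered from a pulse shape by linear least squares via the normal equations. The "measured" pulse below is synthesized from an assumed 24%/76% singlet/triplet split, so the fit simply recovers it.

```python
import math

TAU_S, TAU_T = 686.0, 970.0            # reported 252Cf time constants, ns

def basis(t):
    """Singlet and triplet decay basis functions at time t (ns)."""
    return math.exp(-t / TAU_S), math.exp(-t / TAU_T)

# Synthetic "measured" pulse with a 24%/76% singlet/triplet light split
# (illustrative amplitudes, sampled every 50 ns over ~3 microseconds).
ts = [50.0 * k for k in range(60)]
pulse = [0.24 * b[0] + 0.76 * b[1] for b in map(basis, ts)]

# Normal equations for the two amplitudes (a_s, a_t): solve the 2x2 system
# (B^T B) a = B^T y, where B holds the basis functions column-wise.
s11 = sum(basis(t)[0] ** 2 for t in ts)
s12 = sum(basis(t)[0] * basis(t)[1] for t in ts)
s22 = sum(basis(t)[1] ** 2 for t in ts)
b1 = sum(basis(t)[0] * y for t, y in zip(ts, pulse))
b2 = sum(basis(t)[1] * y for t, y in zip(ts, pulse))
det = s11 * s22 - s12 * s12
a_s = (s22 * b1 - s12 * b2) / det
a_t = (s11 * b2 - s12 * b1) / det
```

The full analysis in the paper also fits the time constants themselves, which makes the problem nonlinear; fixing them, as here, reduces it to this linear subproblem.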

  1. Progress on a generalized coordinates tensor product finite element 3DPNS algorithm for subsonic

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Orzechowski, J. A.

    1983-01-01

A generalized coordinates form of the penalty finite element algorithm for the 3-dimensional parabolic Navier-Stokes equations for turbulent subsonic flows was derived. This algorithm formulation requires only three distinct hypermatrices and is applicable using any boundary fitted coordinate transformation procedure. The tensor matrix product approximation to the Jacobian of the Newton linear algebra matrix statement was also derived. The Newton algorithm was restructured to replace large sparse matrix solution procedures with grid sweeping using alpha-block tridiagonal matrices, where alpha equals the number of dependent variables. Numerical experiments were conducted, and the resultant data give guidance on potentially preferred tensor product constructions for the penalty finite element 3DPNS algorithm.

  2. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
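The geometric picture above can be made concrete with a minimal, self-contained sketch (a toy scalar example of standard Levenberg-Marquardt, not the authors' geodesic variant): the damping parameter interpolates between Gauss-Newton steps (small damping) and gradient-descent steps (large damping) as the fit walks the model manifold toward the data.

```python
import math

# Toy scalar Levenberg-Marquardt (our illustration, not the authors'
# geodesic algorithm): fit the decay rate k in y(t) = exp(-k*t).
t = [0.0, 1.0, 2.0, 3.0]
k_true = 0.7
data = [math.exp(-k_true * ti) for ti in t]  # noiseless for clarity

def residuals(k):
    return [math.exp(-k * ti) - di for ti, di in zip(t, data)]

def jacobian(k):  # d(residual_i)/dk
    return [-ti * math.exp(-k * ti) for ti in t]

k, lam = 2.0, 1e-3  # initial guess and damping parameter
for _ in range(50):
    r, J = residuals(k), jacobian(k)
    g = sum(Ji * ri for Ji, ri in zip(J, r))  # J^T r (gradient term)
    h = sum(Ji * Ji for Ji in J)              # J^T J (Gauss-Newton curvature)
    step = -g / (h + lam)
    if sum(ri * ri for ri in residuals(k + step)) < sum(ri * ri for ri in r):
        k += step      # accept: behave more like Gauss-Newton
        lam *= 0.5
    else:
        lam *= 2.0     # reject: back off toward gradient descent
```

With larger damping the step shortens and rotates toward the steepest-descent direction; the geodesic approach proposed in the abstract modifies this step using second-order (acceleration) information along the manifold.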

  3. An Efficient Rank Based Approach for Closest String and Closest Substring

    PubMed Central

    2012-01-01

    This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results. PMID:22675483
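As an illustration of the fitness measure, here is a minimal sketch of rank distance on occurrence-annotated strings (our reading of the standard definition, not code from the paper): each letter occurrence is indexed, shared occurrences contribute the absolute difference of their positions, and unmatched occurrences contribute their own positions.

```python
def annotate(s):
    # index each occurrence: "aab" -> {('a',1): 1, ('a',2): 2, ('b',1): 3}
    counts, out = {}, {}
    for pos, ch in enumerate(s, start=1):
        counts[ch] = counts.get(ch, 0) + 1
        out[(ch, counts[ch])] = pos  # position (rank) of this occurrence
    return out

def rank_distance(s, t):
    a, b = annotate(s), annotate(t)
    d = 0
    for key in set(a) | set(b):
        if key in a and key in b:
            d += abs(a[key] - b[key])  # shared occurrence: position gap
        else:
            d += a.get(key, 0) + b.get(key, 0)  # unmatched occurrence
    return d
```

In a genetic algorithm for closest string, the fitness of a candidate is then the sum (or maximum) of its rank distances to the input strings.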

  4. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for the study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. Interpreting this variability among multiple solutions and selecting the most appropriate one can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce a unique, easily interpreted NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed as a trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using a forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
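The parsimony-versus-fit trade-off can be sketched with a toy forward selection over synthetic decay data (a greedy, matching-pursuit-flavored illustration; the candidate rates, data, and one-component-at-a-time updates are our assumptions, not the authors' stepwise regression):

```python
import math

def aic(rss, n, k):
    # Akaike Information Criterion for Gaussian residuals:
    # n*ln(RSS/n) penalized by 2 per added parameter value
    return n * math.log(rss / n) + 2 * k

t = [i * 0.1 for i in range(50)]
# synthetic decay containing two true relaxation rates (1.0 and 5.0)
y = [2.0 * math.exp(-1.0 * ti) + 1.0 * math.exp(-5.0 * ti) for ti in t]

candidates = [0.5, 1.0, 2.0, 5.0, 10.0]  # candidate relaxation rates
residual = y[:]
chosen, k = [], 0
best_aic = aic(sum(r * r for r in residual) + 1e-12, len(t), k)

while True:
    best = None
    for rate in candidates:
        if rate in chosen:
            continue
        basis = [math.exp(-rate * ti) for ti in t]
        # best single-component amplitude for the current residual
        amp = (sum(b * r for b, r in zip(basis, residual))
               / sum(b * b for b in basis))
        new_res = [r - amp * b for r, b in zip(residual, basis)]
        rss = sum(r * r for r in new_res) + 1e-12
        score = aic(rss, len(t), k + 1)
        if best is None or score < best[0]:
            best = (score, rate, new_res)
    if best is None or best[0] >= best_aic:
        break  # parsimony wins: adding another rate no longer pays off
    best_aic, rate, residual = best
    chosen.append(rate)
    k += 1
```

The loop stops as soon as the AIC penalty of an extra parameter value outweighs the improvement in goodness-of-fit, which is the sparsity behavior the abstract describes.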

  5. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

The traditional timing discrimination technique for laser rangefinding in remote sensing offers limited measurement performance and suffers from large timing error, and so cannot meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. In this way, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and by fitting the parameters, and a timing accuracy of 4.63 ps is achieved.

  6. A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

    PubMed Central

    Othman, Zulaiha Ali

    2014-01-01

The active appearance model (AAM) is one of the most popular model-based approaches and has been used extensively to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such active appearance models, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can resolve this problem; however, applying optimization brings its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy. PMID:25165748

  7. Direct Calculation of Protein Fitness Landscapes through Computational Protein Design

    PubMed Central

    Au, Loretta; Green, David F.

    2016-01-01

Naturally selected amino-acid sequences or experimentally derived ones are often the basis for understanding how protein three-dimensional conformation and function are determined by primary structure. Such sequences for a protein family comprise only a small fraction of all possible variants, however, and thus represent the fitness landscape with limited scope. Explicitly sampling and characterizing alternative, unexplored protein sequences would directly identify fundamental reasons for sequence robustness (or variability), and we demonstrate that computational methods offer an efficient mechanism toward this end, on a large scale. The dead-end elimination and A∗ search algorithms were used here to find all low-energy single mutant variants, and corresponding structures of a G-protein heterotrimer, to measure changes in structural stability and binding interactions to define a protein fitness landscape. We established consistency between these algorithms with known biophysical and evolutionary trends for amino-acid substitutions, and could thus recapitulate known protein side-chain interactions and predict novel ones. PMID:26745411

  8. Calibrating nadir striped artifacts in a multibeam backscatter image using the equal mean-variance fitting model

    NASA Astrophysics Data System (ADS)

    Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue

    2017-07-01

Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.

  9. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    NASA Astrophysics Data System (ADS)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.

  10. A hybrid MD-kMC algorithm for folding proteins in explicit solvent.

    PubMed

    Peter, Emanuel Karl; Shea, Joan-Emma

    2014-04-14

We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as the relevant rate event (a combination of dihedral rotations and hydrogen-bond formation and breakage). The second uses tertiary structure formation events through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but also yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.

  11. Unimodular sequence design under frequency hopping communication compatibility requirements

    NASA Astrophysics Data System (ADS)

    Ge, Peng; Cui, Guolong; Kong, Lingjiang; Yang, Jianyu

    2016-12-01

Integrated design for both radar and anonymous communication has drawn increasing attention recently, since wireless communication systems demand enhanced security and reliability. Given a frequency hopping (FH) communication system, an effective way to realize integrated design is to meet the spectrum compatibility requirements between these two systems. This paper deals with a unimodular sequence design technique that optimizes both the spectrum compatibility and the peak sidelobe levels (PSL) of the auto-correlation function (ACF). The spectrum compatibility requirement realizes anonymous communication for the FH system and provides it with a lower probability of intercept (LPI), since the spectrum of the FH system is hidden in that of the radar system. The proposed algorithm, named the generalized fitting template (GFT) technique, converts the sequence optimization design problem into an iterative fitting process. In this process, the power spectral density (PSD) and PSL behaviors of the generated sequences fit both PSD and PSL templates progressively. The two templates are established based on the spectrum compatibility requirement and the expected PSL. To ensure communication security and reliability, the spectrum compatibility requirement is given a higher priority in the GFT algorithm; this is achieved by adaptively adjusting the weight between these two terms during the iteration process. The simulation results are analyzed in terms of bit error rate (BER), PSD, PSL, and signal-to-interference rate (SIR) for both the radar and FH systems. The performance of GFT is compared with the SCAN, CAN, FRE, CYC, and MAT algorithms in the above aspects, demonstrating its good effectiveness.

  12. Directly data processing algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong

    2017-11-27

Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. Solutions developed so far have generally assumed particular mathematical relations for emissivity versus wavelength or emissivity versus temperature. Owing to the deviation between such hypotheses and the actual situation, the inversion results can be seriously affected. A direct data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, the Gradient Projection (GP) algorithm and the Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the data processing problem of MWP is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. By comparing simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperatures inverted by the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to provide a direct data processing algorithm that removes the unknown-emissivity obstacle for MWP.

  13. The rhizosphere microbiota of plant invaders: an overview of recent advances in the microbiomics of invasive plants

    PubMed Central

    Coats, Vanessa C.; Rumpho, Mary E.

    2014-01-01

    Plants in terrestrial systems have evolved in direct association with microbes functioning as both agonists and antagonists of plant fitness and adaptability. As such, investigations that segregate plants and microbes provide only a limited scope of the biotic interactions that dictate plant community structure and composition in natural systems. Invasive plants provide an excellent working model to compare and contrast the effects of microbial communities associated with natural plant populations on plant fitness, adaptation, and fecundity. The last decade of DNA sequencing technology advancements opened the door to microbial community analysis, which has led to an increased awareness of the importance of an organism’s microbiome and the disease states associated with microbiome shifts. Employing microbiome analysis to study the symbiotic networks associated with invasive plants will help us to understand what microorganisms contribute to plant fitness in natural systems, how different soil microbial communities impact plant fitness and adaptability, specificity of host–microbe interactions in natural plant populations, and the selective pressures that dictate the structure of above-ground and below-ground biotic communities. This review discusses recent advances in invasive plant biology that have resulted from microbiome analyses as well as the microbial factors that direct plant fitness and adaptability in natural systems. PMID:25101069

  14. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
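A hedged reconstruction of the non-iterative idea: since y = A*e^(Bt) + C implies dy/dt = B*(y - C), a straight-line fit of the discrete derivative against y yields B (the slope) and C (from the intercept -B*C), and A then follows from a second linear fit. The data and sampling below are illustrative; this is our sketch of the approach, not the NASA code.

```python
import math

A_true, B_true, C_true = 3.0, -0.8, 1.5
t = [i * 0.05 for i in range(100)]
y = [A_true * math.exp(B_true * ti) + C_true for ti in t]

def linfit(xs, ys):
    # ordinary least-squares line fit: returns (slope, intercept)
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * yv for x, yv in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# discrete calculus: midpoint values and finite-difference derivatives
ym = [(y[i] + y[i + 1]) / 2 for i in range(len(y) - 1)]
dy = [(y[i + 1] - y[i]) / (t[i + 1] - t[i]) for i in range(len(y) - 1)]

B, intercept = linfit(ym, dy)   # dy/dt = B*y - B*C
C = -intercept / B
A, _ = linfit([math.exp(B * ti) for ti in t], [yi - C for yi in y])
```

By the Mean Value Theorem, each finite difference equals the true derivative at some interior point, which is why pairing differences with midpoint values keeps the linearization accurate without any iteration.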

  15. Faecal immunochemical tests versus guaiac faecal occult blood tests: what clinicians and colorectal cancer screening programme organisers need to know.

    PubMed

    Tinmouth, Jill; Lansdorp-Vogelaar, Iris; Allison, James E

    2015-08-01

    Although colorectal cancer (CRC) is a common cause of cancer-related death, it is fortunately amenable to screening with faecal tests for occult blood and endoscopic tests. Despite the evidence for the efficacy of guaiac-based faecal occult blood tests (gFOBT), they have not been popular with primary care providers in many jurisdictions, in part because of poor sensitivity for advanced colorectal neoplasms (advanced adenomas and CRC). In order to address this issue, high sensitivity gFOBT have been recommended, however, these tests are limited by a reduction in specificity compared with the traditional gFOBT. Where colonoscopy is available, some providers have opted to recommend screening colonoscopy to their patients instead of faecal testing, as they believe it to be a better test. Newer methods for detecting occult human blood in faeces have been developed. These tests, called faecal immunochemical tests (FIT), are immunoassays specific for human haemoglobin. FIT hold considerable promise over the traditional guaiac methods including improved analytical and clinical sensitivity for CRC, better detection of advanced adenomas, and greater screenee participation. In addition, the quantitative FIT are more flexible than gFOBT as a numerical result is reported, allowing customisation of the positivity threshold. When compared with endoscopy, FIT are less sensitive for the detection of advanced colorectal neoplasms when only one time testing is applied to a screening population; however, this is offset by improved participation in a programme of annual or biennial screens and a better safety profile. This review will describe how gFOBT and FIT work and will present the evidence that supports the use of FIT over gFOBT, including the cost-effectiveness of FIT relative to gFOBT. Finally, specific issues related to FIT implementation will be discussed, particularly with respect to organised CRC screening programmes. Published by the BMJ Publishing Group Limited. 

  16. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
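The advantage of an exponential interpolant on stiff problems can be seen on the scalar linear test equation y' = lam*(y - y_eq) (our illustration, not the CREK1D algorithm): the exponentially fitted step reproduces the exact decaying solution and remains stable at step sizes where explicit Euler diverges.

```python
import math

# Stiff scalar test problem: y' = lam*(y - y_eq) with lam*h = -10
lam, y_eq, h, y0 = -1000.0, 2.0, 0.01, 5.0

def euler_step(y):
    # polynomial (linear) interpolant: amplification factor 1 + lam*h = -9
    return y + h * lam * (y - y_eq)

def exp_fitted_step(y):
    # exponential interpolant: exact for this problem, stable for any h
    return y_eq + (y - y_eq) * math.exp(lam * h)

ye, yf = y0, y0
for _ in range(100):
    ye = euler_step(ye)
    yf = exp_fitted_step(yf)

# yf relaxes to y_eq; ye blows up because |1 + lam*h| > 1
```

This is the behavior the abstract attributes to polynomial interpolants on stiff kinetics: the exact solution decays exponentially, while the polynomial-based explicit step amplifies the error at practical step sizes.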

  17. Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    NASA Technical Reports Server (NTRS)

    Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.

    2017-01-01

Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.

  18. Improved pulse shape discrimination in EJ-301 liquid scintillators

    NASA Astrophysics Data System (ADS)

    Lang, R. F.; Masson, D.; Pienaar, J.; Röttger, S.

    2017-06-01

Digital pulse shape discrimination has become readily available to distinguish nuclear recoil and electronic recoil events in scintillation detectors. We evaluate digital implementations of pulse shape discrimination algorithms discussed in the literature, namely the Charge Comparison Method, Pulse-Gradient Analysis, Fourier Series, and Standard Event Fitting. In addition, we present a novel algorithm based on a Laplace Transform. Instead of comparing the performance of these algorithms based on a single Figure of Merit, we evaluate them as a function of recoil energy. Specifically, using commercial EJ-301 liquid scintillators, we examined both the resulting acceptance of nuclear recoils at a given rejection level of electronic recoils and the purity of the selected nuclear recoil event samples. We find that both a Standard Event fit and a Laplace Transform can be used to significantly improve the discrimination capabilities over the whole considered energy range of 0-800 keVee. Furthermore, we show that the Charge Comparison Method performs poorly in accurately identifying nuclear recoils.
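For reference, the Charge Comparison Method reduces to a tail-to-total charge ratio; here is a minimal sketch on synthetic two-component pulses (decay constants, sampling, and window bounds are illustrative assumptions, not EJ-301 calibration values).

```python
import math

def tail_fraction(samples, tail_start):
    # Charge Comparison Method: charge in the delayed "tail" window
    # divided by the total integrated charge
    return sum(samples[tail_start:]) / sum(samples)

dt = 4e-9  # 4 ns sampling interval (assumed)
ts = [i * dt for i in range(200)]

def pulse(fast_frac, tau_fast=3.2e-9, tau_slow=32e-9):
    # two-component scintillation pulse: fast + slow exponential decays
    return [fast_frac * math.exp(-t / tau_fast)
            + (1 - fast_frac) * math.exp(-t / tau_slow) for t in ts]

gamma_like = pulse(0.9)    # electronic recoil: mostly fast component
neutron_like = pulse(0.6)  # nuclear recoil: enhanced slow component

g = tail_fraction(gamma_like, 10)    # tail window opens at 40 ns
n = tail_fraction(neutron_like, 10)
# n > g: nuclear recoils carry more charge in the tail
```

Cutting on this one scalar is what makes the method simple, and also what limits it at low energies, where the abstract finds it performs poorly relative to full-shape fitting.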

  19. Parameter Search Algorithms for Microwave Radar-Based Breast Imaging: Focal Quality Metrics as Fitness Functions.

    PubMed

    O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin

    2017-12-06

    Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
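The idea of using a focal quality metric as the fitness function of a parameter search can be sketched in one dimension (the focus measure here is normalized variance, a common autofocus metric; the "reconstruction" is a stand-in blur model whose sharpness peaks at the true parameter, not a radar beamformer).

```python
def normalized_variance(img):
    # focus measure: variance normalized by mean intensity
    n = len(img)
    mu = sum(img) / n
    return sum((p - mu) ** 2 for p in img) / (n * mu)

TRUE_EPS = 8.0  # assumed average relative permittivity (illustrative)

def reconstruct(eps):
    # toy stand-in for image reconstruction: a point target blurred
    # more as the assumed permittivity deviates from the true value
    width = 1 + int(abs(eps - TRUE_EPS) * 4)
    base = [0.0] * 50
    base[25] = 1.0
    return [sum(base[max(0, i - width):i + width + 1]) / (2 * width + 1)
            for i in range(50)]

candidates = [6.0 + 0.5 * i for i in range(9)]  # search 6.0 .. 10.0
best_eps = max(candidates, key=lambda e: normalized_variance(reconstruct(e)))
```

The search selects the candidate whose reconstruction is most sharply focused, which is the mechanism the abstract proposes for estimating average dielectric properties without extra hardware.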

  20. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm.
The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
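A minimal sketch of the GA-MLR division of labor on a synthetic biexponential decay (a toy with a crude GA and 2x2 normal equations of our own devising, not the paper's implementation): the GA searches only the nonlinear decay rates, while the linear amplitudes are recovered at every fitness evaluation by multiple linear regression.

```python
import math
import random

random.seed(1)
t = [i * 0.2 for i in range(60)]
# synthetic biexponential decay: amplitudes 1.5, 0.5; rates 0.5, 3.0
y = [1.5 * math.exp(-0.5 * ti) + 0.5 * math.exp(-3.0 * ti) for ti in t]

def mlr_rss(k1, k2):
    # MLR step: best linear amplitudes for fixed nonlinear rates,
    # via the 2x2 normal equations; returns (RSS, (c1, c2))
    b1 = [math.exp(-k1 * ti) for ti in t]
    b2 = [math.exp(-k2 * ti) for ti in t]
    a11 = sum(u * u for u in b1)
    a12 = sum(u * v for u, v in zip(b1, b2))
    a22 = sum(v * v for v in b2)
    r1 = sum(u * yi for u, yi in zip(b1, y))
    r2 = sum(v * yi for v, yi in zip(b2, y))
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return float("inf"), (0.0, 0.0)  # degenerate (k1 ~ k2)
    c1 = (r1 * a22 - r2 * a12) / det
    c2 = (a11 * r2 - a12 * r1) / det
    rss = sum((yi - c1 * u - c2 * v) ** 2
              for yi, u, v in zip(y, b1, b2))
    return rss, (c1, c2)

# crude GA over the nonlinear rates only: elitism + Gaussian mutation
pop = [(random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)) for _ in range(40)]
for _ in range(60):
    elite = sorted(pop, key=lambda g: mlr_rss(*g)[0])[:10]
    pop = elite + [
        (max(0.01, random.choice(elite)[0] + random.gauss(0, 0.1)),
         max(0.01, random.choice(elite)[1] + random.gauss(0, 0.1)))
        for _ in range(30)
    ]

best = min(pop, key=lambda g: mlr_rss(*g)[0])
rss, amps = mlr_rss(*best)
```

Because the amplitudes are never part of the GA genome, the search space shrinks from four parameters to two, which is the acceleration the abstract attributes to embedding MLR inside the GA.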

  1. SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisling, K; Zhang, L; Yang, J

Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam's-eye-view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment ("Per Protocol" or "Acceptable Variation"). The 3D Method generated more fields scored "Per Protocol" than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored "Unacceptable Deviation" were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that our 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically generated treatment fields. These algorithms will be implemented for fully automated cervical treatment planning.

  2. Systemic Console: Advanced analysis of exoplanetary data

    NASA Astrophysics Data System (ADS)

    Meschiari, Stefano; Wolf, Aaron S.; Rivera, Eugenio; Laughlin, Gregory; Vogt, Steve; Butler, Paul

    2012-10-01

    Systemic Console is a tool for advanced analysis of exoplanetary data. It comprises a graphical tool for fitting radial velocity and transits datasets and a library of routines for non-interactive calculations. Among its features are interactive plotting of RV curves and transits, combined fitting of RV and transit timing (primary and secondary), interactive periodograms and FAP estimation, and bootstrap and MCMC error estimation. The console package includes public radial velocity and transit data.

  3. State-Space Modeling of Dynamic Psychological Processes via the Kalman Smoother Algorithm: Rationale, Finite Sample Properties, and Applications

    ERIC Educational Resources Information Center

    Song, Hairong; Ferrer, Emilio

    2009-01-01

    This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, via the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…

  4. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography

    PubMed Central

    Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A

    2012-01-01

    Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062

  5. Computer-Automated Evolution of Spacecraft X-Band Antennas

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Homby, Gregory S.; Linden, Derek S.

    2010-01-01

    A document discusses the use of computer-aided evolution in arriving at a design for X-band communication antennas for NASA's three Space Technology 5 (ST5) satellites, which were launched on March 22, 2006. Two evolutionary algorithms, incorporating different representations of the antenna design and different fitness functions, were used to automatically design and optimize an X-band antenna design. A set of antenna designs satisfying initial ST5 mission requirements was evolved by use of these algorithms. The two best antennas - one from each evolutionary algorithm - were built. During flight-qualification testing of these antennas, the mission requirements were changed. After minimal changes in the evolutionary algorithms - mostly in the fitness functions - new antenna designs satisfying the changed mission requirements were evolved, and within one month of this change, two new antennas were designed and prototypes were built and tested. One of these newly evolved antennas was approved for deployment on the ST5 mission, and flight-qualified versions of this design were built and installed on the spacecraft. At the time of writing the document, these antennas were the first computer-evolved hardware in outer space.

  6. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
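
    Step 4 above, fitting an ellipse to a group of crater-rim edge points, can be sketched as an algebraic least-squares conic fit. This is a simplified stand-in for the flight algorithm, not the mission code; the function name and the conic normalization are illustrative assumptions:

```python
import numpy as np

def fit_ellipse(x, y):
    """Algebraic least-squares fit of the conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to edge-point coordinates.
    Returns the coefficient vector [a, b, c, d, e]; a subsequent step
    would convert these to center, axes, and orientation."""
    D = np.column_stack([x**2, x * y, y**2, x, y])  # design matrix, one row per edge point
    coeffs, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coeffs
```

    For a circular crater rim of radius 2 centered at the origin, the fit recovers a = c = 0.25 and b = d = e = 0, i.e. x²/4 + y²/4 = 1.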

  7. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions over the course of evolution. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
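
    The non-dominated sorting idea described above — better fitness for solutions dominated by fewer others — can be sketched as follows. The function name is illustrative; full NSGA-style implementations additionally build fronts and apply crowding-distance tie-breaking:

```python
def domination_counts(objectives):
    """For each solution (a tuple of objectives to minimize, e.g.
    (flight_time, propellant_mass)), count how many other solutions
    dominate it. Pareto-front members have a count of 0 and would be
    assigned the best fitness rank."""
    n = len(objectives)
    counts = [0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # j dominates i: no worse in every objective, strictly better in one
            if all(a <= b for a, b in zip(objectives[j], objectives[i])) and \
               any(a < b for a, b in zip(objectives[j], objectives[i])):
                counts[i] += 1
    return counts
```

    For the five trade-off points [(1, 5), (2, 4), (3, 3), (4, 4), (5, 5)], the first three are mutually non-dominated (count 0), while (4, 4) and (5, 5) are dominated by 2 and 4 solutions, respectively.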

  8. An Algorithm for Protein Helix Assignment Using Helix Geometry

    PubMed Central

    Cao, Chen; Xu, Shutan; Wang, Lincong

    2015-01-01

    Helices are one of the most common and were among the earliest recognized secondary structure elements in proteins. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment programs have used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. The application to the protein structures in the PDB (Protein Data Bank) proves that the algorithm is able to assign accurately not only regular α-helices but also 3₁₀ and π helices, as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those produced by previous programs. The structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies structure-function relationships in proteins. PMID:26132394

  9. Binaural model-based dynamic-range compression.

    PubMed

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners were aided individually with the algorithms for the two experimental parts, respectively. A small preference towards the proposed model-based algorithm was found in the direct quality comparison. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
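
    The ILD cue that independent per-ear compression can distort is simply the level ratio between the two ear signals. A minimal illustration (not the evaluated algorithm):

```python
import numpy as np

def ild_db(left, right):
    """Broadband interaural level difference in dB: the RMS level of the
    left-ear signal relative to the right-ear signal. Independent
    compression applies different gains at the two ears and so changes
    this value; a binaurally linked compressor applies a common gain to
    preserve it. Illustrative only."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left) / rms(right))
```

    A left-ear signal at twice the right-ear amplitude gives an ILD of about 6 dB; applying the same gain to both ears leaves that number unchanged.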

  10. A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.

    PubMed

    Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang

    2016-12-01

    This paper is aimed at studying a clustering method for Chinese medicine (CM) medical cases. The traditional K-means clustering algorithm has shortcomings, such as dependence of the results on the selection of initial values and trapping in local optima, when processing prescriptions from CM medical cases. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm was proposed. This algorithm dynamically determines the iterations of the firefly algorithm and the sampling of the simulated annealing algorithm according to fitness changes, and increases the diversity of the swarm by expanding the scope of the sudden jump, thereby effectively avoiding premature convergence. The results from confirmatory experiments on CM medical cases suggest that, compared with the traditional K-means clustering algorithm, this method greatly improves individual diversity and the obtained clustering results; the computed results have reference value for cluster analysis of CM prescriptions.
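
    The attraction move at the core of the standard firefly algorithm, which the modified method builds on, can be sketched as follows. The simulated-annealing coupling and the widened sudden jump from the paper are not reproduced, and the parameter values are assumptions:

```python
import math
import random

def firefly_step(positions, fitness, beta0=1.0, gamma=1.0, alpha=0.2):
    """One iteration of the basic firefly algorithm (minimization): each
    firefly moves toward every brighter (better-fitness) firefly, with
    attractiveness beta decaying exponentially with squared distance,
    plus a small random jump scaled by alpha."""
    n, dim = len(positions), len(positions[0])
    new_pos = [list(p) for p in positions]
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:  # firefly j is brighter than i
                r2 = sum((positions[i][k] - positions[j][k]) ** 2 for k in range(dim))
                beta = beta0 * math.exp(-gamma * r2)  # attractiveness at distance r
                for k in range(dim):
                    new_pos[i][k] += beta * (positions[j][k] - positions[i][k]) \
                                     + alpha * (random.random() - 0.5)
    return new_pos
```

    With alpha = 0 and two fireflies at 0 and 1 (the one at 0 brighter), only the dimmer one moves, by a factor e⁻¹ of the separation.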

  11. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.

  12. A polarized low-coherence interferometry demodulation algorithm by recovering the absolute phase of a selected monochromatic frequency.

    PubMed

    Jiang, Junfeng; Wang, Shaohua; Liu, Tiegen; Liu, Kun; Yin, Jinde; Meng, Xiange; Zhang, Yimo; Wang, Shuang; Qin, Zunqi; Wu, Fan; Li, Dingjie

    2012-07-30

    A demodulation algorithm based on absolute phase recovery of a selected monochromatic frequency is proposed for an optical fiber Fabry-Perot pressure sensing system. The algorithm uses the Fourier transform to obtain the relative phase, and the intercept of the unwrapped phase-frequency linear fit to identify the interference order; these are then used to recover the absolute phase. A simplified mathematical model of the polarized low-coherence interference fringes was established to illustrate the principle of the proposed algorithm. Phase unwrapping and the selection of the monochromatic frequency were discussed in detail. A pressure measurement experiment was carried out to verify the effectiveness of the proposed algorithm. Results showed that the demodulation precision of the algorithm could reach 0.15 kPa, a 13-fold improvement over the phase-slope-based algorithm.
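
    The order-recovery idea — fit the unwrapped phase against frequency and read the interference order off the intercept — can be sketched on an idealized signal as follows. This is a simplified model of the principle, not the authors' implementation:

```python
import numpy as np

def absolute_phase(freqs, wrapped_phase, f_sel):
    """Recover the absolute phase at a selected frequency f_sel: unwrap
    the relative (wrapped) phase, fit the line phase = slope*f + intercept,
    identify the integer interference order from the intercept (ideally a
    multiple of 2*pi, since the true phase is linear through the origin),
    and correct the unwrapped phase at f_sel by that order."""
    phi = np.unwrap(wrapped_phase)
    slope, intercept = np.polyfit(freqs, phi, 1)
    order = round(intercept / (2 * np.pi))        # integer interference order
    phi_sel = np.interp(f_sel, freqs, phi)        # relative phase at f_sel
    return phi_sel - order * 2 * np.pi            # absolute phase at f_sel
```

    For a synthetic true phase of 10·f (in arbitrary frequency units), wrapping and then applying the function recovers the absolute value 15.0 at f = 1.5.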

  13. Evaluation of the operational Aerosol Layer Height retrieval algorithm for Sentinel-5 Precursor: application to O2 A band observations from GOME-2A

    NASA Astrophysics Data System (ADS)

    Sanders, A. F. J.; de Haan, J. F.; Sneep, M.; Apituley, A.; Stammes, P.; Vieitez, M. O.; Tilstra, L. G.; Tuinder, O. N. E.; Koning, C. E.; Veefkind, J. P.

    2015-06-01

    An algorithm setup for the operational Aerosol Layer Height product for TROPOMI on the Sentinel-5 Precursor mission is described and discussed, applied to GOME-2A data, and evaluated with lidar measurements. The algorithm makes a spectral fit of reflectance at the O2 A band in the near-infrared and the fit window runs from 758 to 770 nm. The aerosol profile is parameterized by a scattering layer with constant aerosol volume extinction coefficient and aerosol single scattering albedo and with a fixed pressure thickness. The algorithm's target parameter is the height of this layer. In this paper, we apply the algorithm to observations from GOME-2A in a number of systematic and extensive case studies and we compare retrieved aerosol layer heights with lidar measurements. Aerosol scenes cover various aerosol types, both elevated and boundary layer aerosols, and land and sea surfaces. The aerosol optical thicknesses for these scenes are relatively moderate. Retrieval experiments with GOME-2A spectra are used to investigate various sensitivities, in which particular attention is given to the role of the surface albedo. From retrieval simulations with the single-layer model, we learn that the surface albedo should be a fit parameter when retrieving aerosol layer height from the O2 A band. Current uncertainties in surface albedo climatologies cause biases and non-convergences when the surface albedo is fixed in the retrieval. Biases disappear and convergence improves when the surface albedo is fitted, while precision of retrieved aerosol layer pressure is still largely within requirement levels. Moreover, we show that fitting the surface albedo helps to ameliorate biases in retrieved aerosol layer height when the assumed aerosol model is inaccurate. Subsequent retrievals with GOME-2A spectra confirm that convergence is better when the surface albedo is retrieved simultaneously with aerosol parameters. 
However, retrieved aerosol layer pressures are systematically low (i.e., the layer is high in the atmosphere), to the extent that retrieved values no longer realistically represent actual extinction profiles. When the surface albedo is fixed in retrievals with GOME-2A spectra, convergence deteriorates as expected, but retrieved aerosol layer pressures become much higher (i.e., the layer is lower in the atmosphere). The comparison with lidar measurements indicates that retrieved aerosol layer heights are indeed representative of the underlying profile in that case. Finally, subsequent retrieval simulations with two-layer aerosol profiles show that a model error in the assumed profile (two layers in the simulation but only one in the retrieval) is partly absorbed by the surface albedo when this parameter is fitted. This is expected in view of the correlations between errors in fit parameters, and the effect is relatively small for elevated layers (less than 100 hPa). If one of the scattering layers is near the surface (boundary layer aerosols), the effect becomes surprisingly large, such that the retrieved height of the single layer is above the two-layer profile. Furthermore, we find that the retrieval solution, once retrieval converges, hardly depends on the starting values for the fit. Sensitivity experiments with GOME-2A spectra also show that aerosol layer height is indeed relatively robust against inaccuracies in the assumed aerosol model, even when the surface albedo is not fitted. We show spectral fit residuals, which can be used for further investigations. Fit residuals may be partly explained by spectroscopic uncertainties, which is suggested by an experiment showing the improvement of convergence when the absorption cross section is scaled in agreement with Butz et al. (2012) and Crisp et al. (2012) and a temperature offset to the a priori ECMWF temperature profile is fitted. 
Retrieved temperature offsets are always negative and quite large (ranging between -4 and -8 K), which is not expected if temperature offsets absorb remaining inaccuracies in meteorological data. Other sensitivity experiments investigate fitting of stray light and fluorescence emissions. We find negative radiance offsets and negative fluorescence emissions, also for non-vegetated areas, but from the results it is not clear whether fitting these parameters improves the retrieval. Based on the present results, the operational baseline for the Aerosol Layer Height product currently will not fit the surface albedo. The product will be particularly suited for elevated, optically thick aerosol layers. In addition to its scientific value in climate research, anticipated applications of the product for TROPOMI are providing aerosol height information for aviation safety and improving interpretation of the Absorbing Aerosol Index.

  14. Evaluation of the operational Aerosol Layer Height retrieval algorithm for Sentinel-5 Precursor: application to O2 A band observations from GOME-2A

    NASA Astrophysics Data System (ADS)

    Sanders, A. F. J.; de Haan, J. F.; Sneep, M.; Apituley, A.; Stammes, P.; Vieitez, M. O.; Tilstra, L. G.; Tuinder, O. N. E.; Koning, C. E.; Veefkind, J. P.

    2015-11-01

    An algorithm setup for the operational Aerosol Layer Height product for TROPOMI on the Sentinel-5 Precursor mission is described and discussed, applied to GOME-2A data, and evaluated with lidar measurements. The algorithm makes a spectral fit of reflectance at the O2 A band in the near-infrared and the fit window runs from 758 to 770 nm. The aerosol profile is parameterised by a scattering layer with constant aerosol volume extinction coefficient and aerosol single scattering albedo and with a fixed pressure thickness. The algorithm's target parameter is the height of this layer. In this paper, we apply the algorithm to observations from GOME-2A in a number of systematic and extensive case studies, and we compare retrieved aerosol layer heights with lidar measurements. Aerosol scenes cover various aerosol types, both elevated and boundary layer aerosols, and land and sea surfaces. The aerosol optical thicknesses for these scenes are relatively moderate. Retrieval experiments with GOME-2A spectra are used to investigate various sensitivities, in which particular attention is given to the role of the surface albedo. From retrieval simulations with the single-layer model, we learn that the surface albedo should be a fit parameter when retrieving aerosol layer height from the O2 A band. Current uncertainties in surface albedo climatologies cause biases and non-convergences when the surface albedo is fixed in the retrieval. Biases disappear and convergence improves when the surface albedo is fitted, while precision of retrieved aerosol layer pressure is still largely within requirement levels. Moreover, we show that fitting the surface albedo helps to ameliorate biases in retrieved aerosol layer height when the assumed aerosol model is inaccurate. Subsequent retrievals with GOME-2A spectra confirm that convergence is better when the surface albedo is retrieved simultaneously with aerosol parameters. 
However, retrieved aerosol layer pressures are systematically low (i.e., layer high in the atmosphere) to the extent that retrieved values no longer realistically represent actual extinction profiles. When the surface albedo is fixed in retrievals with GOME-2A spectra, convergence deteriorates as expected, but retrieved aerosol layer pressures become much higher (i.e., layer lower in atmosphere). The comparison with lidar measurements indicates that retrieved aerosol layer heights are indeed representative of the underlying profile in that case. Finally, subsequent retrieval simulations with two-layer aerosol profiles show that a model error in the assumed profile (two layers in the simulation but only one in the retrieval) is partly absorbed by the surface albedo when this parameter is fitted. This is expected in view of the correlations between errors in fit parameters and the effect is relatively small for elevated layers (less than 100 hPa). If one of the scattering layers is near the surface (boundary layer aerosols), the effect becomes surprisingly large, in such a way that the retrieved height of the single layer is above the two-layer profile. Furthermore, we find that the retrieval solution, once retrieval converges, hardly depends on the starting values for the fit. Sensitivity experiments with GOME-2A spectra also show that aerosol layer height is indeed relatively robust against inaccuracies in the assumed aerosol model, even when the surface albedo is not fitted. We show spectral fit residuals, which can be used for further investigations. Fit residuals may be partly explained by spectroscopic uncertainties, which is suggested by an experiment showing the improvement of convergence when the absorption cross section is scaled in agreement with Butz et al. (2013) and Crisp et al. (2012), and a temperature offset to the a priori ECMWF temperature profile is fitted. 
Retrieved temperature offsets are always negative and quite large (ranging between -4 and -8 K), which is not expected if temperature offsets absorb remaining inaccuracies in meteorological data. Other sensitivity experiments investigate fitting of stray light and fluorescence emissions. We find negative radiance offsets and negative fluorescence emissions, also for non-vegetated areas, but from the results it is not clear whether fitting these parameters improves the retrieval. Based on the present results, the operational baseline for the Aerosol Layer Height product currently will not fit the surface albedo. The product will be particularly suited for elevated, optically thick aerosol layers. In addition to its scientific value in climate research, anticipated applications of the product for TROPOMI are providing aerosol height information for aviation safety and improving interpretation of the Absorbing Aerosol Index.

  15. Particle Swarm Optimization With Interswarm Interactive Learning Strategy.

    PubMed

    Qin, Quande; Cheng, Shi; Zhang, Qingyu; Li, Li; Shi, Yuhui

    2016-10-01

    The learning strategy in the canonical particle swarm optimization (PSO) algorithm is often blamed for being the primary reason for loss of diversity. Population diversity maintenance is crucial for preventing particles from being stuck in local optima. In this paper, we present an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) that overcomes the drawbacks of the canonical PSO algorithm's learning strategy. IILPSO is inspired by the phenomenon in human society that interactive learning behavior takes place among different groups. Particles in IILPSO are divided into two swarms. The interswarm interactive learning (IIL) behavior is triggered when the best particle's fitness value of both swarms does not improve for a certain number of iterations. According to the best particle's fitness value of each swarm, the softmax method and roulette method are used to determine the roles of the two swarms as the learning swarm and the learned swarm. In addition, the velocity mutation operator and global best vibration strategy are used to improve the algorithm's global search capability. The IIL strategy is applied to PSO with global star and local ring structures, termed the IILPSO-G and IILPSO-L algorithms, respectively. Numerical experiments are conducted to compare the proposed algorithms with eight popular PSO variants. From the experimental results, IILPSO demonstrates good performance in terms of solution accuracy, convergence speed, and reliability. Finally, the variations of the population diversity over the entire search process explain why IILPSO performs effectively.
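
    The softmax role assignment between the two swarms can be sketched as a probability that one swarm acts as the learned (teacher) swarm, given each swarm's best fitness; the temperature parameter and exact fitness scaling here are assumptions, and the paper pairs this probability with a roulette draw:

```python
import math

def teacher_probability(best_fit_a, best_fit_b, temperature=1.0):
    """Softmax probability that swarm A is selected as the learned
    (teacher) swarm on a minimization problem: the lower (better) best
    fitness gets the higher probability. A roulette draw against this
    probability would then assign the learning/learned roles."""
    ea = math.exp(-best_fit_a / temperature)
    eb = math.exp(-best_fit_b / temperature)
    return ea / (ea + eb)
```

    Equal best fitnesses give each swarm probability 0.5 of being the teacher; a much better best fitness makes that swarm the near-certain teacher.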

  16. Clinical risk stratification model for advanced colorectal neoplasia in persons with negative fecal immunochemical test results.

    PubMed

    Jung, Yoon Suk; Park, Chan Hyuk; Kim, Nam Hee; Park, Jung Ho; Park, Dong Il; Sohn, Chong Il

    2018-01-01

    The fecal immunochemical test (FIT) has low sensitivity for detecting advanced colorectal neoplasia (ACRN); thus, a considerable portion of FIT-negative persons may have ACRN. We aimed to develop a risk-scoring model for predicting ACRN in FIT-negative persons. We reviewed the records of participants aged ≥40 years who underwent a colonoscopy and FIT during a health check-up. We developed a risk-scoring model for predicting ACRN in FIT-negative persons. Of 11,873 FIT-negative participants, 255 (2.1%) had ACRN. On the basis of the multivariable logistic regression model, point scores were assigned as follows among FIT-negative persons: age (per year from 40 years old), 1 point; current smoker, 10 points; overweight, 5 points; obese, 7 points; hypertension, 6 points; old cerebrovascular attack (CVA), 15 points. Although the proportion of ACRN in FIT-negative persons increased as risk scores increased (from 0.6% in the group with 0-4 points to 8.1% in the group with 35-39 points), it was significantly lower than that in FIT-positive persons (14.9%). However, there was no statistical difference between the proportion of ACRN in FIT-negative persons with ≥40 points and in FIT-positive persons (10.5% vs. 14.9%, P = 0.321). FIT-negative persons may need to undergo screening colonoscopy if they clinically have a high risk of ACRN. The scoring model based on age, smoking habits, overweight or obesity, hypertension, and old CVA may be useful in selecting and prioritizing FIT-negative persons for screening colonoscopy.
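
    The point-score model can be computed directly from the weights in the abstract; a sketch, in which the variable coding (e.g. the BMI category labels) is an assumption reconstructed from the abstract rather than the paper's exact specification:

```python
def acrn_risk_score(age, current_smoker, bmi_category, hypertension, old_cva):
    """Risk score for advanced colorectal neoplasia in a FIT-negative
    person, per the abstract's point assignments: 1 point per year of age
    from 40, 10 for current smoking, 5 if overweight or 7 if obese,
    6 for hypertension, 15 for old cerebrovascular attack (CVA).
    bmi_category is one of 'normal', 'overweight', 'obese'."""
    score = max(0, age - 40)                       # 1 point per year from age 40
    if current_smoker:
        score += 10
    score += {"normal": 0, "overweight": 5, "obese": 7}[bmi_category]
    if hypertension:
        score += 6
    if old_cva:
        score += 15
    return score
```

    For example, a 60-year-old obese current smoker with hypertension and no prior CVA scores 20 + 10 + 7 + 6 = 43 points, which falls in the ≥40-point group whose ACRN proportion was statistically comparable to FIT-positive persons.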

  17. An advanced analysis method of initial orbit determination with too short arc data

    NASA Astrophysics Data System (ADS)

    Li, Binzhe; Fang, Li

    2018-02-01

    This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations. As a result, classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method for initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to impose constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.

  18. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.

  19. Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada

    NASA Technical Reports Server (NTRS)

    Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea

    1992-01-01

    Mineral abundance maps of 18 minerals were made for the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground-calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral, and an unlimited number of minerals can be mapped simultaneously. Quality-of-fit and depth-from-continuum numbers for each mineral are calculated for each pixel, and the results are displayed as a multicolor image.
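
    The per-pixel least-squares fit of a scaled reference spectrum can be sketched for a single feature as follows. This is a simplified stand-in for MSFMA, not the published code: the shared continuum and the single-feature restriction are assumptions for illustration:

```python
import numpy as np

def feature_fit(pixel_spectrum, reference, continuum):
    """Least-squares fit of a scaled laboratory reference spectrum to one
    pixel spectrum: both are continuum-removed, the reference feature is
    scaled to the observed feature in the least-squares sense, and the
    fit quality (RMS residual) and band depth below the continuum are
    returned for mapping."""
    obs = pixel_spectrum / continuum - 1.0         # continuum-removed observation
    ref = reference / continuum - 1.0              # continuum-removed reference
    scale = np.dot(obs, ref) / np.dot(ref, ref)    # least-squares scale factor
    fitted = scale * ref
    rms = np.sqrt(np.mean((obs - fitted) ** 2))    # quality of fit
    depth = -fitted.min()                          # depth below the continuum
    return scale, rms, depth
```

    A pixel whose absorption band is exactly half as deep as the reference band yields a scale of 0.5, zero residual, and the pixel's band depth.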

  20. New-Generation NASA Aura Ozone Monitoring Instrument (OMI) Volcanic SO2 Dataset: Algorithm Description, Initial Results, and Continuation with the Suomi-NPP Ozone Mapping and Profiler Suite (OMPS)

    NASA Technical Reports Server (NTRS)

    Li, Can; Krotkov, Nickolay A.; Carn, Simon; Zhang, Yan; Spurr, Robert J. D.; Joiner, Joanna

    2017-01-01

    Since the fall of 2004, the Ozone Monitoring Instrument (OMI) has been providing global monitoring of volcanic SO2 emissions, helping to understand their climate impacts and to mitigate aviation hazards. Here we introduce a new-generation OMI volcanic SO2 dataset based on a principal component analysis (PCA) retrieval technique. To reduce retrieval noise and artifacts as seen in the current operational linear fit (LF) algorithm, the new algorithm, OMSO2VOLCANO, uses characteristic features extracted directly from OMI radiances in the spectral fitting, thereby helping to minimize interferences from various geophysical processes (e.g., O3 absorption) and measurement details (e.g., wavelength shift). To solve the problem of low bias for large SO2 total columns in the LF product, the OMSO2VOLCANO algorithm employs a table lookup approach to estimate SO2 Jacobians (i.e., the instrument sensitivity to a perturbation in the SO2 column amount) and iteratively adjusts the spectral fitting window to exclude shorter wavelengths where the SO2 absorption signals are saturated. To first order, the effects of clouds and aerosols are accounted for using a simple Lambertian equivalent reflectivity approach. As with the LF algorithm, OMSO2VOLCANO provides total column retrievals based on a set of predefined SO2 profiles from the lower troposphere to the lower stratosphere, including a new profile peaked at 13 km for plumes in the upper troposphere. Examples given in this study indicate that the new dataset shows significant improvement over the LF product, with at least 50% reduction in retrieval noise over the remote Pacific. For large eruptions such as Kasatochi in 2008 (approximately 1700 kt total SO2) and Sierra Negra in 2005 (greater than 1100 DU maximum SO2), OMSO2VOLCANO generally agrees well with other algorithms that also utilize the full spectral content of satellite measurements, while the LF algorithm tends to underestimate SO2. 
We also demonstrate that, despite the coarser spatial and spectral resolution of the Suomi National Polar-orbiting Partnership (Suomi-NPP) Ozone Mapping and Profiler Suite (OMPS) instrument, application of the new PCA algorithm to OMPS data produces highly consistent retrievals between OMI and OMPS. The new PCA algorithm is therefore capable of continuing the volcanic SO2 data record well into the future using current and future hyperspectral UV satellite instruments.

  1. Stay Fit as You Mature

    MedlinePlus

    ... For Reporters Meetings & Workshops Follow Us Home Health Information Weight Management Stay Fit as You Mature Related Topics Section ... at NIDDK Technology Advancement & Transfer Meetings & Workshops Health Information ... Disease Urologic Diseases Endocrine Diseases Diet & Nutrition ...

  2. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of fitting ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal regions projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieve relatively high precision and recall rates than the latest published algorithms.

  3. On the efficient and reliable numerical solution of rate-and-state friction problems

    NASA Astrophysics Data System (ADS)

    Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno

    2016-03-01

    We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.

  4. Photofragment image analysis using the Onion-Peeling Algorithm

    NASA Astrophysics Data System (ADS)

    Manzhos, Sergei; Loock, Hans-Peter

    2003-07-01

    With the growing popularity of the velocity map imaging technique, a need for the analysis of photoion and photoelectron images arose. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the Onion Peeling Algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters. Program summaryTitle of program: Glass Onion Catalogue identifier: ADRY Program Summary URL:http://cpc.cs.qub.ac.uk/summaries/ADRY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computer: IBM PC Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT Programming language used: Delphi 4.0 Memory required to execute with typical data: 18 Mwords No. of bits in a word: 32 No. of bytes in distributed program, including test data, etc.: 9 911 434 Distribution format: zip file Keywords: Photofragment image, onion peeling, anisotropy parameters Nature of physical problem: Information about velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process resides. Reconstructing the three-dimensional distribution from the photofragment image is the first step, further processing involving angular and radial integration of the inverted image to obtain velocity and angular distributions. 
Provisions have to be made to correct for slight distortions of the image, and to verify the accuracy of the analysis process. Method of solution: The "Onion Peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre Polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion. Restrictions on the complexity of the problem: The maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and the centering algorithm to converge. Peaks on the velocity profile separated by less then the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed but the actual inversion algorithm is stable to distortions of such symmetry in experimental images. Typical running time: The analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approx. as R3, where R is the radius of the region of interest: for R=200 pixels it is less than a minute, for R=400 pixels less then 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion. 
It ranges between less then a second for simple curves and a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R2 and depends on the curve profile. It is on the order of a few minutes for images with R=500 pixels. Unusual features of the program: Our centering and image correction algorithm is based on Fourier analysis of the radial distribution to insure the sharpest velocity profile and is insensitive to an uneven intensity distribution. There exists an angular averaging option to stabilize the inversion algorithm and not to loose the resolution at the same time.

  5. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unique from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption functions to terminate simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model preemption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations as these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems from multi-core desktops to a supercomputer system) and package these for future modellers within a model-independent calibration software package called Ostrich as well as MATLAB versions. 
Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration and in many cases linear or near linear speedups are observed.

  6. Deterministic Compressed Sensing

    DTIC Science & Technology

    2011-11-01

    of the algorithm can be derived by using the Bregman divergence based on the Kullback - Leibler function, and an additive update...regularized goodness - of - fit objective function. In contrast to many CS approaches, however, we measure the fit of an esti- mate to the data using the...sensing is information theoretically possible using any (2k, )-RIP sensing matrix . The following celebrated results of Candès, Romberg and Tao

  7. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day with each detection, followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Flight Center and tested against current data.

  8. Status Report on the First Round of the Development of the Advanced Encryption Standard

    PubMed Central

    Nechvatal, James; Barker, Elaine; Dodson, Donna; Dworkin, Morris; Foti, James; Roback, Edward

    1999-01-01

    In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST’s statutory responsibilities. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST has reviewed the results of this research and selected five algorithms (MARS, RC6™, Rijndael, Serpent and Twofish) as finalists. The research results and rationale for the selection of the finalists are documented in this report. The five finalists will be the subject of further study before the selection of one or more of these algorithms for inclusion in the Advanced Encryption Standard.

  9. Determination of the Maximum Temperature in a Non-Uniform Hot Zone by Line-of-Site Absorption Spectroscopy with a Single Diode Laser.

    PubMed

    Liger, Vladimir V; Mironenko, Vladimir R; Kuritsyn, Yurii A; Bolshov, Mikhail A

    2018-05-17

    A new algorithm for the estimation of the maximum temperature in a non-uniform hot zone by a sensor based on absorption spectrometry with a diode laser is developed. The algorithm is based on the fitting of the absorption spectrum with a test molecule in a non-uniform zone by linear combination of two single temperature spectra simulated using spectroscopic databases. The proposed algorithm allows one to better estimate the maximum temperature of a non-uniform zone and can be useful if only the maximum temperature rather than a precise temperature profile is of primary interest. The efficiency and specificity of the algorithm are demonstrated in numerical experiments and experimentally proven using an optical cell with two sections. Temperatures and water vapor concentrations could be independently regulated in both sections. The best fitting was found using a correlation technique. A distributed feedback (DFB) diode laser in the spectral range around 1.343 µm was used in the experiments. Because of the significant differences between the temperature dependences of the experimental and theoretical absorption spectra in the temperature range 300⁻1200 K, a database was constructed using experimentally detected single temperature spectra. Using the developed algorithm the maximum temperature in the two-section cell was estimated with accuracy better than 30 K.

  10. Function approximation and documentation of sampling data using artificial neural networks.

    PubMed

    Zhang, Wenjun; Barrion, Albert

    2006-11-01

    Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study is conducted to make function approximation on sampling data and to document the sampling information using artificial neural network algorithms, based on the invertebrate data sampled in the irrigated rice field. Three types of sampling data, i.e., the curve species richness vs. the sample size, the curve rarefaction, and the curve mean abundance of newly sampled species vs.the sample size, are fitted and documented using BP (Backpropagation) network and RBF (Radial Basis Function) network. As the comparisons, The Arrhenius model, and rarefaction model, and power function are tested for their ability to fit these data. The results show that the BP network and RBF network fit the data better than these models with smaller errors. BP network and RBF network can fit non-linear functions (sampling data) with specified accuracy and don't require mathematical assumptions. In addition to the interpolation, BP network is used to extrapolate the functions and the asymptote of the sampling data can be drawn. BP network cost a longer time to train the network and the results are always less stable compared to the RBF network. RBF network require more neurons to fit functions and generally it may not be used to extrapolate the functions. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total numbers of functional species of invertebrates in the tropical irrigated rice field are extrapolated as 140 to 149 using trained BP network, which are similar to the observed richness.

  11. Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)

    NASA Astrophysics Data System (ADS)

    Li, X. R.; Wang, X.

    2016-03-01

    When using the genetic algorithm to solve the problem of too-short-arc (TSA) determination, due to the difference of computing processes between the genetic algorithm and classical method, the methods for outliers editing are no longer applicable. In the genetic algorithm, the robust estimation is acquired by means of using different loss functions in the fitness function, then the outlier problem of TSAs is solved. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. Through the comparison of results of different loss functions, it is clear that the methods of least median square and least trimmed square can greatly improve the robustness of TSAs, and have a high breakdown point.

  12. Numerical simulation of a relaxation test designed to fit a quasi-linear viscoelastic model for temporomandibular joint discs.

    PubMed

    Commisso, Maria S; Martínez-Reina, Javier; Mayo, Juana; Domínguez, Jaime

    2013-02-01

    The main objectives of this work are: (a) to introduce an algorithm for adjusting the quasi-linear viscoelastic model to fit a material using a stress relaxation test and (b) to validate a protocol for performing such tests in temporomandibular joint discs. This algorithm is intended for fitting the Prony series coefficients and the hyperelastic constants of the quasi-linear viscoelastic model by considering that the relaxation test is performed with an initial ramp loading at a certain rate. This algorithm was validated before being applied to achieve the second objective. Generally, the complete three-dimensional formulation of the quasi-linear viscoelastic model is very complex. Therefore, it is necessary to design an experimental test to ensure a simple stress state, such as uniaxial compression to facilitate obtaining the viscoelastic properties. This work provides some recommendations about the experimental setup, which are important to follow, as an inadequate setup could produce a stress state far from uniaxial, thus, distorting the material constants determined from the experiment. The test considered is a stress relaxation test using unconfined compression performed in cylindrical specimens extracted from temporomandibular joint discs. To validate the experimental protocol, the test was numerically simulated using finite-element modelling. The disc was arbitrarily assigned a set of quasi-linear viscoelastic constants (c1) in the finite-element model. Another set of constants (c2) was obtained by fitting the results of the simulated test with the proposed algorithm. The deviation of constants c2 from constants c1 measures how far the stresses are from the uniaxial state. 
The effects of the following features of the experimental setup on this deviation have been analysed: (a) the friction coefficient between the compression plates and the specimen (which should be as low as possible); (b) the portion of the specimen glued to the compression plates (smaller areas glued are better); and (c) the variation in the thickness of the specimen. The specimen's faces should be parallel to ensure a uniaxial stress state. However, this is not possible in real specimens, and a criterion must be defined to accept the specimen in terms of the specimen's thickness variation and the deviation of the fitted constants arising from such a variation.

  13. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aim at implementing a newly developed method in the field of machine learning: the Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using SVM as a fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933

  14. Heat Transfer Search Algorithm for Non-convex Economic Dispatch Problems

    NASA Astrophysics Data System (ADS)

    Hazra, Abhik; Das, Saborni; Basu, Mousumi

    2018-06-01

    This paper presents Heat Transfer Search (HTS) algorithm for the non-linear economic dispatch problem. HTS algorithm is based on the law of thermodynamics and heat transfer. The proficiency of the suggested technique has been disclosed on three dissimilar complicated economic dispatch problems with valve point effect; prohibited operating zone; and multiple fuels with valve point effect. Test results acquired from the suggested technique for the economic dispatch problem have been fitted to that acquired from other stated evolutionary techniques. It has been observed that the suggested HTS carry out superior solutions.

  15. Heat Transfer Search Algorithm for Non-convex Economic Dispatch Problems

    NASA Astrophysics Data System (ADS)

    Hazra, Abhik; Das, Saborni; Basu, Mousumi

    2018-03-01

    This paper presents Heat Transfer Search (HTS) algorithm for the non-linear economic dispatch problem. HTS algorithm is based on the law of thermodynamics and heat transfer. The proficiency of the suggested technique has been disclosed on three dissimilar complicated economic dispatch problems with valve point effect; prohibited operating zone; and multiple fuels with valve point effect. Test results acquired from the suggested technique for the economic dispatch problem have been fitted to that acquired from other stated evolutionary techniques. It has been observed that the suggested HTS carry out superior solutions.

  16. Solar collector parameter identification from unsteady data by a discrete-gradient optimization algorithm

    NASA Technical Reports Server (NTRS)

    Hotchkiss, G. B.; Burmeister, L. C.; Bishop, K. A.

    1980-01-01

    A discrete-gradient optimization algorithm is used to identify the parameters in a one-node and a two-node capacitance model of a flat-plate collector. Collector parameters are first obtained by a linear-least-squares fit to steady state data. These parameters, together with the collector heat capacitances, are then determined from unsteady data by use of the discrete-gradient optimization algorithm with less than 10 percent deviation from the steady state determination. All data were obtained in the indoor solar simulator at the NASA Lewis Research Center.

  17. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving time-optimal trajectory planning problem of underactuated spacecrafts. At the beginning phase of the searching process, an initialization generator is constructed by the PSO algorithm due to its strong global searching ability and robustness to random initial values, however, PSO algorithm has a disadvantage that its convergence rate around the global optimum is slow. Then, when the change in fitness function is smaller than a predefined value, the searching algorithm is switched to the LPM to accelerate the searching process. Thus, with the obtained solutions by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. 200 Monte Carlo simulations results demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than both single PSO algorithm and LPM algorithm. Moreover, the PSO-LPM algorithm is also robust to random initial values.

  18. DCL System Research Using Advanced Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    recognition. Algorithm design and statistical analysis and feature analysis. Post -Doctoral Associate, Cornell University, Bioacoustics Research...short. The HPC-ADA was designed based on fielded systems [1-4, 6] that offer a variety of desirable attributes, specifically dynamic resource...The software package was designed to utilize parallel and distributed processing for running recognition and other advanced algorithms. DeLMA

  19. Incorporating partial shining effects in proton pencil-beam dose calculation

    NASA Astrophysics Data System (ADS)

    Li, Yupeng; Zhang, Xiaodong; Fwu Lii, Ming; Sahoo, Narayan; Zhu, Ron X.; Gillin, Michael; Mohan, Radhe

    2008-02-01

    A range modulator wheel (RMW) is an essential component in passively scattered proton therapy. We have observed that a proton beam spot may shine on multiple steps of the RMW. Proton dose calculation algorithms normally do not consider the partial shining effect, and thus overestimate the dose at the proximal shoulder of spread-out Bragg peak (SOBP) compared with the measurement. If the SOBP is adjusted to better fit the plateau region, the entrance dose is likely to be underestimated. In this work, we developed an algorithm that can be used to model this effect and to allow for dose calculations that better fit the measured SOBP. First, a set of apparent modulator weights was calculated without considering partial shining. Next, protons spilled from the accelerator reaching the modulator wheel were simplified as a circular spot of uniform intensity. A weight-splitting process was then performed to generate a set of effective modulator weights with the partial shining effect incorporated. The SOBPs of eight options, which are used to label different combinations of proton-beam energy and scattering devices, were calculated with the generated effective weights. Our algorithm fitted the measured SOBP at the proximal and entrance regions much better than the ones without considering partial shining effect for all SOBPs of the eight options. In a prostate patient, we found that dose calculation without considering partial shining effect underestimated the femoral head and skin dose.

  20. 2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT

    NASA Astrophysics Data System (ADS)

    Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.

    2018-01-01

    We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gecow, Andrzej

    On the way to simulating adaptive evolution of complex system describing a living object or human developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs which make it difficult to define a fitness. The main statistical effects of adaptive condition are the result of small change tendency and to appear, they only need a statistically correct size of damage initiated by evolutionary change of system. This observation allows to cut loops of feedbacks and in effect to obtain a particular statistically correct state instead ofmore » a long circular attractor which in the quenched model is expected for chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes and only once. Such an algorithm is optimal for investigation of damage spreading i.e. statistical connections of structural parameters of initial change with the size of effected damage. It is a reversed-annealed method--function and states (signals) may be randomly substituted but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted in comparison to Derrida annealed approximation which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.« less

  2. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects inmore » all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.« less

  3. Advanced algorithms for the identification of mixtures using condensed-phase FT-IR spectroscopy

    NASA Astrophysics Data System (ADS)

    Arnó, Josep; Andersson, Greger; Levy, Dustin; Tomczyk, Carol; Zou, Peng; Zuidema, Eric

    2011-06-01

    FT-IR spectroscopy is the technology of choice to identify solid and liquid phase unknown samples. Advances in instrument portability have made possible the use of FT-IR spectroscopy in emergency response and military field applications. The samples collected in those harsh environments are rarely pure and typically contain multiple chemical species in water, sand, or inorganic matrices. In such critical applications, it is also desired that in addition to broad chemical identification, the user is warned immediately if the sample contains a threat or target class material (i.e. biological, narcotic, explosive). The next generation HazMatID 360 combines the ruggedized design and functionality of the current HazMatID with advanced mixture analysis algorithms. The advanced FT-IR instrument allows effective chemical assessment of samples that may contain one or more interfering materials like water or dirt. The algorithm was the result of years of cumulative experience based on thousands of real-life spectra sent to our ReachBack spectral analysis service by customers in the field. The HazMatID 360 combines mixture analysis with threat detection and chemical hazard classification capabilities to provide, in record time, crucial information to the user. This paper will provide an overview of the software and algorithm enhancements, in addition to examples of improved performance in mixture identification.

  4. Advancing from Preparation to Leadership Positions: The Influence of Time, Institution & Demographics. Implications from UCEA

    ERIC Educational Resources Information Center

    Terry Orr, Margaret; Young, Michelle D.; Fuller, Edward J.

    2008-01-01

    Differences in career advancement rates among aspiring leaders and their programs provide useful frameworks for understanding both program influence and advancement challenges. These differences suggest program, district and state interventions and follow up support to improve the fit and advancement of graduates into the leadership field. This…

  5. Measuring, Understanding, and Responding to Covert Social Networks: Passive and Active Tomography

    DTIC Science & Technology

    2017-11-11

    practical algorithms for sociologically principled detection of small sub-networks. To detect “foreground” networks, we need two competing models...understanding of how to model “background” network clutter, leading to principled approaches to “foreground” sub-network detection. Before the MURI...no frameworks existed for network detection theory or goodness-of-fit, nor were models and algorithms coupled to sound sociological principles

  6. A plant cell division algorithm based on cell biomechanics and ellipse-fitting

    PubMed Central

    Abera, Metadel K.; Verboven, Pieter; Defraeye, Thijs; Fanta, Solomon Workneh; Hertog, Maarten L. A. T. M.; Carmeliet, Jan; Nicolai, Bart M.

    2014-01-01

    Background and Aims The importance of cell division models in cellular pattern studies has been acknowledged since the 19th century. Most of the available models developed to date are limited to symmetric cell division with isotropic growth. Often, the actual growth of the cell wall is either not considered or is updated intermittently on a separate time scale to the mechanics. This study presents a generic algorithm that accounts for both symmetrically and asymmetrically dividing cells with isotropic and anisotropic growth. Actual growth of the cell wall is simulated simultaneously with the mechanics. Methods The cell is considered as a closed, thin-walled structure, maintained in tension by turgor pressure. The cell walls are represented as linear elastic elements that obey Hooke's law. Cell expansion is induced by turgor pressure acting on the yielding cell-wall material. A system of differential equations for the positions and velocities of the cell vertices as well as for the actual growth of the cell wall is established. Readiness to divide is determined based on cell size. An ellipse-fitting algorithm is used to determine the position and orientation of the dividing wall. The cell vertices, walls and cell connectivity are then updated and cell expansion resumes. Comparisons are made with experimental data from the literature. Key Results The generic plant cell division algorithm has been implemented successfully. It can handle both symmetrically and asymmetrically dividing cells coupled with isotropic and anisotropic growth modes. Development of the algorithm highlighted the importance of ellipse-fitting to produce randomness (biological variability) even in symmetrically dividing cells. Unlike previous models, a differential equation is formulated for the resting length of the cell wall to simulate actual biological growth and is solved simultaneously with the position and velocity of the vertices. 
Conclusions The algorithm presented can produce different tissues varying in topological and geometrical properties. This flexibility to produce different tissue types gives the model great potential for use in investigations of plant cell division and growth in silico. PMID:24863687
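The ellipse-fitting step described above can be sketched with a second-moment fit to the cell outline; the hexagonal "cell" below is a made-up example, and taking the minor axis as the new wall's orientation is one common shortest-wall convention rather than the paper's exact rule:

```python
import numpy as np

def division_plane(vertices):
    """Fit an ellipse to a cell outline via second moments and return
    the centroid plus the direction of the shortest (minor) axis,
    a common choice for the orientation of the new dividing wall."""
    pts = np.asarray(vertices, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)   # 2x2 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    minor_axis = eigvecs[:, 0]               # direction of least spread
    return centroid, minor_axis

# Elongated hexagonal "cell": longest extent along x, so the minor
# axis (and hence the new wall) should point along y.
cell = [(-2, 0), (-1, 1), (1, 1), (2, 0), (1, -1), (-1, -1)]
c, axis = division_plane(cell)
print(c, axis)
```

In the paper the fit also injects the biological variability noted in the Key Results; this noiseless version only illustrates the geometry.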

  7. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations (e.g...Park, NC 27709-2211 15. SUBJECT TERMS Quantum algorithms, quantum computing, fault-tolerant error correction Richard Cleve MITACS East Academic...0511200 Algebraic results on quantum automata A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, D. Thérien Theory of Computing Systems 39(2006

  8. Confirming the timing of phase-based costing in oncology studies: a case example in advanced melanoma.

    PubMed

    Atkins, Michael; Coutinho, Anna D; Nunna, Sasikiran; Gupte-Singh, Komal; Eaddy, Michael

    2018-02-01

    The utilization of healthcare services and costs among patients with cancer is often estimated by the phase of care: initial, interim, or terminal. Although their durations are often set arbitrarily, we sought to establish data-driven phases of care using joinpoint regression in an advanced melanoma population as a case example. A retrospective claims database study was conducted to assess the costs of advanced melanoma from distant metastasis diagnosis to death during January 2010-September 2014. Joinpoint regression analysis was applied to identify the best-fitting points, where statistically significant changes in the trend of average monthly costs occurred. To identify the initial phase, average monthly costs were modeled from metastasis diagnosis to death; and were modeled backward from death to metastasis diagnosis for the terminal phase. Points of monthly cost trend inflection denoted ending and starting points. The months between represented the interim phase. A total of 1,671 patients with advanced melanoma who died met the eligibility criteria. Initial phase was identified as the 5-month period starting with diagnosis of metastasis, after which there was a sharp, significant decline in monthly cost trend (monthly percent change [MPC] = -13.0%; 95% CI = -16.9% to -8.8%). Terminal phase was defined as the 5-month period before death (MPC = -14.0%; 95% CI = -17.6% to -10.2%). The claims-based algorithm may under-estimate patients due to misclassifications, and may over-estimate terminal phase costs because hospital and emergency visits were used as a death proxy. Also, recently approved therapies were not included, which may under-estimate advanced melanoma costs. In this advanced melanoma population, optimal duration of the initial and terminal phases of care was 5 months immediately after diagnosis of metastasis and before death, respectively. 
Joinpoint regression can be used to provide data-supported phase of cancer care durations, but should be combined with clinical judgement.
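A minimal stand-in for the joinpoint analysis described above — a brute-force single-breakpoint search rather than the full permutation-test procedure — applied to synthetic monthly costs:

```python
import numpy as np

def best_joinpoint(months, costs):
    """Try each interior month as a breakpoint, fit a separate line to
    each segment, and keep the split with the lowest total squared
    error (a simplified sketch of joinpoint regression)."""
    best = (None, np.inf)
    for k in range(2, len(months) - 2):
        sse = 0.0
        for seg_x, seg_y in ((months[:k], costs[:k]), (months[k:], costs[k:])):
            coef = np.polyfit(seg_x, seg_y, 1)
            sse += np.sum((np.polyval(coef, seg_x) - seg_y) ** 2)
        if sse < best[1]:
            best = (k, sse)
    return best[0]

# Synthetic monthly costs: a steep decline for 5 months after
# metastasis diagnosis, then a flat interim phase.
months = np.arange(24, dtype=float)
costs = np.where(months < 5, 50_000 - 8_000 * months, 12_000.0)
print(best_joinpoint(months, costs))  # 5
```

The study's actual analysis also models the terminal phase by running the same search backward from death.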

  9. NOTE: A BPF-type algorithm for CT with a curved PI detector

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-01

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  10. A BPF-type algorithm for CT with a curved PI detector.

    PubMed

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-21

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  11. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    NASA Astrophysics Data System (ADS)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, and completely ignore linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm chains a series of steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
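The final Hough step can be illustrated with a minimal vote accumulator; this sketch omits the paper's object-removal and line-enhancement stages and runs on a synthetic binary image:

```python
import numpy as np

def hough_peak(binary, n_theta=180):
    """Minimal Hough transform: every foreground pixel votes for all
    lines rho = x*cos(theta) + y*sin(theta) through it; the accumulator
    maximum gives the dominant line's (rho, theta)."""
    h, w = binary.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, np.rad2deg(thetas[theta_idx])

# A horizontal "trail" at row y = 20 across a 50x50 image.
img = np.zeros((50, 50), dtype=bool)
img[20, 5:45] = True
rho, theta_deg = hough_peak(img)
print(rho, theta_deg)  # rho = 20, theta ≈ 90°
```

For a horizontal line, theta = 90° makes rho = y, so all trail pixels vote for the same bin, which is why the accumulator peak is unambiguous.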

  12. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  13. Advancing the Physical Activity Curriculum at the Collegiate Level

    ERIC Educational Resources Information Center

    Warren, Bradley; Odenheimer Brin, Eleanor

    2017-01-01

    Purpose: The purpose of this paper is to assess college students' pre- and post- health-related fitness levels, as determined by the American College of Sports Medicine's (ACSM) five components of fitness, in a one-credit, graded college course and to objectively measure any differences between those pre- and post- health-related fitness levels.…

  14. Intelligent scheduling of execution for customized physical fitness and healthcare system.

    PubMed

    Huang, Chung-Chi; Liu, Hsiao-Man; Huang, Chung-Lin

    2015-01-01

    The physical fitness and health of white-collar workers has been deteriorating in recent years, so it is necessary to develop a system that can enhance physical fitness and health. Although an exercise prescription can be generated after diagnosis in a customized physical fitness and healthcare system, general scheduling makes it hard to meet individual execution needs. The main purpose of this research is therefore to develop intelligent scheduling of execution for a customized physical fitness and healthcare system. The results of diagnosis and prescription for the customized physical fitness and healthcare system are generated by fuzzy logic inference, and are then scheduled and executed by intelligent computing. The schedule of execution is generated using a genetic algorithm, which improves on traditional scheduling of exercise prescriptions for physical fitness and healthcare. Finally, we demonstrate the advantages of intelligent scheduling of execution for the customized physical fitness and healthcare system.

  15. Reassessing Google Flu Trends Data for Detection of Seasonal and Pandemic Influenza: A Comparative Epidemiological Study at Three Geographic Scales

    PubMed Central

    Olson, Donald R.; Konty, Kevin J.; Paladini, Marc; Viboud, Cecile; Simonsen, Lone

    2013-01-01

    The goal of influenza-like illness (ILI) surveillance is to determine the timing, location and magnitude of outbreaks by monitoring the frequency and progression of clinical case incidence. Advances in computational and information technology have allowed for automated collection of higher volumes of electronic data and more timely analyses than previously possible. Novel surveillance systems, including those based on internet search query data like Google Flu Trends (GFT), are being used as surrogates for clinically-based reporting of influenza-like-illness (ILI). We investigated the reliability of GFT during the last decade (2003 to 2013), and compared weekly public health surveillance with search query data to characterize the timing and intensity of seasonal and pandemic influenza at the national (United States), regional (Mid-Atlantic) and local (New York City) levels. We identified substantial flaws in the original and updated GFT models at all three geographic scales, including completely missing the first wave of the 2009 influenza A/H1N1 pandemic, and greatly overestimating the intensity of the A/H3N2 epidemic during the 2012/2013 season. These results were obtained for both the original (2008) and the updated (2009) GFT algorithms. The performance of both models was problematic, perhaps because of changes in internet search behavior and differences in the seasonality, geographical heterogeneity and age-distribution of the epidemics between the periods of GFT model-fitting and prospective use. We conclude that GFT data may not provide reliable surveillance for seasonal or pandemic influenza and should be interpreted with caution until the algorithm can be improved and evaluated. Current internet search query data are no substitute for timely local clinical and laboratory surveillance, or national surveillance based on local data collection. 
New generation surveillance systems such as GFT should incorporate the use of near-real time electronic health data and computational methods for continued model-fitting and ongoing evaluation and improvement. PMID:24146603

  16. Reassessing Google Flu Trends data for detection of seasonal and pandemic influenza: a comparative epidemiological study at three geographic scales.

    PubMed

    Olson, Donald R; Konty, Kevin J; Paladini, Marc; Viboud, Cecile; Simonsen, Lone

    2013-01-01

    The goal of influenza-like illness (ILI) surveillance is to determine the timing, location and magnitude of outbreaks by monitoring the frequency and progression of clinical case incidence. Advances in computational and information technology have allowed for automated collection of higher volumes of electronic data and more timely analyses than previously possible. Novel surveillance systems, including those based on internet search query data like Google Flu Trends (GFT), are being used as surrogates for clinically-based reporting of influenza-like-illness (ILI). We investigated the reliability of GFT during the last decade (2003 to 2013), and compared weekly public health surveillance with search query data to characterize the timing and intensity of seasonal and pandemic influenza at the national (United States), regional (Mid-Atlantic) and local (New York City) levels. We identified substantial flaws in the original and updated GFT models at all three geographic scales, including completely missing the first wave of the 2009 influenza A/H1N1 pandemic, and greatly overestimating the intensity of the A/H3N2 epidemic during the 2012/2013 season. These results were obtained for both the original (2008) and the updated (2009) GFT algorithms. The performance of both models was problematic, perhaps because of changes in internet search behavior and differences in the seasonality, geographical heterogeneity and age-distribution of the epidemics between the periods of GFT model-fitting and prospective use. We conclude that GFT data may not provide reliable surveillance for seasonal or pandemic influenza and should be interpreted with caution until the algorithm can be improved and evaluated. Current internet search query data are no substitute for timely local clinical and laboratory surveillance, or national surveillance based on local data collection. 
New generation surveillance systems such as GFT should incorporate the use of near-real time electronic health data and computational methods for continued model-fitting and ongoing evaluation and improvement.

  17. Reconstructing liver shape and position from MR image slices using an active shape model

    NASA Astrophysics Data System (ADS)

    Fenchel, Matthias; Thesen, Stefan; Schilling, Andreas

    2008-03-01

    We present an algorithm for fully automatic reconstruction of 3D position, orientation and shape of the human liver from a sparsely covering set of n 2D MR slice images. Reconstructing the shape of an organ from slice images can be used for scan planning, for surgical planning or other purposes where 3D anatomical knowledge has to be inferred from sparse slices. The algorithm is based on adapting an active shape model of the liver surface to a given set of slice images. The active shape model is created from a training set of liver segmentations from a group of volunteers. The training set is set up with semi-manual segmentations of T1-weighted volumetric MR images. Searching for the optimal shape model that best fits the image data is done by maximizing a similarity measure based on local appearance at the surface. Two different algorithms for the active shape model search are proposed and compared: both algorithms seek to maximize the a posteriori probability of the grey level appearance around the surface while constraining the surface to the space of valid shapes. The first algorithm works by using grey value profile statistics in the normal direction. The second algorithm uses average and variance images to calculate the local surface appearance on the fly. Both algorithms are validated by fitting the active shape model to abdominal 2D slice images and comparing the reconstructed shapes to the manual segmentations and to the results of active shape model searches from 3D image data. The results turn out to be promising and competitive with active shape model segmentations from 3D data.

  18. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement technique plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) have been taken as the fitness function, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method is proposed instead of the binary coding method to code the solution. A forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those gained by the existing genetic algorithm using the binary coding method. Further, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantage of the different fitness functions. The results showed that the innovations in the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, the three optimal sensor placement methods can all provide reliable results and identify the vibration characteristics of the 12-bay plain truss model accurately.
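A toy version of a decimal (integer-index) coded genetic algorithm with a MAC-based fitness function; the mode shapes are random stand-ins, and the selection, crossover, and mutation operators are simplified sketches of the paper's operators, not its exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mode shape matrix: 30 candidate DOFs x 4 modes.
phi = rng.normal(size=(30, 4))
n_sensors, pop_size, n_gen = 8, 40, 60

def fitness(dofs):
    """Smaller max off-diagonal MAC between modes, restricted to the
    chosen DOFs, keeps the modes distinguishable -> fitter placement."""
    p = phi[dofs]
    mac = (p.T @ p) ** 2 / np.outer(np.sum(p**2, 0), np.sum(p**2, 0))
    off = mac - np.diag(np.diag(mac))
    return -off.max()

# Decimal coding: a chromosome is a set of distinct DOF indices
# rather than a binary mask over all candidate locations.
pop = [rng.choice(30, n_sensors, replace=False) for _ in range(pop_size)]
for _ in range(n_gen):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: pop_size // 2]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        union = np.unique(np.concatenate([parents[a], parents[b]]))
        child = rng.choice(union, n_sensors, replace=False)  # crossover
        if rng.random() < 0.2:                               # mutation
            child[rng.integers(n_sensors)] = rng.integers(30)
            child = np.unique(child)
            while len(child) < n_sensors:  # repair duplicated genes
                child = np.unique(np.append(child, rng.integers(30)))
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
print(sorted(best), fitness(best))
```

The duplicate-repair step plays the role the paper assigns to forced mutation: it guarantees every chromosome keeps the required number of distinct sensor locations.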

  19. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855

  20. Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei

    Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA was tested in different land cover type plots and showed improved accuracy in ground detection for the vegetation plots and similar accuracy in developed area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by deducting a Gaussian fitting function from the raw waveform. After the deduction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. 
They incorporated both waveform intensity, represented by the area under a Gaussian function, and its associated height, given by the centroid of the Gaussian function. By considering signal reflection from different vegetation layers, the developed metrics obtained better estimation accuracy for aboveground biomass than existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, this dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods and metrics.
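The Gaussian-decomposition idea underlying these metrics can be sketched on a synthetic two-return waveform; the component count, parameters, and the latest-peak-is-ground rule are illustrative assumptions, not the dissertation's FICA or PCF algorithms:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

# Synthetic waveform: a broad canopy return early in time and a
# narrower ground return late in time, plus detector noise.
t = np.linspace(0, 100, 400)
truth = two_gaussians(t, 0.8, 35, 6, 0.5, 70, 3)
rng = np.random.default_rng(2)
wave = truth + rng.normal(0, 0.01, t.shape)

# Gaussian decomposition: fit the mixture, then take the
# latest-arriving component as the ground peak (largest time of
# flight corresponds to the lowest elevation).
p0 = [1, 30, 5, 1, 65, 5]           # rough initial guesses
params, _ = curve_fit(two_gaussians, t, wave, p0=p0)
ground_time = max(params[1], params[4])
print(round(ground_time, 1))  # ≈ 70.0
```

The fitted amplitude-area and centroid of each component are exactly the kinds of quantities the new biomass metrics combine.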

  1. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  2. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware.

    PubMed

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
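One low-cost alternative to iterative non-linear least squares, consistent with the Gaussian approximation discussed above, is a quadratic fit to the log intensities around the maximum pixel: the log of a Gaussian is a parabola, so the parabola's vertex gives the Bragg wavelength at sub-pixel resolution. The wavelength grid and peak below are hypothetical:

```python
import numpy as np

def gaussian_peak_center(wavelengths, intensities, half_window=3):
    """Sub-pixel peak localization under the Gaussian approximation:
    fit a parabola to log intensities around the maximum pixel and
    return the vertex (estimated Bragg wavelength)."""
    i0 = int(np.argmax(intensities))
    lo, hi = i0 - half_window, i0 + half_window + 1
    x, y = wavelengths[lo:hi], np.log(intensities[lo:hi])
    c2, c1, _ = np.polyfit(x, y, 2)
    return -c1 / (2 * c2)  # vertex of the fitted parabola

# Simulated detector-array readout of one FBG reflection peak.
pix = np.linspace(1545.0, 1555.0, 128)  # nm, hypothetical grid
spectrum = np.exp(-0.5 * ((pix - 1550.123) / 0.4) ** 2) + 1e-6
print(round(gaussian_peak_center(pix, spectrum), 3))  # ≈ 1550.123
```

The `half_window` parameter corresponds to the pixel-count choice the paper analyzes: too few pixels amplify noise, while too many pull non-Gaussian tails into the fit.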

  3. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware

    NASA Astrophysics Data System (ADS)

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.

  4. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-scores, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization methods, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of SVM depended on the normalization methods, and it reached its minimum fitting time when files were normalized to the unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
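The three normalization methods and the two strategies (per file vs. per gene) compared above can be sketched as follows; the helper names are illustrative, not from the paper:

```python
import math

def scale(v):            # min-max scaling to [0, 1]
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

def zscore(v):           # standardization to zero mean, unit variance
    n = len(v)
    mu = sum(v) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in v) / n)
    return [(x - mu) / sd for x in v]

def unit_length(v):      # vector normalization to an L2 norm of 1
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def normalize(matrix, method, axis="samples"):
    """Apply one of the three methods either per sample (row = one
    HTSeq-FPKM-UQ file) or per feature (column = one gene)."""
    if axis == "samples":
        return [method(row) for row in matrix]
    cols = list(zip(*matrix))                       # transpose to features
    return [list(r) for r in zip(*[method(list(c)) for c in cols])]
```

The six combinations of the paper correspond to the three methods crossed with the two `axis` choices; "files normalized to the unit length" is `normalize(data, unit_length, axis="samples")`.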

  5. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
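The coarse-to-fine annealing schedule described above (a high starting temperature at coarse scales, near-zero at the finest) can be sketched on a toy 1-D energy; this illustrates only the schedule, not the deformable-model energy itself:

```python
import math, random

def anneal(energy, state, start_temp, steps=2000, step_size=1.0):
    """Plain simulated annealing on a 1-D state with a geometric cooling
    schedule; start_temp controls how readily uphill moves are accepted."""
    t = start_temp
    e = energy(state)
    for _ in range(steps):
        cand = state + random.uniform(-step_size, step_size)
        ec = energy(cand)
        if ec < e or random.random() < math.exp(-(ec - e) / t):
            state, e = cand, ec
        t *= 0.995                      # geometric cooling
    return state, e

def coarse_to_fine(energies, x0, temps=(5.0, 1.0, 0.01)):
    """Track a solution from coarse to fine scales: at each finer scale
    the annealer restarts from the previous answer at a lower starting
    temperature, so the search becomes increasingly deterministic and
    high energy barriers can no longer be crossed."""
    x = x0
    for energy, t0 in zip(energies, temps):
        x, _ = anneal(energy, x, t0)
    return x
```

In the paper's setting, `energies` would be the deformable-model energies computed at successive scale-space levels of the MR image.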

  6. An expert fitness diagnosis system based on elastic cloud computing.

    PubMed

    Tseng, Kevin C; Wu, Chia-Chuan

    2014-01-01

This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to tightly capture the trend of requests arriving from the Internet and thus assign corresponding computation resources to ensure the quality of service.
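The forecasting step of such an elastic algorithm, an exponential moving average of past request counts driving the allocation decision, can be sketched as follows (function names and the headroom factor are illustrative assumptions, not from the paper):

```python
import math

def ema_forecast(observations, alpha=0.3):
    """Exponential moving average of past request counts, used as a
    one-step-ahead forecast of demand for elastic resource allocation."""
    if not observations:
        return 0.0
    s = observations[0]
    for x in observations[1:]:
        s = alpha * x + (1 - alpha) * s   # recent observations weigh more
    return s

def instances_needed(observations, capacity_per_instance, headroom=1.2):
    """Provision enough instances for the forecast load plus headroom."""
    forecast = ema_forecast(observations)
    return max(1, math.ceil(headroom * forecast / capacity_per_instance))
```

Larger `alpha` tracks bursts faster at the cost of noisier allocations; the headroom factor guards quality of service against forecast error.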

  7. Simultaneous parameter optimization of x-ray and neutron reflectivity data using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Surendra, E-mail: surendra@barc.gov.in; Basu, Saibal

    2016-05-23

X-ray and neutron reflectivity are two non-destructive techniques which provide a wealth of information on thickness, structure and interfacial properties at nanometer length scales. The combination of X-ray and neutron reflectivity is well suited for obtaining physical parameters of nanostructured thin films and superlattices. Neutrons provide a different contrast between the elements than X-rays and are also sensitive to the magnetization depth profile in thin films and superlattices. The real space information is extracted by fitting a model for the structure of the thin film sample in reflectometry experiments. We have applied a genetic algorithms technique to extract depth-dependent structure and magnetism in thin film and multilayer systems by simultaneously fitting X-ray and neutron reflectivity data.

  8. Conjugate Gradient Algorithms For Manipulator Simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1991-01-01

Report discusses applicability of conjugate-gradient algorithms to computation of forward dynamics of robotic manipulators. Rapid computation of forward dynamics is essential to teleoperation and other advanced robotic applications. Part of continuing effort to find algorithms meeting requirements for increased computational efficiency and speed. Method used for iterative solution of systems of linear equations.
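As a reminder of the underlying method, a textbook conjugate-gradient iteration for a symmetric positive-definite linear system looks like this; it is a generic sketch, not the report's manipulator-specific formulation:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Iteratively solve A x = b for symmetric positive-definite A,
    the kind of system arising in forward dynamics. A is a list of
    rows; b and the result are plain lists."""
    n = len(b)
    max_iter = max_iter or n
    x = [0.0] * n
    r = b[:]                          # residual r = b - A x (with x = 0)
    p = r[:]                          # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic the iteration terminates in at most n steps, but for large well-conditioned systems far fewer iterations give a usable answer, which is the source of the speed advantage the report investigates.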

  9. Differences in Gaussian diffusion tensor imaging and non-Gaussian diffusion kurtosis imaging model-based estimates of diffusion tensor invariants in the human brain.

    PubMed

    Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N

    2016-05-01

    An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value =0 and b-value =1000 s/mm(2) data while the DKI model was fitted to data comprising b-value =0, 1000 and 3000/2500 s/mm(2) [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton. 
Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK/RK/AK values, indicating substantial anatomical variability of these discrepancies. In the HCP dataset, the median voxelwise percentage differences across the whole white matter skeleton were (nonlinear least squares algorithm) 14.5% (8.2%-23.1%) for MD, 4.3% (1.4%-17.3%) for FA, -5.2% (-48.7% to -0.8%) for MO, 12.5% (6.4%-21.2%) for RD, and 16.1% (9.9%-25.6%) for AD (all ranges computed as 0.01 and 0.99 quantiles). All differences/trends were consistent between the discovery (HCP) and replication (local) datasets and between estimation algorithms. However, the relationships between such trends, estimated diffusion tensor invariants, and kurtosis estimates were impacted by the choice of fitting routine. Model-dependent differences in the estimation of conventional indexes of MD/FA/MO/RD/AD can be well beyond commonly seen disease-related alterations. While estimating diffusion tensor-derived indexes using the DKI model may be advantageous in terms of mitigating b-value dependence of diffusivity estimates, such estimates should not be referred to as conventional DTI-derived indexes in order to avoid confusion in interpretation as well as multicenter comparisons. In order to assess the potential and advantages of DKI with respect to DTI as well as to standardize diffusion-weighted imaging methods between centers, both conventional DTI-derived indexes and diffusion tensor invariants derived by fitting the non-Gaussian DKI model should be separately estimated and analyzed using the same combination of fitting routines.
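The sign of the reported bias (DKI-derived diffusivity above the DTI-derived value) can be reproduced in a one-directional toy model: fitting ln S = ln S0 - bD to data that actually follow ln S = ln S0 - bD + (1/6)b^2 D^2 K underestimates D. The sketch below uses synthetic single-direction signals, not a full tensor fit:

```python
import math

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations
    (Gaussian elimination; low-order coefficients first)."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    v = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for i in range(n):                              # forward elimination
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
            v[k] -= f * v[i]
    coef = [0.0] * n
    for i in reversed(range(n)):                    # back substitution
        coef[i] = (v[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef

def fit_diffusivity(bvals, signals, model):
    """Apparent diffusivity along one direction.
    DTI:  ln S = ln S0 - b D
    DKI:  ln S = ln S0 - b D + (1/6) b^2 D^2 K"""
    ys = [math.log(s) for s in signals]
    if model == "dti":
        c = polyfit(bvals, ys, 1)
        return -c[1]
    c = polyfit(bvals, ys, 2)
    D = -c[1]
    K = 6.0 * c[2] / (D * D)
    return D, K
```

Because the b^2 term is positive for K > 0, the DTI slope fitted over b = 0 and 1000 s/mm^2 is shallower than the true initial decay, so the DTI estimate falls below the DKI estimate, matching the direction of the differences reported above.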

  10. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for the identification of complicated movement targets. This algorithm, consisting of a STFRFT order-changing and quick selection method, is effective in reducing the computation load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to an LFM (Linear frequency modulated) pulse radar, SAR (Synthetic aperture radar), or ISAR (Inverse synthetic aperture radar), for improving the probability of target recognition.

  11. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

Signal- and image-processing methods are commonly needed to extract information from waveforms, improve image resolution, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.

  12. Correlation of Respirator Fit Measured on Human Subjects and a Static Advanced Headform

    PubMed Central

    Bergman, Michael S.; He, Xinjian; Joseph, Michael E.; Zhuang, Ziqing; Heimbuch, Brian K.; Shaffer, Ronald E.; Choe, Melanie; Wander, Joseph D.

    2015-01-01

    This study assessed the correlation of N95 filtering face-piece respirator (FFR) fit between a Static Advanced Headform (StAH) and 10 human test subjects. Quantitative fit evaluations were performed on test subjects who made three visits to the laboratory. On each visit, one fit evaluation was performed on eight different FFRs of various model/size variations. Additionally, subject breathing patterns were recorded. Each fit evaluation comprised three two-minute exercises: “Normal Breathing,” “Deep Breathing,” and again “Normal Breathing.” The overall test fit factors (FF) for human tests were recorded. The same respirator samples were later mounted on the StAH and the overall test manikin fit factors (MFF) were assessed utilizing the recorded human breathing patterns. Linear regression was performed on the mean log10-transformed FF and MFF values to assess the relationship between the values obtained from humans and the StAH. This is the first study to report a positive correlation of respirator fit between a headform and test subjects. The linear regression by respirator resulted in R2 = 0.95, indicating a strong linear correlation between FF and MFF. For all respirators the geometric mean (GM) FF values were consistently higher than those of the GM MFF. For 50% of respirators, GM FF and GM MFF values were significantly different between humans and the StAH. For data grouped by subject/respirator combinations, the linear regression resulted in R2 = 0.49. A weaker correlation (R2 = 0.11) was found using only data paired by subject/respirator combination where both the test subject and StAH had passed a real-time leak check before performing the fit evaluation. For six respirators, the difference in passing rates between the StAH and humans was < 20%, while two respirators showed a difference of 29% and 43%. For data by test subject, GM FF and GM MFF values were significantly different for 40% of the subjects. 
Overall, the advanced headform system has potential for assessing fit for some N95 FFR model/sizes. PMID:25265037

  13. Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm

    PubMed Central

    Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed

    2008-01-01

Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
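“Amoeba” is the Numerical Recipes name for the Nelder-Mead downhill-simplex minimizer. A minimal sketch of such a calibration fit follows, with a simplified three-parameter sinusoid standing in for the full VRX geometric model; both the optimizer variant and the pin-sinogram model are illustrative, not the paper's:

```python
import math

def nelder_mead(f, x0, step=0.5, iters=400):
    """Minimal downhill-simplex ("Amoeba") minimizer with reflection,
    expansion, contraction and shrink steps."""
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0)
                             for j in range(n)] for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            exp = [3 * centroid[j] - 2 * worst[j] for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(worst):
            simplex[-1] = refl
        else:
            contr = [(centroid[j] + worst[j]) / 2 for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                        # shrink toward the best vertex
                simplex = [best] + [[(p[j] + best[j]) / 2 for j in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def fit_pin_sinogram(angles, positions):
    """Fit u(theta) = c + r*sin(theta + phi), a hypothetical stand-in
    for the multi-parameter VRX calibration curve."""
    def sse(p):
        c, r, phi = p
        return sum((u - (c + r * math.sin(t + phi))) ** 2
                   for t, u in zip(angles, positions))
    return nelder_mead(sse, [0.0, 1.0, 0.0], iters=600)
```

As the abstract notes for the real fit, convergence of a simplex search is sensitive to its starting conditions; the starting vertex here is assumed to lie in the correct basin.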

  14. Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.

    PubMed

    Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed

    2004-01-01

Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.

  15. Global optimization and reflectivity data fitting for x-ray multilayer mirrors by means of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Pareschi, Giovanni

    2001-01-01

The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thicknesses, densities, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the space topology of the variables is highly structured, as in our case. The application of global optimization methods to fit multilayer reflectivity data is presented. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (e.g. selection, crossover, mutation) on the members of the parent generation. The pressure of selection drives the population to include 'good' individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C multilayers recorded at the ESRF BM5 are presented. This method could also be applied to help design multilayers optimized for a target application, such as astronomical grazing-incidence hard x-ray telescopes.
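The generation loop described above (selection, crossover, mutation driving the population toward good individuals) can be sketched generically; the operators and rates below are illustrative choices, not those used for the Ni/C fits:

```python
import random

def genetic_minimize(loss, bounds, pop_size=40, generations=120,
                     mutation=0.1, seed=1):
    """Minimal genetic algorithm: elitist selection, uniform crossover,
    Gaussian mutation. `bounds` is [(lo, hi), ...] with one gene per
    model parameter (e.g. a layer thickness, density or roughness)."""
    rng = random.Random(seed)

    def individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    pop = [individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=loss)
        elite = scored[: pop_size // 5]          # selection pressure
        children = list(elite)                   # elites survive unchanged
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [ai if rng.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]    # uniform crossover
            child = [min(max(g + rng.gauss(0.0, mutation * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]  # mutation
            children.append(child)
        pop = children
    return min(pop, key=loss)
```

For the simultaneous x-ray/neutron fit of the previous record, `loss` would be a combined chi-square over both reflectivity curves evaluated from one shared set of structural parameters.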

  16. Exploration and extension of an improved Riemann track fitting algorithm

    NASA Astrophysics Data System (ADS)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
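For comparison with the iterative fits mentioned above, a non-iterative algebraic circle fit can be written in closed form. The Kåsa fit below is not the Riemann fit itself, but it shows the same idea of reducing circular track fitting to a single linear solve:

```python
import math

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination."""
    M = [row[:] for row in M]; v = v[:]
    for i in range(3):
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            for j in range(i, 3):
                M[k][j] -= f * M[i][j]
            v[k] -= f * v[i]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def circle_fit(points):
    """Non-iterative algebraic (Kasa) circle fit: writing the circle as
    x^2 + y^2 + B x + C y + D = 0 makes the residual linear in (B, C, D),
    so the fit is one 3x3 normal-equation solve, much as the Riemann fit
    maps hits to a surface and fits a plane."""
    zs = [x * x + y * y for x, y in points]
    Sx = sum(x for x, _ in points); Sy = sum(y for _, y in points)
    M = [[sum(x * x for x, _ in points), sum(x * y for x, y in points), Sx],
         [sum(x * y for x, y in points), sum(y * y for _, y in points), Sy],
         [Sx, Sy, float(len(points))]]
    v = [-sum(z * x for z, (x, _) in zip(zs, points)),
         -sum(z * y for z, (_, y) in zip(zs, points)),
         -sum(zs)]
    B, C, D = solve3(M, v)
    cx, cy = -B / 2, -C / 2
    return cx, cy, math.sqrt(cx * cx + cy * cy - D)
```

Unlike the extended Riemann fit discussed above, this simple version provides no error propagation to the circle parameters and is known to be biased for short arcs with large noise.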

  17. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing

    2014-11-01

    Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.

  18. Comparison and optimization of machine learning methods for automated classification of circulating tumor cells.

    PubMed

    Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J

    2016-10-01

Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need for a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.

  19. Improved effective-one-body model of spinning, nonprecessing binary black holes for the era of gravitational-wave astrophysics with advanced detectors

    NASA Astrophysics Data System (ADS)

    Bohé, Alejandro; Shao, Lijing; Taracchini, Andrea; Buonanno, Alessandra; Babak, Stanislav; Harry, Ian W.; Hinder, Ian; Ossokine, Serguei; Pürrer, Michael; Raymond, Vivien; Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2017-02-01

    We improve the accuracy of the effective-one-body (EOB) waveforms that were employed during the first observing run of Advanced LIGO for binaries of spinning, nonprecessing black holes by calibrating them to a set of 141 numerical-relativity (NR) waveforms. The NR simulations expand the domain of calibration toward larger mass ratios and spins, as compared to the previous EOBNR model. Merger-ringdown waveforms computed in black-hole perturbation theory for Kerr spins close to extremal provide additional inputs to the calibration. For the inspiral-plunge phase, we use a Markov-chain Monte Carlo algorithm to efficiently explore the calibration space. For the merger-ringdown phase, we fit the NR signals with phenomenological formulae. After extrapolation of the calibrated model to arbitrary mass ratios and spins, the (dominant-mode) EOBNR waveforms have faithfulness—at design Advanced-LIGO sensitivity—above 99% against all the NR waveforms, including 16 additional waveforms used for validation, when maximizing only on initial phase and time. This implies a negligible loss in event rate due to modeling for these binary configurations. We find that future NR simulations at mass ratios ≳4 and double spin ≳0.8 will be crucial to resolving discrepancies between different ways of extrapolating waveform models. We also find that some of the NR simulations that already exist in such region of parameter space are too short to constrain the low-frequency portion of the models. Finally, we build a reduced-order version of the EOBNR model to speed up waveform generation by orders of magnitude, thus enabling intensive data-analysis applications during the upcoming observation runs of Advanced LIGO.

  20. Will Nintendo "Wii Fit" Get You Fit? An Evaluation of the Energy Expenditure from Active-Play Videogames.

    PubMed

    Xian, Ying; Kakinami, Lisa; Peterson, Eric D; Mustian, Karen M; Fernandez, I Diana

    2014-04-01

This study aimed to determine whether Nintendo(®) (Redmond, WA) "Wii Fit™" games can help individuals meet physical activity recommendations. Thirty young healthy volunteers were recruited for this randomized crossover study to evaluate the energy expenditure associated with (1) a 30-minute "Wii Fit Free Run," (2) three 10-minute bouts of "Wii Fit" aerobic games ("Rhythm Boxing," "Super Hula Hoop," and "Advanced Steps"), and (3) 30-minute treadmill running/walking. Energy expenditure was measured by indirect calorimetry using breath-by-breath analyses of O2 consumption and CO2 production. The "Wii Fit" conditions produced a moderate exercise intensity (5.0, 4.1, 3.9, and 3.8 metabolic equivalents [METs] in "Free Run," "Rhythm Boxing," "Super Hula Hoop," and "Advanced Steps," respectively), whereas the treadmill running/walking produced a vigorous exercise intensity (METs=8.0). Based on federal guidelines, an individual could achieve the minimum weekly goal of 500 MET-minutes by playing selected "Wii Fit" aerobics games for 20-26 minutes a day, 5 days a week. Although not as vigorous as the treadmill, active-play videogames such as "Wii Fit" may provide an alternative way to encourage exercise and increase adoption and adherence to the physical activity guidelines.
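The 20-26 minute figure follows directly from the guideline arithmetic (weekly MET-minutes divided by activity intensity and exercise days):

```python
def minutes_per_day(met, weekly_goal=500.0, days_per_week=5):
    """Daily minutes of an activity needed to reach a weekly MET-minute
    goal when exercising a fixed number of days per week."""
    return weekly_goal / met / days_per_week

# "Free Run" at 5.0 METs needs 500/5.0/5 = 20 min/day; "Advanced Steps"
# at 3.8 METs needs 500/3.8/5 ~ 26 min/day, the 20-26 minute range above.
```

The same function shows why the treadmill condition (8.0 METs) would meet the goal in about 12.5 minutes a day.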

  1. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

Stereo matching is one of the most important branches in computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. Here, a novel scheme is presented called double topological relationship consistency (DCTR). The combination of double topological configuration includes the consistency of first topological relationship (CFTR) and the consistency of second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been located in very different orientations. Also, epipolar geometry can be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision on wide baseline matching problems. Finally, the effectiveness and reliability of this method are demonstrated in wide-baseline experiments on the image pairs.

  2. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  3. Compact CH{sub 4} sensor system based on a continuous-wave, low power consumption, room temperature interband cascade laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Lei, E-mail: donglei@sxu.edu.cn; State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, Taiyuan 030006; Li, Chunguang

A tunable diode laser absorption spectroscopy-based methane sensor, employing a dense-pattern multi-pass gas cell and a 3.3 μm, CW, DFB, room temperature interband cascade laser (ICL), is reported. The optical integration based on an advanced folded optical path design and an efficient ICL control system with appropriate electrical power management resulted in a CH{sub 4} sensor with a small footprint (32 × 20 × 17 cm{sup 3}) and low-power consumption (6 W). Polynomial and least-squares fit algorithms are employed to remove the baseline of the spectral scan and retrieve CH{sub 4} concentrations, respectively. An Allan-Werle deviation analysis shows that the measurement precision can reach 1.4 ppb for a 60 s averaging time. Continuous measurements covering a seven-day period were performed to demonstrate the stability and robustness of the reported CH{sub 4} sensor system.

  4. An overview of state-of-the-art image restoration in electron microscopy.

    PubMed

    Roels, J; Aelterman, J; Luong, H Q; Lippens, S; Pižurica, A; Saeys, Y; Philips, W

    2018-06-08

    In Life Science research, electron microscopy (EM) is an essential tool for morphological analysis at the subcellular level as it allows for visualization at nanometer resolution. However, electron micrographs contain image degradations such as noise and blur caused by electromagnetic interference, electron counting errors, magnetic lens imperfections, electron diffraction, etc. These imperfections in raw image quality are inevitable and hamper subsequent image analysis and visualization. In an effort to mitigate these artefacts, many electron microscopy image restoration algorithms have been proposed in recent years. Most of these methods rely on generic assumptions on the image or degradations and are therefore outperformed by advanced methods that are based on more accurate models. Ideally, a method will accurately model the specific degradations that fit the physical acquisition settings. In this overview paper, we discuss different solutions to electron microscopy image degradation and demonstrate that dedicated artefact regularisation results in higher quality restoration and is applicable through recently developed probabilistic methods. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  5. A novel line segment detection algorithm based on graph search

    NASA Astrophysics Data System (ADS)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To address the problem of extracting line segments from an image, a line segment detection method based on graph search is proposed. After obtaining the edge detection result of the image, candidate straight line segments are extracted in four directions. The adjacency relationships among the candidate segments are depicted by a graph model, on which a depth-first search is employed to determine which adjacent line segments should be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
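A minimal sketch of the merge-and-fit stage, assuming candidate segments arrive as point lists with a precomputed adjacency graph (the four-direction candidate extraction is omitted, and near-vertical groups would need the x = c*y + d parameterization instead):

```python
def merge_and_fit(segments, adjacency):
    """Group adjacent candidate segments by depth-first search, then fit
    each merged group with a least-squares line y = a*x + b.

    segments:  list of point lists [(x, y), ...], one per candidate
    adjacency: dict mapping segment index -> iterable of adjacent indices
    """
    seen, lines = set(), []
    for start in range(len(segments)):
        if start in seen:
            continue
        stack, group = [start], []          # iterative DFS over the graph
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            group.append(i)
            stack.extend(adjacency.get(i, ()))
        pts = [p for i in group for p in segments[i]]
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
        b = (sy - a * sx) / n                           # intercept
        lines.append((a, b))
    return lines

# Two adjacent candidates lying on y = 2x + 1 merge into one fitted line;
# an isolated horizontal candidate stays separate.
segs = [[(0, 1), (1, 3)], [(2, 5), (3, 7)], [(10, 0), (11, 0)]]
adj = {0: [1], 1: [0]}
print(merge_and_fit(segs, adj))  # -> [(2.0, 1.0), (0.0, 0.0)]
```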

  6. Methods and Algorithms for Computer-aided Engineering of Die Tooling of Compressor Blades from Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Khaimovich, I. N.

    2018-01-01

    The article provides calculation algorithms for blank design and die-forging tool fitting for producing compressor blades for aircraft engines. The design system proposed in the article generates drafts of trimming and reducing dies automatically, leading to a significant reduction in production preparation time. A detailed analysis of the features of the blade structural elements was carried out; the adopted limitations and technological solutions made it possible to formulate generalized algorithms for shaping the die parting surface over the entire contour of the engraving for different configurations of die forgings. The authors also developed algorithms and programs to calculate the three-dimensional point locations describing the configuration of the die cavity.

  7. Lack of age-related increase in carotid artery wall viscosity in cardiorespiratory fit men

    PubMed Central

    Kawano, Hiroshi; Yamamoto, Kenta; Gando, Yuko; Tanimoto, Michiya; Murakami, Haruka; Ohmori, Yumi; Sanada, Kiyoshi; Tabata, Izumi; Higuchi, Mitsuru; Miyachi, Motohiko

    2013-01-01

    Objectives: Age-related arterial stiffening and reduction of arterial elasticity are attenuated in individuals with high levels of cardiorespiratory fitness. Viscosity is another mechanical characteristic of the arterial wall; however, the effects of age and cardiorespiratory fitness have not been determined. We examined the associations among age, cardiorespiratory fitness and carotid arterial wall viscosity. Methods: A total of 111 healthy men, aged 25–39 years (young) and 40–64 years (middle-aged), were divided into either cardiorespiratory fit or unfit groups on the basis of peak oxygen uptake. The common carotid artery was measured noninvasively by tonometry and automatic tracking of B-mode images to obtain instantaneous pressure and diameter hysteresis loops, and we calculated the effective compliance, isobaric compliance and viscosity index. Results: In the middle-aged men, the viscosity index was larger in the unfit group than in the fit group (2533 vs. 2018 mmHg·s/mm, respectively: P < 0.05), but this was not the case in the young men. In addition, effective and isobaric compliance were decreased, and the viscosity index was increased, with advancing age, but these parameters were unaffected by cardiorespiratory fitness level. Conclusion: These results suggest that the wall viscosity in the central artery is increased with advancing age and that the age-associated increase in wall viscosity may be attenuated in cardiorespiratory fit men. PMID:24029868

  8. Lack of age-related increase in carotid artery wall viscosity in cardiorespiratory fit men.

    PubMed

    Kawano, Hiroshi; Yamamoto, Kenta; Gando, Yuko; Tanimoto, Michiya; Murakami, Haruka; Ohmori, Yumi; Sanada, Kiyoshi; Tabata, Izumi; Higuchi, Mitsuru; Miyachi, Motohiko

    2013-12-01

    Age-related arterial stiffening and reduction of arterial elasticity are attenuated in individuals with high levels of cardiorespiratory fitness. Viscosity is another mechanical characteristic of the arterial wall; however, the effects of age and cardiorespiratory fitness have not been determined. We examined the associations among age, cardiorespiratory fitness and carotid arterial wall viscosity. A total of 111 healthy men, aged 25-39 years (young) and 40-64 years (middle-aged), were divided into either cardiorespiratory fit or unfit groups on the basis of peak oxygen uptake. The common carotid artery was measured noninvasively by tonometry and automatic tracking of B-mode images to obtain instantaneous pressure and diameter hysteresis loops, and we calculated the effective compliance, isobaric compliance and viscosity index. In the middle-aged men, the viscosity index was larger in the unfit group than in the fit group (2533 vs. 2018 mmHg·s/mm, respectively: P<0.05), but this was not the case in the young men. In addition, effective and isobaric compliance were decreased, and the viscosity index was increased, with advancing age, but these parameters were unaffected by cardiorespiratory fitness level. These results suggest that the wall viscosity in the central artery is increased with advancing age and that the age-associated increase in wall viscosity may be attenuated in cardiorespiratory fit men.

  9. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
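The local fit of exponentially decaying cosines can be sketched as a grid search over candidate frequencies, solving a small least-squares problem at each. This is an illustrative simplification: the paper's beat detection logic is more involved, and the decay rate here is an assumed constant rather than a fitted parameter.

```python
import math

def best_local_frequency(window, fs, freqs, decay=1.0):
    """Fit exponentially decaying cosines to a short signal window and
    return the candidate frequency with the smallest residual energy.

    For each trial frequency f, the model
        s(t) = exp(-decay*t) * (a*cos(2*pi*f*t) + b*sin(2*pi*f*t))
    is fitted by solving the 2x2 normal equations for (a, b).
    """
    t = [j / fs for j in range(len(window))]
    best_f, best_res = None, float("inf")
    for f in freqs:
        c = [math.exp(-decay * ti) * math.cos(2 * math.pi * f * ti) for ti in t]
        s = [math.exp(-decay * ti) * math.sin(2 * math.pi * f * ti) for ti in t]
        cc = sum(x * x for x in c); ss = sum(x * x for x in s)
        cs = sum(x * y for x, y in zip(c, s))
        yc = sum(w * x for w, x in zip(window, c))
        ys = sum(w * x for w, x in zip(window, s))
        det = cc * ss - cs * cs
        if abs(det) < 1e-12:
            continue
        a = (yc * ss - ys * cs) / det
        b = (ys * cc - yc * cs) / det
        res = sum((w - a * ci - b * si) ** 2
                  for w, ci, si in zip(window, c, s))
        if res < best_res:
            best_f, best_res = f, res
    return best_f

# A 1.2 Hz damped cosine is recovered from a physiological search grid.
fs = 50.0
y = [math.exp(-1.0 * j / fs) * math.cos(2 * math.pi * 1.2 * j / fs)
     for j in range(100)]
grid = [0.8 + 0.1 * k for k in range(20)]   # 0.8 ... 2.7 Hz
print(best_local_frequency(y, fs, grid))    # prints the grid point nearest 1.2 Hz
```

Restricting the grid to the physiological heart-rate range is what lets a fit like this double as a beat-presence detector: windows with no frequency in range producing a low residual are rejected.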

  10. Theory of post-block 2 VLBI observable extraction

    NASA Technical Reports Server (NTRS)

    Lowe, Stephen T.

    1992-01-01

    The algorithms used in the post-Block II fringe-fitting software called 'Fit' are described. The steps needed to derive the very long baseline interferometry (VLBI) charged-particle corrected group delay, phase delay rate, and phase delay (the latter without resolving cycle ambiguities) are presented beginning with the set of complex fringe phasors as a function of observation frequency and time. The set of complex phasors is obtained from the JPL/CIT Block II correlator. The output of Fit is the set of charged-particle corrected observables (along with ancillary information) in a form amenable to the software program 'Modest.'

  11. KAM Tori Construction Algorithms

    NASA Astrophysics Data System (ADS)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
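The benefit of the Hanning window can be illustrated with a direct (unaccelerated) windowed DFT peak search. This sketch shows only the windowing idea, not the accelerated transform or the torus Fourier-series fitting itself, and uses an O(n²) DFT for clarity:

```python
import cmath, math

def hann_dft_peak(x):
    """Return the dominant frequency bin of a Hanning-windowed DFT.

    Windowing suppresses spectral leakage from frequencies that do not
    fall exactly on a DFT bin, which is what makes windowed Fourier
    analysis of quasi-periodic trajectories practical.
    """
    n = len(x)
    w = [0.5 - 0.5 * math.cos(2 * math.pi * j / (n - 1)) for j in range(n)]
    xw = [wi * xi for wi, xi in zip(w, x)]
    mags = []
    for k in range(n // 2):                      # one-sided spectrum
        s = sum(xw[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n))
        mags.append(abs(s))
    return max(range(len(mags)), key=mags.__getitem__)

# A pure tone in bin 7 of a 64-point record is located exactly.
n = 64
tone = [math.cos(2 * math.pi * 7 * m / n) for m in range(n)]
print(hann_dft_peak(tone))  # -> 7
```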

  12. Least-Squares Approximation of an Improper by a Proper Correlation Matrix Using a Semi-Infinite Convex Program. Research Report 87-7.

    ERIC Educational Resources Information Center

    Knol, Dirk L.; ten Berge, Jos M. F.

    An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…

  13. Evolvable Hardware for Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Globus, Al; Hornby, Gregory; Larchev, Gregory; Kraus, William

    2004-01-01

    This article surveys the research of the Evolvable Systems Group at NASA Ames Research Center. Over the past few years, our group has developed the ability to apply evolutionary algorithms to a variety of NASA applications, including spacecraft antenna design, fault tolerance for programmable logic chips, atomic force field parameter fitting, analog circuit design, and Earth-observing satellite scheduling. In some of these applications, evolutionary algorithms match or improve on human performance.

  14. Analysis of Power Laws, Shape Collapses, and Neural Complexity: New Techniques and MATLAB Support via the NCC Toolbox

    PubMed Central

    Marshall, Najja; Timme, Nicholas M.; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M.

    2016-01-01

    Neural systems include interactions that occur across many scales. Two divergent methods for characterizing such interactions have drawn on the physical analysis of critical phenomena and the mathematical study of information. Inferring criticality in neural systems has traditionally rested on fitting power laws to the property distributions of “neural avalanches” (contiguous bursts of activity), but the fractal nature of avalanche shapes has recently emerged as another signature of criticality. On the other hand, neural complexity, an information theoretic measure, has been used to capture the interplay between the functional localization of brain regions and their integration for higher cognitive functions. Unfortunately, treatments of all three methods—power-law fitting, avalanche shape collapse, and neural complexity—have suffered from shortcomings. Empirical data often contain biases that introduce deviations from true power law in the tail and head of the distribution, but deviations in the tail have often been unconsidered; avalanche shape collapse has required manual parameter tuning; and the estimation of neural complexity has relied on small data sets or statistical assumptions for the sake of computational efficiency. In this paper we present technical advancements in the analysis of criticality and complexity in neural systems. We use maximum-likelihood estimation to automatically fit power laws with left and right cutoffs, present the first automated shape collapse algorithm, and describe new techniques to account for large numbers of neural variables and small data sets in the calculation of neural complexity. In order to facilitate future research in criticality and complexity, we have made the software utilized in this analysis freely available online in the MATLAB NCC (Neural Complexity and Criticality) Toolbox. PMID:27445842
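For continuous data above a lower cutoff, the maximum-likelihood power-law fit has the closed form α̂ = 1 + n / Σ ln(x_i/x_min). The sketch below shows only this standard estimator, not the toolbox's left-and-right-cutoff fitting or its avalanche-specific machinery:

```python
import math

def fit_power_law_alpha(xs, xmin):
    """Continuous MLE for the power-law exponent above a lower cutoff:
        alpha_hat = 1 + n / sum(ln(x / xmin))   over x >= xmin.
    """
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Deterministic check via inverse-CDF quantiles of a power law with
# alpha = 2.5: x = xmin * (1 - u) ** (-1 / (alpha - 1)).
alpha, xmin, n = 2.5, 1.0, 2000
xs = [xmin * (1 - (i + 0.5) / n) ** (-1.0 / (alpha - 1)) for i in range(n)]
print(fit_power_law_alpha(xs, xmin))  # close to 2.5
```

Fitting by MLE rather than by regression on the log-log histogram avoids the well-known bias of the latter, which is one motivation for the automated approach the paper describes.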

  15. Analysis of Power Laws, Shape Collapses, and Neural Complexity: New Techniques and MATLAB Support via the NCC Toolbox.

    PubMed

    Marshall, Najja; Timme, Nicholas M; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M

    2016-01-01

    Neural systems include interactions that occur across many scales. Two divergent methods for characterizing such interactions have drawn on the physical analysis of critical phenomena and the mathematical study of information. Inferring criticality in neural systems has traditionally rested on fitting power laws to the property distributions of "neural avalanches" (contiguous bursts of activity), but the fractal nature of avalanche shapes has recently emerged as another signature of criticality. On the other hand, neural complexity, an information theoretic measure, has been used to capture the interplay between the functional localization of brain regions and their integration for higher cognitive functions. Unfortunately, treatments of all three methods-power-law fitting, avalanche shape collapse, and neural complexity-have suffered from shortcomings. Empirical data often contain biases that introduce deviations from true power law in the tail and head of the distribution, but deviations in the tail have often been unconsidered; avalanche shape collapse has required manual parameter tuning; and the estimation of neural complexity has relied on small data sets or statistical assumptions for the sake of computational efficiency. In this paper we present technical advancements in the analysis of criticality and complexity in neural systems. We use maximum-likelihood estimation to automatically fit power laws with left and right cutoffs, present the first automated shape collapse algorithm, and describe new techniques to account for large numbers of neural variables and small data sets in the calculation of neural complexity. In order to facilitate future research in criticality and complexity, we have made the software utilized in this analysis freely available online in the MATLAB NCC (Neural Complexity and Criticality) Toolbox.

  16. A Robust Random Forest-Based Approach for Heart Rate Monitoring Using Photoplethysmography Signal Contaminated by Intense Motion Artifacts.

    PubMed

    Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin

    2017-02-16

    The estimation of heart rate (HR) with wearable devices is of interest in fitness applications. Photoplethysmography (PPG) is a promising approach to estimating HR due to its low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from PPG signals contaminated by intense motion artifacts. Stage 1 is a hybrid method that effectively removes MA with low computational complexity: two MA removal algorithms are combined by an accurate binary decision algorithm that decides whether or not to adopt the second MA removal algorithm. Stage 2 is a random forest-based spectral peak-tracking algorithm that locates the spectral peak corresponding to HR, formulating spectral peak tracking as a pattern classification problem. Experiments on the 22-subject PPG datasets used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM). Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.

  17. Improved OSIRIS NO2 retrieval algorithm: description and validation

    NASA Astrophysics Data System (ADS)

    Sioris, Christopher E.; Rieger, Landon A.; Lloyd, Nicholas D.; Bourassa, Adam E.; Roth, Chris Z.; Degenstein, Douglas A.; Camy-Peyret, Claude; Pfeilsticker, Klaus; Berthet, Gwenaël; Catoire, Valéry; Goutail, Florence; Pommereau, Jean-Pierre; McLinden, Chris A.

    2017-03-01

    A new retrieval algorithm for OSIRIS (Optical Spectrograph and Infrared Imager System) nitrogen dioxide (NO2) profiles is described and validated. The algorithm relies on spectral fitting to obtain slant column densities of NO2, followed by inversion using an algebraic reconstruction technique and the SaskTran spherical radiative transfer model (RTM) to obtain vertical profiles of local number density. The validation covers different latitudes (tropical to polar), years (2002-2012), all seasons (winter, spring, summer, and autumn), different concentrations of nitrogen dioxide (from denoxified polar vortex to polar summer), a range of solar zenith angles (68.6-90.5°), and altitudes between 10.5 and 39 km, thereby covering the full retrieval range of a typical OSIRIS NO2 profile. The use of a larger spectral fitting window than used in previous retrievals reduces retrieval uncertainties and the scatter in the retrieved profiles due to noisy radiances. Improvements are also demonstrated through the validation in terms of bias reduction at 15-17 km relative to the OSIRIS operational v3.0 algorithm. The diurnal variation of NO2 along the line of sight is included in a fully spherical multiple scattering RTM for the first time. Using this forward model with built-in photochemistry, the scatter of the differences relative to the correlative balloon NO2 profile data is reduced.
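The algebraic reconstruction technique at the core of the inversion is the Kaczmarz iteration, shown here on a toy linear system. The actual retrieval couples this iteration to slant column densities and the SaskTran radiative transfer model; this sketch shows only the projection step:

```python
def art_solve(a_rows, b, sweeps=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iteration) for A x = b.

    Each sweep projects the current estimate onto the hyperplane of each
    row equation in turn:
        x <- x + relax * (b_i - a_i . x) / |a_i|^2 * a_i
    """
    n = len(a_rows[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(a_rows, b):
            dot = sum(aij * xj for aij, xj in zip(a_i, x))
            norm2 = sum(aij * aij for aij in a_i)
            c = relax * (b_i - dot) / norm2
            x = [xj + c * aij for xj, aij in zip(x, a_i)]
    return x

# A small well-posed system with solution (1, 2).
rows = [[2.0, 1.0], [1.0, 3.0]]
rhs = [4.0, 7.0]
print(art_solve(rows, rhs))  # converges to approximately [1.0, 2.0]
```

For consistent systems the iteration converges to a solution; in tomographic retrievals its appeal is that it never forms the full normal equations and handles large sparse geometries row by row.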

  18. Genetic algorithm with maximum-minimum crossover (GA-MMC) applied in optimization of radiation pattern control of phased-array radars for rocket tracking systems.

    PubMed

    Silva, Leonardo W T; Barros, Vitor F; Silva, Sandro G

    2014-08-18

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence.
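The maximum-minimum pairing idea behind the crossover operator can be sketched as follows, using a hypothetical toy with bit-string genomes and a sum fitness; GA-MMC's differentiated coding and multi-objective machinery are not shown:

```python
import random

def max_min_pairs(population, fitness):
    """Pair the fittest individuals with the least fit for crossover.

    Sorting by fitness and walking inward from both ends yields the
    (best, worst), (2nd best, 2nd worst), ... parent pairs whose mixing
    is intended to preserve genetic diversity.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    return [(ranked[i], ranked[-1 - i]) for i in range(len(ranked) // 2)]

def one_point_crossover(p1, p2, rng):
    """Standard single-point crossover of two equal-length genomes."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

pop = [[0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 0, 0], [1, 0, 0, 0]]
pairs = max_min_pairs(pop, fitness=sum)   # toy fitness: number of ones
print(pairs[0])  # best paired with worst: ([1, 1, 1, 1], [0, 0, 0, 0])
print(one_point_crossover(*pairs[0], random.Random(0)))
```

Compared with conventional fitness-proportional mating, pairing opposite ends of the fitness ranking injects low-fitness genetic material into high-fitness offspring, which is the diversity-preserving behaviour the abstract credits for reduced premature convergence.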

  19. Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) Applied in Optimization of Radiation Pattern Control of Phased-Array Radars for Rocket Tracking Systems

    PubMed Central

    Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.

    2014-01-01

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence. PMID:25196013

  20. Seven years of aerosol scattering hygroscopic growth measurements from SGP: Factors influencing water uptake: Aerosol Scattering Hygroscopic Growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jefferson, A.; Hageman, D.; Morrow, H.

    Long-term measurements of aerosol scattering coefficient hygroscopic growth at the U.S. Department of Energy Southern Great Plains site provide information on the seasonal as well as size and chemical dependence of aerosol hygroscopic growth. Annual average sub-10 μm fRH values (the ratio of aerosol scattering at 85%/40% RH) were 1.75 and 1.87 for the gamma and kappa fit algorithms, respectively. The study found higher growth rates in the winter and spring seasons that correlated with high aerosol nitrate mass fraction. fRH exhibited strong, but differing, correlations with the scattering Ångström exponent and backscatter fraction, two optical size-dependent parameters. The aerosol organic fraction had a strong influence, with fRH decreasing with increases in the organic mass fraction and absorption Ångström exponent and increasing with the aerosol single scatter albedo. Uncertainty analysis of the fit algorithms revealed high uncertainty at low scattering coefficients and slight increases in uncertainty at high RH and fit parameter values.
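The gamma fit commonly takes the form f(RH) = ((1 − RH_dry)/(1 − RH_wet))^γ. Assuming that form and the 85%/40% measurement pair used above, the exponent can be recovered from a measured fRH and round-tripped (the kappa fit uses a different functional form not shown here):

```python
import math

def gamma_from_frh(frh, rh_wet=0.85, rh_dry=0.40):
    """Exponent of the gamma hygroscopic-growth parameterization
        f(RH) = ((1 - rh_dry) / (1 - rh_wet)) ** gamma
    recovered from a measured scattering ratio f(85%)/f(40%)."""
    return math.log(frh) / math.log((1 - rh_dry) / (1 - rh_wet))

def frh_from_gamma(gamma, rh_wet=0.85, rh_dry=0.40):
    """Forward evaluation of the same parameterization."""
    return ((1 - rh_dry) / (1 - rh_wet)) ** gamma

g = gamma_from_frh(1.75)            # annual-average gamma-fit fRH above
print(round(g, 3))                  # the fitted exponent
print(round(frh_from_gamma(g), 2))  # round-trips to 1.75
```

With only two RH points the fit is exactly determined; the study's uncertainty analysis concerns fitting this form over full humidified-nephelometer RH scans, where noise at low scattering coefficients propagates into γ.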
