Longhurst, G.R.; Merrill, B.J.; Jones, J.L.
2000-10-31
The TMAP Code was written in the late 1980s as a tool for safety analysis of systems involving tritium. Since then it has been upgraded to TMAP4 and used in numerous applications, including experiments supporting fusion safety, predictions for advanced systems such as the International Thermonuclear Experimental Reactor (ITER), and estimates involving tritium production technologies. Its further upgrade to TMAP2000 was accomplished in response to several needs. TMAP and TMAP4 could deal with only a single trap for diffusing gaseous species in solid structures. TMAP2000 has been revised to include up to three separate traps and to keep track separately of each of up to 10 diffusing species in each of the traps. A difficulty in the original code in dealing with heteronuclear molecule formation, such as HD and DT, has been removed. Under equilibrium boundary conditions such as Sieverts' law, TMAP2000 generates heteronuclear molecular partial pressures when solubilities and partial pressures of the homonuclear molecular species and the equilibrium stoichiometry are provided. A further sophistication is the addition of non-diffusing surface species and surface binding energy dynamics options. Atoms such as oxygen or nitrogen on metal surfaces are sometimes important in molecule formation with diffusing hydrogen isotopes but do not themselves diffuse appreciably in the material. TMAP2000 will accommodate up to 30 such surface species, allowing the user to specify relationships between those surface concentrations and populations of gaseous species above the surfaces. Additionally, TMAP2000 allows the user to include a surface binding energy and an adsorption barrier energy, and includes asymmetrical diffusion between the surface sites and regular diffusion sites in the bulk. All of the previously existing features for heat transfer, flows between enclosures, and chemical reactions within the enclosures have been retained, but the allowed problem size and complexity have
Glen R. Longhurst
2006-09-01
The TMAP Code was written at the Idaho National Engineering and Environmental Laboratory by Brad Merrill and James Jones in the late 1980s as a tool for safety analysis of systems involving tritium. Since then it has been upgraded to TMAP4 and has been used in numerous applications, including experiments supporting fusion safety, predictions for advanced systems such as the International Thermonuclear Experimental Reactor (ITER), and estimates involving tritium production technologies. Its further upgrade to TMAP2000, and now to TMAP7, was accomplished in response to several needs. TMAP and TMAP4 could deal with only a single trap for diffusing gaseous species in solid structures. TMAP7 includes up to three separate traps and up to 10 diffusing species. The original code had difficulty dealing with heteronuclear molecule formation such as HD and DT; that difficulty has been removed. Under pre-specified boundary enclosure conditions and solution-law-dependent diffusion boundary conditions, such as Sieverts' law, TMAP7 automatically generates heteronuclear molecular partial pressures when solubilities and partial pressures of the homonuclear molecular species are provided. A further sophistication is the addition of non-diffusing surface species. Species such as oxygen or nitrogen atoms, or hydroxyl radicals that form, decay, or combine on metal surfaces, are sometimes important in reactions with diffusing hydrogen isotopes but do not themselves diffuse appreciably in the material. TMAP7 will accommodate up to 30 such surface species, allowing the user to specify relationships between those surface concentrations and partial pressures of gaseous species above the surfaces, or to form them dynamically by combining diffusing species or other surface species. Additionally, TMAP7 allows the user to include a surface binding energy and an adsorption barrier energy. The code includes asymmetrical diffusion between the surface
Kernel Affine Projection Algorithms
NASA Astrophysics Data System (ADS)
Liu, Weifeng; Príncipe, José C.
2008-12-01
The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
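The kernel least-mean-square algorithm that KAPA builds on is simple enough to sketch. The following Python fragment is a minimal illustration only, not the paper's implementation: the Gaussian kernel, its width, the step size, and the toy sin(3x) regression target are all assumptions chosen for demonstration. Each incoming sample becomes a new kernel center whose coefficient is the step size times the prediction error.

```python
import math
import random

def gauss_kernel(x, y, sigma=0.5):
    # Gaussian kernel on scalars (an assumed choice; any Mercer kernel works).
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def klms(stream, eta=0.5, sigma=0.5):
    """Kernel LMS: each sample adds one center with coefficient eta * error."""
    centers, coeffs, errors = [], [], []
    for x, d in stream:
        # Prediction from the growing kernel expansion.
        y = sum(a * gauss_kernel(c, x, sigma) for c, a in zip(centers, coeffs))
        e = d - y
        errors.append(e)
        centers.append(x)
        coeffs.append(eta * e)
    return centers, coeffs, errors

# Toy nonlinear regression: learn d = sin(3x) online.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(300)]
stream = [(x, math.sin(3.0 * x)) for x in xs]
_, _, errs = klms(stream)
mse_early = sum(e * e for e in errs[:50]) / 50.0
mse_late = sum(e * e for e in errs[-50:]) / 50.0
```

After a few hundred samples the squared prediction error should drop well below its initial level, which is the behavior KAPA inherits while further reducing gradient noise by reusing the most recent few samples per update.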
Reimold, Matthias; Slifstein, Mark; Heinz, Andreas; Mueller-Schauenburg, Wolfgang; Bares, Roland
2006-06-01
Voxelwise statistical analysis has become popular in explorative functional brain mapping with fMRI or PET. Usually, results are presented as voxelwise levels of significance (t-maps), and for clusters that survive correction for multiple testing the coordinates of the maximum t-value are reported. Before calculating a voxelwise statistical test, spatial smoothing is required to achieve a reasonable statistical power. Little attention is being given to the fact that smoothing has a nonlinear effect on the voxel variances and thus the local characteristics of a t-map, which becomes most evident after smoothing over different types of tissue. We investigated the related artifacts, for example, white matter peaks whose position depend on the relative variance (variance over contrast) of the surrounding regions, and suggest improving spatial precision with 'masked contrast images': color-codes are attributed to the voxelwise contrast, and significant clusters (e.g., detected with statistical parametric mapping, SPM) are enlarged by including contiguous pixels with a contrast above the mean contrast in the original cluster, provided they satisfy P < 0.05. The potential benefit is demonstrated with simulations and data from a [11C]Carfentanil PET study. We conclude that spatial smoothing may lead to critical, sometimes-counterintuitive artifacts in t-maps, especially in subcortical brain regions. If significant clusters are detected, for example, with SPM, the suggested method is one way to improve spatial precision and may give the investigator a more direct sense of the underlying data. Its simplicity and the fact that no further assumptions are needed make it a useful complement for standard methods of statistical mapping.
Programming the gradient projection algorithm
NASA Technical Reports Server (NTRS)
Hargrove, A.
1983-01-01
The gradient projection method of numerical optimization which is applied to problems having linear constraints but nonlinear objective functions is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. In order to verify the theoretical results a digital computer is used to simulate the algorithm.
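The core iteration described above alternates a gradient step on the nonlinear objective with a projection back onto the linearly constrained feasible set. As a hedged sketch (not Hargrove's program): for general linear constraints the projection is onto the active constraint set, but for the special case of box constraints it has the closed form shown here, with an assumed toy objective.

```python
def project_box(v, lo, hi):
    # Euclidean projection onto the box lo <= x <= hi, a simple linear-constraint set.
    return [min(max(x, l), h) for x, l, h in zip(v, lo, hi)]

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=200):
    """Take a gradient step on the nonlinear objective, then project it back
    onto the feasible region."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = project_box([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Minimize (x - 2)^2 + (y - 2)^2 subject to 0 <= x, y <= 1.
# The unconstrained minimum (2, 2) is infeasible; the constrained solution is (1, 1).
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0)]
sol = projected_gradient(grad, [0.0, 0.0], [0.0, 0.0], [1.0, 1.0])
```

For small systems like this the iteration lands exactly on the constrained optimum; the auxiliary machinery the abstract mentions becomes necessary when many general linear constraints interact with severe nonlinearities.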
Jeon, Sang-Min; Choi, Bongkun; Hong, Kyung Uk; Kim, Eunhee; Seong, Yeon-Sun; Bae, Chang-Dae; Park, Joobae . E-mail: jbpark@med.skku.ac.kr
2006-09-15
Previously, we reported the cloning of a cytoskeleton-associated protein, TMAP/CKAP2, which was up-regulated in primary human gastric cancers. Although TMAP/CKAP2 has been found to be expressed in most cancer cell lines examined, the function of CKAP2 is not known. In this study, we found that TMAP/CKAP2 was not expressed in G0/G1-arrested HFFs but was expressed in actively dividing cells. After initiation of the cell cycle, TMAP/CKAP2 levels remained low throughout most of the G1 phase but gradually increased between late G1 and G2/M. Knockdown of TMAP/CKAP2 reduced pRB phosphorylation and increased p27 expression, and consequently reduced HFF proliferation, whereas constitutive TMAP/CKAP2 expression increased pRB phosphorylation and enhanced proliferation. Our results show that this novel cytoskeleton-associated protein is expressed in a cell cycle-dependent manner and is involved in cell proliferation.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
The Global Precipitation Climatology Project: First Algorithm Intercomparison Project
NASA Technical Reports Server (NTRS)
Arkin, Phillip A.; Xie, Pingping
1994-01-01
The Global Precipitation Climatology Project (GPCP) was established by the World Climate Research Program to produce global analyses of the area- and time-averaged precipitation for use in climate research. To achieve the required spatial coverage, the GPCP uses simple rainfall estimates derived from IR and microwave satellite observations. In this paper, we describe the GPCP and its first Algorithm Intercomparison Project (AIP/1), which compared a variety of rainfall estimates derived from Geostationary Meteorological Satellite visible and IR observations and Special Sensor Microwave/Imager (SSM/I) microwave observations with rainfall derived from a combination of radar and raingage data over the Japanese islands and the adjacent ocean regions during the June and mid-July through mid-August periods of 1989. To investigate potential improvements in the use of satellite IR data for the estimation of large-scale rainfall for the GPCP, the relationship between rainfall and the fractional coverage of cold clouds in the AIP/1 dataset is examined. Linear regressions between fractional coverage and rainfall are analyzed for a number of latitude-longitude areas and for a range of averaging times. The results show distinct differences in the character of the relationship for different portions of the area. These results suggest that the simple IR-based estimation technique currently used in the GPCP can be used to estimate rainfall for global tropical and subtropical areas, provided that a method for adjusting the proportional coefficient for varying areas and seasons can be determined.
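The regression analysis described above fits a straight line between fractional cold-cloud coverage and area-averaged rainfall. The sketch below is illustrative only: the data are fabricated, and the slope of 3 mm/h merely echoes the kind of proportionality used in simple IR indices, not the AIP/1 results.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit: ys ~ a + b * xs."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope: rain rate per unit fractional coverage
    return my - b * mx, b  # intercept, slope

# Hypothetical data: fractional cold-cloud coverage vs. area-averaged rain rate (mm/h),
# constructed to follow an exact proportionality with slope 3 mm/h.
coverage = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
rain = [3.0 * c for c in coverage]
a, b = linear_fit(coverage, rain)
```

In the AIP/1 analysis the interesting result is precisely that the fitted coefficient is *not* constant: it varies with region and season, which motivates the adjustment method the abstract calls for.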
Fast image matching algorithm based on projection characteristics
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation. Because normalization is performed, the algorithm still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
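The idea above can be sketched in a few lines: collapse the image and the template to 1-D column profiles, then slide a normalized cross-correlation over the profile. This is an illustrative toy (random pixels, a template cut from known columns), not the paper's code; the 1.5x brightening of the template demonstrates the normalization-based invariance the abstract claims.

```python
import math
import random

def column_projection(img):
    # Collapse a 2-D grayscale image to a 1-D profile by summing each column.
    return [sum(row[j] for row in img) for j in range(len(img[0]))]

def ncc(a, b):
    # Normalized cross-correlation of two equal-length 1-D profiles.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def match_by_projection(img_proj, tpl_proj):
    # Slide the template profile over the image profile; return the best offset.
    best_off, best = 0, -2.0
    for off in range(len(img_proj) - len(tpl_proj) + 1):
        s = ncc(img_proj[off:off + len(tpl_proj)], tpl_proj)
        if s > best:
            best_off, best = off, s
    return best_off, best

random.seed(2)
img = [[random.randint(0, 255) for _ in range(40)] for _ in range(10)]
# Template = columns 12..19 of the image, brightened by a factor of 1.5;
# normalization makes the match invariant to this proportional change.
tpl = [[1.5 * img[i][j] for j in range(12, 20)] for i in range(10)]
off, score = match_by_projection(column_projection(img), column_projection(tpl))
```

The search now costs one 1-D correlation per candidate offset instead of a full 2-D correlation, which is where the claimed speedup comes from.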
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve on the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves smaller steady-state estimation errors than the existing algorithms.
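For reference, the conventional APA update that the grouping/selection procedures refine reuses the last K regressor vectors per step: w += mu * X^T (X X^T + delta I)^{-1} e. The sketch below is a baseline APA on a toy noiseless system-identification problem, with assumed values for the filter length, K, step size, and regularization; it does not implement the paper's grouping or selection logic.

```python
import random

def gauss_solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def apa_identify(d_of, taps=4, K=3, mu=0.5, delta=1e-3, n_iter=400):
    """Identify an FIR system with the basic affine projection algorithm."""
    random.seed(1)
    w = [0.0] * taps
    u = [0.0] * taps      # most recent input samples, newest first
    X, D = [], []         # last K regressor vectors and desired samples
    for _ in range(n_iter):
        u = [random.gauss(0.0, 1.0)] + u[:-1]
        X = [u[:]] + X[:K - 1]
        D = [d_of(u)] + D[:K - 1]
        k = len(X)
        # Error vector e = d - X w over the K most recent regressors.
        e = [D[i] - sum(X[i][j] * w[j] for j in range(taps)) for i in range(k)]
        # G = X X^T + delta*I  (k x k Gram matrix, regularized).
        G = [[sum(X[i][t] * X[l][t] for t in range(taps)) + (delta if i == l else 0.0)
              for l in range(k)] for i in range(k)]
        g = gauss_solve(G, e)
        for j in range(taps):
            w[j] += mu * sum(X[i][j] * g[i] for i in range(k))
    return w

true_w = [1.0, -0.5, 0.25, 0.1]
west = apa_identify(lambda u: sum(wi * ui for wi, ui in zip(true_w, u)))
```

Because every one of the K regressors is used at every step, redundant (nearly parallel) vectors inflate the cost without adding information, which is exactly what the proposed grouping and selection procedures are designed to prune.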
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV norm and the constraint involved in the problem. This characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
The Coastcolour project regional algorithm round robin exercise
NASA Astrophysics Data System (ADS)
Ruddick, K.; Brockmann, C.; Doerffer, R.; Lee, Z.; Brotas, V.; Fomferra, N.; Groom, S.; Krasemann, H.; Martinez-Vicente, V.; Sa, C.; Santer, R.; Sathyendranath, S.; Stelzer, K.; Pinnock, S.
2010-10-01
The MERIS instrument delivers a unique dataset of ocean colour measurements of the coastal zone, at 300m resolution and with a unique spectral band set. The motivation for the Coastcolour project is to fully exploit the potential of the MERIS instrument for remote sensing of the coastal zone. The general objective of the project is to develop, demonstrate, validate and intercompare different processing algorithms for MERIS over a global range of coastal water types in order to identify best practices. In this paper the Coastcolour project is presented in general and the Regional Algorithm Round Robin (RARR) exercise is described in detail. The RARR has the objective of determining the best approach to retrieval of chlorophyll a and other marine products (e.g. Inherent Optical Properties) for each of the Coastcolour coastal water test sites. Benchmark datasets of reflectances at MERIS bands will be distributed to algorithm provider participants for testing of both global (Coastcolour and other) algorithms and site-specific local algorithms. Results from all algorithms will be analysed and compared according to a uniform methodology. Participation of algorithm providers from outside the Coastcolour consortium is encouraged.
Phase unwrapping using an extrapolation-projection algorithm
NASA Astrophysics Data System (ADS)
Marendic, Boris; Yang, Yongyi; Stark, Henry
2006-08-01
We explore an approach to the unwrapping of two-dimensional phase functions using a robust extrapolation-projection algorithm. Phase unwrapping is essential for imaging systems that construct the image from phase information. Unlike some existing methods where unwrapping is performed locally on a pixel-by-pixel basis, this work approaches the unwrapping problem from a global point of view. The unwrapping is done iteratively by a modification of the Gerchberg-Papoulis extrapolation algorithm, and the solution is refined by projecting onto the available global data at each iteration. Robustness of the algorithm is demonstrated through its performance in a noisy environment, and in comparison with a least-squares algorithm well-known in the literature.
A unified evaluation of iterative projection algorithms for phase retrieval
Marchesini, S
2006-03-08
Iterative projection algorithms are successfully being used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects. Images obtained computationally allow aberration-free, diffraction-limited imaging and allow new types of imaging using radiation for which no lenses exist. The challenge of this imaging technique is transferred from the lenses to the algorithms. We evaluate these new computational "instruments" developed for the phase retrieval problem, and discuss acceleration strategies.
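The simplest member of this family of computational "instruments" is the error-reduction (Gerchberg-Saxton-type) iteration: alternate projections between the set of signals with the measured Fourier modulus and the set satisfying object-domain constraints. The 1-D toy below is a sketch under stated assumptions (tiny length-8 DFT, a known 3-sample support, real non-negative values); real phase-retrieval instruments work in 2-D and use accelerated variants.

```python
import cmath
import random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fourier_residual(x, mag):
    # Distance from dft(x) to the measured-modulus constraint set.
    return sum((abs(Xk) - m) ** 2 for Xk, m in zip(dft(x), mag)) ** 0.5

def error_reduction(mag, support, x0, iters=200):
    """Alternate projections: enforce the measured Fourier modulus, then the
    object-domain constraints (known support, real and non-negative values)."""
    x = list(x0)
    for _ in range(iters):
        X = dft(x)
        # Fourier-domain projection: keep the phase, replace the modulus.
        X = [m * (Xk / abs(Xk)) if abs(Xk) > 1e-12 else complex(m)
             for m, Xk in zip(mag, X)]
        x = idft(X)
        # Object-domain projection: zero outside support, clip negatives.
        x = [max(v.real, 0.0) if s else 0.0 for v, s in zip(x, support)]
    return x

true = [1.0, 2.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
support = [True, True, True, False, False, False, False, False]
mag = [abs(Xk) for Xk in dft(true)]
random.seed(3)
x0 = [random.random() if s else 0.0 for s in support]
res_before = fourier_residual(x0, mag)
rec = error_reduction(mag, support, x0)
res_after = fourier_residual(rec, mag)
```

A known property of this alternating-projection scheme is that the distance to the modulus constraint never increases from iteration to iteration, though in general it can stagnate at local minima, which is what the accelerated algorithms evaluated in the paper aim to avoid.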
Projection learning algorithm for threshold - controlled neural networks
Reznik, A.M.
1995-03-01
The projection learning algorithm proposed in [1, 2] and further developed in [3] substantially improves the efficiency of memorizing information and accelerates the learning process in neural networks. This algorithm is compatible with the completely connected neural network architecture (the Hopfield network [4]), but its application to other networks involves a number of difficulties. The main difficulties include constraints on interconnection structure and the need to eliminate the state uncertainty of latent neurons if such are present in the network. Despite the encouraging preliminary results of [3], further extension of the applications of the projection algorithm therefore remains problematic. In this paper, which is a continuation of the work begun in [3], we consider threshold-controlled neural networks. Networks of this type are quite common. They represent the receptor neuron layers in some neurocomputer designs. A similar structure is observed in the lower divisions of biological sensory systems [5]. In multilayer projection neural networks with lateral interconnections, the neuron layers or parts of these layers may also have the structure of a threshold-controlled completely connected network. Here the thresholds are the potentials delivered through the projection connections from other parts of the network. The extension of the projection algorithm to the class of threshold-controlled networks may accordingly prove to be useful both for extending its technical applications and for better understanding of the operation of the nervous system in living organisms.
A Turn-Projected State-Based Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Lewis, Timothy A.
2013-01-01
State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. The prediction of the trajectory of an aircraft is therefore based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
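The baseline that the turn-projected algorithm improves upon is straight-line state projection: extrapolate both aircraft along their current velocity vectors and test whether the closest point of approach falls inside a protection radius. The sketch below shows only that baseline, with assumed units and thresholds; the paper's contribution is replacing the linear projection with a turn-aware one.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def min_separation(p1, v1, p2, v2, horizon):
    """Project both states linearly and return the minimum separation
    over the look-ahead window [0, horizon]."""
    dp = [a - b for a, b in zip(p1, p2)]   # relative position
    dv = [a - b for a, b in zip(v1, v2)]   # relative velocity
    dv2 = dot(dv, dv)
    # Time of closest approach, clamped into the look-ahead window.
    t = 0.0 if dv2 == 0.0 else max(0.0, min(horizon, -dot(dp, dv) / dv2))
    rel = [p + t * v for p, v in zip(dp, dv)]
    return dot(rel, rel) ** 0.5

def conflict(p1, v1, p2, v2, radius=5.0, horizon=300.0):
    return min_separation(p1, v1, p2, v2, horizon) < radius

# Head-on traffic closes to zero separation -> conflict.
head_on = conflict([0.0, 0.0], [1.0, 0.0], [100.0, 0.0], [-1.0, 0.0])
# Parallel traffic at constant 20-unit offset -> no conflict.
parallel = conflict([0.0, 0.0], [1.0, 0.0], [0.0, 20.0], [1.0, 0.0])
```

If the intruder is mid-turn, the current velocity vector misrepresents its future path, so this linear projection both misses real conflicts and raises false ones; detecting the turn from past state vectors is the fix the paper proposes.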
An improved back projection algorithm of ultrasound tomography
Xiaozhen, Chen; Mingxu, Su; Xiaoshu, Cai
2014-04-11
A binary logic back projection algorithm is improved in this work for the development of a fast ultrasound tomography system with better image reconstruction. The new algorithm is characterized by an extra logical value '2' and dual-threshold processing of the collected raw data. To compare with the original algorithm, a numerical simulation was first conducted and verified against COMSOL simulations, and then an ultrasonic tomography system was built to perform experiments with one, two, and three cylindrical objects. The object images are reconstructed by inverting the signal matrix acquired by the transducer array after preconditioning, and the corresponding spatial imaging errors clearly indicate that the improved back projection method achieves a better inversion effect.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
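The Gauss-Seidel iteration at the heart of the conventional approach (and of the proposed stabilized variant) is a generic linear solver that sweeps through the unknowns, reusing each freshly updated value within the same sweep. The sketch below shows plain Gauss-Seidel on an assumed small diagonally dominant system, not the hearing-aid feedback canceller itself.

```python
def gauss_seidel(A, b, iters=100):
    """Solve A x = b iteratively, reusing freshly updated entries in each sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system, for which Gauss-Seidel is guaranteed to converge.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = gauss_seidel(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
```

In the AP context, the system being solved involves the regressor Gram matrix; running only a partial Gauss-Seidel solve approximates the AP update cheaply, and the paper's residue-based modification addresses the convergence problems that approximation causes at small step sizes.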
The PRISM project: Infrastructure and algorithms for parallel eigensolvers
Bischof, C.; Sun, X.; Huss-Lederman, S.; Tsao, A.
1993-12-31
The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly reviewing the SYISDA approach, we discuss the algorithmic highlights of a distributed-memory implementation of an eigensolver based on this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We also present performance results of these kernels as well as the overall SYISDA implementation on the Intel Touchstone Delta prototype and the IBM SP/1.
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and the l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.
Grant, C W; Lenderman, J S; Gansemer, J D
2011-02-24
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
Abejuela, Harmony Raylen; Osser, David N
2016-01-01
This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.
An Overview of the JPSS Ground Project Algorithm Integration Process
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.
2015-12-01
The smooth transition, implementation, and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experience and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Services (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi National Polar-orbiting Partnership (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the earliest delivery stages by the science teams to full operationalization in the Interface Data Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules), selected by the NOAA STAR Calibration and Validation (Cal/Val) teams, to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package is run over one full orbit of Science Data Records (SDRs), using calibration look-up tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to
Mohammad, Othman; Osser, David N
2014-01-01
This new algorithm for the pharmacotherapy of acute mania was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. The authors conducted a literature search in PubMed and reviewed key studies, other algorithms and guidelines, and their references. Treatments were prioritized considering three main considerations: (1) effectiveness in treating the current episode, (2) preventing potential relapses to depression, and (3) minimizing side effects over the short and long term. The algorithm presupposes that clinicians have made an accurate diagnosis, decided how to manage contributing medical causes (including substance misuse), discontinued antidepressants, and considered the patient's childbearing potential. We propose different algorithms for mixed and nonmixed mania. Patients with mixed mania may be treated first with a second-generation antipsychotic, of which the first choice is quetiapine because of its greater efficacy for depressive symptoms and episodes in bipolar disorder. Valproate and then either lithium or carbamazepine may be added. For nonmixed mania, lithium is the first-line recommendation. A second-generation antipsychotic can be added. Again, quetiapine is favored, but if quetiapine is unacceptable, risperidone is the next choice. Olanzapine is not considered a first-line treatment due to its long-term side effects, but it could be second-line. If the patient, whether mixed or nonmixed, is still refractory to the above medications, then depending on what has already been tried, consider carbamazepine, haloperidol, olanzapine, risperidone, and valproate first tier; aripiprazole, asenapine, and ziprasidone second tier; and clozapine third tier (because of its weaker evidence base and greater side effects). Electroconvulsive therapy may be considered at any point in the algorithm if the patient has a history of positive response or is intolerant of medications.
Staff line detection and revision algorithm based on subsection projection and correlation algorithm
NASA Astrophysics Data System (ADS)
Yang, Yin-xian; Yang, Ding-li
2013-03-01
Staff line detection plays a key role in OMR technology and is a precondition for the subsequent segmentation and recognition of music sheets. To handle the horizontal inclination and curvature of staff lines and the vertical inclination of the image, which often occur in music scores, an improved approach based on subsection projection is put forward to detect the original staff lines and revise them, in an effort to implement staff line detection more successfully. Experimental results show the presented algorithm can detect and revise staff lines quickly and effectively.
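As a rough illustration of the subsection-projection idea (not the paper's exact procedure), a binary score image can be split into vertical sections and the row-wise ink profile thresholded per section, so that slightly inclined or curved staves are still caught locally. The section count and threshold below are illustrative assumptions:

```python
import numpy as np

def detect_staff_lines(binary, n_sections=4, thresh=0.6):
    """Detect candidate staff-line rows via subsection projection.

    binary: 2D array, 1 = ink pixel. The image is split into vertical
    sections so that slightly inclined or curved staff lines are still
    detected locally. Returns, per section, the rows whose ink ratio
    exceeds `thresh`.
    """
    h, w = binary.shape
    bounds = np.linspace(0, w, n_sections + 1, dtype=int)
    rows_per_section = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        profile = binary[:, a:b].sum(axis=1) / max(b - a, 1)
        rows_per_section.append(np.where(profile >= thresh)[0])
    return rows_per_section
```

Comparing the detected rows across neighboring sections is what would then allow the per-section line segments to be joined and revised into full (possibly curved) staff lines.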
Cascade Error Projection: A Learning Algorithm for Hardware Implementation
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1996-01-01
In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm operates on only one layer, whereas the other set of weights can be calculated deterministically. In association with the dynamical step-size change concept to convert the weight update from an infinite space into a finite space, the relation between the current step size and the previous energy level is also given, and the estimation procedure for the optimal step size is used to validate our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select from other methods (such as conjugate gradient descent or Newton's second-order method) one that will be a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis presents the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower-bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower-bit weight quantization.
Buscema, C A; Abbasi, Q A; Barry, D J; Lauve, T H
2000-10-01
The Forensic Algorithm Project (FAP) was born of the need for a holistic approach in the treatment of the inmate with schizophrenia. Schizophrenia was chosen as the first entity to be addressed by the algorithm because of its refractory nature and high rate of recidivism in the correctional setting. Schizophrenia is regarded as a spectrum disorder, with symptom clusters and behaviors ranging from positive to negative symptoms to neurocognitive dysfunction and affective instability. Furthermore, the clinical picture is clouded by Axis II symptomatology (particularly prominent in the inmate population), comorbid Axis I disorders, and organicity. Four subgroups of schizophrenia were created to coincide with common clinical presentations in the forensic inpatient facility and also to parallel 4 tracks of intervention, consisting of pharmacologic management and programming recommendations. The algorithm begins with any antipsychotic medication and proceeds to atypical neuroleptic usage, augmentation with other psychotropic agents, and, finally, the use of clozapine as the common pathway for refractory schizophrenia. Outcome measurement of pharmacologic intervention is assessed every 6 weeks through the use of a 4-item subscale, specific for each forensic subgroup. A "floating threshold" of 40% symptom severity reduction on Positive and Negative Syndrome Scale and Brief Psychiatric Rating Scale items over a 6-week period is considered an indication for neuroleptic continuation. The forensic algorithm differs from other clinical practice guidelines in that specific programming in certain prison environments is stipulated. Finally, a social commentary on the importance of state-of-the-art psychiatric treatment for all members of society is woven into the clinical tapestry of this article.
An Algorithm for Projecting Points onto a Patched CAD Model
Henshaw, W D
2001-05-29
We are interested in building structured overlapping grids for geometries defined by computer-aided design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to its neighbors. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
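The local refinement step of such a projection, finding parameters (u, v) on a smooth patch closest to a point p, can be sketched as a Gauss-Newton iteration on |S(u,v) - p|², with the coarse global triangulation supplying the starting guess. This is a generic sketch, not the paper's algorithm: a unit-sphere patch stands in for a CAD patch, and plain undamped Gauss-Newton is only reliable when p is already close to the surface:

```python
import numpy as np

def project_point(S, Su, Sv, p, uv0, iters=30):
    """Gauss-Newton projection of p onto a parametric surface S(u, v).
    S, Su, Sv: callables returning the surface point and its partial
    derivatives. uv0: starting guess (e.g. from a coarse triangulation).
    """
    uv = np.asarray(uv0, dtype=float)
    for _ in range(iters):
        r = S(*uv) - p                        # residual vector
        J = np.column_stack([Su(*uv), Sv(*uv)])
        uv += np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations
    return uv

# Stand-in patch: part of the unit sphere (hypothetical example).
S  = lambda u, v: np.array([np.cos(u) * np.cos(v), np.sin(u) * np.cos(v), np.sin(v)])
Su = lambda u, v: np.array([-np.sin(u) * np.cos(v), np.cos(u) * np.cos(v), 0.0])
Sv = lambda u, v: np.array([-np.cos(u) * np.sin(v), -np.sin(u) * np.sin(v), np.cos(v)])
```

A production version would add damping or use the full Newton Hessian so that points far from the surface also converge.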
Lee, Heui Chang; Song, Bongyong; Kim, Jin Sung; Jung, James J.; Li, H. Harold; Mutic, Sasa; Park, Justin C.
2016-01-01
The purpose of this study is to develop a fast, convergence-proven CBCT reconstruction framework based on compressed sensing theory that not only lowers the imaging dose but is also computationally practicable in a busy clinic. We simplified the original mathematical formulation of gradient projection for sparse reconstruction (GPSR) to minimize the number of forward and backward projections for the line search process at each iteration. GPSR-based algorithms generally showed improved image quality over the FDK algorithm, especially when only a small number of projection data were available. When there were only 40 projections from a 360-degree fan beam geometry, the quality of the GPSR-based algorithms surpassed the FDK algorithm within 10 iterations in terms of the mean squared relative error. Our proposed GPSR algorithm converged as fast as the conventional GPSR with a reasonably low computational complexity. The outcomes demonstrate that the proposed GPSR algorithm is attractive for use in real-time applications such as on-line IGRT. PMID:27894103
A frameshift error detection algorithm for DNA sequencing projects.
Fichant, G A; Quentin, Y
1995-01-01
During the determination of DNA sequences, frameshift errors are not the most frequent but they are the most bothersome as they corrupt the amino acid sequence over several residues. Detection of such errors by sequence alignment is only possible when related sequences are found in the databases. To avoid this limitation, we have developed a new tool based on the distribution of non-overlapping 3-tuples or 6-tuples in the three frames of an ORF. The method relies upon the result of a correspondence analysis. It has been extensively tested on Bacillus subtilis and Saccharomyces cerevisiae sequences and has also been examined with human sequences. The results indicate that it can detect frameshift errors affecting as few as 20 bp with a low rate of false positives (no more than 1.0/1000 bp scanned). The proposed algorithm can be used to scan a large collection of data, but it is mainly intended for laboratory practice as a tool for checking the quality of the sequences produced during a sequencing project. PMID:7659513
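The core idea, that the 3-tuple (codon) composition of the true reading frame differs from that of the other two frames, so a frameshift shows up as a change in the best-scoring frame along the sequence, can be sketched as follows. The reference codon frequencies and window positions are toy assumptions; the paper's actual method is built on correspondence analysis, which this sketch does not reproduce:

```python
import math

def frame_log_likelihoods(seq, ref_freq, start, end, floor=1e-4):
    """Log-likelihood of each of the three reading frames over
    seq[start:end], scored against reference codon frequencies.
    Use window starts that are multiples of 3 so that frame labels
    are comparable across windows; unseen codons get `floor`.
    """
    scores = []
    for f in range(3):
        s = 0.0
        for i in range(start + f, end - 2, 3):
            s += math.log(ref_freq.get(seq[i:i + 3], floor))
        scores.append(s)
    return scores
```

Scanning a sliding window and flagging positions where the argmax frame changes localizes a candidate frameshift, which is how an error affecting only a short stretch (the abstract cites as few as 20 bp) becomes detectable.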
Practical eight-frame algorithms for fringe projection profilometry.
Gutiérrez-García, Juan C; Mosiño, J F; Martínez, Amalia; Gutiérrez-García, Tania A; Vázquez-Domínguez, Ella; Arroyo-Cabrales, Joaquín
2013-01-14
In this paper we present several eight-frame algorithms for their use in phase shifting profilometry and their application for the analysis of semi-fossilized materials. All algorithms are obtained from a set of two-frame algorithms and designed to compensate common errors such as phase shift detuning and bias errors.
Korean Medication Algorithm Project for Bipolar Disorder: third revision
Woo, Young Sup; Lee, Jung Goo; Jeong, Jong-Hyun; Kim, Moon-Doo; Sohn, Inki; Shim, Se-Hoon; Jon, Duk-In; Seo, Jeong Seok; Shin, Young-Chul; Min, Kyung Joon; Yoon, Bo-Hyun; Bahk, Won-Myong
2015-01-01
Objective To constitute the third revision of the guidelines for the treatment of bipolar disorder issued by the Korean Medication Algorithm Project for Bipolar Disorder (KMAP-BP 2014). Methods A 56-item questionnaire was used to obtain the consensus of experts regarding pharmacological treatment strategies for the various phases of bipolar disorder and for special populations. The review committee included 110 Korean psychiatrists and 38 experts for child and adolescent psychiatry. Of the committee members, 64 general psychiatrists and 23 child and adolescent psychiatrists responded to the survey. Results The treatment of choice (TOC) for euphoric, mixed, and psychotic mania was the combination of a mood stabilizer (MS) and an atypical antipsychotic (AAP); the TOC for acute mild depression was monotherapy with MS or AAP; and the TOC for moderate or severe depression was MS plus AAP/antidepressant. The first-line maintenance treatment following mania or depression was MS monotherapy or MS plus AAP; the first-line treatment after mania was AAP monotherapy; and the first-line treatment after depression was lamotrigine (LTG) monotherapy, LTG plus MS/AAP, or MS plus AAP plus LTG. The first-line treatment strategy for mania in children and adolescents was MS plus AAP or AAP monotherapy. For geriatric bipolar patients, the TOC for mania was AAP/MS monotherapy, and the TOC for depression was AAP plus MS or AAP monotherapy. Conclusion The expert consensus in the KMAP-BP 2014 differed from that in previous publications; most notably, the preference for AAP was increased in the treatment of acute mania, depression, and maintenance treatment. There was increased expert preference for the use of AAP and LTG. The major limitation of the present study is that it was based on the consensus of Korean experts rather than on experimental evidence. PMID:25750530
NASA Astrophysics Data System (ADS)
Yannibelli, Virginia; Amandi, Analía
2013-01-01
In this article, the project scheduling problem is addressed in order to assist project managers at the early stage of scheduling. Thus, as part of the problem, two priority optimization objectives for managers at that stage are considered. One of these objectives is to assign the most effective set of human resources to each project activity. The effectiveness of a human resource is considered to depend on its work context. The other objective is to minimize the project makespan. To solve the problem, a multi-objective evolutionary algorithm is proposed. This algorithm designs feasible schedules for a given project and evaluates the designed schedules in relation to each objective. The algorithm generates an approximation to the Pareto set as a solution to the problem. The computational experiments carried out on nine different instance sets are reported.
Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number
NASA Astrophysics Data System (ADS)
Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo
Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.
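A minimal sketch of the underlying mechanism: one affine projection update gated by the condition number of the input data matrix, so that ill-conditioned data (which inflate misalignment) are skipped. The gate threshold, step size, and regularization constant are illustrative assumptions, and the selection rule here is simpler than the method the abstract describes:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, eps=1e-6, max_cond=1e3):
    """One affine projection update with a condition-number gate.
    X: (K, N) matrix whose rows are the K most recent input vectors,
    d: (K,) desired responses, w: (N,) current filter estimate.
    The update is skipped when X is ill-conditioned.
    """
    if np.linalg.cond(X) > max_cond:
        return w                           # skip ill-conditioned data
    e = d - X @ w                          # a priori error vector
    g = X.T @ np.linalg.solve(X @ X.T + eps * np.eye(len(d)), e)
    return w + mu * g
```

With mu = 1 the update projects w onto the affine subspace consistent with the current data block, which is the standard APA geometry the data-selective variants build on.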
Two algorithms to compute projected correlation functions in molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Carof, Antoine; Vuilleumier, Rodolphe; Rotenberg, Benjamin
2014-03-01
An explicit derivation of the Mori-Zwanzig orthogonal dynamics of observables is presented and leads to two practical algorithms to compute exactly projected observables (e.g., random noise) and projected correlation function (e.g., memory kernel) from a molecular dynamics trajectory. The algorithms are then applied to study the diffusive dynamics of a tagged particle in a Lennard-Jones fluid, the properties of the associated random noise, and a decomposition of the corresponding memory kernel.
A denoising algorithm for projection measurements in cone-beam computed tomography.
Karimi, Davood; Ward, Rabab
2016-02-01
The ability to reduce the radiation dose in computed tomography (CT) is limited by the excessive quantum noise present in the projection measurements. Sinogram denoising is, therefore, an essential step towards reconstructing high-quality images, especially in low-dose CT. Effective denoising requires accurate modeling of the photon statistics and of the prior knowledge about the characteristics of the projection measurements. This paper proposes an algorithm for denoising low-dose sinograms in cone-beam CT. The proposed algorithm is based on minimizing a cost function that includes a measurement consistency term and two regularizations in terms of the gradient and the Hessian of the sinogram. This choice of the regularization is motivated by the nature of CT projections. We use a split Bregman algorithm to minimize the proposed cost function. We apply the algorithm on simulated and real cone-beam projections and compare the results with another algorithm based on bilateral filtering. Our experiments with simulated and real data demonstrate the effectiveness of the proposed algorithm. Denoising of the projections with the proposed algorithm leads to a significant reduction of the noise in the reconstructed images without oversmoothing the edges or introducing artifacts.
Rios, A. B.; Valda, A.; Somacal, H.
2007-10-26
Usually a tomographic procedure requires a set of projections around the object under study and mathematical processing of those projections through reconstruction algorithms. An accurate reconstruction requires a proper number of projections (angular sampling) and a proper number of elements in each projection (linear sampling). However, in several practical cases it is not possible to fulfill these conditions, leading to the so-called problem of few projections. In this case, iterative reconstruction algorithms are more suitable than analytic ones. In this work we present a program written in C++ that provides an environment for two iterative algorithm implementations, one algebraic and the other statistical. The software allows the user a full definition of the acquisition and reconstruction geometries used by the reconstruction algorithms, and also allows the user to perform projection and backprojection operations. A set of analysis tools was implemented for the characterization of the convergence process. We analyze the performance of the algorithms on numerical phantoms and present the reconstruction of experimental data with few projections coming from transmission X-ray and micro-PIXE (Particle-Induced X-ray Emission) images.
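For reference, the simplest algebraic reconstruction scheme of the kind used in few-projection settings is ART (Kaczmarz's method), which sweeps over the measurement equations and projects the current estimate onto each hyperplane in turn. This is a generic sketch, not the paper's C++ implementation, which may differ in detail:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """ART (Kaczmarz) iterative reconstruction for A x = b.
    Each measurement row a_i defines a hyperplane a_i . x = b_i; the
    estimate is projected onto each hyperplane in turn, with an
    optional relaxation factor.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x
```

With few projections the system is underdetermined; Kaczmarz then converges to the minimum-norm solution consistent with the data, which is one reason iterative methods behave more gracefully than analytic ones in this regime.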
Filtered back-projection algorithm for Compton telescopes
Gunter, Donald L.
2008-03-18
A method for the conversion of Compton camera data into a 2D image of the incident-radiation flux on the celestial sphere includes detecting coincident gamma radiation flux arriving from various directions of a 2-sphere. These events are mapped by back-projection onto the 2-sphere to produce a convolution integral that is subsequently stereographically projected onto a 2-plane to produce a second convolution integral which is deconvolved by the Fourier method to produce an image that is then projected onto the 2-sphere.
ERIC Educational Resources Information Center
Emslie, Graham J.; Hughes, Carroll W.; Crismon, M. Lynn; Lopez, Molly; Pliszka, Steve; Toprac, Marcia G.; Boemer, Christine
2004-01-01
Objective: To evaluate the feasibility and impact on clinical response and function associated with the use of an algorithm-driven disease management program (ALGO) for children and adolescents treated for depression with or without attention-deficit/hyperactivity disorder (ADHD) in community mental health centers. Method: Interventions included…
Wide-field wide-band Interferometric Imaging: The WB A-Projection and Hybrid Algorithms
NASA Astrophysics Data System (ADS)
Bhatnagar, S.; Rau, U.; Golap, K.
2013-06-01
Variations of the antenna primary beam (PB) pattern as a function of time, frequency, and polarization form one of the dominant direction-dependent effects at most radio frequency bands. These gains may also vary from antenna to antenna. The A-Projection algorithm, published earlier, accounts for the effects of the narrow-band antenna PB in full polarization. In this paper, we present the wide-band A-Projection algorithm (WB A-Projection) to include the effects of wide bandwidth in the A-term itself and show that the resulting algorithm simultaneously corrects for the time, frequency, and polarization dependence of the PB. We discuss the combination of the WB A-Projection and the multi-term multi-frequency synthesis (MT-MFS) algorithm for simultaneous mapping of the sky brightness distribution and the spectral index distribution across a wide field of view. We also discuss the use of the narrow-band A-Projection algorithm in hybrid imaging schemes that account for the frequency dependence of the PB in the image domain.
Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.
2008-01-01
Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem; it assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember-search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (like IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity (e.g., bedrock lithologies) in imagery, and thus a spatial constraint would be beneficial in the endmember search. An additional product of SPA is data describing the change of the simplex volume ratio between successive iterations during endmember extraction. It illustrates the influence of a new endmember on the data structure and provides information on the convergence of the algorithm. It can provide a general guideline to constrain the total number of endmembers in a search. PMID:27879768
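The basic successive-projection step (without the spatial-adjacency constraint that is this paper's contribution) can be sketched as: repeatedly take the pixel spectrum with the largest norm as the next endmember, then project all spectra onto the orthogonal complement of that endmember:

```python
import numpy as np

def spa(X, n_endmembers):
    """Basic Successive Projection Algorithm.
    X: (n_pixels, n_bands) spectra. Returns indices of the pixels
    selected as endmembers. The paper's spatial-adjacency constraint
    is not included in this sketch.
    """
    R = X.astype(float).copy()
    idx = []
    for _ in range(n_endmembers):
        k = int(np.argmax(np.linalg.norm(R, axis=1)))
        idx.append(k)
        u = R[k] / np.linalg.norm(R[k])
        R = R - np.outer(R @ u, u)   # project out the new endmember
    return idx
```

Because mixed pixels are convex combinations of endmembers, the extreme (largest-norm) residual spectrum at each step is an endmember candidate; the projection removes its contribution before the next search.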
Neural network algorithm for image reconstruction using the "grid-friendly" projections.
Cierniak, Robert
2011-09-01
This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the "grid-friendly" angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum likelihood methodology. The reconstruction algorithm proposed in this work is then adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way outperforms conventional methods in reconstructed image quality.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.
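The elementary operation that DSSP applies repeatedly, projecting a data matrix onto (or out of) a given subspace, looks like this in generic form. The construction of the spatial- and time-domain signal subspaces themselves is the substance of the paper and is not reproduced here:

```python
import numpy as np

def project_out(B, U):
    """Project the columns of B onto the orthogonal complement of the
    subspace spanned by the columns of U. An orthonormal basis Q is
    obtained via QR so that Q @ (Q.T @ B) is the in-subspace part.
    """
    Q, _ = np.linalg.qr(U)
    return B - Q @ (Q.T @ B)
```

In DSSP terms, applying this with the estimated time-domain interference subspace (as row-space projection) removes the interference component from the measured data matrix.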
An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem
NASA Astrophysics Data System (ADS)
Afshar Nadjafi, Behrouz; Shadrokh, Shahram
This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch and bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch and bound tree. Finally, some test problems are solved and computational results are reported.
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning
2017-03-01
The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp–Davis–Kress method based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artefacts.
Fast maximum intensity projection algorithm using shear warp factorization and reduced resampling.
Fang, Laifa; Wang, Yi; Qiu, Bensheng; Qian, Yuancheng
2002-04-01
Maximum intensity projection (MIP) is routinely used to view MRA and other volumetric angiographic data. The straightforward implementation of MIP is ray casting, which traces a volumetric data set in a computationally expensive manner. This article reports a fast MIP algorithm using shear-warp factorization and reduced resampling that drastically reduces the redundancy in the projection computations, thereby speeding up MIP by more than 10 times.
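The shear step of shear-warp factorization can be illustrated in a simplified integer-shift form: each slice along the viewing axis is offset before a running maximum is taken, so the expensive per-ray resampling of naive ray casting is avoided. The final warp/resampling stage and the paper's reduced-resampling refinements are omitted from this sketch:

```python
import numpy as np

def shear_mip(volume, shear_per_slice):
    """MIP under a sheared viewing direction.
    volume: (nz, ny, nx) array. Each z-slice is shifted by an integer
    x-offset proportional to z (the shear), then a running maximum is
    accumulated into an intermediate image.
    """
    nz, ny, nx = volume.shape
    max_shift = int(abs(shear_per_slice) * (nz - 1)) + 1
    out = np.zeros((ny, nx + 2 * max_shift))
    for z in range(nz):
        s = max_shift + int(round(shear_per_slice * z))
        out[:, s:s + nx] = np.maximum(out[:, s:s + nx], volume[z])
    return out
```

Because the shear reduces projection to axis-aligned slice compositing, each voxel is touched exactly once, which is where the speedup over per-ray tracing comes from.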
Designing of an environmental assessment algorithm for surface mining projects.
Mirmohammadi, Mirsaleh; Gholamnejad, Javad; Fattahpour, Vahidoddin; Seyedsadri, Pejman; Ghorbani, Yousef
2009-06-01
This paper describes the method used to quantify the environmental impact of mining activities in surface mine projects. The affected environment was broken down into thirteen components, such as human health and immunity, surface water, air quality, etc. The effect of twenty impacting factors from the mining and milling activities was then calculated for each Environmental Component. Environmental assessments are often performed by using matrix methods in which one dimension of the matrix is the "Impacting Factor" and the other one is the "Environmental Components". For the presented matrix method, each Impacting Factor was first given a magnitude between -10 and 10. These factors are used to set up a matrix named the Impacting Factor Matrix, whose elements represent the Impacting Factor values. The effects of each Impacting Factor on each Environmental Component were then quantified by multiplying the Impacting Factor Matrix by a Weighting Factor Matrix, whose elements reflect the effect of each Impacting Factor on each Environmental Component. The outlined method was originally developed for a mining and milling operation in Iran, but it can successfully be used for mining ventures and more general industrial activities in other countries in accordance with their environmental regulations and laws.
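In matrix terms, the assessment reduces to one multiplication: a vector of impacting-factor magnitudes times a weighting matrix yields one aggregate score per environmental component. The numbers below are invented for illustration (the paper uses twenty factors and thirteen components):

```python
import numpy as np

# Hypothetical miniature example: 3 impacting factors, 2 components.
impact = np.array([-6.0, 3.0, -2.0])        # magnitudes in [-10, 10]
weights = np.array([[0.5, 0.1],             # effect of each factor on
                    [0.2, 0.7],             # each environmental
                    [0.3, 0.2]])            # component
component_scores = impact @ weights         # one score per component
```

A negative aggregate score flags a component that the project, on balance, harms; a positive one, a component it benefits.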
Improvement of wavelet threshold filtered back-projection image reconstruction algorithm
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2014-11-01
Image reconstruction techniques have been applied in many fields, including medical imaging modalities such as X-ray computed tomography (X-CT), positron emission tomography (PET), and magnetic resonance imaging (MRI), but the reconstruction quality is often unsatisfactory because the original projection data are inevitably polluted by noise. Although traditional filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters can suppress some noise, they generate the Gibbs oscillation phenomenon, and the artifacts introduced by back-projection are not greatly improved. Wavelet threshold denoising can overcome the interference of noise with image reconstruction. Since the traditional soft and hard threshold functions have some inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that the improved algorithm largely eliminates the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the effectiveness of the improved algorithm was verified by comparing two evaluation criteria, mean square error (MSE) and peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimum dual threshold values of the improved wavelet threshold function were obtained.
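For context, the traditional hard and soft wavelet threshold functions, and one common way of blending them to soften hard thresholding's discontinuity while reducing soft thresholding's bias, are shown below. The paper's improved threshold function is defined differently and is not reproduced here:

```python
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients with |c| >= t, zero the rest (discontinuous at t)."""
    return np.where(np.abs(c) >= t, c, 0.0)

def soft_threshold(c, t):
    """Shrink all coefficients toward zero by t (continuous but biased)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def blend_threshold(c, t, alpha=0.5):
    """A simple hard/soft compromise; a stand-in, not the paper's function."""
    return alpha * hard_threshold(c, t) + (1 - alpha) * soft_threshold(c, t)
```

In the FBP pipeline, these functions are applied to the wavelet coefficients of the projection data (or of the filtered sinogram) before back-projection, which is where the denoising takes effect.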
The industrial use of filtered back projection and maximum entropy reconstruction algorithms
Kruger, R.P.; London, J.R.
1982-11-01
Industrial tomography involves applications where experimental conditions may vary greatly. Some applications resemble more conventional medical tomography because a large number of projections are available. However, in other situations, scan time restrictions, object accessibility, or equipment limitations will reduce the number and/or angular range of the projections. This paper presents results from studies where both experimental conditions exist. The use of two algorithms, the more conventional filtered back projection (FBP) and the maximum entropy (MENT), are discussed and applied to several examples.
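The filtering half of filtered back projection can be sketched as multiplication by the ramp |f| in the frequency domain, applied row by row to the sinogram (the back-projection step and the MENT algorithm are not shown):

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp (|f|) filter to each projection row, the
    filtering step of filtered back projection. sinogram has one
    projection per row.
    """
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))          # |f| in cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
```

The ramp's amplification of high frequencies is also why FBP degrades with few or noisy projections, the regime where the abstract finds maximum entropy reconstruction attractive.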
Quantum algorithm for universal implementation of the projective measurement of energy.
Nakayama, Shojun; Soeda, Akihito; Murao, Mio
2015-05-15
A projective measurement of energy (PME) on a quantum system is a quantum measurement determined by the Hamiltonian of the system. PME protocols exist when the Hamiltonian is given in advance. Unknown Hamiltonians can be identified by quantum tomography, but the time cost to achieve a given accuracy increases exponentially with the size of the quantum system. In this Letter, we improve the time cost by adapting quantum phase estimation, an algorithm designed for computational problems, to measurements on physical systems. We present a PME protocol without quantum tomography for Hamiltonians whose dimension and energy scale are given but which are otherwise unknown. Our protocol implements a PME to arbitrary accuracy without any dimension dependence on its time cost. We also show that another computational quantum algorithm may be used for efficient estimation of the energy scale. These algorithms show that computational quantum algorithms, with suitable modifications, have applications beyond their original context.
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors; as soon as the adaptive filter reaches the steady state, the update is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.
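The core (non-selective) APA update that such selective schemes gate can be sketched as follows; the paper's selection and state-decision procedures are not reproduced, and the system-identification setup is purely illustrative:

```python
import numpy as np

def apa(x, d, order, P=4, mu=0.5, delta=1e-4):
    """Standard affine projection adaptive filter (system identification).
    x: input signal, d: desired signal, order: filter length,
    P: projection order (number of most recent input vectors used)."""
    w = np.zeros(order)
    for n in range(order + P - 1, len(x)):
        # X: order x P matrix whose p-th column is the input vector at time n-p
        X = np.column_stack(
            [x[n - p - order + 1 : n - p + 1][::-1] for p in range(P)]
        )
        e = d[n - np.arange(P)] - X.T @ w          # a priori errors
        # Regularized APA update: w += mu * X (X^T X + delta I)^{-1} e
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w

rng = np.random.default_rng(0)
h = rng.standard_normal(8)            # unknown FIR system
x = rng.standard_normal(2000)         # white input
d = np.convolve(x, h)[: len(x)]       # noiseless desired signal
w = apa(x, d, order=8)
print(np.max(np.abs(w - h)))          # misalignment after adaptation
```

A selective-update variant would simply skip (or shrink) this update whenever the state-decision criterion declares steady state.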
An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction
Mundy, Daniel W.; Herman, Michael G.
2011-01-15
Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
NASA Astrophysics Data System (ADS)
Pokhrel, Damodar
Interstitial and intracavitary brachytherapy plays an essential role in the management of several malignancies. However, the achievable accuracy of brachytherapy treatment for prostate and cervical cancer is limited by the lack of intraoperative planning and adaptive replanning. A major problem in implementing TRUS-based intraoperative planning is the inability of TRUS to accurately localize individual seed poses (positions and orientations) relative to the prostate volume during or after implantation. For locally advanced cervical cancer patients, manual drawing of the source positions on orthogonal films cannot localize the full 3D intracavitary brachytherapy (ICB) applicator geometry. A new iterative forward projection matching (IFPM) algorithm can explicitly localize each individual seed/applicator by iteratively matching computed projections of the post-implant patient with the measured projections. This thesis describes the adaptation and implementation of a novel IFPM algorithm that addresses hitherto unsolved problems in the localization of brachytherapy seeds and applicators. The prototype implementation of the 3-parameter point-seed IFPM algorithm was experimentally validated using a small set of cone-beam CT (CBCT) projections of both phantom and post-implant patient datasets. Geometric uncertainty due to gantry angle inaccuracy was incorporated. The IFPM algorithm was then extended to a 5-parameter elongated line-seed model that automatically reconstructs individual seed orientation as well as position. The accuracy of this algorithm was tested using both synthetic-measured projections of clinically realistic Model-6711 125I seed arrangements and measured projections of an in-house precision-machined prostate implant phantom that allows the orientations and locations of up to 100 seeds to be set to known values. The seed reconstruction error for simulation was less than 0.6 mm/3°. For the physical phantom experiments, IFPM absolute accuracy for
Performance analysis of approximate Affine Projection Algorithm in acoustic feedback cancellation.
Nikjoo S, Mohammad; Seyedi, Amir; Tehrani, Arash Saber
2008-01-01
Acoustic feedback is an annoying problem in several audio applications and especially in hearing aids. Adaptive feedback cancellation techniques have attracted recent attention and show great promise in reducing the deleterious effects of feedback. In this paper, we investigated the performance of a class of adaptive feedback cancellation algorithms, viz. the approximated Affine Projection Algorithm (APA). Mixed results were obtained with the natural speech and music data collected from five different commercial hearing aids in a variety of sub-oscillatory and oscillatory feedback conditions. The performance of the approximated APA was significantly better with music stimuli than with natural speech stimuli.
Developing a synergy algorithm for land surface temperature: the SEN4LST project
NASA Astrophysics Data System (ADS)
Sobrino, Jose A.; Jimenez, Juan C.; Ghent, Darren J.
2013-04-01
Land Surface Temperature (LST) is one of the key parameters in the physics of land-surface processes on regional and global scales, combining the results of all surface-atmosphere interactions and energy fluxes between the surface and the atmosphere. An adequate characterization of the LST distribution and its temporal evolution requires measurements with detailed spatial and temporal frequencies. With the advent of the Sentinel 2 (S2) and 3 (S3) series of satellites, a unique opportunity exists to go beyond the current state of the art of single-instrument algorithms. The Synergistic Use of The Sentinel Missions For Estimating And Monitoring Land Surface Temperature (SEN4LST) project aims at developing techniques to fully utilize the synergy between the S2 and S3 instruments in order to improve LST retrievals. In the framework of the SEN4LST project, three LST retrieval algorithms were proposed using the thermal infrared bands of the Sea and Land Surface Temperature Radiometer (SLSTR) instrument on board the S3 platform: split-window (SW), dual-angle (DA), and a combined algorithm using both split-window and dual-angle techniques (SW-DA). One of the objectives of the project is to select the best algorithm to generate LST products from the synergy between the S2/S3 instruments. In this sense, validation is a critical step in the selection process for the best-performing candidate algorithm. A unique match-up database constructed at the University of Leicester (UoL) of in situ observations from over twenty ground stations and corresponding brightness temperature (BT) and LST match-ups from multi-sensor overpasses is utilised for validating the candidate algorithms. Furthermore, their performance is also evaluated against the standard ESA LST product and the enhanced offline UoL LST product. In addition, a simulation dataset is constructed using 17 synthetic images of LST and the radiative transfer model MODTRAN run under 66 different atmospheric conditions. Each candidate LST
BSIRT: a block-iterative SIRT parallel algorithm using curvilinear projection model.
Zhang, Fa; Zhang, Jingrong; Lawrence, Albert; Ren, Fei; Wang, Xuan; Liu, Zhiyong; Wan, Xiaohua
2015-03-01
Large-field high-resolution electron tomography enables visualizing detailed mechanisms under global structure. As the field of view enlarges, reconstruction distortions and processing time become more critical. Using the curvilinear projection model can improve the quality of large-field ET reconstruction, but its computational complexity further exacerbates the processing time. Moreover, there has been no parallel GPU strategy for iterative reconstruction with the curvilinear projection model. Here we propose a new block-iterative SIRT parallel algorithm with the curvilinear projection model (BSIRT) for large-field ET reconstruction, to improve the quality of the reconstruction and accelerate the reconstruction process. We also develop several key techniques, including a block-iterative method with the curvilinear projection, a scope-based data decomposition method, and a page-based data transfer scheme, to implement the parallelization of BSIRT on a GPU platform. Experimental results show that BSIRT can improve the reconstruction quality as well as the speed of the reconstruction process.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for the refinement of false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model, and a simpler projective mapping. This approach considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology as well as on a Virtex-6 field-programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
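The RANSAC consensus loop used above for rejecting false matches is generic; as a minimal illustration of the principle (using 2-D line fitting rather than the paper's four-submodel projective mapping), it can be sketched as:

```python
import numpy as np

def ransac_line(pts, iters=200, tol=0.05, seed=0):
    """Minimal RANSAC: repeatedly fit a model (here a 2-D line) to a
    random minimal sample and keep the hypothesis with the most inliers.
    The same consensus loop rejects false matches when the model is a
    projective mapping estimated from point correspondences instead."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)  # minimal sample
        p, q = pts[i], pts[j]
        d = q - p
        n = np.array([-d[1], d[0]])        # line normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                        # degenerate sample
        dist = np.abs((pts - p) @ (n / norm))   # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 80)
line_pts = np.column_stack([x, 2 * x])      # 80 true matches on y = 2x
outliers = rng.uniform(-5, 5, (20, 2))      # 20 gross mismatches
pts = np.vstack([line_pts, outliers])
mask = ransac_line(pts)                     # True for retained (inlier) points
```

In the hardware setting the expensive part is evaluating the model residual for every correspondence per hypothesis, which is what the submodel decomposition keeps cheap in fixed point.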
NASA Astrophysics Data System (ADS)
Kochanek, Anna
2015-12-01
The process of area development and planning in compliance with conditions outlined in the Zoning Scheme is significant because of the current rapid development of rural and urban areas. The verification of project documentation in terms of observing constant and nationally binding norms, legislation, and local laws is based on certain standards. In order to streamline the process of verification undertaken by the relevant public authorities, it is necessary to create formal algorithms that automate the existing method of control of architecture-building documentation. The objective of this article is the algorithmisation of project documentation verification, allowing further streamlining and automation of the process.
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-Gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
The finite state projection algorithm for the solution of the chemical master equation
NASA Astrophysics Data System (ADS)
Munsky, Brian; Khammash, Mustafa
2006-01-01
This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or τ leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated-space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet a prespecified tolerance on the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is used to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and τ-leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to the very small molecular counts in these particular examples, the FSP also performs far more effectively than τ-leaping methods.
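A minimal FSP sketch for a hypothetical birth-death process (constant production rate k, linear degradation rate g·x) illustrates the truncation and the accuracy certificate; the rates and truncation size below are illustrative assumptions, not values from the article:

```python
import numpy as np
from scipy.linalg import expm

def fsp_birth_death(k, g, N, t, x0=0):
    """Finite state projection for a birth-death process truncated to
    states 0..N-1. Returns the truncated probability vector p(t) and
    the FSP error certificate 1 - sum(p(t)), which bounds the total
    probability that has leaked out of the projection."""
    A = np.zeros((N, N))
    for x in range(N):
        if x + 1 < N:
            A[x + 1, x] += k        # birth: x -> x+1 stays in the projection
        A[x, x] -= k                # outflow by birth (leaves projection at x = N-1)
        if x > 0:
            A[x - 1, x] += g * x    # death: x -> x-1
            A[x, x] -= g * x
    p0 = np.zeros(N)
    p0[x0] = 1.0
    p = expm(A * t) @ p0            # solve the truncated CME exactly
    return p, 1.0 - p.sum()

p, err = fsp_birth_death(k=10.0, g=1.0, N=60, t=5.0)
print(err)   # certified truncation error; enlarge N until it meets tolerance
```

The full FSP algorithm wraps this in a loop that grows N until the certificate `err` falls below the prespecified tolerance.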
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace takes advantage only of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, using both spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction, and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is given by the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project
NASA Astrophysics Data System (ADS)
Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego
2015-04-01
Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement of global observational data of ET can neither be satisfied with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal
ERIC Educational Resources Information Center
Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly
2006-01-01
Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…
Calculation of projected ranges — analytical solutions and a simple general algorithm
NASA Astrophysics Data System (ADS)
Biersack, J. P.
1981-05-01
The concept of multiple scattering is reconsidered for obtaining the directional spreading of ion motion as a function of energy loss. From this, the mean projection of each path-length element of the ion trajectory is derived, which — upon summation or integration — leads to the desired mean projected range. In special cases the calculation can be carried out analytically; otherwise a simple general algorithm is derived which is suitable even for the smallest programmable calculators. The necessary input for the present treatment consists only of generally accessible stopping power and straggling formulas. The procedure does not rely on scattering cross sections, e.g. power potentials or f(t^{1/2}) approximations. The present approach lends itself easily to including electronic straggling, treating composite target materials, or even accounting for the so-called "time integral".
Webb-Robertson, Bobbie-Jo M.; Jarman, Kristin H.; Harvey, Scott D.; Posse, Christian; Wright, Bob W.
2005-05-28
A fundamental problem in the analysis of highly multivariate spectral or chromatographic data is the reduction of dimensionality. Principal components analysis (PCA), concerned with explaining the variance-covariance structure of the data, is a commonly used approach to dimension reduction. Recently an attractive alternative to PCA, sequential projection pursuit (SPP), has been introduced. Designed to elicit clustering tendencies in the data, SPP may be more appropriate when performing clustering or classification analysis. However, the existing genetic algorithm (GA) implementation of SPP has two shortcomings: computation time and an inability to determine the number of factors necessary to explain the majority of the structure in the data. We address both shortcomings. First, we introduce a new SPP algorithm, a random scan sampling algorithm (RSSA), that significantly reduces computation time. We compare the computational burden of the RSSA and GA implementations of SPP on a dataset containing Raman spectra of twelve organic compounds. Second, we propose a Bayes factor criterion (BFC) as an effective measure for selecting the number of factors needed to explain the majority of the structure in the data. We compare SPP to PCA on two datasets varying in type, size, and difficulty; in both cases SPP achieves higher accuracy with a lower number of latent variables.
Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review
NASA Astrophysics Data System (ADS)
Zuo, Chao; Huang, Lei; Zhang, Minliang; Chen, Qian; Asundi, Anand
2016-10-01
In fringe projection profilometry (FPP), temporal phase unwrapping is an essential procedure to recover an unambiguous absolute phase even in the presence of large discontinuities or spatially isolated surfaces. So far, three groups of temporal phase unwrapping algorithms have typically been proposed in the literature: the multi-frequency (hierarchical) approach, the multi-wavelength (heterodyne) approach, and the number-theoretical approach. In this paper, the three methods are investigated and compared in detail by analytical, numerical, and experimental means. The basic principles and recent developments of the three kinds of algorithms are first reviewed. Then, the reliability of the different phase unwrapping algorithms is compared based on a rigorous stochastic noise model. Furthermore, this noise model is used to predict the optimum fringe period for each unwrapping approach, which is a key factor governing the phase measurement accuracy in FPP. Simulations and experimental results verify the correctness and validity of the proposed noise model as well as the prediction scheme. The results show that multi-frequency temporal phase unwrapping provides the best unwrapping reliability, while the multi-wavelength approach is the most susceptible to noise-induced unwrapping errors.
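The multi-frequency (hierarchical) approach reviewed above can be sketched in its simplest two-frequency form: the absolute phase of a unit-frequency pattern, scaled by the frequency ratio, predicts the fringe order of the wrapped high-frequency phase. The noise-free synthetic setup below is illustrative:

```python
import numpy as np

def wrap(phi):
    # Wrap phase into (-pi, pi], as a phase-shifting algorithm would return it.
    return np.angle(np.exp(1j * phi))

def two_freq_unwrap(phi_h, Phi_l, s):
    """Hierarchical temporal phase unwrapping: the absolute low-frequency
    phase Phi_l, scaled by the frequency ratio s = f_h / f_l, predicts the
    integer fringe order k of the wrapped high-frequency phase phi_h."""
    k = np.round((s * Phi_l - phi_h) / (2 * np.pi))   # fringe order map
    return phi_h + 2 * np.pi * k                       # absolute phase

x = np.linspace(0.0, 1.0, 500)
Phi_true = 2 * np.pi * 16 * x       # absolute phase of a 16-period pattern
phi_h = wrap(Phi_true)              # wrapped high-frequency measurement
Phi_l = 2 * np.pi * 1 * x           # single-period phase: already unambiguous
Phi_rec = two_freq_unwrap(phi_h, Phi_l, s=16)
print(np.max(np.abs(Phi_rec - Phi_true)))   # negligible in the noise-free case
```

With noise, the rounding step fails once the phase error exceeds π/s, which is why the reliability analysis in the paper ties the optimum fringe period to the noise level.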
Field depth extension of 2D barcode scanner based on wavefront coding and projection algorithm
NASA Astrophysics Data System (ADS)
Zhao, Tingyu; Ye, Zi; Zhang, Wenzi; Huang, Weiwei; Yu, Feihong
2008-03-01
Wavefront coding (WFC) used in 2D barcode scanners can extend the depth of field to a great extent with a simpler structure than an autofocus microscope system. With a cubic phase mask (CPM) placed at the stop, blurred images are obtained at the charge-coupled device (CCD), which can be restored by digital filters. Direct methods are widely used in real-time restoration for their good computational efficiency, but they smooth out details. Here, the results of the direct method are first filtered by a hard-threshold function. The positions of the steps can then be detected by simple differential operators. With the positions corrected by the projection algorithm, the exact barcode information is restored. A wavefront coding system with 7 mm effective focal length and F-number 6 is designed as an example. Although the magnification differs, images at different object distances can be restored with a single point spread function (PSF) computed for a 200 mm object distance. A QR code (Quick Response code) of 31 mm × 27 mm is used as the target object. The simulation results show that the sharp-imaging object distance ranges from 80 mm to 355 mm. The 2D barcode scanner with wavefront coding extends the depth of field with a simple structure, low cost, and large manufacturing tolerance. The combination of the direct filter and the projection algorithm proposed here recovers the exact 2D barcode information with good computational efficiency.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
Gomes, Adriano de Araújo; Alcaraz, Mirta Raquel; Goicoechea, Hector C; Araújo, Mario Cesar U
2014-02-06
In this work the Successive Projection Algorithm (SPA) is presented for interval selection in N-PLS for three-way data modeling. The proposed algorithm combines the noise-reduction properties of PLS with the possibility of discarding uninformative variables in SPA. In addition, the second-order advantage can be achieved by the residual bilinearization (RBL) procedure when an unexpected constituent is present in a test sample. For this purpose, SPA was modified in order to select intervals for use in trilinear PLS. The ability of the proposed algorithm, namely iSPA-N-PLS, was evaluated on one simulated and two experimental data sets, comparing the results to those obtained by N-PLS. In the simulated system, two analytes were quantitated in two test sets, with and without an unexpected constituent. In the first experimental system, the determination of four fluorophores (l-phenylalanine, l-3,4-dihydroxyphenylalanine, 1,4-dihydroxybenzene, and l-tryptophan) was conducted with excitation-emission data matrices. In the second experimental system, quantitation of ofloxacin was performed in water samples containing two other uncalibrated quinolones (ciprofloxacin and danofloxacin) by high-performance liquid chromatography with a UV-vis diode array detector. For comparison purposes, a GA algorithm coupled with N-PLS/RBL was also used in this work. In most of the studied cases iSPA-N-PLS proved to be a promising tool for the selection of variables in second-order calibration, generating models with smaller RMSEP when compared to both the global model using all sensors in two dimensions and GA-N-PLS/RBL.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms are available, filtered back-projection (FBP) remains the classical and most commonly used algorithm in clinical MI. In FBP, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, an improved wavelet denoising combined with a parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, reconstruction quality was compared between the improved wavelet denoising method and others (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms under two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. This improved FBP algorithm therefore has potential value in medical imaging.
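For context, a bare-bones parallel-beam FBP with a Ram-Lak (ramp) filter; the wavelet-denoising stage the paper adds before filtering is omitted, and the centered-disc phantom is our own toy example.

```python
import numpy as np

def fbp(sinogram, thetas):
    """Parallel-beam filtered back-projection with a Ram-Lak (ramp) filter.

    sinogram: (num_angles, num_detectors), thetas in radians.
    Returns a square image of side num_detectors (nearest-neighbor
    interpolation; illustrative sketch only).
    """
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                    # Ram-Lak filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # back-project: accumulate each filtered projection along its angle
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_det, n_det))
    for theta, proj in zip(thetas, filtered):
        t = X * np.cos(theta) + Y * np.sin(theta)           # detector coordinate
        idx = np.clip(np.round(t).astype(int) + mid, 0, n_det - 1)
        image += proj[idx]
    return image * np.pi / n_ang

# toy phantom: a centered unit-density disc, forward-projected analytically
n = 64
thetas = np.linspace(0, np.pi, 90, endpoint=False)
s = np.arange(n) - n // 2
chord = 2 * np.sqrt(np.clip(10.0**2 - s**2, 0, None))       # projections of a radius-10 disc
sino = np.tile(chord, (len(thetas), 1))
img = fbp(sino, thetas)
```

The paper's method would denoise each row of `sino` with a wavelet threshold before the ramp-filtering step.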
Duong, T A; Stubberud, A R
2000-06-01
In this paper, we present a mathematical foundation, including a convergence analysis, for cascade architecture neural networks. Our analysis shows that convergence of the cascade architecture is assured because it satisfies the Liapunov criteria in the added-hidden-unit domain rather than in the time domain. From this analysis a mathematical foundation for the cascade correlation learning algorithm follows, and it becomes apparent that cascade correlation is a special case of the analysis; from the same analysis, an efficient hardware learning algorithm called Cascade Error Projection (CEP) is proposed. CEP provides efficient learning in hardware and is faster to train, because part of the weights are obtained deterministically, and learning of the remaining weights, from the inputs to the hidden unit, is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start out with zero weight values (rather than random finite weight values) when the learning of each layer commences. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time; simplicity in hardware implementation is thereby achieved. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning with CEP. It is also demonstrated that this technique can compensate for lower-bit weight resolution by incorporating additional hidden units, although generalization may suffer somewhat with lower-bit weight quantization.
Osser, David N; Roudsari, Mohsen Jalali; Manschreck, Theo
2013-01-01
This article is an update of the algorithm for schizophrenia from the Psychopharmacology Algorithm Project at the Harvard South Shore Program. A literature review was conducted focusing on new data since the last published version (1999-2001). The first-line treatment recommendation for new-onset schizophrenia is amisulpride, aripiprazole, risperidone, or ziprasidone for four to six weeks. In some settings the trial could be shorter, considering that evidence of clear improvement with antipsychotics usually occurs within the first two weeks. If the trial of the first antipsychotic cannot be completed due to intolerance, try another until one of the four is tolerated and given an adequate trial. There should be evidence of bioavailability. If the response to this adequate trial is unsatisfactory, try a second monotherapy. If the response to this second adequate trial is also unsatisfactory, and if at least one of the first two trials was with risperidone, olanzapine, or a first-generation (typical) antipsychotic, then clozapine is recommended for the third trial. If neither trial was with any of these three options, a third trial prior to clozapine should occur, using one of those three. If the response to monotherapy with clozapine (with dose adjusted by using plasma levels) is unsatisfactory, consider adding risperidone, lamotrigine, or ECT. Beyond that point, there is little solid evidence to support further psychopharmacological treatment choices, though we do review possible options.
Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.
Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L; Srinivasan, A
2008-10-01
Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrodinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of the methods available for physical systems containing more than a few quantum particles. QMC enjoys favorable scaling compared to quantum chemical methods, with a computational effort that grows with the second or third power of system size. This accuracy and scalability have enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, that will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of applicability of QMC and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.
Drawert, Brian; Lawson, Michael J.; Petzold, Linda; Khammash, Mustafa
2010-01-01
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm. PMID:20170209
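The reaction step handled by the stochastic simulation algorithm can be illustrated with the textbook Gillespie direct method on a single first-order decay reaction; this toy model is our own, not taken from the paper.

```python
import numpy as np

def ssa_decay(x0, k, t_end, rng):
    """Gillespie SSA for the single decay reaction X -> 0 with rate k.

    Each firing time is drawn from an exponential with rate equal to the
    current propensity k*x (the direct-method update).
    """
    t, x = 0.0, x0
    while x > 0:
        a = k * x                       # propensity of the only reaction
        t += rng.exponential(1.0 / a)   # waiting time to the next firing
        if t > t_end:
            break
        x -= 1
    return x

rng = np.random.default_rng(2)
k, t_end, x0 = 0.5, 1.0, 1000
finals = [ssa_decay(x0, k, t_end, rng) for _ in range(200)]
mean_final = np.mean(finals)
# the ensemble mean should track the deterministic solution x0 * exp(-k*t_end)
```

The diffusive-FSP step of the framework would interleave such reaction events with transfers between spatial subvolumes.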
NASA Astrophysics Data System (ADS)
Qu, Lele; Yin, Yuqing
2016-10-01
Stepped frequency continuous wave ground penetrating radar (SFCW-GPR) systems are becoming increasingly popular in the GPR community due to their wider dynamic range and higher immunity to radio interference. The traditional back-projection (BP) algorithm is preferable for SFCW-GPR imaging in layered-medium scenarios for its convenience and robustness. However, existing BP imaging algorithms are usually very computationally intensive, which limits their practical application to SFCW-GPR imaging. To solve this problem, a fast SFCW-GPR BP imaging algorithm based on the nonuniform fast Fourier transform (NUFFT) is proposed in this paper. By reformulating the traditional BP imaging algorithm as evaluations of NUFFTs, the computational efficiency of the NUFFT is exploited to reduce the complexity of image reconstruction. Both simulation and experimental results verify the effectiveness and the improved computational efficiency of the proposed imaging method.
AsteroidZoo: A New Zooniverse project to detect asteroids and improve asteroid detection algorithms
NASA Astrophysics Data System (ADS)
Beasley, M.; Lewicki, C. A.; Smith, A.; Lintott, C.; Christensen, E.
2013-12-01
We present a new citizen science project: AsteroidZoo. A collaboration between Planetary Resources, Inc., the Zooniverse team, and the Catalina Sky Survey, the project will bring the science of asteroid identification to the citizen scientist. Volunteer astronomers have proved to be a critical asset in the identification and characterization of asteroids, especially potentially hazardous objects. These contributions, to date, have required that the volunteer possess a moderate telescope and the ability and willingness to be responsive to observing requests. Our new project will use data collected by the Catalina Sky Survey (CSS), currently the most productive asteroid survey, making them usable by anyone with sufficient interest and an internet connection. As previous work by the Zooniverse has demonstrated, citizen scientists are superb at classifying objects. Even the best automated searches require human intervention to identify new objects; these searches are optimized to reduce false-positive rates and to prevent a single operator from being overloaded with requests. With access to the large number of people in the Zooniverse, we will be able to avoid that problem and instead work to produce a complete detection list. Each frame from CSS will be searched in detail, generating a large number of new detections. We will be able to evaluate the completeness of the CSS data set and potentially provide improvements to the automated pipeline. The data corpus produced by AsteroidZoo will also serve as a training environment for machine learning challenges in the future. Our goals include a more complete asteroid detection algorithm and a minimum-computation program that skims the cream of the data, suitable for implementation on small spacecraft. We aim for the site to go live in fall 2013.
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-01
Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of the successive projections algorithm (SPA) with small sample sizes, as well as the lack of association between the selected variables and the analyte. The proposed method, an evaluated bootstrap ensemble SPA (EBSPA) based on a variable evaluation index (EI), is applied to the quantitative prediction of alcohol concentration in liquor using an NIR sensor. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method overcomes the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near infrared sensor data is clear, which can effectively reduce the number of variables while improving prediction accuracy. PMID:26761015
A method of generalized projections (MGP) ghost correction algorithm for interleaved EPI.
Lee, K J; Papadakis, N G; Barber, D C; Wilkinson, I D; Griffiths, P D; Paley, M N J
2004-07-01
Investigations into the method of generalized projections (MGP) as a ghost correction method for interleaved EPI are described. The technique is image-based and does not require additional reference scans. The algorithm was found to be more effective if a priori knowledge was incorporated to reduce the degrees of freedom, by modeling the ghosting as arising from a small number of phase offsets. In simulations with phase variation between consecutive shots for n-interleaved echo planar imaging (EPI), ghost reduction was achieved for n = 2 only. With no phase variation between shots, ghost reduction was obtained with n up to 16. Incorporating a relaxation parameter was found to improve convergence. Dependence of convergence on the region of support was also investigated. A fully automatic version of the method was developed, using results from the simulations. When tested on in vivo 2-, 16-, and 32-interleaved spin-echo EPI data, the method achieved deghosting and image restoration close to that obtained by both reference scan and odd/even filter correction, although some residual artifacts remained.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
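A minimal generational GA showing the basic concepts the report introduces (fitness-based selection, crossover, mutation); the parameters and the one-max fitness function are illustrative choices of ours, not from the report.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_mut=0.02, seed=3):
    """Generational GA: tournament selection, one-point crossover,
    per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of size 2: the fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # "one-max": fitness = number of 1 bits
```

With these settings the population reliably evolves strings that are all (or nearly all) ones.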
Wanjun, Shuai; Xiuzhen, Dong; Feng, Fu; Fusheng, You; Xiaodong, Liu; Canhua, Xu
2005-01-01
Our work shows that electrical impedance tomography (EIT) is promising for clinical image monitoring and that the back-projection algorithm of EIT can meet the preliminary requirements of real-time monitoring. In order to improve computation speed and image resolution, different implementations of the algorithm were tried in this paper. Moreover, it is shown that the impedance change due to no more than 50 milliliters of 0.9% physiological saline can be detected and imaged by our system. These results support our further work on image monitoring by EIT.
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-06-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., several hundred to a few thousand), even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations than existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
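As a plain-vanilla contrast to the paper's approach, here is block subspace iteration with a single Rayleigh-Ritz extraction at the end, for the algebraically smallest eigenpairs of a Hermitian matrix. This sketch (with the shift chosen as a crude upper bound on the spectrum) is ours, not the authors' algorithm, which is designed precisely to economize the Rayleigh-Ritz work such iterations require.

```python
import numpy as np

def smallest_invariant_subspace(A, k, iters=1000, seed=4):
    """Block subspace iteration + Rayleigh-Ritz for the k algebraically
    smallest eigenpairs of Hermitian A (dense toy version)."""
    n = A.shape[0]
    shift = np.linalg.norm(A, 1)       # upper bound: shift*I - A is PSD,
    B = shift * np.eye(n) - A          # and its largest eigs map to A's smallest
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(B @ Q)     # power step + re-orthonormalization
    # Rayleigh-Ritz: solve the small k-by-k projected eigenproblem
    w, V = np.linalg.eigh(Q.T @ A @ Q)
    return w, Q @ V

A = np.diag(np.arange(1.0, 101.0))     # known eigenvalues 1..100
w, U = smallest_invariant_subspace(A, 3)
```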
Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
1995-07-01
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
NASA Astrophysics Data System (ADS)
Zoric, Nenad; Livshits, Irina; Dilworth, Don; Okishev, Sergey
2017-02-01
This paper describes a method for designing an ultraviolet (UV) projection lens for microlithography. Our approach for meeting this objective is to use a starting design automatically obtained by the DSEARCH feature in the SYNOPSYS™ lens design program. We describe the steps for getting a desired starting point for the projection lens and discuss optimization problems unique to this system, where the two parts of the projection lens are designed independently.
Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou
2014-02-01
Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of crown projection area and crown volume produce large errors in practical situations involving complicated crown structures or differing morphological characteristics, and their accuracy is difficult to validate by conventional means. To address these problems, and to allow crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a plane-scattered-data-point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. The method is implemented in VC++ 6.0 and Matlab 7.0. Experiments were conducted on 22 common tree species of Beijing, China. The results show that the correlation coefficient of crown projection area between Av, calculated by the new method, and A4, from the conventional method, reaches 0.964 (p<0.01), and the correlation coefficient of crown volume between V(VC), derived from the new method, and V(C), from the formula for a regular body, is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) by 8.03%, and the average of A4 is larger than that of Av by 25.5%. Taking Av and V(VC) as true values, the deviations of the conventional methods can be attributed to the irregularity of the crowns' silhouettes; different morphological characteristics of the tree crown led to measurement error in simple forest plot surveys. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
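The crown projection area step (convex hull of the scattered (x, y) points of the projected crown, followed by an area formula) can be sketched as follows, using Andrew's monotone chain and the shoelace formula. This is a generic stand-in for that step, not the authors' code.

```python
def convex_hull_area(points):
    """Area of the convex hull of scattered (x, y) points.

    Builds the hull with Andrew's monotone chain, then applies the
    shoelace formula to the hull polygon.
    """
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):       # one monotone chain (lower or upper hull)
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    hull = half(pts) + half(list(reversed(pts)))
    x = [p[0] for p in hull]
    y = [p[1] for p in hull]
    m = len(hull)
    return 0.5 * abs(sum(x[i]*y[(i+1) % m] - x[(i+1) % m]*y[i] for i in range(m)))

# unit square with interior points: the hull area should be exactly 1
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.7)]
area = convex_hull_area(pts)
```

In the paper's pipeline, the input points would be the laser scan points of one crown projected onto the ground plane.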
Gradient Projection Algorithms and Software for Arbitrary Rotation Criteria in Factor Analysis
ERIC Educational Resources Information Center
Bernaards, Coen A.; Jennrich, Robert I.
2005-01-01
Almost all modern rotation of factor loadings is based on optimizing a criterion, for example, the quartimax criterion for quartimax rotation. Recent advancements in numerical methods have led to general orthogonal and oblique algorithms for optimizing essentially any rotation criterion. All that is required for a specific application is a…
Implementation of a new algorithm for Density Equalizing Map Projections (DEMP)
Close, E.R.; Merrill, D.W.; Holmes, H.H.
1995-07-01
The purpose of the PAREP (Populations at Risk to Environmental Pollution) Project at Lawrence Berkeley National Laboratory (LBNL), an ongoing Department of Energy (DOE) project since 1978, is to develop resources (data, computing techniques, and biostatistical methodology) applicable to DOE's needs. Specifically, the PAREP project has developed techniques for statistically analyzing disease distributions in the vicinity of supposed environmental hazards. Such techniques can be applied to assess the health risks in populations residing near DOE installations, provided adequate small-area health data are available. The FY 1994 task descriptions for the PAREP project were determined in discussions at LBNL on 11/2/93. The FY94 PAREP Work Authorization specified three major tasks: a prototype small-area study, a feasibility study for obtaining small-area data, and preservation of the PAREP data archive. The complete FY94 work plan, and the subtasks accomplished to date, were included in the cumulative FY94 progress report.
Warm starting the projected Gauss-Seidel algorithm for granular matter simulation
NASA Astrophysics Data System (ADS)
Wang, Da; Servin, Martin; Berglund, Tomas
2016-03-01
The effect on convergence of warm starting the projected Gauss-Seidel solver for nonsmooth discrete element simulation of granular matter is investigated. It is found that the computational performance can be increased by a factor of 2-5.
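A minimal projected Gauss-Seidel solver for a linear complementarity problem (the kind of solve used per time step in nonsmooth contact dynamics), with warm starting shown as simply reusing the previous solution as the initial iterate. The LCP setup and names here are our illustration, not the paper's simulation code.

```python
import numpy as np

def projected_gauss_seidel(A, b, x0=None, iters=100):
    """Projected Gauss-Seidel for the LCP: x >= 0, Ax + b >= 0, x'(Ax+b) = 0.

    Each sweep solves row i for x[i] and projects onto x[i] >= 0.
    Passing the previous step's solution as x0 is the warm start.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            r = b[i] + A[i] @ x - A[i, i] * x[i]   # row residual without x[i]
            x[i] = max(0.0, -r / A[i, i])
    return x

rng = np.random.default_rng(5)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)                        # symmetric positive definite
b = rng.standard_normal(8)
x_cold = projected_gauss_seidel(A, b, iters=100)
x_warm = projected_gauss_seidel(A, b, x0=x_cold, iters=5)  # few sweeps suffice
```

Because contact states change little between time steps, the warm-started solve typically needs far fewer sweeps, which is the effect the paper quantifies.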
NASA Astrophysics Data System (ADS)
Selim, I. M.; Abd El Aziz, Mohamed
2017-02-01
The development of automated morphological classification schemes can successfully distinguish between morphological types of galaxies and can be used for studies of the formation and subsequent evolution of galaxies in our universe. In this paper, we present a new automated, supervised machine learning astronomical classification scheme based on the nonnegative matrix factorization algorithm. The scheme distinguishes all types roughly corresponding to Hubble types, such as elliptical, lenticular, spiral, and irregular galaxies. The proposed algorithm is evaluated on two datasets of different sizes (a small dataset of 110 images and a large dataset of 700 images). The experimental results show that galaxy images from the EFIGI catalog can be classified automatically with an accuracy of ~93% for the small and ~92% for the large dataset. These results are in good agreement with the visual classifications.
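A sketch of the factorization step, assuming the standard Lee-Seung multiplicative updates for the Frobenius-norm NMF objective; the classifier the paper builds on top of the NMF features is not shown, and the rank-2 toy data are ours.

```python
import numpy as np

def nmf(V, r, iters=1000, seed=6):
    """Nonnegative matrix factorization V ≈ W H via Lee-Seung
    multiplicative updates (Frobenius norm)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1       # positive init keeps updates nonnegative
    H = rng.random((r, m)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# a strictly nonnegative rank-2 matrix is recovered almost exactly
rng = np.random.default_rng(7)
V = rng.random((30, 2)) @ rng.random((2, 20))
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In a classification pipeline, each column of `H` (the low-dimensional encoding of one image) would serve as the feature vector fed to the supervised classifier.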
Resource-Constrained Project Scheduling Under Uncertainty: Models, Algorithms and Applications
2014-11-10
decision problem under uncertainty, known as a Markov decision process (MDP [2]). To overcome the well-known "curse of dimensionality" of the...simulation. This avoids the need to exhaustively visit all possible MDP states. The essence of ADP is to replace the exact cost-to-go function with some...collaborators have developed computationally tractable ADP algorithms for obtaining high-quality, near-optimal solutions to the MDP model of SRCPSP
NASA Astrophysics Data System (ADS)
Michel, D.; Jiménez, C.; Miralles, D. G.; Jung, M.; Hirschi, M.; Ershadi, A.; Martens, B.; McCabe, M. F.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.
2015-10-01
The WACMOS-ET project has compiled a forcing data set covering the period 2005-2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run 4 established ET algorithms: the Priestley-Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman-Monteith algorithm from the MODIS evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in-situ meteorological data from 24 FLUXNET towers was used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed across several time scales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement to the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R2 = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R2 = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs re-sampled to a common grid to facilitate global estimates) confirmed the original findings.
NASA Astrophysics Data System (ADS)
Michel, D.; Jiménez, C.; Miralles, D. G.; Jung, M.; Hirschi, M.; Ershadi, A.; Martens, B.; McCabe, M. F.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.
2016-02-01
The WAter Cycle Multi-mission Observation Strategy - EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005-2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley-Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman-Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R2 = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R2 = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a common grid to facilitate global estimates) confirmed the original findings.
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-06-01
We present an algorithm which is able to reconstruct dynamic emission computed tomography (ECT) image series directly from inconsistent projection data that have been obtained using a rotating camera. By finding a reduced dimension time-activity curve (TAC) basis with which all physiologically feasible TACs in an image may be accurately approximated, we are able to recast this large non-linear problem as one of constrained linear least squares (CLLSQ) and to reduce parameter vector dimension by a factor of 20. Implicit is the assumption that each pixel may be modeled using a single compartment model, as is typical in 99mTc teboroxime wash-in wash-out studies; and that the blood input function is known. A disadvantage of the change of basis is that TAC non-negativity is no longer ensured. As a consequence, non-negativity constraints must appear in the CLLSQ formulation. A warm-start multiresolution approach is proposed, whereby the problem is initially solved at a resolution below that finally desired. At the next iteration, the number of reconstructed pixels is increased and the solution of the lower resolution problem is then used to warm-start the estimation of the higher resolution kinetic parameters. We demonstrate the algorithm by applying it to dynamic myocardial slice phantom projection data at resolutions of 16 × 16 and 32 × 32 pixels. We find that the warm-start method employed leads to computational savings of between 2 and 4 times when compared to cold-start execution times. A 20% RMS error in the reconstructed TACs is achieved for a total number of detected sinogram counts of 1 × 10^5 for the 16 × 16 problem and at 1 × 10^6 counts for the 32 × 32 grid. These errors are 1.5-2 times greater than those obtained in conventional (consistent projection) SPECT imaging at similar count levels.
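The CLLSQ solve with non-negativity constraints can be illustrated with a simple projected-gradient nonnegative least squares fit of a TAC against a small exponential basis. Both the toy basis and the solver choice here are our assumptions, not the paper's implementation.

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Nonnegative least squares: min ||Ax - b||, x >= 0.

    Projected gradient descent with the fixed step 1/L, where L is the
    Lipschitz constant of the gradient of the least-squares objective.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - (A.T @ (A @ x - b)) / L)  # step, then project
    return x

# fit a time-activity curve as a nonnegative mix of two exponential basis TACs
t = np.linspace(0.0, 5.0, 50)
basis = np.column_stack([np.exp(-t), np.exp(-0.2 * t)])
true_coef = np.array([2.0, 0.5])
y = basis @ true_coef
coef = nnls_pg(basis, y)
```

In the paper's setting, `basis` would be the reduced-dimension TAC basis and one such constrained fit is solved jointly across pixels and projections.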
Khaled, Alia S; Beck, Thomas J
2013-01-01
Relatively high-radiation CT techniques are widely used in diagnostic imaging, raising concerns about cancer risk, especially for routine screening of asymptomatic populations. An important strategy for dose reduction is to reduce the number of projections, although doing so while preserving image quality is technically difficult. We developed an algorithm to reconstruct discrete (limited gray scale) images, decomposed into individual tissue types, from a small number of projections acquired over a limited view angle. The algorithm was tested using projections simulated from segmented CT scans of different cross sections, including the mid femur, distal femur and lower leg. It can provide high-quality images from as few as 5-7 projections if the skin boundary of the cross section is used as prior information in the reconstruction, and from 11-13 projections if the skin boundary is unknown.
Fox, Andrew; Williams, Mathew; Richardson, Andrew D.; Cameron, David; Gove, Jeffrey H.; Quaife, Tristan; Ricciuto, Daniel M; Reichstein, Markus; Tomelleri, Enrico; Trudinger, Cathy; Van Wijk, Mark T.
2009-10-01
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 g C m-2 yr-1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence
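Of the algorithm families compared, the Metropolis sampler is the simplest to sketch. Below is a minimal random-walk Metropolis sampler applied to a toy one-parameter estimation problem of our own construction; it stands in for the far richer C-model parameter estimation the participants performed.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis: propose x + N(0, step^2), accept with
    probability min(1, posterior ratio)."""
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# toy "model-data fusion": recover one parameter from noisy observations
rng = np.random.default_rng(8)
true_k = 1.5
obs = true_k + 0.1 * rng.standard_normal(100)
log_post = lambda th: -0.5 * np.sum((obs - th[0]) ** 2) / 0.1 ** 2
chain = metropolis(log_post, [0.0], 5000, 0.05, rng)
est = chain[1000:, 0].mean()     # posterior mean after burn-in
```

The spread of `chain` after burn-in plays the role of the confidence intervals the participants reported for their parameter estimates.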
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-09-04
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
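As a rough illustration of the affine projection family that MAPA extends, the sketch below runs a standard affine projection algorithm (APA) update on a toy system-identification problem; the 3-tap response, step size, and projection order K = 2 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    # one affine projection step: correct the weights along the
    # affine subspace spanned by the K most recent input frames
    e = d - X @ w                               # a priori errors
    G = X @ X.T + delta * np.eye(X.shape[0])    # regularized Gram matrix
    return w + mu * X.T @ np.linalg.solve(G, e)

# toy identification of a hypothetical 3-tap response h from noiseless data
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(600)
w = np.zeros(3)
for n in range(4, 600):
    # K = 2 stacked input vectors [x[n-k], x[n-k-1], x[n-k-2]]
    X = np.array([x[n - k - 2:n - k + 1][::-1] for k in range(2)])
    w = apa_update(w, X, X @ h)                 # desired output is X @ h
```

Because each update projects onto several recent input frames at once, the APA converges faster than plain LMS on correlated inputs, which is the property the multi-power variant exploits.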
Sidje, R B; Vo, H D
2015-11-01
The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.
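The idea of using SSA trajectories to bound an FSP state space can be sketched on a toy birth-death process; the model and its rate constants are illustrative assumptions, not the authors' reduction procedure in detail.

```python
import random

def ssa_visited_states(rates, x0, t_final, runs=200, seed=1):
    # run Gillespie's SSA on a birth-death process (birth rate k,
    # degradation rate g*x) and record every state visited, as a
    # candidate reduced state space for the finite state projection
    k, g = rates
    rng = random.Random(seed)
    visited = set()
    for _ in range(runs):
        x, t = x0, 0.0
        visited.add(x)
        while True:
            a_birth, a_death = k, g * x       # reaction propensities
            a0 = a_birth + a_death
            t += rng.expovariate(a0)          # time to next reaction
            if t >= t_final:
                break
            if rng.random() * a0 < a_birth:
                x += 1                        # birth
            else:
                x -= 1                        # degradation
            visited.add(x)
    return visited

states = ssa_visited_states((10.0, 1.0), x0=0, t_final=5.0)
```

The visited set stays small even though the full state space is countably infinite, which is exactly what makes an SSA-guided projection attractive.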
An iterative algorithm for soft tissue reconstruction from truncated flat panel projections
NASA Astrophysics Data System (ADS)
Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.
2006-03-01
The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT is providing pre-operative 3D information, there is a need for 3D imaging of low-contrast soft tissue during interventions in a number of areas including neurology, cardiac electrophysiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction from truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm to address the need to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
NASA Astrophysics Data System (ADS)
Alanís, Francisco Carlos Mejía; Rodríguez, J. Apolinar Muñoz
2015-05-01
A self-calibration technique based on genetic algorithms (GAs) with simulated binary crossover (SBX) and laser line imaging is presented. In this technique, the GA determines the vision parameters based on perspective projection geometry. The GA is constructed by means of an objective function, which is deduced from the equations of the laser line projection. To minimize the objective function, the GA performs a recombination of chromosomes through the SBX. This procedure provides the vision parameters, which are represented as chromosomes. The approach of the proposed GA is to achieve calibration and recalibration without external references and physical measurements. Thus, limitations caused by the absence of references are overcome, enabling self-calibration and three-dimensional (3-D) vision. Therefore, the proposed technique improves on the self-calibration obtained by GAs with references. Additionally, 3-D vision is carried out via the laser line position and vision parameters. The contribution of the proposed method is elucidated based on the accuracy of the self-calibration, which is performed with GAs.
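A minimal sketch of the SBX recombination step, following the standard Deb-Agrawal formulation; the distribution index eta = 2 and the toy parent chromosomes are assumed values, not the paper's calibration parameters.

```python
import random

def sbx(p1, p2, eta=2.0, rng=random):
    # simulated binary crossover for one pair of real-valued
    # chromosomes; eta controls how close children stay to parents
    c1, c2 = [], []
    for a, b in zip(p1, p2):
        u = rng.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * a + (1 - beta) * b))
        c2.append(0.5 * ((1 - beta) * a + (1 + beta) * b))
    return c1, c2

random.seed(0)
child1, child2 = sbx([1.0, 4.0], [3.0, 2.0])
```

A useful property to note: SBX preserves the parents' mean gene-by-gene, so the children spread around the parents without drifting.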
Distributed maximum-intensity projection (partitioned-MIP) algorithm for visualizing medical data
Nguyen, H.T.; Srinivasan, R.
1995-12-01
Voxel-based three-dimensional object representation has been extensively used in medical imaging applications for the manipulation and visualization of volumetrically sampled data. This paper presents a partitioning strategy that allows general-purpose graphics workstations to be used for the "hot-spot" imaging of Magnetic Resonance Angiography (MRA), Computed Tomography Angiography (CTA), and Spiral CT data. Our divide-and-conquer approach creates sub-volumes that are projected in parallel, and merges the corresponding computed sub-images in image space to form the final image. Inter-processor communication is totally eliminated. This technique can also be applied in a uni-processor environment; in this case the original volume is partitioned so that each sub-volume fits into the on-chip cache, thereby minimizing cache misses.
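The divide-and-conquer projection can be sketched in a few lines; the sub-volume count and serial loop below are a minimal stand-in for the parallel implementation described, exploiting the fact that the max operator is associative across partitions.

```python
import numpy as np

def mip(volume, axis=0):
    # direct maximum-intensity projection along one axis
    return volume.max(axis=axis)

def partitioned_mip(volume, parts=4, axis=0):
    # split the volume into sub-volumes, project each one
    # independently (these could run on separate processors),
    # then merge the partial images with a pixelwise max --
    # no inter-processor communication is needed
    subs = np.array_split(volume, parts, axis=axis)
    partial = [s.max(axis=axis) for s in subs]
    return np.maximum.reduce(partial)

rng = np.random.default_rng(2)
vol = rng.integers(0, 255, size=(64, 32, 32))
```

Because max commutes with partitioning, the merged result is bit-identical to the direct projection.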
NASA Astrophysics Data System (ADS)
Yao, Dalei; Wen, Desheng; Xue, Jianru; Chen, Zhi; Wen, Yan; Jiang, Baotan; Ma, Junyong
2015-10-01
The article presents a new method to detect small moving targets in space surveillance. Image sequences are processed to detect and track targets under the assumption that the data samples are spatially registered. Maximum value projection and normalization are performed to reduce the data samples and eliminate the background clutter. Targets are then detected through connected component analysis. The velocities of the targets are estimated by centroid localization and least squares regression. The estimated velocities are utilized to track the targets. A sliding neighborhood operation is performed prior to target detection to significantly reduce the computation while preserving as much target information as possible. Actual data samples are acquired to test the proposed method. Experimental results show that the method can efficiently detect small moving targets and track their traces accurately. The centroid locating precision and tracking accuracy of the method are within a pixel.
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka
2014-04-15
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
NASA Astrophysics Data System (ADS)
Michel, Dominik; Miralles, Diego; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego
2015-04-01
Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have recently aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble-evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). The WACMOS-ET project (http://wacmoset.estellus.eu) started in the year 2012 and constitutes an ESA contribution to the GEWEX initiative LandFlux. It focuses on advancing the development of ET estimates at global, regional and tower scales. WACMOS-ET aims at developing a Reference Input Data Set exploiting European Earth Observations assets and deriving ET estimates produced by a set of four ET algorithms covering the period 2005-2007. The algorithms used are the SEBS (Su et al., 2002), Penman-Monteith from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008) and GLEAM (Miralles et al., 2011). The algorithms are run with Fluxnet tower observations, reanalysis data (ERA-Interim), and satellite forcings. They are cross-compared and validated against in-situ data. In this presentation the performance of the different ET algorithms with respect to different temporal resolutions, hydrological regimes, land cover types (including grassland, cropland, shrubland, vegetation mosaic, savanna
Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark
2012-06-15
Purpose: The purpose of this work is to develop a framework for the inverse problem of radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an
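A minimal sketch of the projected gradient idea behind an SDO-style solver, on a hypothetical nonnegative least-squares stand-in (the real SDO model, its dose matrix, and its objective are considerably more elaborate): beam-on durations cannot be negative, so each gradient step is followed by projection onto the nonnegative orthant.

```python
import numpy as np

def projected_gradient(A, d, iters=5000):
    # minimize ||A t - d||^2 subject to t >= 0 by gradient descent
    # with step 1/L for f(t) = 0.5*||A t - d||^2, followed by
    # clipping (projection onto the nonnegative orthant)
    t = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        t = np.maximum(0.0, t - step * A.T @ (A @ t - d))
    return t

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))                # toy dose-rate matrix
t_true = np.array([1.0, 0.0, 2.0, 0.5, 0.0])   # known nonnegative durations
t_opt = projected_gradient(A, A @ t_true)
```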
NASA Astrophysics Data System (ADS)
Chen, Ying; Lo, Joseph Y.; Baker, Jay A.; Dobbins, James T., III
2006-03-01
Breast cancer is a major problem and the most common cancer among women. The nature of conventional mammography makes it very difficult to distinguish a cancer from overlying breast tissues. Digital tomosynthesis refers to a three-dimensional imaging technique that allows reconstruction of an arbitrary set of planes in the breast from a limited-angle series of projection images as the x-ray source moves. Several tomosynthesis algorithms have been proposed, including Matrix Inversion Tomosynthesis (MITS) and Filtered Back Projection (FBP), which have been investigated in our lab. MITS shows better high-frequency response in removing out-of-plane blur, while FBP shows better low-frequency noise properties. This paper presents an effort to combine MITS and FBP for better breast tomosynthesis reconstruction. A high-pass Gaussian filter was designed and applied to three-slice "slabbing" MITS reconstructions. A low-pass Gaussian filter was designed and applied to the FBP reconstructions. A frequency weighting parameter was studied to blend the high-passed MITS with low-passed FBP frequency components. Four different reconstruction methods were investigated and compared with human subject images: 1) MITS blended with Shift-And-Add (SAA), 2) FBP alone, 3) FBP with applied Hamming and Gaussian filters, and 4) Gaussian Frequency Blending (GFB) of MITS and FBP. Results showed that, compared with FBP, GFB has better performance for high-frequency content, such as better reconstruction of micro-calcifications and removal of high-frequency noise. Compared with MITS, GFB showed more low-frequency breast tissue content.
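The blending step can be sketched as complementary Gaussian weights in the 2-D Fourier domain; the weight shape and cutoff sigma below are assumptions for illustration, not the paper's calibrated filters.

```python
import numpy as np

def gaussian_frequency_blend(img_hi, img_lo, sigma=0.15):
    # low frequencies are taken from img_lo (e.g. an FBP slice),
    # high frequencies from img_hi (e.g. a MITS slice), using a
    # Gaussian low-pass weight and its complement
    fy = np.fft.fftfreq(img_hi.shape[0])[:, None]
    fx = np.fft.fftfreq(img_hi.shape[1])[None, :]
    w_lo = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))
    F = w_lo * np.fft.fft2(img_lo) + (1 - w_lo) * np.fft.fft2(img_hi)
    return np.fft.ifft2(F).real

rng = np.random.default_rng(4)
noise = rng.standard_normal((32, 32))     # stand-in high-frequency image
flat = np.full((32, 32), 5.0)             # stand-in low-frequency image
blended = gaussian_frequency_blend(noise, flat)
self_blend = gaussian_frequency_blend(noise, noise)
```

Two sanity properties fall out of the construction: blending an image with itself returns the image, and the DC (mean) level always comes from the low-pass input.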
Insausti, Matías; Gomes, Adriano A; Cruz, Fernanda V; Pistonesi, Marcelo F; Araujo, Mario C U; Galvão, Roberto K H; Pereira, Claudete F; Band, Beatriz S F
2012-08-15
This paper investigates the use of UV-vis, near infrared (NIR) and synchronous fluorescence (SF) spectrometries coupled with multivariate classification methods to discriminate biodiesel samples with respect to the base oil employed in their production. More specifically, the present work extends previous studies by investigating the discrimination of corn-based biodiesel from two other biodiesel types (sunflower and soybean). Two classification methods are compared, namely full-spectrum SIMCA (soft independent modelling of class analogies) and SPA-LDA (linear discriminant analysis with variables selected by the successive projections algorithm). Regardless of the spectrometric technique employed, full-spectrum SIMCA did not provide an appropriate discrimination of the three biodiesel types. In contrast, all samples were correctly classified on the basis of a reduced number of wavelengths selected by SPA-LDA. It can be concluded that UV-vis, NIR and SF spectrometries can be successfully employed to discriminate corn-based biodiesel from the two other biodiesel types, but wavelength selection by SPA-LDA is key to the proper separation of the classes.
Li, Si; Xu, Yuesheng; Zhang, Jiahan; Lipson, Edward; Krol, Andrzej; Feiglin, David; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin
2015-08-15
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean
NASA Astrophysics Data System (ADS)
Shechter, Gilad; Naveh, Galit; Altman, Ami; Proksa, Roland M.; Grass, Michael
2003-05-01
Fast 16-slice spiral CT delivers superior cardiac visualization in comparison to older generation 2- to 8-slice scanners due to the combination of high temporal resolution along with isotropic spatial resolution and large coverage. The large beam opening of such scanners necessitates the use of adequate algorithms to avoid cone beam artifacts. We have developed a multi-cycle phase selective 3D back projection reconstruction algorithm that provides excellent temporal and spatial resolution for 16-slice CT cardiac images free of cone beam artifacts.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
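The basic block structure (a block iteration followed by a Rayleigh-Ritz projection) can be sketched with plain subspace iteration; this is the baseline such algorithms refine (the paper's contribution is performing fewer Rayleigh-Ritz solves), and the shift trick and test spectrum below are assumptions.

```python
import numpy as np

def smallest_eigenpairs(A, k, iters=150, seed=5):
    # subspace iteration on B = shift*I - A, whose largest
    # eigenvalues correspond to the algebraically smallest of A,
    # with a single Rayleigh-Ritz solve at the end
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    shift = np.linalg.norm(A, 1)              # upper bound on spectral radius
    B = shift * np.eye(n) - A
    X = rng.standard_normal((n, k))
    for _ in range(iters):
        X, _ = np.linalg.qr(B @ X)            # orthonormalized block step (BLAS3-friendly)
    H = X.T @ A @ X                           # Rayleigh-Ritz projection
    evals, V = np.linalg.eigh(H)
    return evals, X @ V

# toy spectrum with a gap so subspace iteration converges quickly
A = np.diag(np.concatenate([[1.0, 2.0, 3.0], np.linspace(50.0, 100.0, 17)]))
vals, vecs = smallest_eigenpairs(A, 3)
```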
NASA Astrophysics Data System (ADS)
Aragón, J. L.; Vázquez Polo, G.; Gómez, A.
A computational algorithm for the generation of quasiperiodic tiles based on the cut and projection method is presented. The algorithm is capable of projecting any type of lattice embedded in any Euclidean space onto any subspace, making it possible to generate quasiperiodic tiles with any desired symmetry. The simplex method of linear programming and the Moore-Penrose generalized inverse are used to construct the cut (strip) in the higher-dimensional space which is to be projected.
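The cut and projection idea can be sketched in the simplest case: projecting the square lattice Z² onto a line of irrational slope 1/φ yields the quasiperiodic Fibonacci chain of long (L) and short (S) spacings. The window placement and edge trimming below are illustrative choices; the paper's simplex/Moore-Penrose machinery handles arbitrary lattices and subspaces.

```python
import math

def fibonacci_chain(n_max=60):
    # keep the Z^2 points whose component along the "perpendicular"
    # direction lies inside a finite window (the cut strip), then
    # project them onto the physical line of slope 1/phi
    phi = (1 + math.sqrt(5)) / 2
    c = phi / math.sqrt(1 + phi ** 2)         # physical-direction cosine
    s = 1 / math.sqrt(1 + phi ** 2)           # physical-direction sine
    pts = [c * i + s * j
           for i in range(-n_max, n_max + 1)
           for j in range(-n_max, n_max + 1)
           if 0 <= -s * i + c * j < c + s]    # acceptance window
    # trim to the interior so finite sampling causes no spurious gaps
    pts = sorted(x for x in pts if abs(x) < n_max / 2)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    return ''.join('L' if g > 0.7 else 'S' for g in gaps)

word = fibonacci_chain()
```

The resulting L/S sequence is quasiperiodic: it never repeats, yet the ratio of L to S tiles tends to the golden ratio.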
Mahowald, Natalie
2016-11-29
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in
NASA Astrophysics Data System (ADS)
Pawlowski, Jason M.; Ding, George X.
2014-04-01
A new model-based dose calculation algorithm is presented for kilovoltage x-rays and is tested for the cases of calculating the radiation dose from kilovoltage cone-beam CT (kV-CBCT) and 2D planar projected radiographs. This algorithm calculates the radiation dose to water-like media as the sum of primary and scattered dose components. The scatter dose is calculated by convolution of a newly introduced, empirically parameterized scatter dose kernel with the primary photon fluence. Several approximations are introduced to increase the scatter dose calculation efficiency: (1) the photon energy spectrum is approximated as monoenergetic; (2) density inhomogeneities are accounted for by implementing a global distance scaling factor in the scatter kernel; (3) kernel tilting is ignored. These approximations allow for efficient calculation of the scatter dose convolution with the fast Fourier transform. Monte Carlo simulations were used to obtain the model parameters. The accuracy of using this model-based algorithm was validated by comparing with the Monte Carlo method for calculating dose distributions for real patients resulting from radiotherapy image guidance procedures including volumetric kV-CBCT scans and 2D planar projected radiographs. For all patients studied, mean dose-to-water errors for kV-CBCT are within 0.3% with a maximum standard deviation error of 4.1%. Using a medium-dependent correction method to account for the effects of photoabsorption in bone on the dose distribution, mean dose-to-medium errors for kV-CBCT are within 3.6% for bone and 2.4% for soft tissues. This algorithm offers acceptable accuracy and has the potential to extend the applicability of model-based dose calculation algorithms from megavoltage to kilovoltage photon beams.
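The FFT evaluation of the scatter convolution can be sketched as follows; the Gaussian kernel here is a hypothetical stand-in for the empirically parameterized scatter kernel described, and the FFT product implements a circular convolution.

```python
import numpy as np

def scatter_dose(primary, kernel):
    # scatter component as the convolution of the primary photon
    # fluence with a spatially invariant scatter kernel, evaluated
    # via the FFT (the source of the method's speed)
    return np.fft.ifft2(np.fft.fft2(primary) * np.fft.fft2(kernel)).real

# hypothetical isotropic scatter kernel, normalized to unit weight
y, x = np.mgrid[-8:8, -8:8]
kernel = np.exp(-(x ** 2 + y ** 2) / 8.0)
kernel /= kernel.sum()
kernel = np.fft.ifftshift(kernel)          # center the kernel at (0, 0)

rng = np.random.default_rng(6)
primary = rng.random((16, 16))
scatter = scatter_dose(primary, kernel)

# a delta kernel should reproduce the primary fluence exactly
delta = np.zeros((16, 16))
delta[0, 0] = 1.0
```

With a normalized kernel the total scattered energy equals the total primary fluence, a quick conservation check on the implementation.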
NASA Astrophysics Data System (ADS)
Abuhadi, Nouf; Bradley, David; Katarey, Dev; Podolyak, Zsolt; Sassi, Salem
2014-03-01
Introduction: Single-Photon Emission Computed Tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were constructed using Filtered Back Projection techniques with no attenuation or scatter corrections. The introduction of 3-D iterative reconstruction algorithms, with the availability of both computed tomography (CT)-based attenuation correction and scatter correction, may provide for more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched. It has been suggested that the combination of CT-based attenuation correction and scatter correction can allow for more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory-induced cardiac motion on SPECT images acquired using higher-resolution algorithms, such as 3-D iterative reconstruction with attenuation and scatter corrections, has not been investigated. Aims: To investigate the quantitative accuracy of 3-D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented on cardiac SPECT/CT imaging with and without CT-based attenuation and scatter corrections; to investigate the effects of respiratory-induced cardiac motion on myocardial perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory-induced motion, and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols. Respiratory-induced cardiac motion was simulated by imaging a cardiac phantom insert whilst moving it using a respiratory motion motor
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra
2017-03-22
Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in early identification of stomal and peristomal complications, pouching problems, and provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes development and testing of an ostomy algorithm tool.
ERIC Educational Resources Information Center
Hughes, Carroll W.; Emslie, Graham J.; Crismon, M. Lynn; Posner, Kelly; Birmaher, Boris; Ryan, Neal; Jensen, Peter; Curry, John; Vitiello, Benedetto; Lopez, Molly; Shon, Steve P.; Pliszka, Steven R.; Trivedi, Madhukar H.
2007-01-01
Objective: To revise and update consensus guidelines for medication treatment algorithms for childhood major depressive disorder based on new scientific evidence and expert clinical consensus when evidence is lacking. Method: A consensus conference was held January 13-14, 2005, that included academic clinicians and researchers, practicing…
Hybrid-optimization algorithm for the management of a conjunctive-use project and well field design
Chiu, Yung-Chia; Nishikawa, Tracy; Martin, Peter
2012-01-01
Hi-Desert Water District (HDWD), the primary water-management agency in the Warren Groundwater Basin, California, plans to construct a waste water treatment plant to reduce future septic-tank effluent from reaching the groundwater system. The treated waste water will be reclaimed by recharging the groundwater basin via recharge ponds as part of a larger conjunctive-use strategy. HDWD wishes to identify the least-cost conjunctive-use strategies for managing imported surface water, reclaimed water, and local groundwater. As formulated, the mixed-integer nonlinear programming (MINLP) groundwater-management problem seeks to minimize water-delivery costs subject to constraints including potential locations of the new pumping wells, California State regulations, groundwater-level constraints, water-supply demand, available imported water, and pump/recharge capacities. In this study, a hybrid-optimization algorithm, which couples a genetic algorithm and successive-linear programming, is developed to solve the MINLP problem. The algorithm was tested by comparing results to the enumerative solution for a simplified version of the HDWD groundwater-management problem. The results indicate that the hybrid-optimization algorithm can identify the global optimum. The hybrid-optimization algorithm is then applied to solve a complex groundwater-management problem. Sensitivity analyses were also performed to assess the impact of varying the new recharge pond orientation, varying the mixing ratio of reclaimed water and pumped water, and varying the amount of imported water available. The developed conjunctive-management model can provide HDWD water managers with information that will improve their ability to manage their surface water, reclaimed water, and groundwater resources.
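The hybrid scheme described above, a genetic algorithm over the discrete design choices wrapped around a continuous solve for the operating variables, can be sketched as follows. This is an illustrative toy, not the HDWD model: the well costs, capacities, demand, and the greedy allocation standing in for the successive-linear-programming step are all invented.

```python
import random

# Toy hybrid optimizer: a genetic algorithm picks which candidate wells to
# open (the integer decisions); an inner continuous solve allocates pumping
# among the open wells. All numbers below are made up for illustration.
FIXED = [5.0, 4.0, 6.0, 3.0, 7.0]       # fixed cost of opening each well
UNIT  = [1.0, 2.0, 0.5, 3.0, 0.8]       # unit pumping cost per volume
CAP   = [40.0, 30.0, 50.0, 20.0, 60.0]  # pumping capacity per well
DEMAND = 90.0

def inner_cost(mask):
    """Continuous subproblem: serve DEMAND from the open wells, cheapest
    unit cost first (a greedy stand-in for the linear-programming step)."""
    wells = sorted((UNIT[i], CAP[i]) for i in range(len(mask)) if mask[i])
    left = DEMAND
    cost = sum(FIXED[i] for i in range(len(mask)) if mask[i])
    for unit, cap in wells:
        q = min(cap, left)
        cost += unit * q
        left -= q
        if left <= 0:
            return cost
    return float("inf")  # open wells cannot meet demand -> infeasible

def ga(pop_size=20, gens=60, pmut=0.2, seed=1):
    rng = random.Random(seed)
    n = len(FIXED)
    # Seed the population with the all-wells-open design so a feasible
    # solution exists from generation zero; elitism then never loses it.
    pop = [[1] * n] + [[rng.randint(0, 1) for _ in range(n)]
                       for _ in range(pop_size - 1)]
    best = min(pop, key=inner_cost)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p = min(rng.sample(pop, 2), key=inner_cost)  # tournament
            q = min(rng.sample(pop, 2), key=inner_cost)
            cut = rng.randrange(1, n)                    # one-point crossover
            child = p[:cut] + q[cut:]
            if rng.random() < pmut:                      # bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=inner_cost)         # elitism
    return best, inner_cost(best)

mask, cost = ga()
print(mask, cost)
```

Because the best individual is carried across generations, the returned cost can never be worse than the seeded all-open design.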
2006-11-30
of Mechanical and Environmental Engineering, University of California, Santa Barbara. Abstract: At the mesoscopic scale, chemical processes have … linearity property of superposition, and we illustrate the benefits of this algorithm on a simplified model of the heat shock mechanism in E. coli … random number generator, the collected statistical data would converge to the exact solution to the CME. Unfortunately, the convergence rate for any
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1981-01-01
A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is order n log n, nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order n^(4/3). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
NASA Astrophysics Data System (ADS)
Ping, J.; Tavakoli, R.; Min, B.; Srinivasan, S.; Wheeler, M. F.
2015-12-01
Optimal management of subsurface processes requires the characterization of the uncertainty in reservoir description and reservoir performance prediction. The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations, and CO2 injection rate histories provided by the operators. We implement ensemble-based algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimation results for horizontal permeability and material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies in this field. Moreover, the existence of high-permeability channels/fractures within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to predict and monitor the movement of the CO2 plume accurately and efficiently.
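Ensemble-based assimilation of the kind referenced above centers on a cross-covariance update of the parameter ensemble toward the observations. The following is a minimal sketch with an invented two-parameter linear forward model standing in for the coupled flow/geomechanics simulator; it is not the study's implementation.

```python
import numpy as np

# Minimal ensemble (EnKF-style, perturbed-observation) update: an ensemble
# of parameter vectors is corrected toward an observation through the
# parameter-data cross-covariance. All numbers here are illustrative.
rng = np.random.default_rng(0)

def forward(m):
    """Toy forward model: 'uplift' observations as a linear map of
    two reservoir parameters (a stand-in for the real simulator)."""
    H = np.array([[1.0, 0.5], [0.2, 1.0]])
    return H @ m

true_m = np.array([2.0, -1.0])
obs = forward(true_m)
R = 0.05 * np.eye(2)                      # observation-error covariance

N = 200
ens = rng.normal(0.0, 1.0, size=(2, N))   # prior parameter ensemble
pred = np.stack([forward(ens[:, j]) for j in range(N)], axis=1)

dm = ens - ens.mean(axis=1, keepdims=True)
dd = pred - pred.mean(axis=1, keepdims=True)
Cxy = dm @ dd.T / (N - 1)                 # parameter-data cross-covariance
Cyy = dd @ dd.T / (N - 1)                 # predicted-data covariance

K = Cxy @ np.linalg.inv(Cyy + R)          # Kalman gain
pert_obs = obs[:, None] + rng.multivariate_normal(np.zeros(2), R, N).T
ens_post = ens + K @ (pert_obs - pred)    # perturbed-observation update

print(ens.mean(axis=1), ens_post.mean(axis=1))
```

With informative observations the posterior ensemble mean moves from the uninformed prior toward the true parameters.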
NASA Astrophysics Data System (ADS)
Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.
2016-02-01
Intracranial hemorrhage is a medical emergency that requires rapid detection and medication to keep any brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage that causes a strong microwave scattering. The system uses a compact sensing antenna, which has an ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data is processed to create a clear image of the brain using an improved back projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests are conducted on healthy subjects with different head sizes. The reconstructed images are statistically analyzed, and the absence of false-positive results indicates the efficacy of the proposed system in future preclinical trials.
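The back-projection (delay-and-sum) principle underlying such imaging can be illustrated with a toy example: each candidate pixel accumulates the recorded signals at the round-trip delay from each antenna, so energy focuses at the true scatterer. The geometry, wave speed, and pulse below are synthetic stand-ins, not the paper's head model or algorithm.

```python
import numpy as np

# Delay-and-sum back projection on synthetic data: four antennas record the
# echo from one hidden scatterer; the image is formed by summing each
# recorded signal at the pixel's round-trip delay.
c = 1.0                                   # propagation speed (arbitrary units)
fs = 200.0                                # samples per unit time
antennas = [(-1.0, 0.0), (0.0, 1.2), (1.0, 0.0), (0.0, -1.2)]
target = (0.3, 0.2)                       # hidden scatterer position

def pulse(t):
    """Short Gaussian echo peaking at t = 0.05."""
    return np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))

t = np.arange(0.0, 6.0, 1.0 / fs)
signals = []
for ax, ay in antennas:
    d = 2 * np.hypot(ax - target[0], ay - target[1])   # round-trip distance
    signals.append(pulse(t - d / c))

xs = np.linspace(-0.8, 0.8, 81)
ys = np.linspace(-0.8, 0.8, 81)
image = np.zeros((len(ys), len(xs)))
for (ax, ay), s in zip(antennas, signals):
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            delay = 2 * np.hypot(ax - x, ay - y) / c
            k = int(round((delay + 0.05) * fs))        # align with pulse peak
            if k < len(s):
                image[iy, ix] += s[k]

iy, ix = np.unravel_index(np.argmax(image), image.shape)
print(xs[ix], ys[iy])
```

The brightest pixel lands where all four round-trip ellipses intersect, i.e. at the scatterer.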
NASA Astrophysics Data System (ADS)
Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo
An inversion method for bridge height over water by polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projecting algorithm, a polarimetric SAR image of a bridge model is first simulated; it shows that scattering from a bridge over water can be identified by three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on de-orientation theory is applied to the analysis of the three types of scattering, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data of the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, this approach is also applicable to spaceborne ALOS PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results show good feasibility of bridge-height inversion.
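The Hough-transform step, locating bright strip lines by voting in (rho, theta) space, can be sketched as follows on a synthetic point set. The accumulator resolution and the data are illustrative, not the Pi-SAR processing chain.

```python
import numpy as np

# Minimal Hough-transform line detector: each point votes for all
# (rho, theta) pairs consistent with it; a line shows up as a peak.
def hough_peak(points, n_theta=180, n_rho=200, rho_max=200.0):
    """Return the (rho, theta) of the accumulator cell with the most votes."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # rho(theta) per point
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[idx[ok], np.arange(n_theta)[ok]] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    rho = i / (n_rho - 1) * 2 * rho_max - rho_max
    return rho, thetas[j]

# Points on the horizontal line y = 40 (theta = 90 deg, rho = 40),
# plus two off-line "noise" points.
pts = [(float(x), 40.0) for x in range(0, 100, 2)] + [(13.0, 7.0), (71.0, 88.0)]
rho, theta = hough_peak(pts)
print(round(rho, 1), round(np.degrees(theta), 1))
```

The recovered (rho, theta) is accurate to within the accumulator's bin widths, which is why the assertions below use tolerances.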
Flanigan, Patrick W; Ostfeld, Aminy E; Serrino, Natalie G; Ye, Zhen; Pacifici, Domenico
2013-02-11
This report will present a generalized two-dimensional quasiperiodic (QP) tiling algorithm based on de Bruijn's "cut and projection" method for use in plasmonic concentrator (PC) / photovoltaic hybrid devices to produce wide-angle, polarization-insensitive, and broadband light absorption enhancement. This algorithm can be employed with any PC consisting of point-like scattering objects, and can be fine-tuned to achieve a high spatial density of points and high orders of local and long-range rotational symmetry. Simulations and experimental data demonstrate this enhancement in ultra-thin layers of organic photovoltaic materials resting on metallic films etched with arrays of shallow sub-wavelength nanoholes. These devices work by coupling the incident light to surface plasmon polariton (SPP) modes that propagate along the dielectric / metal interface. This effectively increases the scale of light-matter interaction, and can also result in constructive interference between propagating SPP waves. By comparing PCs made with random, periodic, and QP arrangements, it is clear that QP is superior in intensifying the local fields and enhancing absorption in the active layer.
Goodarzi, Mohammad; Saeys, Wouter; de Araujo, Mario Cesar Ugulino; Galvão, Roberto Kawakami Harrop; Vander Heyden, Yvan
2014-01-23
Chalcones are naturally occurring aromatic ketones, which consist of an α,β-unsaturated carbonyl system joining two aryl rings. These compounds are reported to exhibit several pharmacological activities, including antiparasitic, antibacterial, antifungal, anticancer, immunomodulatory, nitric oxide inhibition and anti-inflammatory effects. In the present work, a Quantitative Structure-Activity Relationship (QSAR) study is carried out to classify chalcone derivatives with respect to their antileishmanial activity (active/inactive) on the basis of molecular descriptors. For this purpose, two techniques to select descriptors are employed, the Successive Projections Algorithm (SPA) and the Genetic Algorithm (GA). The selected descriptors are initially employed to build Linear Discriminant Analysis (LDA) models. An additional investigation is then carried out to determine whether the results can be improved by using a non-parametric classification technique (One Nearest Neighbour, 1NN). In a case study involving 100 chalcone derivatives, the 1NN models were found to provide better rates of correct classification than LDA, both in the training and test sets. The best result was achieved by a SPA-1NN model with six molecular descriptors, which provided correct classification rates of 97% and 84% for the training and test sets, respectively.
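The Successive Projections Algorithm referenced above selects variables by repeatedly choosing the column with the largest component orthogonal to those already selected, which suppresses collinearity among the chosen descriptors. A minimal sketch on toy data (not the chalcone descriptor set):

```python
import numpy as np

# Sketch of the Successive Projections Algorithm (SPA): starting from one
# column of the data matrix, repeatedly pick the column with the largest
# residual after projecting out everything already chosen.
def spa(X, k_start, n_select):
    X = np.asarray(X, dtype=float)
    selected = [k_start]
    P = X.copy()
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        # Project every column onto the orthogonal complement of v.
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0            # exclude already-chosen columns
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(3)
a = rng.normal(size=50)
b = rng.normal(size=50)
# Columns 0 and 3 are near-copies of column 1; column 2 is independent.
X = np.column_stack([a + 0.01 * rng.normal(size=50),
                     a,
                     b,
                     a + 0.01 * rng.normal(size=50)])
print(spa(X, k_start=1, n_select=2))
```

Starting from the collinear column 1, SPA skips its near-copies (columns 0 and 3) and picks the independent column 2.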
Michel, D.; Jimenez, C.; Miralles, D. G.; Jung, M.; Hirschi, M.; Ershadi, A.; Martens, B.; McCabe, M. F.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernandez-Prieto, D.
2016-02-23
The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R^{2} = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R^{2} = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. In conclusion, an extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a
NASA Astrophysics Data System (ADS)
Barnard, L.; Scott, C. J.; Owens, M.; Lockwood, M.; Crothers, S. R.; Davies, J. A.; Harrison, R. A.
2015-10-01
Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.
Solomon, Justin; Marin, Daniele; Roy Choudhury, Kingshuk; Patel, Bhavik; Samei, Ehsan
2017-02-07
Purpose: To determine the effect of radiation dose and iterative reconstruction (IR) on noise, contrast, resolution, and observer-based detectability of subtle hypoattenuating liver lesions and to estimate the dose reduction potential of the IR algorithm in question. Materials and Methods: This prospective, single-center, HIPAA-compliant study was approved by the institutional review board. A dual-source computed tomography (CT) system was used to reconstruct CT projection data from 21 patients into six radiation dose levels (12.5%, 25%, 37.5%, 50%, 75%, and 100%) on the basis of two CT acquisitions. A series of virtual liver lesions (five per patient, 105 total, lesion-to-liver prereconstruction contrast of -15 HU, 12-mm diameter) were inserted into the raw CT projection data and images were reconstructed with filtered back projection (FBP) (B31f kernel) and sinogram-affirmed IR (SAFIRE) (I31f-5 kernel). Image noise (pixel standard deviation), lesion contrast (after reconstruction), lesion boundary sharpness (average normalized gradient at lesion boundary), and contrast-to-noise ratio (CNR) were compared. Next, a two-alternative forced choice perception experiment was performed (16 readers [six radiologists, 10 medical physicists]). A linear mixed-effects statistical model was used to compare detection accuracy between FBP and SAFIRE and to estimate the radiation dose reduction potential of SAFIRE. Results: Compared with FBP, SAFIRE reduced noise by a mean of 53% ± 5, lesion contrast by 12% ± 4, and lesion sharpness by 13% ± 10 but increased CNR by 89% ± 19. Detection accuracy was 2% higher on average with SAFIRE than with FBP (P = .03), which translated into an estimated radiation dose reduction potential (±95% confidence interval) of 16% ± 13. Conclusion: SAFIRE increases detectability at a given radiation dose (approximately 2% increase in detection accuracy) and allows for imaging at reduced radiation dose (16% ± 13), while maintaining low
NASA Astrophysics Data System (ADS)
Altaleb, Anas; Saeed, Muhammad Sarwar; Hussain, Iqtadar; Aslam, Muhammad
2017-03-01
The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S_256 to construct new kind of S-boxes. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given to calculate the confusion-creating potency. To analyze the security of the S-box some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by bit independent criterion, linear approximation probability test, non-linearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H
2014-01-01
Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275
Grünhut, Marcos; Centurión, María E; Fragoso, Wallace D; Almeida, Luciano F; de Araújo, Mário C U; Fernández Band, Beatriz S
2008-05-30
An enzymatic flow-batch system with spectrophotometric detection was developed for simultaneous determination of levodopa [(S)-2-amino-3-(3,4-dihydroxyphenyl)propionic acid] and carbidopa [(S)-3-(3,4-dihydroxyphenyl)-2-hydrazino-2-methylpropionic acid] in pharmaceutical preparations. The data were analysed by a univariate method, partial least squares (PLS) and a novel variable-selection method for multiple linear regression (MLR), the successive projections algorithm (SPA). The enzyme polyphenol oxidase (PPO; EC 1.14.18.1) obtained from Ipomoea batatas (L.) Lam. was used to oxidize both analytes to their respective dopaquinones, which presented a strong absorption between 295 and 540 nm. The statistical parameters (RMSE and correlation coefficient) calculated after the PLS in the spectral region between 295 and 540 nm and MLR-SPA application were appropriate for levodopa and carbidopa. A comparative study of the univariate, PLS (in different ranges), and MLR-SPA chemometric models was carried out by applying the elliptical joint confidence region (EJCR) test. The results were satisfactory for PLS in the spectral region between 295 and 540 nm and for MLR-SPA. Tablets of commercial samples were analysed and the results obtained are in close agreement with both the spectrophotometric and HPLC pharmacopoeia methods. The sample throughput was 18 h^-1.
An algorithm for segmenting range imagery
Roberts, R.S.
1997-03-01
This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.
Ali, Imad; Alsbou, Nesreen; Herman, Terence; Ahmad, Salahuddin
2011-02-01
The purpose of this work is to extract three-dimensional (3D) motion trajectories of internal implanted and external skin-attached markers from kV cone-beam projections and to reduce image artifacts from patient motion in cone-beam computed tomography (CBCT) from an on-board imager. Cone beam radiographic projections were acquired for a mobile phantom and liver patients with internal implanted and external skin-attached markers. An algorithm was developed to automatically find the positions of the markers in the projections. It uses normalized cross-correlation between a template image of a metal seed marker and the projections to find the marker position. From these positions and time-tagged angular views, the marker 3D motion trajectory was obtained over a time interval of nearly one minute, which is the time required for scanning. This marker trajectory was used to remap the pixels of the projections to eliminate motion. Then, the motion-corrected projections were used to reconstruct CBCT. An algorithm was developed to extract 3D motion trajectories of internal and external markers from cone-beam projections using a kV monoscopic on-board imager. This algorithm was tested and validated using a mobile phantom and patients with liver masses that had radio-markers implanted in the tumor and attached to the skin. The extracted motion trajectories were used to investigate motion correlation between internal and external markers in liver patients. Image artifacts from respiratory motion were reduced in CBCT reconstructed from cone-beam projections that were preprocessed to remove motion shifts obtained from marker tracking. With this method, motion-related image artifacts such as blurring and spatial distortion were reduced, and contrast and position resolutions were improved significantly in CBCT reconstructed from motion-corrected projections. Furthermore, correlated internal and external marker 3D-motion tracks obtained from the kV projections might be useful for 4DCBCT
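The normalized cross-correlation step described, matching a marker template against each projection, can be sketched as follows on a synthetic image. The cross-shaped template, image size, and noise level are invented for illustration.

```python
import numpy as np

# Normalized cross-correlation (NCC) template matching: slide the template
# over the image and score the zero-mean, unit-norm correlation at each
# offset; the best-scoring offset is the marker position.
def ncc_match(image, template):
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best, best_score = (0, 0), -2.0
    H, W = image.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * t_norm
            if denom == 0:
                continue                      # flat window: NCC undefined
            score = float((wz * t).sum() / denom)
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score

rng = np.random.default_rng(7)
template = np.zeros((5, 5))
template[1:4, 2] = 0.8                        # vertical arm of the "seed"
template[2, 1:4] = 0.8                        # horizontal arm
template[2, 2] = 1.0                          # bright centre
image = 0.05 * rng.normal(size=(64, 64))      # noisy background
image[30:35, 40:45] += template               # plant the marker
pos, score = ncc_match(image, template)
print(pos, round(score, 2))
```

Because NCC is normalized per window, the match score is robust to local brightness and contrast changes, which is why it suits projection-to-projection tracking.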
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara
2016-01-01
The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite-borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model; explain the SMAP calibration algorithm, including approximations, errors, and biases; provide all necessary equations for implementing the calibration algorithm; and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
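As background for the counts-to-brightness-temperature conversion, a generic two-point (cold/hot reference) radiometer calibration can be sketched as follows. This is a textbook linearization, not the actual SMAP calibration equations, and the function name is an illustrative assumption:

```python
def counts_to_ta(counts, c_cold, c_hot, t_cold, t_hot):
    """Two-point linear calibration: map raw radiometer counts to antenna
    temperature using cold and hot reference looks of known temperature.
    Gain and offset are assumed stable between reference measurements."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)
    return t_cold + gain * (counts - c_cold)
```

Real satellite processing adds antenna-pattern corrections, RFI filtering, and loss terms on top of this basic linear step.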
ERIC Educational Resources Information Center
Rheinboldt, Werner C.
This material contains two units which view applications of computer science. The first of these examines Horner's scheme and is designed to instruct the user in applying both this scheme and related algorithms. The second unit aims for student understanding of standard bisection, secant, and Newton methods of root finding and appreciation of…
ERIC Educational Resources Information Center
Grayson, Katherine
2007-01-01
In November 2006, the editors of "Campus Technology" launched their first-ever High-Resolution Projection Study, to find out if the latest in projector technology could really make a significant difference in teaching, learning, and educational innovation on US campuses. The author and her colleagues asked campus educators,…
NASA Astrophysics Data System (ADS)
Shahriari, Mohammadreza
2016-03-01
The time-cost tradeoff problem is one of the most important and applicable problems in project scheduling. Many factors can force managers to crash the schedule: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the project finish time, and project managers want to spend the least possible amount of money while achieving the maximum crashing, both direct and indirect costs are affected, and the time value of money comes into play: when the starting activities of a project are crashed, the extra investment is tied up until the project's end date, whereas crashing the final activities ties up the extra investment for a much shorter period. This study presents a two-objective mathematical model that balances compressing the project time against delaying activities, providing a suitable decision tool for managers constrained by available facilities and project due dates. The model is drawn closer to real-world conditions by considering a nonlinear objective function and the time value of money. The problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
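The time-value-of-money effect described above (crashing early activities ties up capital longer than crashing late ones) can be illustrated with a minimal sketch; the function and the compounding convention are illustrative assumptions, not the paper's model:

```python
def crash_cost_present_value(crash_cost, months_until_project_end, monthly_rate):
    """Effective cost of crashing an activity when the money spent is tied
    up until the project ends: the longer the remaining horizon, the more
    the same nominal crash cost effectively costs (simple compounding)."""
    return crash_cost * (1 + monthly_rate) ** months_until_project_end
```

Under this view, crashing a starting activity (say, 12 months before completion) is strictly more expensive than crashing a final activity (2 months before completion) for the same nominal outlay, which is the asymmetry the two-objective model trades off.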
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Algorithm Diversity for Resilient Systems
2016-06-27
Specifically, the project aims to develop techniques to introduce algorithm-level diversity, in contrast to existing work on execution-level diversity, by making changes to a program's state during execution. Algorithm-level diversity can introduce larger differences between variants than execution-level
ERIC Educational Resources Information Center
Dershem, Herbert L.
These modules view aspects of computer use in the problem-solving process, and introduce techniques and ideas that are applicable to other modes of problem solving. The first unit looks at algorithms, flowchart language, and problem-solving steps that apply this knowledge. The second unit describes ways in which computer iteration may be used…
Anglada-Escude, Guillem; Butler, R. Paul
2012-06-01
Doppler spectroscopy has uncovered or confirmed all the known planets orbiting nearby stars. Two main techniques are used to obtain precision Doppler measurements at optical wavelengths. The first approach is the gas cell method, which consists of least-squares matching of the spectrum of iodine imprinted on the spectrum of the star. The second method relies on the construction of a stabilized spectrograph externally calibrated in wavelength. The most precise stabilized spectrometer in operation is the High Accuracy Radial velocity Planet Searcher (HARPS), operated by the European Southern Observatory in La Silla Observatory, Chile. The Doppler measurements obtained with HARPS are typically obtained using the cross-correlation function (CCF) technique. This technique consists of multiplying the stellar spectrum by a weighted binary mask and finding the minimum of the product as a function of the Doppler shift. It is known that CCF is suboptimal in exploiting the Doppler information in the stellar spectrum. Here we describe an algorithm to obtain precision radial velocity measurements using least-squares matching of each observed spectrum to a high signal-to-noise ratio template derived from the same observations. This algorithm is implemented in our software HARPS-TERRA (Template-Enhanced Radial velocity Re-analysis Application). New radial velocity measurements on a representative sample of stars observed by HARPS are used to illustrate the benefits of the proposed method. We show that, compared with CCF, template matching provides a significant improvement in accuracy, especially when applied to M dwarfs.
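The CCF step described above can be sketched as follows: shift the binary mask for each trial velocity and sum the weighted flux falling on the mask lines; absorption lines place the minimum at the stellar Doppler shift. This nearest-pixel version is illustrative only, not the HARPS pipeline:

```python
def ccf(wavelengths, flux, mask_lines, weights, velocities, c=299792.458):
    """Cross-correlation function sketch. For each trial velocity (km/s),
    Doppler-shift the mask line positions and sum the weighted stellar flux
    sampled at those positions. With absorption lines, the CCF is minimal
    at the true radial velocity."""
    out = []
    for v in velocities:
        total = 0.0
        for line, w in zip(mask_lines, weights):
            shifted = line * (1.0 + v / c)
            # nearest-pixel lookup; real codes integrate over the mask width
            i = min(range(len(wavelengths)),
                    key=lambda k: abs(wavelengths[k] - shifted))
            total += w * flux[i]
        out.append(total)
    return out
```

Template matching, by contrast, fits each observed spectrum against a high signal-to-noise template rather than a sparse binary mask, which is why it exploits more of the Doppler information.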
NYU Ultracomputer project. Final project summary, 1979-1993
Gottlieb, A.
1994-10-01
This report discusses the following on the Ultracomputer project: simulation studies; network analysis; prototype hardware; VLSI design; coordination algorithms; systems software; application software; and compiler development.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
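A dispatch method in the sense used above can be sketched as a simple greedy rule: order jobs by window deadline and place each at the earliest feasible time. This single-resource sketch is illustrative, not one of the paper's benchmarked algorithms:

```python
def dispatch_schedule(jobs):
    """Greedy dispatch sketch for window-constrained packing.

    jobs: iterable of (job_id, release, deadline, duration).
    Sort by deadline (earliest-due-date rule) and place each job at the
    earliest feasible start on a single resource; jobs whose window cannot
    be met are skipped. Returns {job_id: start_time}."""
    schedule = {}
    busy_until = 0.0
    for jid, release, deadline, duration in sorted(jobs, key=lambda j: j[2]):
        start = max(busy_until, release)
        if start + duration <= deadline:
            schedule[jid] = start
            busy_until = start + duration
    return schedule
```

Such rules are fast (one sort plus a linear pass), which matches the paper's finding that dispatch methods trade a little accuracy for large speed gains over look-ahead and genetic approaches.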
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences, and data volume. All required input/output data files are described, and the computer resources required for the entire altimeter processing system are estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
Gomes, Adriano de Araújo; Schenone, Agustina V; Goicoechea, Héctor C; de Araújo, Mario Cesar U
2015-07-01
The use of the successive projections algorithm (SPA) for the elimination of uninformative variables in interval selection, combined with unfolded partial least squares regression (U-PLS) modeling of excitation-emission matrices (EEM) under the inner filter effect (IFE), is reported for the first time. Post-calibration residual bilinearization (RBL) was employed to handle unknown components in the test samples. The inner filter effect can change both the shape and intensity of analyte spectra, causing trilinearity losses in both modes and thus invalidating most multiway calibration methods. The algorithm presented in this paper was named iSPA-U-PLS/RBL. Both simulated and experimental data sets were used to compare prediction capability: (1) a simulated EEM data set; and (2) the quantitation of phenylephrine (PHE) in the presence of paracetamol (PAR, or acetaminophen) in water samples. Test sets containing unexpected components were built in both systems (a single interferent was considered in the simulated data set, while the water samples were spiked with varying amounts of ibuprofen (IBU) and acetylsalicylic acid (ASA)). The prediction results and figures of merit obtained with the new algorithm were compared with those obtained with U-PLS/RBL (without interval selection) and with the well-known parallel factor analysis (PARAFAC). In all cases, U-PLS/RBL handled EEM data in the presence of the inner filter effect better than PARAFAC. In addition, iSPA-U-PLS/RBL improved on the results of the full U-PLS/RBL model, demonstrating the potential of variable selection.
Hogan, Robin
2008-01-15
Cloudnet is a research project supported by the European Commission. The project aims to use quasi-continuously acquired data for the development and implementation of cloud remote sensing synergy algorithms. The use of active instruments (lidar and radar) yields detailed vertical profiles of important cloud parameters that cannot be derived from current satellite sensing techniques. A network of three already existing cloud remote sensing stations (CRS stations) will be operated for a two-year period, activities will be co-ordinated, data formats harmonised, and analysis of the data performed to evaluate the representation of clouds in four major European weather forecast models.
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks the long and unpredictable latency of remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, and latency. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration for numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study, we are planning to study other architectures of interest, including the development of cost models and of code generators appropriate to those architectures.
NASA Technical Reports Server (NTRS)
Gregory, Kyle J.; Hill, Joanne E. (Editor); Black, J. Kevin; Baumgartner, Wayne H.; Jahoda, Keith
2016-01-01
A fundamental challenge in a spaceborne application of a gas-based Time Projection Chamber (TPC) for observation of X-ray polarization is handling the large amount of data collected. The TPC polarimeter described uses the APV-25 Application Specific Integrated Circuit (ASIC) to readout a strip detector. Two dimensional photoelectron track images are created with a time projection technique and used to determine the polarization of the incident X-rays. The detector produces a 128x30 pixel image per photon interaction with each pixel registering 12 bits of collected charge. This creates challenging requirements for data storage and downlink bandwidth with only a modest incidence of photons and can have a significant impact on the overall mission cost. An approach is described for locating and isolating the photoelectron track within the detector image, yielding a much smaller data product, typically between 8x8 pixels and 20x20 pixels. This approach is implemented using a Microsemi RT-ProASIC3-3000 Field-Programmable Gate Array (FPGA), clocked at 20 MHz and utilizing 10.7k logic gates (14% of FPGA), 20 Block RAMs (17% of FPGA), and no external RAM. Results will be presented, demonstrating successful photoelectron track cluster detection with minimal impact to detector dead-time.
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a random search and optimization method based on natural selection and the mechanisms of heredity in living beings. In recent years, because of its potential for solving complicated problems and its successful application in industrial projects, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing-selection communication, and designs and implements a new routing-selection algorithm based on a genetic algorithm. Experimental simulation results show that this algorithm finds better routes in less time and balances network load more evenly, which enhances the search ratio and the availability of network resources and improves quality of service.
Kuo, Yu; Lin, Yi-Yang; Lee, Rheun-Chuan; Lin, Chung-Jung; Chiou, Yi-You; Guo, Wan-Yuo
2016-08-01
The purpose of this study was to compare the image noise-reducing abilities of iterative model reconstruction (IMR) with those of traditional filtered back projection (FBP) and statistical iterative reconstruction (IR) in abdominal computed tomography (CT) images. This institutional review board-approved retrospective study enrolled 103 patients; informed consent was waived. Urinary bladder (n = 83) and renal cysts (n = 44) were used as targets for evaluating imaging quality. Raw data were retrospectively reconstructed using FBP, statistical IR, and IMR. Objective image noise and signal-to-noise ratio (SNR) were calculated and analyzed using one-way analysis of variance. Subjective image quality was evaluated and analyzed using the Wilcoxon signed-rank test with Bonferroni correction. Objective analysis revealed a reduction in image noise for statistical IR compared with that for FBP, with no significant differences in SNR. In the urinary bladder group, IMR achieved up to 53.7% noise reduction, demonstrating a superior performance to that of statistical IR. IMR also yielded a significantly superior SNR to that of statistical IR. Similar results were obtained in the cyst group. Subjective analysis revealed reduced image noise for IMR, without inferior margin delineation or diagnostic confidence. IMR reduced noise and increased SNR to greater degrees than did FBP and statistical IR. Applying the IMR technique to abdominal CT imaging has potential for reducing the radiation dose without sacrificing imaging quality.
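The objective metrics used above (image noise and SNR over a homogeneous region of interest) can be written down concretely; this is the common ROI-based definition, assumed rather than quoted from the paper:

```python
def roi_stats(values):
    """Objective image-quality sketch for a homogeneous ROI (e.g. urinary
    bladder or cyst): noise = standard deviation of ROI pixel values,
    SNR = ROI mean divided by that standard deviation."""
    n = len(values)
    mean = sum(values) / n
    noise = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return noise, mean / noise

def noise_reduction_pct(noise_ref, noise_new):
    """Percent noise reduction of one reconstruction relative to a reference."""
    return 100.0 * (noise_ref - noise_new) / noise_ref
```

On these definitions, a reconstruction whose ROI standard deviation drops from 10 HU to 4.63 HU at fixed mean shows a 53.7% noise reduction, the magnitude reported for IMR in the bladder group.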
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
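As a concrete example of an approximation algorithm with a provable performance guarantee (a classic one, not drawn from this survey), the textbook 2-approximation for minimum vertex cover fits in a few lines:

```python
def vertex_cover_2approx(edges):
    """Classic maximal-matching 2-approximation for minimum vertex cover:
    repeatedly pick an uncovered edge and add both of its endpoints.
    The resulting cover is at most twice the size of an optimal cover,
    because the chosen edges form a matching and any cover must contain
    at least one endpoint of each matched edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

This is exactly the pattern the abstract describes: a fast heuristic whose output is provably close to optimal, in contrast to ad hoc solutions of unguaranteed quality.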
Schmidtlein, CR; Beattie, B; Humm, J; Li, S; Wu, Z; Xu, Y; Zhang, J; Shen, L; Vogelsang, L; Feiglin, D; Krol, A
2014-06-15
Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the L1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce the artificial piecewise-constant regions ("staircase" artifacts, typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared-error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1 hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and less noisy than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the L1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
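The basic genetic algorithm concepts introduced above (a population of candidate solutions evolved by selection, crossover, and mutation) can be sketched minimally. This toy one-max example is illustrative only and does not reproduce Splicer:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point crossover,
    and bit-flip mutation over a bit-string population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Example: maximize the number of 1 bits ("one-max")
best = genetic_algorithm(fitness=sum)
```

A general-purpose tool like Splicer wraps this loop behind an interface where the user supplies only the representation and the fitness function.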
Understanding Algorithms in Different Presentations
ERIC Educational Resources Information Center
Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János
2015-01-01
Within the framework of the Testing Algorithmic and Application Skills project, we tested first-year students of Informatics at the beginning of their tertiary education. We focused on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…
Next Generation Suspension Dynamics Algorithms
Schunk, Peter Randall; Higdon, Jonathon; Chen, Steven
2014-12-01
This research project has the objective of extending the range of application of, improving the efficiency of, and conducting simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field, and provide the framework for a novel parallel implementation optimized for an OpenMP shared-memory environment. The project considered application to consolidation flows of major interest in high-throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.
Genetic algorithms at UC Davis/LLNL
Vemuri, V.R.
1993-12-31
A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Rotational Invariant Dimensionality Reduction Algorithms.
Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David
2016-06-30
A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L₂ norm as the metric. In this paper, a series of methods based on the L₂,₁-norm are proposed for linear dimensionality reduction. Since the L₂,₁-norm based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper shows that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with previous L₂ norm based subspace learning algorithms.
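The L₂,₁ norm at the heart of the proposed framework is simply the sum of row-wise L₂ norms; a minimal sketch:

```python
def l21_norm(matrix):
    """L2,1 norm of a matrix: sum over rows of the row-wise L2 norms.
    An outlying entry enters the objective only through its row norm
    (not squared across the whole matrix, as in the Frobenius norm),
    which is the source of the robustness discussed above."""
    return sum(sum(x * x for x in row) ** 0.5 for row in matrix)
```

For [[3, 4], [0, 0], [5, 12]] the row norms are 5, 0, and 13, so the L₂,₁ norm is 18.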
Applications of the Schur Basis to Quantum Algorithms
2011-01-10
Quantum computation offers a promising avenue to high-performance computing for certain applications, but depends on the development of new quantum algorithms. Thus far, all major quantum algorithms that are exponentially fast compared with their classical counterparts are based on the quantum Fourier transform. This project seeks to develop new
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Tilted cone beam VCT reconstruction algorithm
NASA Astrophysics Data System (ADS)
Hsieh, Jiang; Tang, Xiangyang
2005-04-01
Reconstruction algorithms for volumetric CT have been the focus of many studies. Several exact and approximate reconstruction algorithms have been proposed for step-and-shoot and helical scanning trajectories to combat cone beam related artifacts. In this paper, we present a closed form cone beam reconstruction formula for tilted gantry data acquisition. Although several algorithms were proposed to compensate for errors induced by the gantry tilt, none of the algorithms addresses the case in which the cone beam geometry is first rebinned to a set of parallel beams prior to the filtered backprojection. Because of the rebinning process, the amount of iso-center adjustment depends not only on the projection angle and tilt angle, but also on the reconstructed pixel location. The proposed algorithm has been tested extensively on both 16 and 64 slice VCT with phantoms and clinical data. The efficacy of the algorithm is clearly demonstrated by the experiments.
Total variation projection with first order schemes.
Fadili, Jalal M; Peyre, Gabriel
2011-03-01
This article proposes a new algorithm to compute the projection on the set of images whose total variation is bounded by a constant. The projection is computed through a dual formulation that is solved by first order non-smooth optimization methods. This yields an iterative algorithm that applies iterative soft thresholding to the dual vector field, and for which we establish convergence rate on the primal iterates. This projection algorithm can then be used as a building block in a variety of applications such as solving inverse problems under a total variation constraint, or for texture synthesis. Numerical results are reported to illustrate the usefulness and potential applicability of our TV projection algorithm on various examples including denoising, texture synthesis, inpainting, deconvolution and tomography problems. We also show that our projection algorithm competes favorably with state-of-the-art TV projection methods in terms of convergence speed.
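The quantity being constrained can be stated concretely: a discrete isotropic total variation with forward differences, sketched below. (The projection onto the TV ball itself, which the paper solves through a dual first-order scheme with iterative soft thresholding, is not reproduced here.)

```python
def total_variation(img):
    """Discrete isotropic total variation of a 2-D image: sum over pixels
    of the gradient magnitude, with forward differences and zero padding
    at the last row/column. The projection discussed above maps an image
    to the nearest image whose TV does not exceed a given bound."""
    H, W = len(img), len(img[0])
    tv = 0.0
    for i in range(H):
        for j in range(W):
            dx = img[i][j + 1] - img[i][j] if j + 1 < W else 0.0
            dy = img[i + 1][j] - img[i][j] if i + 1 < H else 0.0
            tv += (dx * dx + dy * dy) ** 0.5
    return tv
```

A constant image has zero TV, while each unit jump between neighboring pixels contributes one unit, which is why TV constraints favor piecewise-smooth results in denoising and inpainting.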
Project PRISM: Project Manual.
ERIC Educational Resources Information Center
Cunnion, Maryellen; And Others
The first of three volumes of Project PRISM, a program designed to help classroom teachers (grades 6 through 8) provide for the needs of their gifted and talented students without removing those students from the mainstream of education, outlines the project's background and achievements. Sections review the following project aspects (sample…
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
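The nonlinear gain idea above, scaling aircraft inputs with a third-order polynomial so that the cues stay within the motion system's operational limits, can be sketched as follows. The unity small-signal gain and the numeric limits are illustrative assumptions, not the report's actual tuning:

```python
def cubic_gain(x_max, y_max):
    """Build an odd third-order polynomial g(x) = a1*x + a3*x**3 with
    unity gain for small inputs (a1 = 1) and g(x_max) = y_max, so the
    largest expected command maps exactly onto the platform limit."""
    a1 = 1.0
    a3 = (y_max - a1 * x_max) / x_max**3
    def g(x):
        return a1 * x + a3 * x**3
    return g

# Hypothetical numbers: commands up to 10 m/s^2, platform limited to 6 m/s^2.
g = cubic_gain(10.0, 6.0)
print(g(0.5))   # small cues pass through nearly unchanged
print(g(10.0))  # the extreme command just reaches the platform limit
```

With these numbers the polynomial compresses only the large commands, which is the stated goal of maximizing cues while respecting limits.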
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight-project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, in which all requests are scheduled according to a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms, which assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower-priority items that are in conflict.
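The contrast between conflict-aware and conflict-free scheduling can be sketched on a toy single-antenna model. The request format, priority scheme, and greedy conflict resolution here are illustrative assumptions, not the DSN scheduler itself:

```python
def overlaps(a, b):
    # Two tracking requests conflict when their time windows intersect.
    return a["start"] < b["end"] and b["start"] < a["end"]

def conflict_aware_schedule(requests):
    """Place every request on the timeline (highest priority first) and
    record which pairs overlap, rather than silently dropping the losers."""
    scheduled = sorted(requests, key=lambda r: -r["priority"])
    conflicts = []
    for i in range(len(scheduled)):
        for j in range(i + 1, len(scheduled)):
            if overlaps(scheduled[i], scheduled[j]):
                conflicts.append((scheduled[i]["id"], scheduled[j]["id"]))
    return scheduled, conflicts

def to_conflict_free(scheduled):
    """Derive a conflict-free schedule greedily: keep a request only if
    it does not overlap any higher-priority request already kept."""
    kept = []
    for r in scheduled:  # highest priority first
        if all(not overlaps(r, k) for k in kept):
            kept.append(r)
    return kept

reqs = [
    {"id": "mission-A", "priority": 3, "start": 0, "end": 4},
    {"id": "mission-B", "priority": 2, "start": 3, "end": 6},
    {"id": "mission-C", "priority": 1, "start": 5, "end": 8},
]
sched, conf = conflict_aware_schedule(reqs)
print(conf)  # [('mission-A', 'mission-B'), ('mission-B', 'mission-C')]
print([r["id"] for r in to_conflict_free(sched)])  # ['mission-A', 'mission-C']
```

Note that the conflict-aware result keeps mission-B visible for negotiation, while the derived conflict-free schedule simply drops it.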
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms that use the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to construct the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the search is organized so that the identification result can be found with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Improved algorithm for hyperspectral data dimension determination
NASA Astrophysics Data System (ADS)
CHEN, Jie; DU, Lei; LI, Jing; HAN, Yachao; GAO, Zihong
2017-02-01
The correlation between adjacent bands of hyperspectral image data is relatively strong, but signal coexists with noise. The HySime (hyperspectral signal identification by minimum error) algorithm, which is based on the principle of least squares, is designed to calculate the estimated noise value and the estimated signal correlation matrix. The algorithm is effective when the noise value is known accurately, but ineffective when the noise estimate is obtained from the spectral dimension reduction and de-correlation process. This paper proposes an improved HySime algorithm based on a noise whitening process: it first whitens the noise in the original data, instead of removing noise pixel by pixel, obtains an accurate estimate of the noise covariance matrix, and then applies the HySime algorithm to calculate the signal correlation matrix, improving the precision of the results. Experiments with both simulated and real data in this paper show that: firstly, the improved HySime algorithm is more accurate and stable than the original HySime algorithm; secondly, the results of the improved HySime algorithm are more consistent under different conditions than those of the classic noise subspace projection (NSP) algorithm; finally, the noise whitening process improves the adaptability of the improved HySime algorithm to non-white image noise.
Algorithms for radio networks with dynamic topology
NASA Astrophysics Data System (ADS)
Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose
1991-08-01
The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.
Software Management Environment (SME): Components and algorithms
NASA Technical Reports Server (NTRS)
Hendrick, Robert; Kistler, David; Valett, Jon
1994-01-01
This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
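The construction can be written compactly (our notation, consistent with the description above): if an invertible change of variables $z = \varphi(y)$ carries the Birkhoffian equations in $y$ into Hamiltonian form in $z$, and $\Phi_h$ denotes a symplectic one-step method for that Hamiltonian system, then the induced scheme for the original variables is

$$ y_{n+1} = \Psi_h(y_n), \qquad \Psi_h = \varphi^{-1} \circ \Phi_h \circ \varphi, $$

so each step maps into Hamiltonian variables, advances with the symplectic method, and maps back; $\Psi_h$ then preserves the symplectic structure pulled back through $\varphi$.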
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis as to their clinical usefulness. Three objections to clinical algorithms are answered, including the objection that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored more accurately, and understood better.
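The flow-chart format maps directly onto branching code, with each decision box becoming one conditional. The following toy protocol is entirely hypothetical, invented only to illustrate the format, and is not clinical guidance:

```python
def sore_throat_protocol(fever, exudate, cough):
    """Toy clinical algorithm (hypothetical, for illustrating the
    flow-chart format only): each branch corresponds to one decision
    box in the chart, read top to bottom."""
    if not fever:
        return "symptomatic care"
    if exudate and not cough:
        return "throat culture"
    return "re-examine in 48 hours"

print(sore_throat_protocol(fever=True, exudate=True, cough=False))
```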
NASA Astrophysics Data System (ADS)
Santer, Richard P.; Fell, Frank
2003-05-01
), combining satellite data, evaluation algorithms and value-adding ancillary digital information. This spares the end user from having to invest in expensive equipment or hire specialized personnel. The data processor shall be a generic tool, which may be applied to a large variety of operationally gathered satellite data. In the frame of SISCAL, the processor shall be applied to remotely sensed data of selected coastal areas and lakes in Central Europe and the Eastern Mediterranean, according to the needs of the end users within the SISCAL consortium. A number of measures are required to achieve the objective of the proposed project: (1) Identification and specification of the SISCAL end-user needs for NRT water-related data products accessible to EO techniques. (2) Selection of the most appropriate instruments, evaluation algorithms and ancillary databases required to provide the identified data products. (3) Development of the actual near-real-time data processor for the specified EO data products. (4) Development of the GIS processor adding ancillary digital information to the satellite images and providing the required geographical projections. (5) Development of a product retrieval and management system to handle ordering and distribution of data products between the SISCAL server and the end users, including payment and invoicing. (6) Evaluation of the derived data products in terms of accuracy and usefulness, by comparison with available in-situ measurements and by making use of the local expertise of the end users. (7) Establishment of an Internet server dedicated to internal communication between the consortium members as well as to presenting the SISCAL project to a larger public. (8) Marketing activities, presentation of the data processor to potential external customers, and identification of their exact needs. The innovative aspect of the SISCAL project consists in the generation of NRT data products on water quality parameters from EO data.
Novel biomedical tetrahedral mesh methods: algorithms and applications
NASA Astrophysics Data System (ADS)
Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu
2007-12-01
Tetrahedral mesh generation, as a prerequisite of many soft-tissue simulation methods, becomes very important in virtual surgery programs because of the real-time requirement. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance among tetrahedron quality, boundary preservation and time complexity, using several improved methods. Another mesh algorithm, named Space-Disassembling, is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm and the revised Delaunay algorithm is carried out on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast-reconstruction plastic surgery.
ERIC Educational Resources Information Center
School Science Review, 1978
1978-01-01
Presents sixteen project notes developed by pupils of Chipping Norton School and Bristol Grammar School, in the United Kingdom. These projects include eight biology A-level projects and eight chemistry A-level projects. (HM)
Software For Genetic Algorithms
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
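A minimal generational genetic algorithm, in the spirit of (but not taken from) SPLICER, can be sketched as follows; the operators and parameters are generic textbook choices, and the "one-max" objective is purely illustrative:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=40, generations=60,
                   p_mut=0.02, seed=1):
    """Minimal generational GA sketch (not SPLICER itself): tournament
    selection, one-point crossover, and bit-flip mutation over fixed-
    length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Size-2 tournament: pick two individuals, keep the fitter one.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("one-max").
best = genetic_search(sum)
print(sum(best))   # best one-max score found
```

The framework-like shape (a search driver parameterized by a fitness function) mirrors the abstract's description of SPLICER as an underlying structure for building GA applications.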
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
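The practical meaning of a scheme's order of accuracy can be illustrated with standard central-difference stencils for a first derivative; these are generic textbook formulas, not the paper's aeroacoustics algorithms:

```python
import math

def d_central2(f, x, h):
    # Standard 2nd-order central difference for the first derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

def d_central6(f, x, h):
    # Standard 6th-order central difference for the first derivative:
    # coefficients (-1, 9, -45, 0, 45, -9, 1) / (60 h).
    return (45 * (f(x + h) - f(x - h))
            - 9 * (f(x + 2 * h) - f(x - 2 * h))
            + (f(x + 3 * h) - f(x - 3 * h))) / (60 * h)

x, h = 0.7, 0.1
exact = math.cos(x)                       # d/dx sin(x)
e2 = abs(d_central2(math.sin, x, h) - exact)
e6 = abs(d_central6(math.sin, x, h) - exact)
print(e2, e6)   # the 6th-order stencil is far more accurate at the same h
```

The error gap at a fixed grid spacing is why high-order schemes can hold accuracy over very long propagation times with few points per wavelength.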
Quantum algorithms: an overview
NASA Astrophysics Data System (ADS)
Montanaro, Ashley
2016-01-01
Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.
Algorithmic methods in diffraction microscopy
NASA Astrophysics Data System (ADS)
Thibault, Pierre
Recent diffraction imaging techniques use properties of coherent sources (most notably x-rays and electrons) to transfer a portion of the imaging task to computer algorithms. "Diffraction microscopy" is a method which consists in reconstructing the image of a specimen from its diffraction pattern. Because only the amplitude of a wavefield incident on a detector is measured, reconstruction of the image entails recovering the lost phases. This extension of the "phase problem" commonly met in crystallography is solved only if additional information is available. The main topic of this thesis is the development of algorithmic techniques in diffraction microscopy. In addition to introducing new methods, it is meant to be a review of the algorithmic aspects of the field of diffractive imaging. An overview of the scattering approximations used in the interpretation of diffraction datasets is first given, as well as a numerical propagation tool useful in conditions where known approximations fail. Concepts central to diffraction microscopy, such as oversampling, are then introduced and other similar imaging techniques described. A complete description of iterative reconstruction algorithms follows, with a special emphasis on the difference map, the algorithm used in this thesis. The formalism, based on constraint sets and projection onto these sets, is then defined and explained. Simple projections commonly used in diffraction imaging are then described. The various ways in which experimental realities can affect reconstruction methods are then enumerated. Among the diverse sources of algorithmic difficulties, one finds that noise, missing data and partial coherence are typically the most important. Other related difficulties discussed are the detrimental effects of crystalline domains in a specimen, and the convergence problems occurring when the support of a complex-valued specimen is not well known. The last part of this thesis presents reconstruction results; an
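The constraint-set-and-projection formalism can be illustrated on a toy feasibility problem. Alternating projections is used here as a simpler cousin of the difference map, and the two sets (a line and the unit disk) are chosen purely for illustration:

```python
def project_line(v):
    # Projection onto the affine set {(x, y) : x + y = 1}.
    x, y = v
    t = (x + y - 1.0) / 2.0
    return (x - t, y - t)

def project_disk(v):
    # Projection onto the unit disk {(x, y) : x^2 + y^2 <= 1}.
    x, y = v
    n = (x * x + y * y) ** 0.5
    return v if n <= 1.0 else (x / n, y / n)

# Alternating projections: repeatedly project onto one constraint set,
# then the other, to approach a point satisfying both constraints.
v = (2.0, -3.0)
for _ in range(200):
    v = project_line(project_disk(v))
print(v)   # converges into the intersection of the two sets
```

Phase-retrieval algorithms such as the difference map combine projections like these (onto the measured-modulus set and the support set) in more elaborate update rules, precisely to escape the stagnation that plain alternation suffers with non-convex sets.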
NASA Astrophysics Data System (ADS)
Graf, Norman A.
2001-07-01
An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.
Teaching Computation in Primary School without Traditional Written Algorithms
ERIC Educational Resources Information Center
Hartnett, Judy
2015-01-01
Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…
Exploration of new multivariate spectral calibration algorithms.
Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.
2004-03-01
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as, or better than, the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors, with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms makes the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact
Event-driven management algorithm of an Engineering documents circulation system
NASA Astrophysics Data System (ADS)
Kuzenkov, V.; Zebzeev, A.; Gromakov, E.
2015-04-01
A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automaton models for describing project management algorithms are proposed, and the use of Petri nets for the dynamic design of projects is proposed.
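The token-game semantics of a Petri net, the modeling device proposed above, can be sketched in a few lines; the document-circulation places and transition below are hypothetical examples, not the paper's actual model:

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Firing consumes tokens from input places and produces tokens in
    output places, returning the new marking."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical document-circulation fragment: a draft moves into review
# only when both the draft and a free reviewer are available.
t_review = {"in": {"draft": 1, "reviewer_free": 1}, "out": {"in_review": 1}}
m0 = {"draft": 1, "reviewer_free": 1, "in_review": 0}
print(enabled(m0, t_review))   # True
print(fire(m0, t_review))      # {'draft': 0, 'reviewer_free': 0, 'in_review': 1}
```

Such event-driven firings are what make Petri nets a natural fit for modeling documents moving between workflow states.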
ERIC Educational Resources Information Center
Textor, Martin R.
2005-01-01
The great educational value of projects is emphasized by contrasting negative aspects of the life of today's children with the goals of project work. This is illustrated by a project "Shopping." It is shown what children are learning in such projects and what the advantages of project work are. Relevant topic areas, criteria for selecting a…
ERIC Educational Resources Information Center
Siegenthaler, David
For 37 states in the United States, Project Wild has become an officially sanctioned, distributed and funded "environmental and conservation education program." For those who are striving to implement focused, sequential learning programs, as well as those who wish to promote harmony through a non-anthropocentric world view, Project…
Content Addressable Memory Project
1990-11-01
The Content Addressable Memory Project consists of the development of several experimental software systems on an AMT Distributed Array Processor... searching (database) compiler algorithms memory management other systems software) Linear C is an unlovely hybrid language which imports the CAM... memory from AMT's operating system for the DAP; however, other than this limitation, the memory management routines work exactly as their C counterparts
License plate detection algorithm
NASA Astrophysics Data System (ADS)
Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds
2013-12-01
A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and therefore geometrical distortion and interference from trees, this result can be considered passable. Correlations between source data, such as license plate dimensions and texture, camera location and others, and the parameters of the algorithm were also defined.
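A crude stand-in for intensity-transition analysis is to count strong horizontal transitions per row, since plate regions show dense dark/light alternation from characters on the background. The threshold and the tiny synthetic image are invented for illustration and are not the paper's method in detail:

```python
def row_transition_counts(image, threshold=40):
    """Count strong horizontal intensity transitions per row; rows
    crossing a license plate typically score highest (toy stand-in for
    the gradient analysis described above)."""
    counts = []
    for row in image:
        c = sum(1 for a, b in zip(row, row[1:]) if abs(a - b) >= threshold)
        counts.append(c)
    return counts

# Synthetic 4-row gray-level "image": the third row alternates like plate text.
img = [
    [100] * 8,
    [100, 105, 102, 101, 99, 100, 103, 100],
    [20, 220, 20, 220, 20, 220, 20, 220],
    [90] * 8,
]
counts = row_transition_counts(img)
print(counts)                    # [0, 0, 7, 0]
print(counts.index(max(counts))) # row 2 is the plate candidate
```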
Distributed Minimum Hop Algorithms
1982-01-01
acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin Algol... precise behavior of the algorithm under these circumstances is described by the pidgin Algol program in the appendix which is executed by each node. The... l) < N!(2) for each neighbor j, and thus by induction, J-1 N!(2-1) < n-i + (Z-1) + N!(Z-1), completing the proof. Algorithm D1 in Pidgin Algol. It is
Infrared algorithm development for ocean observations
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1995-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, involvement in field studies, production and evaluation of new computer networking strategies, and objective analysis approaches.
Graph algorithms in the titan toolkit.
McLendon, William Clarence, III; Wylie, Brian Neil
2009-10-01
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
Parallel Algorithms for the Exascale Era
Robey, Robert W.
2016-10-19
New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
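Reproducibility of global sums hinges on floating-point rounding, which depends on summation order. A compensated sum (Neumaier's variant of Kahan summation, shown here as a standard technique rather than necessarily the students' method) makes the result far less order-sensitive:

```python
def neumaier_sum(values):
    """Compensated summation (Neumaier's refinement of Kahan's
    algorithm): carry the rounding error of every addition in a
    separate accumulator, one classic building block for reproducible
    global sums."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            comp += (total - t) + v   # low-order bits of v were lost
        else:
            comp += (v - t) + total   # low-order bits of total were lost
        total = t
    return total + comp

vals = [1.0, 1e100, 1.0, -1e100]
print(sum(vals))           # 0.0 -- both 1.0s vanish in plain rounding
print(neumaier_sum(vals))  # 2.0 -- the compensation recovers them
```

Parallel reductions that combine per-core compensated partial sums in a fixed tree order extend the same idea to millions of cores.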
Robot Guidance Using A Morphological Vision Algorithm
NASA Astrophysics Data System (ADS)
Lougheed, Robert M.; Tomko, Leonard M.
1985-12-01
An algorithm has been developed to guide a robot by identifying the orientation of a randomly-acquired part held in the robot's gripper. A program implementing this algorithm is being used to demonstrate the feasibility of part-independent robotic bin picking. The project task was to extract unmodified industrial parts from a compartmentalized tray and position them on a fixture. The parts are singulated in the compartments but are positionally and rotationally unconstrained. The part is acquired based upon three-dimensional image data which is processed by a 3D morphological algorithm described in [1]. The vision algorithm discussed here inspects the parts, determines their orientation and calculates the robot trajectory to a keyed housing with which the part must be mated. When parts are extracted during a bin picking operation, their position and orientation are affected by many factors, such as gripper insertion-induced motion, interference with container side walls during extraction, slippage due to gravity and vibration during robot motions. The loss of the known position and orientation of the part in the robot gripper makes accurate fixturing impossible. Our solution to this problem was to redetermine the orientation of the part after acquisition. This paper describes the application in detail and discusses the problems encountered in robot acquisition of unconstrained parts. Next, the physical setup and image acquisition system, including lighting and optical components, are discussed. The principles of morphological (shape-based) image processing are presented, followed by a description of the interactive algorithm development process which was used for this project. The algorithm is illustrated step by step with a series of diagrams showing the effects of the transformations applied to the data. The algorithms were run on ERIM's new fourth generation hybrid image processing architecture, the Cyto-HSS, which is described in detail in [2], and the
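The erosion and dilation primitives of morphological (shape-based) image processing mentioned above can be sketched on binary images; this is a generic textbook formulation, not ERIM's 3D algorithm:

```python
def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring element
    fits entirely inside the foreground around it."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy, dx in se))
    return out

def dilate(img, se):
    """Binary dilation: a pixel turns on if any pixel under the
    (reflected) structuring element is foreground."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                0 <= y - dy < h and 0 <= x - dx < w and img[y - dy][x - dx]
                for dy, dx in se))
    return out

# 3x3 cross structuring element; "opening" (erosion then dilation)
# removes foreground specks smaller than the element.
cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 0],
]
opened = dilate(erode(img, cross), cross)
print(opened[1][1])   # the isolated speck has been removed: 0
```

Shape-selective filters of this kind are what let a morphological vision system isolate part silhouettes regardless of gray-level detail.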
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
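The first type of subalgorithm described above can be sketched as a brute-force search over shift amounts and mask positions; the function name and search bounds here are illustrative, not taken from the original code:

```python
def find_shift_mask(keys, max_shift=32, max_bits=8):
    """Search for (shift, mask) such that (key >> shift) & mask is unique
    for every key in the set -- a collision-free mapping that needs no
    secondary hashing and no table search on lookup."""
    n = len(keys)
    for shift in range(max_shift):
        # Try progressively wider masks at every bit offset (a "rotating mask").
        for width in range(max(1, n.bit_length() - 1), max_bits + 1):
            for offset in range(max_bits - width + 1):
                mask = ((1 << width) - 1) << offset
                if len({(k >> shift) & mask for k in keys}) == n:
                    return shift, mask
    return None  # no solution within the search bounds

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
solution = find_shift_mask(keys)
```

Each solution found this way yields a constant-time membership test: hash the query with the same shift and mask and index a table of length `mask + 1`. Selecting among multiple solutions for the greatest compression, as the abstract describes, would mean preferring the narrowest mask.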
A digitally reconstructed radiograph algorithm calculated from first principles
Staub, David; Murphy, Martin J.
2013-01-15
Purpose: To develop an algorithm for computing realistic digitally reconstructed radiographs (DRRs) that match real cone-beam CT (CBCT) projections with no artificial adjustments. Methods: The authors used measured attenuation data from cone-beam CT projection radiographs of different materials to obtain a function to convert CT number to linear attenuation coefficient (LAC). The effects of scatter, beam hardening, and veiling glare were first removed from the attenuation data. Using this conversion function the authors calculated the line integral of LAC through a CT along rays connecting the radiation source and detector pixels with a ray-tracing algorithm, producing raw DRRs. The effects of scatter, beam hardening, and veiling glare were then included in the DRRs through postprocessing. Results: The authors compared actual CBCT projections to DRRs produced with all corrections (scatter, beam hardening, and veiling glare) and to uncorrected DRRs. Algorithm accuracy was assessed through visual comparison of projections and DRRs, pixel intensity comparisons, intensity histogram comparisons, and correlation plots of DRR-to-projection pixel intensities. In general, the fully corrected algorithm provided a small but nontrivial improvement in accuracy over the uncorrected algorithm. The authors also investigated both measurement- and computation-based methods for determining the beam hardening correction, and found the computation-based method to be superior, as it accounted for nonuniform bowtie filter thickness. The authors benchmarked the algorithm for speed and found that it produced DRRs in about 0.35 s for full detector and CT resolution at a ray step-size of 0.5 mm. Conclusions: The authors have demonstrated a DRR algorithm calculated from first principles that accounts for scatter, beam hardening, and veiling glare in order to produce accurate DRRs. The algorithm is computationally efficient, making it a good candidate for iterative CT reconstruction techniques
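The raw-DRR step is a ray-traced line integral of the linear attenuation coefficient (LAC). A heavily simplified sketch follows: nearest-voxel sampling, a made-up water-like CT-number-to-LAC conversion in mm^-1, and none of the scatter, beam-hardening, or veiling-glare corrections the paper applies:

```python
import numpy as np

def hu_to_mu(ct, mu_water=0.02):
    """Hypothetical linear CT-number -> LAC conversion (mm^-1); the paper
    instead derives this function from measured attenuation data."""
    return mu_water * (1.0 + ct / 1000.0)

def drr_ray(volume, src, det, step=0.5):
    """Integrate the LAC along the ray from the source to one detector
    pixel, then convert the line integral to a primary-beam intensity."""
    src, det = np.asarray(src, float), np.asarray(det, float)
    direction = det - src
    length = np.linalg.norm(direction)
    direction /= length
    total, t = 0.0, 0.0
    while t < length:
        x, y, z = np.round(src + t * direction).astype(int)
        if (0 <= x < volume.shape[0] and 0 <= y < volume.shape[1]
                and 0 <= z < volume.shape[2]):
            total += hu_to_mu(volume[x, y, z]) * step  # riemann sum of mu ds
        t += step
    return np.exp(-total)  # Beer-Lambert: I/I0 for the raw, uncorrected DRR
```

Repeating this for every detector pixel produces one raw DRR; the 0.5 mm step matches the ray step-size benchmarked in the abstract.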
Transitional Division Algorithms.
ERIC Educational Resources Information Center
Laing, Robert A.; Meyer, Ruth Ann
1982-01-01
A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…
Ultrametric Hierarchical Clustering Algorithms.
ERIC Educational Resources Information Center
Milligan, Glenn W.
1979-01-01
Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)
The Training Effectiveness Algorithm.
ERIC Educational Resources Information Center
Cantor, Jeffrey A.
1988-01-01
Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)
Evaluating ACLS Algorithms for the International Space Station (ISS) - A Paradigm Revisited
NASA Technical Reports Server (NTRS)
Alexander, Dave; Brandt, Keith; Locke, James; Hurst, Victor, IV; Mack, Michael D.; Pettys, Marianne; Smart, Kieran
2007-01-01
The ISS may have communication gaps of up to 45 minutes during each orbit, and it is therefore imperative to have medical protocols, including an effective ACLS algorithm, that can be executed reliably and autonomously during flight. The aim of this project was to compare the effectiveness of the current ACLS algorithm with that of an improved algorithm having a new navigation format.
Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments
Alireza Haghighat
2003-05-07
This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Sampling Within k-Means Algorithm to Cluster Large Datasets
Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George
2011-08-01
Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on both more varied test datasets and real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. By manipulating width and confidence level, we could find the lowest sample sizes for which the algorithm remains acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
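The idea of clustering a random sample and then assigning the full dataset to the sample-derived centers can be sketched in a few lines. This is a generic illustration of the approach, not the authors' code; the sampling fraction and Lloyd iteration count are arbitrary choices:

```python
import numpy as np

def kmeans(X, k, iters=50, rng=None):
    """Plain Lloyd's algorithm (illustrative; no k-means++ seeding)."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def sampled_kmeans(X, k, frac=0.1, rng=None):
    """Cluster a random sample, then assign every point of the full
    dataset to the nearest sample-derived center."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), max(k, int(frac * len(X))), replace=False)
    centers, _ = kmeans(X[idx], k, rng=rng)
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels
```

The expensive Lloyd iterations run only on the sample; the single full-data pass at the end is what keeps the runtime low.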
Randomized Algorithms for Matrices and Data
NASA Astrophysics Data System (ADS)
Mahoney, Michael W.
2012-03-01
This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
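The statistical leverage scores described above are cheap to read off from a thin SVD: they are the squared row norms of the dominant left singular vectors, i.e., the diagonal of the projection matrix onto the top-k left singular subspace. A minimal sketch:

```python
import numpy as np

def leverage_scores(A, k=None):
    """Leverage scores of the rows of A with respect to its top-k
    left singular subspace (k defaults to the full column dimension)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = A.shape[1] if k is None else k
    return (U[:, :k] ** 2).sum(axis=1)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
scores = leverage_scores(A)
# Each score lies in [0, 1] and they sum to k (here 5); rows with
# unusually large scores are the influential ones for sampling.
```

Randomized matrix algorithms of the kind reviewed here use these scores (or fast approximations to them) as sampling probabilities for rows or columns.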
Analysis of Community Detection Algorithms for Large Scale Cyber Networks
Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee; Mackey, Patrick S.; Springer, John
2016-09-30
The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concepts of clustering and community detection, followed by the research question that the team aimed to address. Further, the paper describes the graph metrics that were considered in order to shortlist algorithms, followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section in the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results as well as future work.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1991-01-01
The dynamic and composite nature of propagation impairments incurred on Earth-space communications links at frequencies in and above the 30/20 GHz Ka band (i.e., rain attenuation, cloud and/or clear-air scintillation, etc.), combined with the need to counter such degradations once the small link margins have been exceeded, necessitates dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.
Parallel vision algorithms. Annual technical report No. 1, 1 October 1986-30 September 1987
Ibrahim, H.A.; Kender, J.R.; Brown, L.G.
1987-10-01
The objective of this project is to develop and implement, on highly parallel computers, vision algorithms that combine stereo, texture, and multi-resolution techniques for determining local surface orientation and depth. Such algorithms will immediately serve as front-ends for autonomous land vehicle navigation systems. During the first year of the project, efforts have concentrated on two fronts: first, developing and testing the parallel programming environment that will be used to develop, implement, and test the parallel vision algorithms; second, developing and testing multi-resolution stereo and texture algorithms. This report describes the status and progress on these two fronts. The authors first describe the programming environment developed and the mapping scheme that allows efficient use of the Connection Machine for pyramid (multi-resolution) algorithms. Second, they present algorithms and test results for the multi-resolution stereo and texture algorithms. Initial results from early efforts at integrating the stereo and texture algorithms are also presented.
Improved Heat-Stress Algorithm
NASA Technical Reports Server (NTRS)
Teets, Edward H., Jr.; Fehn, Steven
2007-01-01
NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
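For reference, the WBGT itself combines the three thermometer readings with fixed weights; the standard outdoor weighting (ISO 7243 / ACGIH) is the 0.7/0.2/0.1 rule below. The site-specific improvement in the abstract lies in estimating these component temperatures from routine atmospheric measurements, which is not reproduced here:

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
    """Standard outdoor wet-bulb globe temperature:
    70% natural wet-bulb, 20% black-globe, 10% dry-bulb."""
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

# e.g. wet-bulb 25 C, globe 40 C, dry-bulb 30 C -> WBGT of 28.5 C
wbgt = wbgt_outdoor(25.0, 40.0, 30.0)
```

An estimation scheme like the one described would supply `t_wet_bulb` and `t_globe` computed from humidity, wind, and solar-radiation data instead of direct instrument readings.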
LDRD Report: Scheduling Irregular Algorithms
Boman, Erik G.
2014-10-01
This LDRD project was a campus exec fellowship to fund (in part) Donald Nguyen’s PhD research at UT-Austin. His work has focused on parallel programming models, and scheduling irregular algorithms on shared-memory systems using the Galois framework. Galois provides a simple but powerful way for users and applications to automatically obtain good parallel performance using certain supported data containers. The naïve user can write serial code, while advanced users can optimize performance by advanced features, such as specifying the scheduling policy. Galois was used to parallelize two sparse matrix reordering schemes: RCM and Sloan. Such reordering is important in high-performance computing to obtain better data locality and thus reduce run times.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Identification of Secret Algorithms Using Oracle Attacks
2011-01-28
very little information. Since it is likely for steganography to be used on very large multimedia files, e.g. audio and video, there are substantial... algorithms, the PI developed methods to better detect and estimate +/-K embedding, a common form of steganography. This research project was undertaken at... exist at all. The use of an unusually artificial carrier is suspicious on its face, negating the purpose of steganography. If, however, such a
Leukocyte Recognition Using EM-Algorithm
NASA Astrophysics Data System (ADS)
Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.
This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils and Lymphocytes. The image pattern is projected down to a lower-dimensional subspace using PCA; the probability density function for each class is modeled with a Gaussian mixture using the EM-Algorithm. A new cell image is classified using the maximum a posteriori decision rule.
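The pipeline (PCA projection, per-class density model, MAP rule) can be sketched as follows. For brevity this fits a single Gaussian per class rather than the EM-fitted Gaussian mixtures of the paper, and all function names are illustrative:

```python
import numpy as np

def fit_pca(X, d):
    """Mean and top-d principal directions of the training data."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:d]

def fit_gaussian(Z):
    """One Gaussian per class -- a one-component simplification of the
    EM-fitted mixtures; a small ridge keeps the covariance invertible."""
    return Z.mean(axis=0), np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])

def log_gauss(z, mu, cov):
    d = len(mu)
    diff = z - mu
    return -0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(cov)[1]
                   + diff @ np.linalg.solve(cov, diff))

def classify(x, mean, comps, classes, priors):
    z = comps @ (x - mean)          # project into the PCA subspace
    scores = [np.log(p) + log_gauss(z, mu, cov)
              for p, (mu, cov) in zip(priors, classes)]
    return int(np.argmax(scores))   # maximum a posteriori decision rule
```

Replacing `fit_gaussian`/`log_gauss` with a mixture model fitted by EM recovers the method of the paper.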
Lam, Ka Chun; Gu, Xianfeng; Lui, Lok Ming
2015-10-01
We address the registration problem of genus-one surfaces (such as vertebrae bones) with prescribed landmark constraints. The high-genus topology of the surfaces makes it challenging to obtain a unique and bijective surface mapping that matches landmarks consistently. This work proposes to tackle this registration problem using a special class of quasi-conformal maps called Teichmüller maps (T-Maps). A landmark constrained T-Map is the unique mapping between genus-1 surfaces that minimizes the maximal conformality distortion while matching the prescribed feature landmarks. Existence and uniqueness of the landmark constrained T-Map are theoretically guaranteed. This work presents an iterative algorithm to compute the T-Map. The main idea is to represent the set of diffeomorphism using the Beltrami coefficients (BC). The BC is iteratively adjusted to an optimal one, which corresponds to our desired T-Map that matches the prescribed landmarks and satisfies the periodic boundary condition on the universal covering space. Numerical experiments demonstrate the effectiveness of our proposed algorithm. The method has also been applied to register vertebrae bones with prescribed landmark points and curves, which gives accurate surface registrations.
Influence of DBT reconstruction algorithm on power law spectrum coefficient
NASA Astrophysics Data System (ADS)
Vancamberg, Laurence; Carton, Ann-Katherine; Abderrahmane, Ilyes H.; Palma, Giovanni; Milioni de Carvalho, Pablo; Iordache, Răzvan; Muller, Serge
2015-03-01
In breast X-ray images, texture has been characterized by a noise power spectrum (NPS) that has an inverse power-law shape described by its slope β in the log-log domain. It has been suggested that the magnitude of the power-law spectrum coefficient β is related to mass lesion detection performance. We assessed β in reconstructed digital breast tomosynthesis (DBT) images to evaluate its sensitivity to different typical reconstruction algorithms including simple back projection (SBP), filtered back projection (FBP) and a simultaneous iterative reconstruction algorithm (SIRT 30 iterations). Results were further compared to the β coefficient estimated from 2D central DBT projections. The calculations were performed on 31 unilateral clinical DBT data sets and simulated DBT images from 31 anthropomorphic software breast phantoms. Our results show that β highly depends on the reconstruction algorithm; the highest β values were found for SBP, followed by reconstruction with FBP, while the lowest β values were found for SIRT. In contrast to previous studies, we found that β is not always lower in reconstructed DBT slices, compared to 2D projections and this depends on the reconstruction algorithm. All β values estimated in DBT slices reconstructed with SBP were larger than β values from 2D central projections. Our study also shows that the reconstruction algorithm affects the symmetry of the breast texture NPS; the NPS of clinical cases reconstructed with SBP exhibit the highest symmetry, while the NPS of cases reconstructed with SIRT exhibit the highest asymmetry.
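Estimating β reduces to a straight-line fit in the log-log domain, since NPS(f) ~ 1/f^β implies log NPS = -β log f + const. A minimal sketch on a synthetic spectrum:

```python
import numpy as np

def power_law_beta(nps, freqs):
    """Power-law coefficient beta of a noise power spectrum,
    NPS(f) ~ 1/f**beta, via least squares in the log-log domain."""
    slope, _ = np.polyfit(np.log(freqs), np.log(nps), 1)
    return -slope  # the fitted slope is -beta

freqs = np.linspace(0.1, 1.0, 50)   # spatial frequencies (arbitrary units)
nps = freqs ** -3.0                 # synthetic spectrum with beta = 3
```

In practice the NPS would be estimated radially from reconstructed DBT slices or projections before the fit, which is where the reconstruction-algorithm dependence reported above enters.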
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
NASA Astrophysics Data System (ADS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-10-01
Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment.
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Implementation of Parallel Algorithms
1993-06-30
their socia ’ relations or to achieve some goals. For example, we define a pair-wise force law of i epulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media . The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Pu’ ishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multi-revolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster than most
Parallel Wolff Cluster Algorithms
NASA Astrophysics Data System (ADS)
Bae, S.; Ko, S. H.; Coddington, P. D.
The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
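For context, the serial Wolff update that such parallel implementations start from looks like this (2D Ising model, J = 1, periodic boundaries; a textbook sketch, not the paper's code):

```python
import numpy as np

def wolff_step(spins, beta, rng):
    """One Wolff single-cluster update: grow a cluster of aligned spins,
    adding each aligned neighbour with probability 1 - exp(-2*beta),
    then flip the whole cluster. Returns the cluster size."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = (rng.integers(L), rng.integers(L))
    cluster_spin = spins[seed]
    stack, in_cluster = [seed], {seed}
    while stack:
        i, j = stack.pop()
        # Four nearest neighbours with periodic wrap-around.
        for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
            if ((ni, nj) not in in_cluster and spins[ni, nj] == cluster_spin
                    and rng.random() < p_add):
                in_cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in in_cluster:
        spins[i, j] = -cluster_spin
    return len(in_cluster)
```

The irregular clusters this loop produces are exactly what makes the method hard to parallelize, as the abstract notes: their size, shape, and position change on every update.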
Content addressable memory project
NASA Technical Reports Server (NTRS)
Hall, Josh; Levy, Saul; Smith, D.; Wei, S.; Miyake, K.; Murdocca, M.
1991-01-01
The progress on the Rutgers CAM (Content Addressable Memory) Project is described. The overall design of the system is completed at the architectural level and described. The machine is composed of two kinds of cells: (1) the CAM cells, which include both memory and processor, and support local processing within each cell; and (2) the tree cells, which have a smaller instruction set and provide global processing over the CAM cells. A parameterized design of the basic CAM cell is completed. Progress was made on the final specification of the CPS. The machine architecture was driven by the design of algorithms whose requirements are reflected in the resulting instruction set(s). A few of these algorithms are described.
A Probabilistic Cell Tracking Algorithm
NASA Astrophysics Data System (ADS)
Steinacker, Reinhold; Mayer, Dieter; Leiding, Tina; Lexer, Annemarie; Umdasch, Sarah
2013-04-01
The research described below was carried out during the EU project Lolight, the development of a low-cost, novel and accurate lightning mapping and thunderstorm (supercell) tracking system. The project aims to develop a small-scale tracking method to determine and nowcast characteristic trajectories and velocities of convective cells and cell complexes. The results of the algorithm will provide a higher accuracy than current locating systems distributed on a coarse scale. Input data for the developed algorithm are two temporally separated lightning density fields. Additionally, a Monte Carlo method that minimizes a cost function is utilized, leading to a probabilistic forecast for the movement of thunderstorm cells. In the first step the correlation coefficients between the first and the second density field are computed. To do this, the first field is shifted by every physically allowed shifting vector. The maximum length of each vector is determined by the maximum possible speed of thunderstorm cells and the difference in time for both density fields. To eliminate ambiguities in determination of directions and velocities, the so-called Random Walker of the Monte Carlo process is used. Using this method a grid point is selected at random. Moreover, one vector out of all predefined shifting vectors is suggested - also at random but with a probability that is related to the correlation coefficient. If this exchange of shifting vectors reduces the cost function, the new direction and velocity are accepted. Otherwise it is discarded. This process is repeated until the change in the cost function falls below a defined threshold. The Monte Carlo run gives information about the percentage of accepted shifting vectors for all grid points. In the course of the forecast, amplifications of cell density are permitted. For this purpose, intensity changes between the investigated areas of both density fields are taken into account. Knowing the direction and speed of thunderstorm
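The scoring step at the heart of the method, correlating one density field with shifted copies of the other, can be sketched deterministically. This exhaustive search is a simplified stand-in for the Monte Carlo random walker, and it uses wrap-around shifts for brevity:

```python
import numpy as np

def best_shift(field1, field2, max_shift=5):
    """Score every candidate displacement vector by the correlation
    coefficient between the shifted first field and the second field,
    and return the best-scoring (dy, dx) and its correlation."""
    best, best_r = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(field1, dy, axis=0), dx, axis=1)
            r = np.corrcoef(shifted.ravel(), field2.ravel())[0, 1]
            if r > best_r:
                best, best_r = (dy, dx), r
    return best, best_r
```

The random walker described above replaces this full scan with stochastic, per-grid-point proposals weighted by the same correlation coefficients, which is what allows spatially varying cell motion.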
Formation Algorithms and Simulation Testbed
NASA Technical Reports Server (NTRS)
Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward
2004-01-01
Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing, and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Undergraduate computational physics projects on quantum computing
NASA Astrophysics Data System (ADS)
Candela, D.
2015-08-01
Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
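As one concrete example of the kind of program such a project produces, here is a minimal state-vector simulation of Grover's search (an illustrative sketch, not code from the course materials):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm on a state vector of 2**n amplitudes
    and return the most probable measurement outcome."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # near-optimal iteration count
        state[marked] *= -1.0                     # oracle: phase-flip the target
        state = 2.0 * state.mean() - state        # diffusion: invert about the mean
    return int(np.argmax(state ** 2))

# searching 2**6 = 64 items takes only about pi/4 * 8 = 6 oracle calls
found = grover_search(6, marked=13)
```

Because the full state vector has 2**n entries, adding qubits quickly stresses memory, which is exactly the scaling lesson the more advanced projects explore.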
A Combined Reconstruction Algorithm for Limited-View Multi-Element Photoacoustic Imaging
NASA Astrophysics Data System (ADS)
Yang, Di-Wu; Xing, Da; Zhao, Xue-Hui; Pan, Chang-Ning; Fang, Jian-Shu
2010-05-01
We present a photoacoustic imaging system with a linear transducer array scanning in limited-view fields and develop a combined reconstruction algorithm, a combination of the limited-field filtered back projection (LFBP) algorithm and the simultaneous iterative reconstruction technique (SIRT), to reconstruct the optical absorption distribution. In this algorithm, the LFBP algorithm is exploited to reconstruct the original photoacoustic image, and the SIRT algorithm is then used to improve the quality of the final reconstructed image. Numerical simulations with calculated incomplete data validate the reliability of this algorithm, and experimental results further demonstrate that the combined reconstruction algorithm effectively reduces artifacts and blur and yields better reconstruction quality than the LFBP algorithm alone.
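For a generic discretized imaging model A x = b, the SIRT refinement stage can be sketched as below (a textbook SIRT iteration with row- and column-sum weighting, not the authors' implementation):

```python
import numpy as np

def sirt(A, b, iterations=500):
    """Simultaneous iterative reconstruction: x <- x + C A^T R (b - A x),
    where R and C hold inverse row and column sums of the system matrix."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

# toy consistent system standing in for limited-view projection data
A = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
x_true = np.array([1.0, 2.0])
x_rec = sirt(A, A @ x_true)
```

In a combined scheme like the one described, x would be initialized with the LFBP image rather than with zeros, so the iterations only have to correct its artifacts.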
A limited-memory algorithm for bound-constrained optimization
Byrd, R.H.; Peihuang, L.; Nocedal, J.
1996-03-01
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
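This algorithm survives today as L-BFGS-B, which SciPy exposes directly; a bound-constrained problem can be solved as follows (using the standard Rosenbrock test function as the example objective):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # classic Rosenbrock test function, with its minimum at (1, 1)
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

# simple bounds on both variables, handled by the gradient projection step
res = minimize(rosen, x0=[0.0, 0.0], method="L-BFGS-B",
               bounds=[(-2.0, 2.0), (-2.0, 2.0)])
```

The limited-memory Hessian approximation is what keeps the per-iteration cost linear in the number of variables, which is why the method scales to the large problems tested in the paper.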
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
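EPSAs extend generalized pattern search, whose core step-halving logic looks like this (a deterministic sketch of the underlying pattern search with the step-size rule that the mutation adaptation mimics; the evolutionary machinery itself is omitted):

```python
def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Compass pattern search: poll +/- step along each coordinate, keep any
    improving move, and halve the step when a full poll fails."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5   # a failed poll certifies |x_i| <= step/2 near a minimum
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, val = pattern_search(sphere, [3.0, -2.0])
```

The stationary-point convergence theory cited above rests on exactly this property: the step size can only shrink when no polling direction improves the objective.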
Algorithmization in Learning and Instruction.
ERIC Educational Resources Information Center
Landa, L. N.
An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurately the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs of that data are also included.
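A common Maximum Entropy estimator is Burg's method, which fits an autoregressive model by minimizing forward and backward prediction error; a compact sketch (not the report's FORTRAN code) is:

```python
import numpy as np

def burg_psd(x, order, n_freq=512):
    """Burg's Maximum Entropy spectral estimate: compute AR reflection
    coefficients, then evaluate the all-pole spectrum on [0, 0.5)."""
    f = np.asarray(x, float).copy()   # forward prediction error
    b = f.copy()                      # backward prediction error
    a = np.array([1.0])               # AR polynomial, a[0] = 1
    e = np.mean(f ** 2)               # prediction error power
    for _ in range(order):
        num = -2.0 * np.dot(f[1:], b[:-1])
        den = np.dot(f[1:], f[1:]) + np.dot(b[:-1], b[:-1])
        k = num / den                 # reflection coefficient
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + k * a_ext[::-1]   # Levinson-style order update
        f, b = f[1:] + k * b[:-1], b[:-1] + k * f[1:]
        e *= 1.0 - k * k
    freqs = np.linspace(0.0, 0.5, n_freq)
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
    return freqs, e / np.abs(z @ a) ** 2

# a sinusoid at 0.1 cycles/sample (plus slight noise for numerical stability)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 0.1 * np.arange(400)) + 0.01 * rng.standard_normal(400)
freqs, psd = burg_psd(sig, order=4)
```

The sharp all-pole peak is why these methods resolve closely spaced components far better than a periodogram of the same record length, which is the resolution criterion discussed above.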
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate the surface ice temperature, which in turn is used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
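The mixing and conversion steps reduce to two one-line formulas; a sketch with illustrative emissivity and temperature values (not the algorithm's tuned coefficients):

```python
def effective_emissivity(ice_conc, e_ice, e_water):
    # linear mixing of ice and open-water emissivities by ice concentration
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def brightness_to_emissivity(tb, surface_temp):
    # Rayleigh-Jeans approximation: brightness temperature TB = e * T
    return tb / surface_temp

# illustrative numbers only: 50% ice cover, then one channel conversion
e_eff = effective_emissivity(0.5, 0.92, 0.60)
e_37 = brightness_to_emissivity(230.0, 250.0)
```

Working in emissivity rather than brightness temperature is what removes the dependence on the physical ice temperature, hence the improvement in cold continental and marginal ice zones.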
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
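A recursive differentiator of the general kind described can be sketched as a first difference followed by first-order recursive smoothing (illustrative only; the paper's filter coefficients are not reproduced here):

```python
def rate_estimates(samples, dt, alpha=0.3):
    """Differentiate a sampled signal recursively: finite-difference each
    pair of samples, then exponentially smooth to trade noise for rise time."""
    rate, rates = 0.0, []
    for prev, curr in zip(samples, samples[1:]):
        raw = (curr - prev) / dt          # noisy instantaneous rate
        rate += alpha * (raw - rate)      # recursive low-pass update
        rates.append(rate)
    return rates

# a star centroid drifting across the CCD at 0.5 pixel per sample
positions = [0.5 * i for i in range(50)]
est = rate_estimates(positions, dt=1.0)
```

A smaller alpha lowers the output variance at the cost of a longer rise time, which is exactly the VRF-versus-rise-time tradeoff quantified above.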
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed. These are horizontal (x and y), range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
1993-01-01
A map projection is used to portray all or part of the round Earth on a flat surface. This cannot be done without some distortion. Every projection has its own set of advantages and disadvantages. There is no "best" projection. The mapmaker must select the one best suited to the needs, reducing distortion of the most important features. Mapmakers and mathematicians have devised almost limitless ways to project the image of the globe onto paper. Scientists at the U. S. Geological Survey have designed projections for their specific needs—such as the Space Oblique Mercator, which allows mapping from satellites with little or no distortion. This document gives the key properties, characteristics, and preferred uses of many historically important projections and of those frequently used by mapmakers today.
Parallel Algorithms and Patterns
Robey, Robert W.
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
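For instance, the prefix-scan pattern can be organized so that each sweep is a fully parallel step; the sequential Python below mimics the Hillis-Steele data flow (a sketch, not material from the presentation):

```python
def inclusive_scan(values):
    """Hillis-Steele inclusive prefix scan: log2(n) sweeps, each of which
    could run fully in parallel across elements on a parallel machine."""
    out = list(values)
    stride = 1
    while stride < len(out):
        # every element i >= stride adds in its neighbor stride positions back
        out = [out[i] + (out[i - stride] if i >= stride else 0)
               for i in range(len(out))]
        stride *= 2
    return out
```

On a GPU or many-core machine, each list comprehension above would be one parallel kernel launch, so the whole scan takes O(log n) steps instead of O(n).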
Improved Chaff Solution Algorithm
2009-03-01
As part of the Technology Demonstration Project (TDP) on the shipboard integration of sensors and weapon systems (SISWS), an algorithm was developed to automatically determine...
Algorithm implementation on the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Zang, Thomas A.
1987-01-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear contraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
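The projection idea can be sketched directly: orthonormalize the constraint normals with Gram-Schmidt and subtract their components from the gradient (an illustrative reconstruction, not the paper's program):

```python
import numpy as np

def project_gradient(g, constraint_normals):
    """Project gradient g onto the null space of the active linear
    constraints, using Gram-Schmidt on the constraint normals."""
    basis = []
    for a in constraint_normals:
        q = a.astype(float).copy()
        for b in basis:
            q -= (q @ b) * b          # remove components along earlier normals
        n = np.linalg.norm(q)
        if n > 1e-12:                 # skip linearly dependent normals
            basis.append(q / n)
    p = g.astype(float).copy()
    for b in basis:
        p -= (p @ b) * b              # the projected step satisfies a @ p = 0
    return p

# example: keep the step inside the plane x + y = const
a = np.array([1.0, 1.0, 0.0])
p = project_gradient(np.array([1.0, 0.0, 0.0]), [a])
```

Explicitly skipping dependent normals is one way such a scheme can sidestep the rank difficulties that motivate alternatives to the Rosen projection method.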
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in the control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
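The linear-programming stage can be illustrated with SciPy; the sensitivity numbers below are made up for the sketch, standing in for the compact-model outputs, and the variable names are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sensitivities: thrust gain per unit trim on two controls,
# subject to a fan-stall-margin budget and trim authority limits.
thrust_gain = np.array([3.0, 1.0])
c = -thrust_gain                       # linprog minimizes, so negate to maximize
A_ub = [[2.0, 1.0]]                    # stall margin consumed per unit trim
b_ub = [1.0]                           # stall margin available
bounds = [(-0.5, 0.5), (-0.5, 0.5)]    # trim authority
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
```

As in the PSC scheme, the resulting trims (res.x) would define a new operating point, the sensitivities would be re-evaluated there, and the LP repeated until the trims stop improving the objective.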
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Quantum gate decomposition algorithms.
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequentially coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.
Algorithm for reaction classification.
Kraut, Hans; Eiblmaier, Josef; Grethe, Guenter; Löw, Peter; Matuszczyk, Heinz; Saller, Heinz
2013-11-25
Reaction classification has important applications, and many approaches to classification have been applied. Our own algorithm tests all maximum common substructures (MCS) between all reactant and product molecules in order to find an atom mapping containing the minimum chemical distance (MCD). Recent publications have concluded that new MCS algorithms need to be compared with existing methods in a reproducible environment, preferably on a generalized test set, yet the number of test sets available is small, and they are not truly representative of the range of reactions that occur in real reaction databases. We have designed a challenging test set of reactions and are making it publicly available and usable with InfoChem's software or other classification algorithms. We supply a representative set of example reactions, grouped into different levels of difficulty, from a large number of reaction databases that chemists actually encounter in practice, in order to demonstrate the basic requirements for a mapping algorithm to detect the reaction centers in a consistent way. We invite the scientific community to contribute to the future extension and improvement of this data set, to achieve the goal of a common standard.
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.
Engine Removal Projection Tool
Ferryman, Thomas A.; Matzke, Brett D.; Wilson, John E.; Sharp, Julia L.; Greitzer, Frank L.
2005-06-02
The US Navy has over 3500 gas turbine engines used throughout the surface fleet for propulsion and the generation of electrical power. Past data is used to forecast the number of engine removals for the next ten years and determine engine down times between removals. Currently this is done via a FORTRAN program created in the early 1970s. This paper presents results of R&D associated with creating a new algorithm and software program. We tested over 60 techniques on data spanning 20 years from over 3100 engines and 120 ships. We investigated techniques for the forecast basis, including moving averages, empirical negative binomial models, generalized linear models, Cox regression, and Kaplan-Meier survival curves, most of which are documented in the engineering, medical, and scientific research literature. We applied those techniques to the data, and chose the best algorithm based on its performance on real-world data. The software uses the best algorithm in combination with user-friendly interfaces and intuitively understandable displays. The user can select a specific engine type, forecast time period, and op-tempo. Graphical displays and numerical tables present forecasts and uncertainty intervals. The technology developed for the project is applicable to other logistic forecasting challenges.
Fast autodidactic adaptive equalization algorithms
NASA Astrophysics Data System (ADS)
Hilal, Katia
Autodidactic equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms; it thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulation of these algorithms, carried out in a mobile radio communication context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances bring autodidactic equalization close to practical use in mobile radio systems.
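The Godard (constant-modulus) family at the heart of this work can be sketched for real-valued symbols; the channel taps and step size below are illustrative assumptions, not parameters from the thesis:

```python
import random

def cma_equalize(received, num_taps=5, mu=0.01, r2=1.0):
    """Godard/constant-modulus adaptation of a real FIR equalizer: push each
    output y toward constant modulus sqrt(r2) with no training sequence."""
    w = [0.0] * num_taps
    w[num_taps // 2] = 1.0                      # center-spike initialization
    buf = [0.0] * num_taps
    out = []
    for x in received:
        buf = [x] + buf[:-1]                    # shift new sample into the delay line
        y = sum(wi * bi for wi, bi in zip(w, buf))
        err = y * (y * y - r2)                  # CMA-2 error term for real signals
        w = [wi - mu * err * bi for wi, bi in zip(w, buf)]
        out.append(y)
    return w, out

rng = random.Random(0)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(4000)]
# mild ISI channel h = [1.0, 0.4], assumed for illustration
received = [s + 0.4 * p for p, s in zip(symbols, symbols[1:])]
w, out = cma_equalize(received)
```

Because the error term never references the transmitted symbols, the adaptation is blind ("autodidactic"); the normalized variants studied in the thesis mainly change how mu is scaled by the input power.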
ERIC Educational Resources Information Center
Chemical and Engineering News, 1986
1986-01-01
Reports on Project SEED (Summer Educational Experience for the Disadvantaged) a project in which high school students from low-income families work in summer jobs in a variety of academic, industrial, and government research labs. The program introduces the students to career possibilities in chemistry and to the advantages of higher education.…
ERIC Educational Resources Information Center
Alvord, David J.; Tack, Leland R.; Dallam, Jerald W.
1998-01-01
Describes the development of Project EASIER, a collaborative electronic-data interchange for networking Iowa local school districts, education agencies, community colleges, universities, and the Department of Education. The primary goal of this project is to develop and implement a system for collection of student information for state and federal…
ERIC Educational Resources Information Center
Meredith, Larry D.
Project Success consists of after-school, weekend, and summer educational programs geared toward minority and disadvantaged students to increase their numbers seeking postsecondary education from the Meadville, Pennsylvania area. The project is funded primarily through the Edinboro University of Pennsylvania, whose administration is committed to…
ERIC Educational Resources Information Center
Robison, Helen F.; And Others
This document described Project CHILD, a program of educational change and curriculum development for disadvantaged prekindergarten and kindergarten children. The historical part of this report indicates that the project began in 1966 with a small-scale study of teacher behavior and children's responses in a few classrooms in a Harlem school…
ERIC Educational Resources Information Center
Essexville-Hampton Public Schools, MI.
Described are components of Project FAST (Functional Analysis Systems Training) a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
The systems biology simulation core algorithm
2013-01-01
Background: With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results: This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions: The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
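The core task (turning a reaction network into ordinary differential equations and integrating them) can be sketched without any SBML machinery; this toy integrator is illustrative and is not the Simulation Core Library's solver:

```python
def rk4(f, y0, t0, t1, steps):
    """Classic fourth-order Runge-Kutta integrator for dy/dt = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, list(y0)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# toy "model": irreversible conversion S -> P with mass-action rate k * S
k = 0.5
rates = lambda t, y: [-k * y[0], k * y[0]]
S, P = rk4(rates, y0=[1.0, 0.0], t0=0.0, t1=4.0, steps=400)
```

The difficulty the article addresses is everything around this core: events, algebraic rules, delayed assignments, and the other SBML special cases that a bare ODE integrator cannot express.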
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Xiu, Dongbin
2016-06-21
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Probe set algorithms: is there a rational best bet?
Seo, Jinwook; Hoffman, Eric P
2006-01-01
Affymetrix microarrays have become a standard experimental platform for studies of mRNA expression profiling. Their success is due, in part, to the multiple oligonucleotide features (probes) against each transcript (probe set). This multiple testing allows for more robust background assessments and gene expression measures, and has permitted the development of many computational methods to translate image data into a single normalized "signal" for mRNA transcript abundance. There are now many probe set algorithms that have been developed, with a gradual movement away from chip-by-chip methods (MAS5), to project-based model-fitting methods (dCHIP, RMA, others). Data interpretation is often profoundly changed by choice of algorithm, with disoriented biologists questioning what the "accurate" interpretation of their experiment is. Here, we summarize the debate concerning probe set algorithms. We provide examples of how changes in mismatch weight, normalizations, and construction of expression ratios each dramatically change data interpretation. All interpretations can be considered as computationally appropriate, but with varying biological credibility. We also illustrate the performance of two new hybrid algorithms (PLIER, GC-RMA) relative to more traditional algorithms (dCHIP, MAS5, Probe Profiler PCA, RMA) using an interactive power analysis tool. PLIER appears superior to other algorithms in avoiding false positives with poorly performing probe sets. Based on our interpretation of the literature, and examples presented here, we suggest that the variability in performance of probe set algorithms is more dependent upon assumptions regarding "background", than on calculations of "signal". We argue that "background" is an enormously complex variable that can only be vaguely quantified, and thus the "best" probe set algorithm will vary from project to project. PMID:16942624
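One concrete example of the normalization choices at issue is quantile normalization, the across-array step characteristic of RMA-style methods; a minimal sketch (not any vendor's implementation):

```python
import numpy as np

def quantile_normalize(arrays):
    """Force every array to share the same empirical distribution: replace
    each value by the mean of the values holding the same rank across arrays."""
    data = np.asarray(arrays, float).T            # rows = probes, cols = arrays
    order = np.argsort(data, axis=0)              # rank positions per array
    ranked_mean = np.sort(data, axis=0).mean(axis=1)
    out = np.empty_like(data)
    for j in range(data.shape[1]):
        out[order[:, j], j] = ranked_mean         # map means back to original rows
    return out.T
```

Note how aggressive this is: after normalization, every array has an identical value distribution, which is exactly the kind of assumption about "background" and between-array variation that makes algorithm choice project-dependent.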
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
Algorithmic commonalities in the parallel environment
NASA Technical Reports Server (NTRS)
Mcanulty, Michael A.; Wainer, Michael S.
1987-01-01
The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.
Algorithms for the quasiconvex feasibility problem
NASA Astrophysics Data System (ADS)
Censor, Yair; Segal, Alexander
2006-01-01
We study the behavior of subgradient projection algorithms for the quasiconvex feasibility problem of finding a point x* ∈ Rn that satisfies the inequalities f1(x*) ≤ 0, f2(x*) ≤ 0, ..., fm(x*) ≤ 0, where all functions are continuous and quasiconvex. We consider the consistent case, when the solution set is nonempty. Since the Fenchel-Moreau subdifferential might be empty, we look at different notions of the subdifferential and determine their suitability for our problem. We also determine conditions on the functions that are needed for convergence of our algorithms. The quasiconvex functions on the left-hand side of the inequalities need not be differentiable but have to satisfy a Lipschitz or a Hölder condition.
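The projection step such algorithms rely on can be sketched as a cyclic subgradient projection iteration. This is an illustrative sketch, not the authors' method: the test constraints are assumed affine (hence quasiconvex), so the gradient serves as a subgradient and the update is an exact projection onto each violated halfspace.

```python
# Cyclic subgradient projection sketch for a feasibility problem
# f_i(x) <= 0 for all i. The affine test functions below are assumptions
# for illustration; for affine f the gradient is a valid subgradient.

def subgradient_projection(x, fs, grads, iters=100):
    for _ in range(iters):
        for f, g in zip(fs, grads):
            v = f(x)
            if v > 0:                      # constraint violated
                gx = g(x)
                n2 = sum(c * c for c in gx)
                step = v / n2              # exact projection for affine f
                x = [xi - step * gi for xi, gi in zip(x, gx)]
    return x

# f1(x) = x0 + x1 - 2 <= 0, f2(x) = x0 - x1 <= 0
fs = [lambda x: x[0] + x[1] - 2, lambda x: x[0] - x[1]]
grads = [lambda x: [1.0, 1.0], lambda x: [1.0, -1.0]]
sol = subgradient_projection([5.0, -3.0], fs, grads)
```

After the iteration, both inequalities are satisfied at `sol`.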
Empirical algorithms for ocean optics parameters.
Smart, Jeffrey H
2007-06-11
As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily the main topics discussed. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file in decreasing order of importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration, and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
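The bang-off-bang profile described in the abstract can be sketched in one dimension. This is an illustrative sketch only; the acceleration level and burn/coast times are assumptions, not the RCA look-up-table parameterization.

```python
# Bang-off-bang trajectory sketch (rest-to-rest, one dimension):
# full acceleration for t_burn, coast for t_coast, full braking for t_burn.
# Values are illustrative assumptions, not the RCA parameterization.

def bang_off_bang_displacement(a, t_burn, t_coast):
    # closed form: 0.5*a*t_burn^2 (burn) + a*t_burn*t_coast (coast)
    #              + 0.5*a*t_burn^2 (brake)
    return a * t_burn ** 2 + a * t_burn * t_coast

def simulate(a, t_burn, t_coast, dt=1e-3):
    # semi-implicit Euler integration of the same three-phase profile
    x = v = t = 0.0
    total = 2 * t_burn + t_coast
    while t < total - 1e-12:
        if t < t_burn:
            acc = a            # full acceleration
        elif t < t_burn + t_coast:
            acc = 0.0          # coast
        else:
            acc = -a           # full braking
        v += acc * dt
        x += v * dt
        t += dt
    return x

d_exact = bang_off_bang_displacement(1.0, 2.0, 3.0)
d_sim = simulate(1.0, 2.0, 3.0)
```

The closed form agrees with the numerical integration of the same profile, which is what makes the offline parameter optimization and look-up-table approach practical.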
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
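The multiplicative weight updates dynamic the abstract refers to can be sketched as a generic MWUA round: each strategy's weight is scaled by (1 + eps * gain). This is an illustrative sketch of the generic algorithm, not the population-genetics equations of the paper; the gain values are assumptions.

```python
# Generic multiplicative weights update (MWUA) sketch: maintain a weight
# per strategy and multiply each by (1 + eps * gain) every round.
# The constant per-round gains below are illustrative assumptions.

def mwua(gains_per_round, n, eps=0.1):
    w = [1.0] * n
    for gains in gains_per_round:
        w = [wi * (1.0 + eps * g) for wi, g in zip(w, gains)]
        s = sum(w)
        w = [wi / s for wi in w]           # renormalise to a distribution
    return w

# Three strategies with constant gains 0.1, 0.5, 0.9 over 200 rounds:
rounds = [[0.1, 0.5, 0.9]] * 200
w = mwua(rounds, 3)
```

Weight concentrates on the highest-gain strategy while the renormalised distribution retains some mass on the others, the entropy/performance trade-off the abstract highlights.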
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
Irregular Applications: Architectures & Algorithms
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Basic cluster compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.
1980-01-01
Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Andreas, Afshin
2015-04-01
The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
Algorithmic Complexity. Volume II.
1982-06-01
Only fragments of the abstract survive. They discuss electrical network analysis (citing Knuth), a repeated application of a 3-multiplication scheme to products of 2-coefficient polynomials, and the efficiency M(n) of the divide-and-conquer polynomial multiplication algorithm.
ARPANET Routing Algorithm Improvements
1978-10-01
J. M. McQuillan; E. C. Rosen. Only fragments of the abstract survive. They note that stale routing information may persist for a very long time, causing extremely bad performance throughout the whole network, and that the routing algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion.
1983-10-13
Only fragments of the abstract survive. They describe determining the solution using the Moore-Penrose inverse and deriving an expression for the mean square error [8,9]. Referenced papers include "An Iterative Algorithm for Finding the Minimum Eigenvalue of a Class of Symmetric Matrices," D. Fuhrmann and B. Liu, submitted to the 1984 IEEE Int. Conf. Acoust. Speech Sig. Proc., and "Approximating the Eigenvectors of a Symmetric Toeplitz Matrix," D. Fuhrmann and B. Liu, 1983 Allerton Conf.
2016-06-07
Only fragments of the abstract survive. They describe an algorithm that accepts XBT sound speed values instead of temperature values; studies show that the sound speed at the surface in a specific location varies less than temperature. Data may be entered at the terminal in metric or English temperatures or sound speeds, and the algorithm automatically determines which form each data point takes. Leroy's equation is used to derive sound speed from temperature or temperature from sound speed for the previous, current, and next months.
Adaptive continuous twisting algorithm
NASA Astrophysics Data System (ADS)
Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid
2016-09-01
In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also keeps its convergence properties even in the case when an upper bound on the derivative of the perturbation exists but is unknown.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
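The kind of parameters the preprocessor would choose can be seen in a minimal real-coded GA sketch. All parameter values below (population size, generation count, mutation rate, operators) are illustrative assumptions, not the preprocessor's output or the authors' GA.

```python
import random

# Minimal real-coded GA sketch: tournament selection, blend crossover,
# Gaussian mutation. Every parameter value here is an illustrative
# assumption; a preprocessor would tune these per problem.
def ga_maximize(f, lo, hi, pop_size=40, gens=60, mut=0.2, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)          # tournament of two
            p1 = a if f(a) > f(b) else b
            a, b = rng.sample(pop, 2)
            p2 = a if f(a) > f(b) else b
            child = 0.5 * (p1 + p2)            # blend crossover
            if rng.random() < mut:
                child += rng.gauss(0, 0.1 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=f)

# Maximize f(x) = -(x - 3)^2 on [0, 10]; optimum at x = 3.
best = ga_maximize(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

With these settings the population converges near the optimum; how quickly, and whether it converges at all on harder problems, is exactly what the choice of parameters governs.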
Stubbs, Allston Julius; Atilla, Halis Atil
2016-01-01
Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. As a deep joint, and compared to the shoulder and knee joints, localization of hip symptoms is difficult. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain may be various. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.
Large scale tracking algorithms
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Petascale algorithms for reactor hydrodynamics.
Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
Scheduling projects with multiskill learning effect.
Zha, Hong; Zhang, Lianying
2014-01-01
We investigate the project scheduling problem with multiskill learning effect. A new model is proposed to deal with the problem, where both autonomous and induced learning are considered. In order to obtain the optimal solution, a genetic algorithm with specific encoding and decoding schemes is introduced. A numerical example is used to illustrate the proposed model. The computational results show that the learning effect cannot be neglected in project scheduling. By means of determining the level of induced learning, the project manager can balance the project makespan with total cost.
Project LEAF has a goal of educating farmworkers about how to reduce pesticide exposure to their families from pesticide residues they may be inadvertently taking home on their clothing, etc.
ERIC Educational Resources Information Center
Drake, Charles L.
1977-01-01
Describes activities of Geodynamics Project of the Federal Council on Science and Technology, such as the application of multichannel seismic-reflection techniques to study the nature of the deep crust and upper mantle. (MLH)
ERIC Educational Resources Information Center
Diffily, Deborah
2001-01-01
Integrating curriculum is important in helping children make connections within and among areas. Presents a class project for kindergarten children which came out of the students' interests and desire to build a reptile exhibit. (ASK)
Iterative restoration algorithms for nonlinear constraint computing
NASA Astrophysics Data System (ADS)
Szu, Harold
A general iterative-restoration principle is introduced to facilitate the implementation of nonlinear optical processors. The von Neumann convergence theorem is generalized to include nonorthogonal subspaces which can be reduced to a special orthogonal projection operator by applying an orthogonality condition. This principle is shown to permit derivation of the Jacobi algorithm, the recursive principle, the van Cittert (1931) deconvolution method, the iteration schemes of Gerchberg (1974) and Papoulis (1975), and iteration schemes using two Fourier conjugate domains (e.g., Fienup, 1981). Applications to restoring the image of a double star and division by hard and soft zeros are discussed, and sample results are presented graphically.
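The van Cittert deconvolution mentioned above is the fixed-point iteration f_{k+1} = f_k + (g - H f_k), where g is the blurred data and H the known blurring operator. The following is an illustrative 1-D numerical sketch, not the optical-processor implementation discussed in the abstract; the blur kernel is an assumption.

```python
# Van Cittert iteration sketch: f_{k+1} = f_k + (g - H f_k).
# H is modeled as a 1-D convolution with an assumed smoothing kernel.

def blur(x, kernel=(0.25, 0.5, 0.25)):
    n, k = len(x), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:
                s += w * x[idx]
        out.append(s)
    return out

def van_cittert(g, iters=50):
    f = list(g)                      # start from the blurred data
    for _ in range(iters):
        r = blur(f)                  # H f_k
        f = [fi + (gi - ri) for fi, gi, ri in zip(f, g, r)]
    return f

true = [0, 0, 1.0, 0, 0, 1.0, 0, 0]   # two point sources ("double star")
g = blur(true)
rec = van_cittert(g)
```

Because the kernel's spectrum lies in (0, 2), each iteration shrinks the residual and the reconstruction moves closer to the unblurred signal.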
A Novel Latin hypercube algorithm via translational propagation.
Pan, Guang; Ye, Pengcheng; Wang, Peng
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration algorithm (TPSLE) is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed from a simple initial block with a few points, generated by the SLE algorithm, used as a building block. In fact, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while having acceptable space-filling and projective properties.
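For context, the baseline Latin hypercube construction that such methods improve upon can be sketched as plain stratified sampling: one point per stratum per dimension. This is generic LHS, not the TPSLE algorithm of the paper.

```python
import random

# Basic Latin hypercube sampling sketch (plain LHS, not TPSLE):
# each dimension is cut into n strata and receives exactly one
# sample per stratum, placed at a random offset within it.
def latin_hypercube(n, dims, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        # jitter each point uniformly inside its stratum
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))           # n points, each of length dims

pts = latin_hypercube(10, 2)
```

Plain LHS guarantees the one-per-stratum (projective) property but not good space-filling; that gap is what optimal and TPSLE-style constructions address.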
Local tomographic phase microscopy from differential projections
NASA Astrophysics Data System (ADS)
Vishnyakov, G. N.; Levin, G. G.; Minaev, V. L.; Nekrasov, N. A.
2016-12-01
It is proposed to use local tomography for optical studies of the internal structure of transparent phase microscopic objects, for example, living cells. From among the many local tomography methods that exist, the algorithms of back projection summation (in which partial derivatives of projections are used as projection data) are chosen. The application of local tomography to living cells is reasonable because, using optical phase microscopy, one can easily obtain projection data in the form of first-order derivatives of projections applying the methods of differential interference contrast and shear interferometry. The mathematical fundamentals of local tomography in differential projections are considered, and a computer simulation of different local tomography methods is performed. A tomographic phase microscope and the results of reconstructing a local tomogram of an erythrocyte from a set of experimental differential projections are described.
2005-12-01
Only fragments of the abstract survive. They describe support for development, evaluation of training regimes, and design of new systems with complex man-machine interface problems. The project uses advanced statistical and physiological measures to provide input to adaptive man-machine interfaces; the goal is to further develop such measurement methods (Intuitive Human-System Interaction). The original document contains color images.
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at a set of points irregularly distributed over a plane. Algorithm is based on an interpolation scheme in which points in the plane are connected by straight-line segments to form a set of triangles. Program written in FORTRAN IV.
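The interpolation step behind such a contour algorithm can be sketched as locating where a contour level crosses a triangle edge by linear interpolation between the endpoint values. This is an illustrative sketch, not the FORTRAN IV program itself.

```python
# Linear-interpolation sketch of the contouring step: on a triangle
# edge whose endpoint values bracket the contour level, the crossing
# point is found by interpolating along the edge.
def edge_crossing(p1, v1, p2, v2, level):
    if v1 == v2 or (v1 - level) * (v2 - level) > 0:
        return None                    # level does not cross this edge
    t = (level - v1) / (v2 - v1)       # fraction along the edge
    return (p1[0] + t * (p2[0] - p1[0]),
            p1[1] + t * (p2[1] - p1[1]))

# Contour level 0.5 crossing an edge whose values run from 0.0 to 1.0:
pt = edge_crossing((0.0, 0.0), 0.0, (2.0, 0.0), 1.0, 0.5)
```

Connecting such crossing points across all triangles of the triangulation traces out each contour line.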
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
Greedy algorithms in disordered systems
NASA Astrophysics Data System (ADS)
Duxbury, P. M.; Dobrin, R.
1999-08-01
We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
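Prim's greedy minimal spanning tree construction mentioned above can be sketched as follows; the small weighted graph is an illustrative assumption.

```python
import heapq

# Prim's greedy MST sketch: repeatedly add the cheapest edge leaving
# the tree built so far, the extremal-dynamics growth the abstract
# relates to invasion percolation.
def prim_mst_weight(graph, start):
    seen = {start}
    heap = [(w, v) for v, w in graph[start]]
    heapq.heapify(heap)
    total = 0.0
    while heap and len(seen) < len(graph):
        w, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        total += w
        for u, wu in graph[v]:
            if u not in seen:
                heapq.heappush(heap, (wu, u))
    return total

g = {
    'a': [('b', 1.0), ('c', 4.0)],
    'b': [('a', 1.0), ('c', 2.0), ('d', 5.0)],
    'c': [('a', 4.0), ('b', 2.0), ('d', 3.0)],
    'd': [('b', 5.0), ('c', 3.0)],
}
w = prim_mst_weight(g, 'a')
```

On this graph the greedy choice picks edges of weight 1, 2 and 3, for a total tree weight of 6.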
Grammar Rules as Computer Algorithms.
ERIC Educational Resources Information Center
Rieber, Lloyd
1992-01-01
One college writing teacher engaged his class in the revision of a computer program to check grammar, focusing on improvement of the algorithms for identifying inappropriate uses of the passive voice. Process and problems of constructing new algorithms, effects on student writing, and other algorithm applications are discussed. (MSE)
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
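The half-interval search the article verifies can be sketched as follows (illustrative Python, not the article's program listing): repeatedly halve an interval on which the function changes sign.

```python
# Half-interval (bisection) search sketch: keep the half of the
# interval on which the function changes sign until the interval
# is smaller than the tolerance.
def bisect_root(f, lo, hi, tol=1e-10):
    assert f(lo) * f(hi) <= 0, "need a sign change on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            lo, hi = lo, mid          # root lies in the left half
        else:
            lo, hi = mid, hi          # root lies in the right half
    return (lo + hi) / 2.0

# Root of x^2 - 2 on [0, 2]:
root = bisect_root(lambda x: x * x - 2.0, 0.0, 2.0)
```

The invariant that the root stays bracketed is precisely the property a mathematical verification of the algorithm establishes.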
A novel dual-axis reconstruction algorithm for electron tomography
NASA Astrophysics Data System (ADS)
Tong, Jenna; Midgley, Paul
2006-02-01
A new algorithm for computing electron microscopy tomograms which combines iterative methods with dual-axis geometry is presented. Initial modelling using test data shows several improvements over both the weighted back-projection (WBP) and Simultaneous Iterative Reconstruction Technique (SIRT) methods, with increased stability and tomogram fidelity under high-noise conditions.
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
Algorithms to Automate LCLS Undulator Tuning
Wolf, Zachary
2010-12-03
Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10^-6 Tm. The second field integral must be below 50 × 10^-6 Tm^2. The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail. This note describes the operation of a computer program which was written to automate LCLS undulator tuning. The algorithms used to compute the shim sizes and locations are discussed.
An Effective CUDA Parallelization of Projection in Iterative Tomography Reconstruction
Xie, Lizhe; Hu, Yining; Yan, Bin; Wang, Lin; Yang, Benqiang; Liu, Wenyuan; Zhang, Libo; Luo, Limin; Shu, Huazhong; Chen, Yang
2015-01-01
Projection and back-projection are the most computationally intensive parts of Computed Tomography (CT) reconstruction, and are essential to the acceleration of CT reconstruction algorithms. Compared to back-projection, parallelization efficiency in projection is highly limited by race conditions and thread desynchronization. In this paper, a strategy of Fixed Sampling Number Projection (FSNP) is proposed to ensure operation synchronization in ray-driven projection on the Graphical Processing Unit (GPU). Texture fetching is also utilized to further accelerate the interpolations in both projection and back-projection. We validate the performance of this FSNP approach using both simulated and real cone-beam CT data. Experimental results show that the proposed FSNP method together with texture fetching is 10-16 times faster than the conventional approach based on global memory, and thus leads to more efficient iterative algorithms in CT reconstruction. PMID:26618857
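The core idea named in the abstract — give every ray the same fixed number of sample points so that parallel threads do identical work and stay synchronized — can be sketched in scalar form. This is a nearest-neighbour toy version under assumed names, not the paper's CUDA kernel:

```python
import math

def fsnp_project(image, x0, y0, x1, y1, n_samples):
    """Line integral of `image` along the ray (x0,y0)->(x1,y1) using a
    fixed number of samples, so every ray performs identical work."""
    length = math.hypot(x1 - x0, y1 - y0)
    step = length / n_samples          # constant sample spacing per ray
    total = 0.0
    for k in range(n_samples):
        t = (k + 0.5) / n_samples      # midpoint sampling along the ray
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        i, j = round(y), round(x)      # nearest-neighbour pixel lookup
        if 0 <= i < len(image) and 0 <= j < len(image[0]):
            total += image[i][j] * step
    return total
```

Because the loop length is identical for all rays, a GPU launch of one thread per ray has no divergent trip counts, which is the synchronization property FSNP exploits.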
Image reconstruction algorithms with wavelet filtering for optoacoustic imaging
NASA Astrophysics Data System (ADS)
Gawali, S.; Leggio, L.; Broadway, C.; González, P.; Sánchez, M.; Rodríguez, S.; Lamela, H.
2016-03-01
Optoacoustic imaging (OAI) is a hybrid biomedical imaging modality based on the generation and detection of ultrasound by illuminating the target tissue by laser light. Typically, laser light in visible or near infrared spectrum is used as an excitation source. OAI is based on the implementation of image reconstruction algorithms using the spatial distribution of optical absorption in tissues. In this work, we apply a time-domain back-projection (BP) reconstruction algorithm and a wavelet filtering for point and line detection, respectively. A comparative study between point detection and integrated line detection has been carried out by evaluating their effects on the image reconstructed. Our results demonstrate that the back-projection algorithm proposed is efficient for reconstructing high-resolution images of absorbing spheres embedded in a non-absorbing medium when it is combined with the wavelet filtering.
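A minimal time-domain back-projection of the kind referred to above is delay-and-sum: each pixel accumulates every sensor's recorded sample at the acoustic time-of-flight delay. This is a naive scalar sketch with illustrative names; real implementations interpolate between samples and apply solid-angle weights:

```python
import math

def delay_and_sum(signals, sensors, pixels, c, fs):
    """Time-domain back-projection: for each pixel, sum every sensor's
    sample at the time-of-flight delay (nearest-sample lookup).
    signals: one sample list per sensor; c: speed of sound; fs: sample rate."""
    image = []
    for px, py in pixels:
        acc = 0.0
        for (sx, sy), sig in zip(sensors, signals):
            delay = math.hypot(px - sx, py - sy) / c   # time of flight
            idx = round(delay * fs)                    # nearest sample index
            if idx < len(sig):
                acc += sig[idx]
        image.append(acc)
    return image
```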
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
NASA Astrophysics Data System (ADS)
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest EAs, inspired by the Selfish Gene Theory, the biologist Richard Dawkins' 1989 interpretation of Darwinian ideas. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Neurons to algorithms LDRD final report.
Rothganger, Fredrick H.; Aimone, James Bradley; Warrender, Christina E.; Trumbo, Derek
2013-09-01
Over the last three years the Neurons to Algorithms (N2A) LDRD project team has built infrastructure to discover computational structures in the brain. This consists of a modeling language, a tool that enables model development and simulation in that language, and initial connections with the Neuroinformatics community, a group working toward similar goals. The approach of N2A is to express large complex systems like the brain as populations of discrete part types that have specific structural relationships with each other, along with internal and structural dynamics. Such an evolving mathematical system may be able to capture the essence of neural processing, and ultimately of thought itself. This final report is a cover for the actual products of the project: the N2A Language Specification, the N2A Application, and a journal paper summarizing our methods.
Fox, Christopher; Romeijn, H Edwin; Dempsey, James F
2006-05-01
We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include: an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. Between a 100- and 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm performing classification of voxels for target and structure membership. Between a 2.6- and 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet-extent approach, and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross-product technique.
Self-Correcting HVAC Controls Project Final Report
Fernandez, Nicholas; Brambley, Michael R.; Katipamula, Srinivas; Cho, Heejin; Goddard, James K.; Dinh, Liem H.
2010-01-04
This document represents the final project report for the Self-Correcting Heating, Ventilating and Air-Conditioning (HVAC) Controls Project jointly funded by Bonneville Power Administration (BPA) and the U.S. Department of Energy (DOE) Building Technologies Program (BTP). The project, initiated in October 2008, focused on exploratory initial development of self-correcting controls for selected HVAC components in air handlers. This report, along with the companion report documenting the algorithms developed, Self-Correcting HVAC Controls: Algorithms for Sensors and Dampers in Air-Handling Units (Fernandez et al. 2009), document the work performed and results of this project.
PRISMA Formation Flying Project in System Test Phase
NASA Astrophysics Data System (ADS)
Persson, S.
2008-08-01
The PRISMA project for in-flight demonstration of autonomous formation flying and rendezvous is well into the flight units integration and integrated systems testing stage. The project comprises two satellites which constitute an in-orbit test bed for Guidance, Navigation and Control (GNC) algorithms and sensors for advanced formation flying and rendezvous. Several experiments involving GNC algorithms, sensors (GPS, RF and vision based) and thrusters, will be performed during a 10 month mission with launch planned for the first half of 2009. The project now enters the system level testing phase. This paper gives a brief overview of the project and highlights several steps in the system level verification process.
Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments
ERIC Educational Resources Information Center
Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán
2015-01-01
The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve
2012-01-01
The objective of this project is to refine, adapt, and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work is reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).
Algorithm Development and Application of High Order Numerical Methods for Shocked and Rapidly Changing Solutions
2007-12-06
The problems studied in this project involve numerically solving partial differential equations with either discontinuous or rapidly changing solutions. Algorithm development centered on high order numerical methods, including discontinuous Galerkin finite element methods, for solving partial differential equations with discontinuous or rapidly changing solutions.
Analysis of the Karmarkar-Karp differencing algorithm
NASA Astrophysics Data System (ADS)
Boettcher, S.; Mertens, S.
2008-09-01
The Karmarkar-Karp differencing algorithm is the best known polynomial time heuristic for the number partitioning problem, fundamental in both theoretical computer science and statistical physics. We analyze the performance of the differencing algorithm on random instances by mapping it to a nonlinear rate equation. Our analysis reveals strong finite size effects that explain why the precise asymptotics of the differencing solution is hard to establish by simulations. The asymptotic series emerging from the rate equation satisfies all known bounds on the Karmarkar-Karp algorithm and projects a scaling n^(-c ln n), where c = 1/(2 ln 2) ≈ 0.7213. Our calculations reveal subtle relations between the algorithm and Fibonacci-like sequences, and we establish an explicit identity to that effect.
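The differencing heuristic analyzed above is short enough to state directly: repeatedly replace the two largest numbers by their difference; the last remaining value is the achieved partition discrepancy. A standard max-heap implementation:

```python
import heapq

def karmarkar_karp(nums):
    """Largest-differencing heuristic for number partitioning.
    Returns the discrepancy (subset-sum difference) it achieves."""
    heap = [-x for x in nums]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)       # largest
        b = -heapq.heappop(heap)       # second largest
        heapq.heappush(heap, -(a - b)) # commit to opposite subsets
    return -heap[0] if heap else 0
```

Note that the heuristic is not exact: for [4, 5, 6, 7, 8] it returns 2, although a perfect split (8+7 vs 4+5+6) exists.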
Algorithm of Finding Hypo-Critical Path in Network Planning
NASA Astrophysics Data System (ADS)
Qi, Jianxun; Zhao, Xiuhua
Network planning techniques such as the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT) are used to represent project plan management. Aiming at the problem of finding the hypo-critical path in network planning, the properties of total float, free float, and safety float are first analyzed, and a total float theorem is deduced on the basis of this analysis. Secondly, a simple algorithm for finding the hypo-critical path is designed using these float properties and the total float theorem, and the correctness of the algorithm is analyzed. The proof shows that the algorithm achieves whole optimization through part optimization. Finally, an example is given to illustrate the algorithm.
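Total float, the quantity the analysis above builds on, is the gap between an activity's latest and earliest start times from the standard CPM forward and backward passes; zero-float activities form the critical path, and the smallest positive floats indicate the hypo-critical path. A minimal sketch (assuming, for simplicity, that activities are supplied in topological order):

```python
def cpm_total_float(activities):
    """activities: {name: (duration, [predecessor names])}, listed in
    topological order. Returns {name: total float}."""
    es, ef = {}, {}                          # earliest start / finish
    for a, (dur, preds) in activities.items():
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    horizon = max(ef.values())               # project duration
    succs = {a: [] for a in activities}      # invert the precedence lists
    for a, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(a)
    ls = {}                                  # latest start (backward pass)
    for a in reversed(list(activities)):
        lf = min((ls[s] for s in succs[a]), default=horizon)
        ls[a] = lf - activities[a][0]
    return {a: ls[a] - es[a] for a in activities}
```

For A(3), B(2), and C(4) depending on both, A and C are critical (float 0) while B carries a float of 1, marking it as the hypo-critical candidate.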
A Message-Passing Algorithm for Wireless Network Scheduling.
Paschalidis, Ioannis Ch; Huang, Fuzhuo; Lai, Wei
2015-10-01
We consider scheduling in wireless networks and formulate it as Maximum Weighted Independent Set (MWIS) problem on a "conflict" graph that captures interference among simultaneous transmissions. We propose a novel, low-complexity, and fully distributed algorithm that yields high-quality feasible solutions. Our proposed algorithm consists of two phases, each of which requires only local information and is based on message-passing. The first phase solves a relaxation of the MWIS problem using a gradient projection method. The relaxation we consider is tighter than the simple linear programming relaxation and incorporates constraints on all cliques in the graph. The second phase of the algorithm starts from the solution of the relaxation and constructs a feasible solution to the MWIS problem. We show that our algorithm always outputs an optimal solution to the MWIS problem for perfect graphs. Simulation results compare our policies against Carrier Sense Multiple Access (CSMA) and other alternatives and show excellent performance.
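The MWIS objective targeted by the two-phase algorithm can be illustrated with a much simpler greedy baseline (not the paper's message-passing method): repeatedly take the heaviest remaining vertex of the conflict graph and discard its neighbours, which always yields a feasible, though not generally optimal, schedule:

```python
def greedy_mwis(weights, edges):
    """Greedy heuristic for Maximum Weighted Independent Set.
    weights: {vertex: weight}; edges: iterable of (u, v) conflicts."""
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, remaining = set(), set(weights)
    while remaining:
        v = max(remaining, key=lambda x: weights[x])  # heaviest vertex
        chosen.add(v)
        remaining -= adj[v] | {v}   # v and its conflicts leave the pool
    return chosen
```

On a three-link conflict path with weights 2-3-2 the greedy picks the middle vertex (weight 3) even though the two endpoints together weigh 4, which is exactly the kind of gap the relaxation-plus-rounding approach described above closes.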
Maximum Capital Project Management.
ERIC Educational Resources Information Center
Adams, Matt
2002-01-01
Describes the stages of capital project planning and development: (1) individual capital project submission; (2) capital project proposal assessment; (3) executive committee; and (4) capital project execution. (EV)
An innovative localisation algorithm for railway vehicles
NASA Astrophysics Data System (ADS)
Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.
2014-11-01
In modern railway automatic train protection and automatic train control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error in the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been to develop an innovative localisation algorithm for railway vehicles able to enhance the performance, in terms of speed and position estimation accuracy, of classical odometry algorithms such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of a sensor fusion between the information coming from a tachometer and an Inertial Measurement Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work has included the development of a custom IMU, designed by ECM S.p.A. to meet its industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve interoperability among different countries, in particular as regards train control and command systems, fixes standard values for odometric (ODO) performance in terms of speed and travelled distance estimation. The reliability of the ODO estimation has to be assessed with respect to the allowed speed profiles. The results of the currently used ODO algorithms can be improved, especially in the case of degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements.
JPSS CGS Tools For Rapid Algorithm Updates
NASA Astrophysics Data System (ADS)
Smith, D. C.; Grant, K. D.
2011-12-01
The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057
Parallel algorithm development
Adams, T.F.
1996-06-01
Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
Algorithm performance evaluation
NASA Astrophysics Data System (ADS)
Smith, Richard N.; Greci, Anthony M.; Bradley, Philip A.
1995-03-01
Traditionally, the performance of adaptive antenna systems is measured using automated antenna array pattern measuring equipment. This measurement equipment produces a plot of the receive gain of the antenna array as a function of angle. However, communications system users more readily accept and understand bit error rate (BER) as a performance measure. The work reported on here was conducted to characterize adaptive antenna receiver performance in terms of overall communications system performance using BER as a performance measure. The adaptive antenna system selected for this work featured a linear array, least mean square (LMS) adaptive algorithm and a high speed phase shift keyed (PSK) communications modem.
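The LMS update at the heart of the adaptive array — filter the input, compare with the desired signal, and nudge each weight along the instantaneous error gradient — can be sketched for a scalar tapped-delay line (illustrative names and parameters; an array processor applies the same rule per antenna element):

```python
def lms_filter(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive FIR filter.
    x: input samples; d: desired samples; mu: step size.
    Returns the final weights and the per-step error sequence."""
    w = [0.0] * n_taps
    errors = []
    for i in range(n_taps, len(x)):
        window = x[i - n_taps:i][::-1]                 # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, window))  # filter output
        e = d[i] - y                                   # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        errors.append(e)
    return w, errors
```

Driving the filter with a known FIR channel (e.g. d[i] = 0.5·x[i-1] + 0.2·x[i-2]) shows the error shrinking toward zero as the weights converge, which is the mechanism that ultimately lowers the BER figure discussed above.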
Parallel vision algorithms. Annual technical report No. 2, 1 October 1987-28 December 1988
Ibrahim, H.A.; Kender, J.R.; Brown, L.G.
1989-01-01
This Second Annual Technical Report covers the project activities during the period from October 1, 1987 through December 31, 1988. The objective of this project is to develop and implement, on highly parallel computers, vision algorithms that combine stereo, texture, and multi-resolution techniques for determining local surface orientation and depth. Such algorithms can serve as front-end components of autonomous land-vehicle vision systems. During the second year of the project, efforts concentrated on the following: first, implementing and testing on the Connection Machine the parallel programming environment that will be used to develop, implement and test our parallel vision algorithms; second, implementing and testing primitives for the multi-resolution stereo and texture algorithms, in this environment. Also, efforts were continued to refine techniques used in the texture algorithms, and to develop a system that integrates information from several shape-from-texture methods. This report describes the status and progress of these efforts. The authors describe first the programming environment implementation, and how to use it. They summarize the results for multi-resolution based depth-interpolation algorithms on parallel architectures. Then, they present algorithms and test results for the texture algorithms. Finally, the results of the efforts of integrating information from various shape-from-texture algorithms are presented.
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-15
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
Construction project selection with the use of fuzzy preference relation
NASA Astrophysics Data System (ADS)
Ibadov, Nabi
2016-06-01
In this article, the author describes the problem of construction project variant selection during the pre-investment phase. As a solution, an algorithm based on the fuzzy preference relation is presented. The article provides an example of the algorithm used for selection of the best variant for a construction project. The choice is made based on criteria such as: net present value (NPV), level of technological difficulty, financing possibilities, and level of organizational difficulty.
Preparing projected entangled pair states on a quantum computer.
Schwarz, Martin; Temme, Kristan; Verstraete, Frank
2012-03-16
We present a quantum algorithm to prepare injective projected entangled pair states (PEPS) on a quantum computer, a class of open tensor networks representing quantum states. The run time of our algorithm scales polynomially with the inverse of the minimum condition number of the PEPS projectors and, essentially, with the inverse of the spectral gap of the PEPS's parent Hamiltonian.
NASA Astrophysics Data System (ADS)
Schlifske, Daniel; Medeiros, Henry
2016-03-01
Modern CT image reconstruction algorithms rely on projection and back-projection operations to refine an image estimate in iterative image reconstruction. A widely-used state-of-the-art technique is distance-driven projection and back-projection. While the distance-driven technique yields superior image quality in iterative algorithms, it is a computationally demanding process. This has a detrimental effect on the relevance of the algorithms in clinical settings. A few methods have been proposed for enhancing the distance-driven technique in order to take advantage of modern computer hardware. This paper explores a two-dimensional extension of the branchless method proposed by Samit Basu and Bruno De Man. The extension of the branchless method is named "pre-integration" because it achieves a significant performance boost by integrating the data before the projection and back-projection operations. It was written with Nvidia's CUDA platform and carefully designed for massively parallel GPUs. The performance and the image quality of the pre-integration method were analyzed. Both projection and back-projection are significantly faster with pre-integration. The image quality was analyzed using cone beam image reconstruction algorithms within Jeffrey Fessler's Image Reconstruction Toolbox. Images produced from regularized, iterative image reconstruction algorithms using the pre-integration method show no significant impact to image quality.
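The essence of pre-integration is that prefix sums computed once allow any box integral to be read off in constant time; in two dimensions this is the familiar integral image. A sketch of that underlying idea (not the distance-driven kernel itself):

```python
import numpy as np

def integral_image(img):
    """One pass of 2-D prefix sums: the 'integrate the data first' step."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Inclusive rectangle sum over rows r0..r1, cols c0..c1,
    recovered from the integral image with at most four lookups."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```

After the single cumulative-sum pass, every rectangle query is branch-light and O(1), which is what makes the approach attractive on massively parallel GPUs.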
Valence-bond quantum Monte Carlo algorithms defined on trees.
Deschner, Andreas; Sørensen, Erik S
2014-09-01
We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases. The bouncing worm algorithm, for which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, where a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update.
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then refines the initial codebook of vectors supplied by the PCA step. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
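The grouping-then-refinement idea can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, the use of only the first principal component, and the equal-size grouping are assumptions:

```python
import numpy as np

def pca_lbg_median(train, k, iters=10):
    """Illustrative PCA-LBG-Median sketch: group training vectors by their
    first-principal-component projection, seed the codebook with each
    group's median vector, then refine with LBG (generalized Lloyd)."""
    X = np.asarray(train, dtype=float)
    # First principal component of the centered data.
    centered = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    # Split the vectors into k groups by sorted projected value.
    groups = np.array_split(np.argsort(proj), k)
    codebook = np.array([np.median(X[g], axis=0) for g in groups])
    for _ in range(iters):
        # Assign each vector to its nearest codeword ...
        d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # ... then move each codeword to the centroid of its cell.
        for j in range(k):
            cell = X[nearest == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook
```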
NASA Technical Reports Server (NTRS)
Birchenough, Shawn; Kato, Denise; Kennedy, Fred; Akin, David
1990-01-01
Project Artemis is designed to meet the challenge of President Bush to return to the Moon, this time to stay. The first goal of the project is to establish a permanent manned base on the Moon for the purposes of scientific research and technological development. The knowledge gained from the establishment and operation of the lunar base will then be used to achieve the second goal of Project Artemis, the establishment of a manned base on the Martian surface. Throughout both phases of the program, crew safety will be the number one priority. Four main issues have governed the entire program: crew safety and mission success, commonality, growth potential, and costing and scheduling. These issues are discussed in more detail.
NASA Technical Reports Server (NTRS)
1965-01-01
Langley personnel at Cape Canaveral during preliminary checkout of Project FIRE velocity package before launch. Project FIRE (Flight Investigation Reentry Environment) studied the effects of reentry heating on spacecraft materials. It involved both wind tunnel and flight tests, although the majority were tests with Atlas rockets and recoverable reentry packages. These flight tests took place at Cape Canaveral in Florida. Wind tunnel tests were made in several Langley tunnels including the Unitary Plan Wind Tunnel, the 8-foot High-Temperature Tunnel and the 9- x 6-Foot Thermal Structures Tunnel.
NASA Astrophysics Data System (ADS)
Arnal, E. M.; Abraham, Z.; Giménez de Castro, G.; de Gouveia dal Pino, E. M.; Larrarte, J. J.; Lepine, J.; Morras, R.; Viramonte, J.
2014-10-01
The project LLAMA, an acronym for Long Latin American Millimetre Array, is very briefly described in this paper. The project is a joint scientific and technological undertaking of Argentina and Brazil on the basis of an equal investment share, whose main goal is to install and operate an observing facility capable of exploring the Universe at millimetre and sub-millimetre wavelengths. This facility will be erected in the Argentine province of Salta, at a site located 4830 m above sea level.
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests, without temporal flexibility, that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
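As a rough illustration of strict-priority selection over an oversubscribed resource, the following sketch admits goals in priority order and skips any goal that no longer fits. It is a simplification: the flight software's incremental, just-in-time algorithm is more involved, and the single scalar resource and tuple layout here are assumptions:

```python
def select_goals(goals, capacity):
    """Strict-priority selection over one shared resource: admit goals
    in descending priority order, skipping any goal whose resource use
    exceeds the remaining capacity.  A lower-priority goal can never
    displace a higher-priority one."""
    selected, remaining = [], capacity
    for priority, use, name in sorted(goals, key=lambda g: -g[0]):
        if use <= remaining:
            selected.append(name)
            remaining -= use
    return selected
```

A new or updated goal simply changes the input set, and re-running the selection yields the new schedule.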
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
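The onshore/offshore binarization step might look like the following sketch. The function name and the single coast-normal parameter are hypothetical, since CEM's exact preprocessing (and the Florida coastline geometry) is not given here:

```python
import numpy as np

def binarize_wind(direction_deg, onshore_deg=90.0):
    """Mark a gridded wind-direction field as onshore (1) when the wind
    has a positive component along the assumed shoreward direction
    onshore_deg, and offshore (0) otherwise."""
    diff = np.deg2rad(np.asarray(direction_deg, dtype=float) - onshore_deg)
    return (np.cos(diff) > 0).astype(int)
```

Applying this to the forecast and observed direction fields would yield the D(i,j;n) and d(i,j;n) inputs described above.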
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-squares loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates be restricted to a bounded domain or that the loss function be strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs via their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.
An improved genetic algorithm with dynamic topology
NASA Astrophysics Data System (ADS)
Cai, Kai-Quan; Tang, Yan-Wu; Zhang, Xue-Jun; Guan, Xiang-Min
2016-12-01
The genetic algorithm (GA) is a nature-inspired evolutionary algorithm that finds optima in the search space via the interaction of individuals. Recently, researchers demonstrated that the interaction topology plays an important role in information exchange among individuals of an evolutionary algorithm. In this paper, we investigate the effect of different network topologies adopted to represent the interaction structures. We find that a GA with a high-density topology is more likely to end up with an unsatisfactory solution, whereas a low-density topology can impede convergence. Consequently, we propose an improved GA with dynamic topology, named DT-GA, in which the topology structure varies dynamically along with the fitness evolution. Several experiments executed with 15 well-known test functions illustrate that DT-GA outperforms the other GAs tested by balancing convergence speed and solution quality. Our work may have implications for the combination of complex networks and computational intelligence. Project supported by the National Natural Science Foundation for Young Scientists of China (Grant No. 61401011), the National Key Technologies R & D Program of China (Grant No. 2015BAG15B01), and the National Natural Science Foundation of China (Grant No. U1533119).
Spent Nuclear Fuel project, project management plan
Fuquay, B.J.
1995-10-25
The Hanford Spent Nuclear Fuel Project has been established to safely store spent nuclear fuel at the Hanford Site. This Project Management Plan sets forth the management basis for the Spent Nuclear Fuel Project. The plan applies to all fabrication and construction projects, operation of the Spent Nuclear Fuel Project facilities, and necessary engineering and management functions within the scope of the project.
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
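The coordinate projection used to maintain quaternion unit length amounts to renormalizing the orientation field after each implicit step. A minimal sketch (the function name is assumed; the solver applies this inside the BDF time integrator rather than as a standalone call):

```python
import numpy as np

def project_quaternion(q):
    """Renormalize a field of quaternions (last axis of length 4) so
    that each local orientation has unit length, restoring the solution
    invariant after a BDF step."""
    q = np.asarray(q, dtype=float)
    return q / np.linalg.norm(q, axis=-1, keepdims=True)
```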
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
ERIC Educational Resources Information Center
Journal of College Science Teaching, 1972
1972-01-01
The Environmental Protection Agency has started a project to actually picture the environmental movement in the United States. This is an attempt to make the public aware of the air pollution in their area or state and to acquaint them with the effects of air cleaning efforts. (PS)
ERIC Educational Resources Information Center
Owen, Ben
1975-01-01
Describes "Project School Flight" which is an idea originated by the Experimental Aircraft Association to provide the opportunity for young people to construct a light aircraft in the schools as part of a normal class. Address included of Experimental Aircraft Association for interested persons. (BR)
ERIC Educational Resources Information Center
McBain, Susan L.; And Others
Project CLASS (Competency-Based Live-Ability Skills) uses a series of 60 modules to teach life survival skills to adults with low-level reading ability--especially Adult Basic Education/English as a Second Language students. Two versions of the modules have been developed: one for use with teacher-directed instruction and another for independent…
ERIC Educational Resources Information Center
School Science Review, 1979
1979-01-01
Listed are 32 biology A-level projects, categorized by organisms studied as follows: algae (1), bryophytes (1), angiosperms (14), fungi (1), flatworms (1), annelids (2), molluscs (1), crustaceans (2), insects (4), fish (2), mammals (1), humans (1); and one synecological study. (CS)
ERIC Educational Resources Information Center
Hambler, David J.; Dixon, Jean M.
1982-01-01
Describes collection of quantitative samples of microorganisms and accumulation of physical data from a pond over a year. Provides examples of how final-year degree students have used materials and data for ecological projects (involving mainly algae), including their results/conclusions. Also describes apparatus and reagents used in the student…
This final report summarizes the seven-foot Hydrosphere Project. During the course of this program, three Interim Reports were submitted. Interim...to the final assembly of the seven-foot Hydrosphere. This final report includes a brief outline of each of the above noted Interim Reports, as well as
ERIC Educational Resources Information Center
Charles County Board of Education, La Plata, MD. Office of Special Education.
The document outlines procedures for implementing Project CAST (Community and School Together), a community-based career education program for secondary special education students in Charles County, Maryland. Initial sections discuss the role of a learning coordinator, (including relevant travel reimbursement and mileage forms) and an overview of…
ERIC Educational Resources Information Center
Ewing Marion Kauffman Foundation, Kansas City, MO.
Project Choice was begun with the goal of increasing the number of inner-city students who graduate on time. Ewing M. Kauffman and his business and foundation associates designed and elected to test a model that used the promise of postsecondary education or training as the incentive to stay in school. This report details the evolution of Project…
ERIC Educational Resources Information Center
Hilden, Pauline
1976-01-01
A teacher describes a Thanksgiving project in which 40 educable mentally retarded students (6-13 years old) made and served their own dinner of stew, butter, bread, ice cream, and pie, and in the process learned about social studies, cooking, and proper meal behavior. (CL)
ERIC Educational Resources Information Center
King, Allen L.
1975-01-01
Describes an experimental project on boomerangs designed for an undergraduate course in classical mechanics. The students designed and made their own boomerangs, devised their own procedures, and carried out suitable measurements. Presents some of their data and a simple analysis for the two-bladed boomerang. (Author/MLH)
ERIC Educational Resources Information Center
Gwaley, Elizabeth; And Others
Project ENRICH was conceived in Beaver County, Pennsylvania, to: (1) identify preschool children with learning disabilities, and (2) develop a program geared to the remediation of the learning disabilities within a school year, while allowing the child to be enrolled in a regular class situation for the following school year. Through…
ERIC Educational Resources Information Center
Patterson, John
Project Succeed is a program for helping failure- and dropout-oriented pupils to improve their school achievement. Attendance and assignment completion are the key behaviors for enhancing achievement. Behavior modification and communications procedures are used to bring about the desired changes. Treatment procedures include current assessment…
ERIC Educational Resources Information Center
Helisek, Harriet; Pratt, Donald
1994-01-01
Presents a project in which students monitor their use of trash, input and analyze information via a database and computerized graphs, and "reconstruct" extinct or endangered animals from recyclable materials. The activity was done with second-grade students over a period of three to four weeks. (PR)
Driscoll, Mary C.
2012-07-12
The Project Narrative describes how the funds from the DOE grant were used to purchase equipment for the biology, chemistry, physics and mathematics departments. The Narrative also describes how the equipment is being used. There is also a list of the positive outcomes as a result of having the equipment that was purchased with the DOE grant.
ERIC Educational Resources Information Center
School Science Review, 1977
1977-01-01
Listed and described are student A-level biology projects in the following areas: Angiosperm studies (e.g., factors affecting growth of various plants), 7; Bacterial studies, 1; Insect studies, 2; Fish studies, 1; Mammal studies, 1; Human studies, 1; Synecology studies, 2; Environmental studies, 2; and Enzyme studies, 1. (CS)
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
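A minimal sketch of the wavelet-fusion idea, using a one-level 2-D Haar transform. The report's algorithm and its coefficient-combination rules may differ; averaging the approximation subbands and keeping the larger-magnitude detail coefficients is a common fusion choice assumed here:

```python
import numpy as np

def haar2d(a):
    """One-level 2-D Haar transform: returns four subbands
    (approximation plus three detail subbands) at half resolution."""
    a = np.asarray(a, dtype=float)
    lo, hi = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    return ((lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2,
            (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2)

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo, hi = np.empty((2 * h, w)), np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    a = np.empty((2 * h, 2 * w))
    a[:, 0::2], a[:, 1::2] = lo + hi, lo - hi
    return a

def wavelet_fuse(img_a, img_b):
    """Fuse two registered images: average the approximation subbands,
    keep the larger-magnitude detail coefficient from either image."""
    A, B = haar2d(img_a), haar2d(img_b)
    fused = [(A[0] + B[0]) / 2]
    fused += [np.where(np.abs(a) >= np.abs(b), a, b)
              for a, b in zip(A[1:], B[1:])]
    return ihaar2d(*fused)
```

In practice, multi-level decompositions and per-band rules tuned to the sensors (e.g., panchromatic vs. multispectral) replace this single-level scheme.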
An Adaptive Data Collection Algorithm Based on a Bayesian Compressed Sensing Framework
Liu, Zhi; Zhang, Mengmeng; Cui, Jian
2014-01-01
For wireless sensor networks (WSNs), energy efficiency is always a key consideration in system design. Compressed sensing is a new theory with promising prospects in WSNs. However, constructing a suitable sparse projection matrix remains a problem. In this paper, based on a Bayesian compressed sensing framework, a new adaptive algorithm that integrates routing and data collection is proposed. By introducing new target-node selection metrics, embedding the routing structure, and maximizing the differential entropy for each collection round, an adaptive projection vector is constructed. Simulations show that, compared to reference algorithms, the proposed algorithm can decrease computational complexity and improve energy efficiency. PMID:24818659
Image recombination transform algorithm for superresolution structured illumination microscopy
NASA Astrophysics Data System (ADS)
Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.
2016-09-01
Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. The generation of structured illumination patterns by interference of laser beams is broadly employed to obtain high modulation depth of the patterns, but the polarizations of the laser beams must be elaborately controlled to guarantee high contrast of the interference intensity, which requires a more complex configuration for polarization control. The emerging pattern-projection strategy is much more compact, but the modulation depth of the patterns is deteriorated by the optical transfer function of the optical system, especially at high spatial frequencies near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM suffers from many artifacts in the case of projection-based SIM, which possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on an image recombination transform, which provides a solution to this problem even at weak modulation depth. We demonstrated the effectiveness of this algorithm in multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our projection-based SIM system, which applies a computer-controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The system, combined with the proposed algorithm, allows for fluorescence imaging at excitation intensities below 1 W/cm2, which is beneficial for long-term, in vivo superresolved imaging of live cells and tissues.
[Algorithm for assessment of exposure to asbestos].
Martines, V; Fioravanti, M; Anselmi, A; Attili, F; Battaglia, D; Cerratti, D; Ciarrocca, M; D'Amelio, R; De Lorenzo, G; Ferrante, E; Gaudioso, F; Mascia, E; Rauccio, A; Siena, S; Palitti, T; Tucci, L; Vacca, D; Vigliano, R; Zelano, V; Tomei, F; Sancini, A
2010-01-01
There is no universally approved method in the scientific literature to identify subjects exposed to asbestos and divide them into classes according to intensity of exposure. The aim of our work is to develop an algorithm based on occupational anamnestic information provided by a large group of workers. The algorithm discriminates, in a probabilistic way, the risk of exposure by attributing a code to each worker (ELSA code: work-estimated exposure to asbestos). The ELSA code has been obtained through a synthesis of the information that the international scientific literature identifies as most predictive for the onset of asbestos-related abnormalities. Four dimensions are analyzed and described: 1) present and/or past occupation; 2) type of materials and equipment used in performing the working activity; 3) environment where these activities are carried out; 4) period of time when the activities are performed. Although the information is gathered subjectively, the decision procedure is objective and is based on a systematic evaluation of asbestos exposure. From the combination of the four identified dimensions, 108 ELSA codes are obtained, divided into three typological profiles of estimated risk of exposure. The application of the algorithm offers some advantages over other methods used for identifying individuals exposed to asbestos: 1) it can be computed for both present and past exposure to asbestos; 2) the classification of workers exposed to asbestos using the ELSA code is more detailed than the one obtained with a Job Exposure Matrix (JEM), because the ELSA code takes into account other indicators of risk besides those considered in the JEM. This algorithm was developed for a project sponsored by the Italian Armed Forces and is also adaptable to other work conditions in which it could be necessary to assess the risk of asbestos exposure.
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Coraor, Lee
2000-01-01
The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.
Fighting Censorship with Algorithms
NASA Astrophysics Data System (ADS)
Mahdian, Mohammad
In countries such as China or Iran, where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for public use, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people, k of whom are adversaries. We also discuss how trust networks can be used in this context.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
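A minimal ordinary-kriging sketch shows the symmetric (but not necessarily positive-definite) linear system that the faster methods accelerate; the exponential variogram and the function names here are assumptions for illustration, not the article's choices:

```python
import numpy as np

def ordinary_kriging(points, values, query,
                     variogram=lambda h: 1.0 - np.exp(-h)):
    """Estimate the field at `query` from scattered data by solving the
    ordinary-kriging equations, with a Lagrange multiplier enforcing
    that the weights sum to one.  The bordered matrix is symmetric but
    indefinite, which is why solvers such as SYMMLQ apply."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # Kriging matrix: variogram among data points, bordered by ones
    # (and a zero corner) for the unbiasedness constraint.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(h)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - np.asarray(query, dtype=float),
                                     axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ np.asarray(values, dtype=float))
```

The dense solve here costs O(n^3); tapering the covariance, fast multipole summation, and nearest-neighbor restriction are the levers the faster variants pull.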
One improved LSB steganography algorithm
NASA Astrophysics Data System (ADS)
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in digital images with the LSB algorithm is easily detected, with high accuracy, by X2 and RS steganalysis. We improved the LSB algorithm by changing the embedding locations and the embedding method, combined with sub-affine transformation and matrix coding, and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist X2 and RS steganalysis effectively.
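For reference, the plain LSB baseline that X2 and RS steganalysis defeat can be sketched as follows; the paper's improved embedding with sub-affine transformation and matrix coding is not reproduced here:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Overwrite the least-significant bit of the first len(bits)
    pixels (in raster order) with the message bits."""
    out = np.array(pixels, dtype=np.uint8)   # copies the input
    flat = out.ravel()                       # view into the copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits,
                                                              dtype=np.uint8)
    return out

def lsb_extract(pixels, n):
    """Recover the first n embedded bits."""
    return [int(b) for b in np.asarray(pixels, dtype=np.uint8).ravel()[:n] & 1]
```

Because this changes pixel values by at most 1, the image looks unchanged, yet the statistical regularities it introduces are exactly what X2 and RS tests detect.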
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no relations higher than order κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
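As a point of comparison for the reported 1.58 bits/base, a fixed 2-bits-per-base packing (the naive baseline, not the DNABIT variable-length repeat coding itself) looks like:

```python
def pack_dna(seq):
    """Pack a DNA string at a fixed 2 bits per base (A, C, G, T only).
    Returns the packed bytes and the original sequence length."""
    code = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    packed = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | code[base]  # two bits per base
        packed.append(byte)
    return bytes(packed), len(seq)

def unpack_dna(packed, n):
    """Invert pack_dna given the original sequence length n."""
    bases, out = 'ACGT', []
    for i, byte in enumerate(packed):
        k = min(4, n - 4 * i)  # bases stored in this (possibly last) byte
        for shift in range(2 * (k - 1), -1, -2):
            out.append(bases[(byte >> shift) & 3])
    return ''.join(out)
```

Going below 2 bits/base, as DNABIT Compress does, requires exploiting repeats and reverse repeats with shorter codes for recurring fragments.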
Quantum algorithm for data fitting.
Wiebe, Nathan; Braun, Daniel; Lloyd, Seth
2012-08-03
We provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon an algorithm for solving systems of linear equations efficiently [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)]. In many cases, our algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data are pure quantum states, the algorithm can be used to provide an efficient parametric estimation of the quantum state and therefore can be applied as an alternative to full quantum-state tomography given a fault tolerant quantum computer.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
An efficient algorithm for prioritizing NEA physical observations
NASA Astrophysics Data System (ADS)
Cortese, M.; Perozzi, E.; Micheli, M.; Borgia, B.; Dotto, E.; Mazzotta Epifani, E.; Ieva, S.; Barucci, M. A.; Perna, D.
2017-03-01
The present near-Earth asteroid (NEA) discovery rate has surpassed 1500 objects per year, thus calling for extensive observation campaigns devoted to physical characterization in order to define successful mitigation strategies in case of possible impactors. A tool is presented which, through a prioritization algorithm, aims to optimize the planning and the execution of NEA physical observations. Two ranking criteria are introduced, Importance and Urgency, accounting for the need to satisfy the two basic observational modes for physical characterization, that is, rapid response and large observing programs, aimed at selecting targets for exploration and mitigation space missions. The resulting tool generates a daily table of observable targets, and it can also be run as a stand-alone tool to provide future observing opportunities at a specific date of interest. It has been developed and implemented within the framework of the NEOShield-2 EU/HORIZON 2020 Project; the output of the prioritization algorithm is publicly available, in tabular format, on the NEOShield-2 NEO Properties Portal.
Cognitive Education Project. Summary Project.
ERIC Educational Resources Information Center
Mulcahy, Robert; And Others
The Cognitive Education Project conducted a 3-year longitudinal evaluation of two cognitive education programs that were aimed at teaching thinking skills. The critical difference between the two experimental programs was that one, Feuerstein's Instrumental Enrichment (IE) method, was taught out of curricular content, while the other, the…
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
Infrared algorithm development for ocean observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1994-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling and evaluation of atmospheric path radiance efforts on SST estimation, exploration of involvement in ongoing field studies, evaluation of new computer networking strategies, and objective analysis approaches.
NASA Technical Reports Server (NTRS)
Johnson, Steve
2003-01-01
Project Prometheus will enable a new paradigm in the scientific exploration of the Solar System. The proposed JIMO mission will start a new generation of missions characterized by more maneuverability, flexibility, power, and lifetime. A Project Prometheus organization has been established at NASA Headquarters: 1. Organization established to carry out development of JIMO, nuclear power (radioisotope), and nuclear propulsion research. 2. Completed broad technology and national capacity assessments to inform decision making on planning and technology development. 3. Awarded five NRAs for nuclear propulsion research. 4. Radioisotope power systems in development, and Plutonium-238 being purchased from Russia. 5. Formulated a science-driven near-term and long-term plan for the safe utilization of nuclear-propulsion-based missions. 6. Completed preliminary studies (Pre-Phase A) of JIMO and other missions. 7. Initiated JIMO Phase A studies by contractors and NASA.
NASA Technical Reports Server (NTRS)
1990-01-01
Lunar base projects, including a reconfigurable lunar cargo launcher, a thermal and micrometeorite protection system, a versatile lifting machine with robotic capabilities, a cargo transport system, the design of a road construction system for a lunar base, and the design of a device for removing lunar dust from material surfaces, are discussed. The emphasis of the Gulf of Mexico project was on the development of a computer simulation model for predicting vessel station keeping requirements. An existing code, used in predicting station keeping requirements for oil drilling platforms operating in North Shore (Alaska) waters, was used as a basis for the computer simulation. Modifications were made to the existing code. The input into the model consists of satellite altimeter readings and water velocity readings from buoys stationed in the Gulf of Mexico. The satellite data consists of altimeter readings (wave height) taken during the spring of 1989. The simulation model predicts water velocity and direction, and wind velocity.
2015-04-02
The Water Power Program helps industry harness this renewable, emissions-free resource to generate environmentally sustainable and cost-effective electricity. Through support for public, private, and nonprofit efforts, the Water Power Program promotes the development, demonstration, and deployment of advanced hydropower devices and pumped storage hydropower applications. These technologies help capture energy stored by diversionary structures, increase the efficiency of hydroelectric generation, and use excess grid energy to replenish storage reserves for use during periods of peak electricity demand. In addition, the Water Power Program works to assess the potential extractable energy from domestic water resources to assist industry and government in planning for our nation’s energy future. From FY 2008 to FY 2014, DOE’s Water Power Program announced awards totaling approximately $62.5 million to 33 projects focused on hydropower. Table 1 provides a brief description of these projects.
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; McClain, Charles R.; Busalacchi, Antonio J. (Technical Monitor)
2001-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project.
Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms
Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M
2002-02-01
In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. Goals were to improve the accuracy of radiographic simulation, and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we have successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring in the reconstructions. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.
A novel algorithm of maximin Latin hypercube design using successive local enumeration
NASA Astrophysics Data System (ADS)
Zhu, Huaguang; Liu, Li; Long, Teng; Peng, Lei
2012-05-01
The design of computer experiments (DoCE) is a key technique in the field of metamodel-based design optimization. Space-filling and projective properties are desired features in DoCE. In this article, a novel algorithm of maximin Latin hypercube design (LHD) using successive local enumeration (SLE) is proposed for generating arbitrary m points in n-dimensional space. Testing results compared with the lhsdesign function, a binary encoded genetic algorithm (BinGA), a permutation encoded genetic algorithm (PermGA), and the translational propagation algorithm (TPLHD) indicate that SLE is effective in generating sampling points with good space-filling and projective properties. The accuracies of metamodels built with the sampling points produced by the lhsdesign function and SLE are compared to illustrate the preferable performance of SLE. The comparative study on efficiency with BinGA, PermGA, and TPLHD shows that, as a novel LHD sampling technique, SLE has good space-filling properties and acceptable efficiency.
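A sketch of the two objects the article works with, a Latin hypercube design and the maximin distance criterion it optimizes. The SLE search itself is not reproduced here; the code below merely scores random permutation LHDs, purely for illustration.

```python
# Random m-point Latin hypercube in n dimensions (integer grid levels
# 0..m-1: exactly one point per level in every coordinate), plus the
# maximin criterion. A maximin LHD maximizes the smallest pairwise distance.
import itertools
import math
import random

def lhd(m, n, rng=None):
    rng = rng or random.Random(0)
    cols = []
    for _ in range(n):
        perm = list(range(m))
        rng.shuffle(perm)
        cols.append(perm)
    return [tuple(col[i] for col in cols) for i in range(m)]

def maximin(points):
    return min(math.dist(p, q)
               for p, q in itertools.combinations(points, 2))

# Crude stand-in for a smarter search: keep the best of many random LHDs.
best = max((lhd(8, 2, random.Random(s)) for s in range(200)), key=maximin)
print(maximin(best))
```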
Experimental Analysis of Algorithms.
1987-12-01
NASA Technical Reports Server (NTRS)
1991-01-01
During the winter term of 1991, two design courses at the University of Michigan worked on a joint project, MEDSAT. The two design teams came from Atmospheric, Oceanic, and Space Science 605 (AOSS 605) and Aerospace Engineering 483 (Aero 483) Aerospace System Design. In collaboration, they worked to produce MEDSAT, a satellite and scientific payload whose purpose was to monitor environmental conditions over Chiapas, Mexico. Information gained from the sensing, combined with regional data, would be used to determine the potential for malaria occurrence in that area. The responsibilities of AOSS 605 consisted of determining the remote sensing techniques, the data processing, and the method to translate the information into a usable output. Aero 483 developed the satellite configuration and the subsystems required for the satellite to accomplish its task. The MEDSAT project is an outgrowth of work already being accomplished by NASA's Biospheric and Disease Monitoring Program and Ames Research Center. NASA's work has been to develop remote sensing techniques to determine the abundance of disease carriers, and now this project will place the techniques aboard a satellite. MEDSAT will be unique in its use of both a Synthetic Aperture Radar and a visual/IR sensor to obtain comprehensive monitoring of the site. In order to create a highly feasible system, low cost was a high priority. To obtain this goal, a light satellite configuration launched by the Pegasus launch vehicle was used.
Using DFX for Algorithm Evaluation
Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.
1998-10-20
Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a
Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed
NASA Astrophysics Data System (ADS)
Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.
1995-07-01
Target acquisition in a high clutter environment in all-weather at any time of day represents a much needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division WL/MNG, has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost efficient system with these capabilities. Critical elements of any such seekers are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model, data analysis workstations, such as the TABILS Analysis and Management System (TAMS), and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical interface driven simulation by which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1 is presented in this paper using the MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.
Quantitative tomography simulations and reconstruction algorithms
Martz, H E; Aufderheide, M B; Goodman, D; Schach von Wittenau, A; Logan, C; Hall, J; Jackson, J; Slone, D
2000-11-01
X-ray, neutron and proton transmission radiography and computed tomography (CT) are important diagnostic tools that are at the heart of LLNL's effort to meet the goals of the DOE's Advanced Radiography Campaign. This campaign seeks to improve radiographic simulation and analysis so that radiography can be a useful quantitative diagnostic tool for stockpile stewardship. Current radiographic accuracy does not allow satisfactory separation of experimental effects from the true features of an object's tomographically reconstructed image. This can lead to difficult and sometimes incorrect interpretation of the results. By improving our ability to simulate the whole radiographic and CT system, it will be possible to examine the contribution of system components to various experimental effects, with the goal of removing or reducing them. In this project, we are merging this simulation capability with a maximum-likelihood (constrained-conjugate-gradient, CCG) reconstruction technique, yielding a physics-based, forward-model image-reconstruction code. In addition, we seek to improve the accuracy of computed tomography from transmission radiographs by studying what physics is needed in the forward model. During FY 2000, an improved version of the LLNL ray-tracing code called HADES was coupled with a recently developed LLNL CT algorithm known as CCG. The problem of image reconstruction is expressed as a large matrix equation relating a model for the object being reconstructed to its projections (radiographs). Using a constrained-conjugate-gradient search algorithm, a maximum likelihood solution is sought. This search continues until the difference between the input measured radiographs or projections and the simulated or calculated projections is satisfactorily small. We developed a 2D HADES-CCG CT code that uses full ray-tracing simulations from HADES as the projector. Often an object has axial symmetry and it is desirable to reconstruct into a 2D r-z mesh with a limited
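The matrix-equation view described above (object model, projections, iterative search until simulated projections match measured ones) can be shown with a toy projected-gradient loop. This is a stand-in illustration, not the HADES-CCG code, and the matrix below is invented for the example.

```python
# Toy reconstruction loop: solve A x ~= b by gradient steps
# x <- x - step * A^T (A x - b), clipping x to be nonnegative as a
# simple stand-in for the constraints in a constrained CG search.
def reconstruct(A, b, steps=500, step=0.05):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual: simulated projections minus measured projections
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, xj - step * gj) for xj, gj in zip(x, g)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # invented 3x2 "projection" matrix
b = [1.0, 2.0, 3.0]                        # consistent "measurements"
print(reconstruct(A, b))                   # converges toward [1.0, 2.0]
```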
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
A computational study of routing algorithms for realistic transportation networks
Jacob, R.; Marathe, M.V.; Nagel, K.
1998-12-01
The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and associated data structures affected the computational performance of the software developed especially for realistic transportation networks. For this purpose the authors have used the Dallas Fort-Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, which include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include (i) time-dependent networks, (ii) multi-modal networks, and (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
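For reference, the baseline one-to-one computation being compared is standard Dijkstra; the paper's modified variant and heuristics are not reproduced here. A minimal binary-heap sketch with an invented toy network:

```python
# Plain Dijkstra with a binary heap; graph = {node: [(neighbor, weight), ...]}.
import heapq

def dijkstra(graph, source, target):
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")                   # target unreachable

roads = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0)], "C": []}
print(dijkstra(roads, "A", "C"))  # 3.0, via A -> B -> C
```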
A High Precision Terahertz Wave Image Reconstruction Algorithm
Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang
2016-01-01
With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, combining features of both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269
Optimization algorithms for large-scale multireservoir hydropower systems
Hiew, K.L.
1987-01-01
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT), and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria, which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness, and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
NASA Astrophysics Data System (ADS)
Abir, Muhammad Imran Khan
The core components (e.g. fuel assemblies, spacer grids, control rods) of nuclear reactors encounter a harsh environment due to high temperature, physical stress, and a tremendous level of radiation. The integrity of these elements is crucial for safe operation of the nuclear power plants. Post Irradiation Examination (PIE) can reveal information about the integrity of the elements during normal operations and off-normal events. Computed tomography (CT) is a tool for evaluating the structural integrity of elements non-destructively. CT requires many projections to be acquired from different view angles, after which a mathematical algorithm is adopted for reconstruction. Obtaining many projections is laborious and expensive in the nuclear industry. Reconstructions from a small number of projections are explored to achieve faster and more cost-efficient PIE. Classical reconstruction algorithms (e.g. filtered back projection) cannot offer stable reconstructions from few projections and create severe streaking artifacts. In this thesis, conventional algorithms are reviewed, and new algorithms are developed for reconstructions of the nuclear fuel assemblies using few projections. CT reconstruction from few projections falls into two categories: sparse-view CT and limited-angle CT, or tomosynthesis. Iterative reconstruction algorithms are developed for both cases in the field of compressed sensing (CS). The performance of the algorithms is assessed using simulated projections and validated through real projections. The thesis also describes the systematic strategy towards establishing the conditions of reconstructions and finds the optimal imaging parameters for reconstructions of the fuel assemblies from few projections.
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of the Toffoli and σz^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
Search for New Quantum Algorithms
2006-05-01
Contents include: topological quantum computing for beginners (slide presentation; lecture notes for Chapter 9, Physics 219 - Quantum Computation); a QHS algorithm for Feynman integrals (II.A.8); and non-abelian QHS algorithms (II.A.9). The idea is that not all environmentally entangling transformations are equally likely. In particular, for spatially separated physical quantum
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
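The under/overflow trick the abstract describes, temporarily factoring terms so intermediate values stay representable, can be sketched in log space. This is a minimal Python illustration of the idea, not the CUMPOIS FORTRAN code itself:

```python
import math

def cumulative_poisson(k, lam):
    """P(X <= k) for X ~ Poisson(lam), computed in log space.

    Summing e^-lam * lam^i / i! directly underflows for large lam
    (e.g. lam = 1000 gives e^-1000 ~ 1e-435, below double range).
    Working with log terms and factoring out the largest one keeps
    every intermediate quantity within range.
    """
    if k < 0:
        return 0.0
    # log of each term: -lam + i*log(lam) - log(i!)
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(k + 1)]
    m = max(log_terms)                      # temporary scaling factor
    s = sum(math.exp(t - m) for t in log_terms)
    return math.exp(m + math.log(s))
```

For lam = 1000 the naive sum returns 0.0 for every k, while the scaled version stays accurate.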
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Projection-Based Volume Alignment
Yu, Lingbo; Snapp, Robert R.; Ruiz, Teresa; Radermacher, Michael
2013-01-01
When heterogeneous samples of macromolecular assemblies are being examined by 3D electron microscopy (3DEM), often multiple reconstructions are obtained. For example, subtomograms of individual particles can be acquired from tomography, or volumes of multiple 2D classes can be obtained by random conical tilt reconstruction. Of these, similar volumes can be averaged to achieve higher resolution. Volume alignment is an essential step before 3D classification and averaging. Here we present a projection-based volume alignment (PBVA) algorithm. We select a set of projections to represent the reference volume and align them to a second volume. Projection alignment is achieved by maximizing the cross-correlation function with respect to rotation and translation parameters. If data are missing, the cross-correlation functions are normalized accordingly. Accurate alignments are obtained by averaging and quadratic interpolation of the cross-correlation maximum. Comparisons of the computation time between PBVA and traditional 3D cross-correlation methods demonstrate that PBVA outperforms the traditional methods. Performance tests were carried out with different signal-to-noise ratios using modeled noise and with different percentages of missing data using a cryo-EM dataset. All tests show that the algorithm is robust and highly accurate. PBVA was applied to align the reconstructions of a subcomplex of the NADH: ubiquinone oxidoreductase (Complex I) from the yeast Yarrowia lipolytica, followed by classification and averaging. PMID:23410725
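The quadratic-interpolation refinement of the cross-correlation maximum mentioned above can be sketched in one dimension. The paper aligns 2-D projections; this shows only the interpolation step, under that simplifying assumption:

```python
import numpy as np

def subsample_peak(cc):
    """Locate the cross-correlation maximum with sub-sample accuracy
    by fitting a parabola through the discrete peak and its two
    neighbours, in the spirit of PBVA's refinement of the correlation
    maximum (1-D sketch only)."""
    i = int(np.argmax(cc))
    if i == 0 or i == len(cc) - 1:
        return float(i)                     # peak at the border: no fit
    y0, y1, y2 = cc[i - 1], cc[i], cc[i + 1]
    # vertex of the parabola through (-1, y0), (0, y1), (1, y2)
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return i + offset
```

A correlation curve sampled on an integer grid with a true peak at 3.4 is recovered exactly, since the fit is exact for quadratics.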
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
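The core observation, that time-to-collision follows from the relative expansion of a ground feature in the image without knowing absolute distance, can be sketched as follows. The `time_to_collision` helper and its pinhole-camera setup are illustrative assumptions, not the paper's implementation:

```python
def time_to_collision(size_prev, size_curr, dt):
    """Estimate time-to-collision (tau) from the apparent growth of a
    ground feature between two frames.

    Under a pinhole model, a feature of true size S at height h
    subtends s = f*S/h pixels, so tau = h / (dh/dt) = s / (ds/dt):
    only relative image measurements are needed, matching the
    monocular-camera approach described above."""
    ds_dt = (size_curr - size_prev) / dt
    if ds_dt <= 0:
        return float("inf")                 # feature not growing: not approaching
    return size_curr / ds_dt
```

Descending at constant velocity from 100 m with a feature subtending 1000/h pixels gives an estimated tau near the true h / (dh/dt).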
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
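The sampling-based stopping rule idea, bounds estimated from i.i.d. observations with an asymptotic confidence interval on the optimality gap, can be sketched as follows. The `sample_gap` callable is a hypothetical stand-in for one sampled bound evaluation of the decomposition algorithm, not Morton's code:

```python
import statistics

def estimate_gap_ci(sample_gap, n, z=1.96):
    """Draw n i.i.d. observations of the optimality gap (upper-bound
    estimate minus lower-bound estimate) and form an asymptotic
    normal-theory confidence interval for its mean. A sampling-based
    decomposition algorithm can terminate once the CI's upper limit
    falls below a prespecified tolerance, giving a probabilistic
    guarantee on solution quality."""
    xs = [sample_gap() for _ in range(n)]
    mean = statistics.fmean(xs)
    half = z * statistics.stdev(xs) / n ** 0.5
    return mean - half, mean + half
```

The interval shrinks at the usual 1/sqrt(n) rate, which is why sample-size selection rules are part of the theory the abstract describes.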
NASA Technical Reports Server (NTRS)
1990-01-01
Project Exodus is an in-depth study to identify and address the basic problems of a manned mission to Mars. The most important problems concern propulsion, life support, structure, trajectory, and finance. Exodus will employ a passenger ship, cargo ship, and landing craft for the journey to Mars. These three major components of the mission design are discussed separately. Within each component the design characteristics of structures, trajectory, and propulsion are addressed. The design characteristics of life support are mentioned only in those sections requiring it.
Algorithms + Observations = VStar
NASA Astrophysics Data System (ADS)
Benn, D.
2012-05-01
VStar is a multi-platform, free, open source application for visualizing and analyzing time-series data. It is primarily intended for use with variable star observations, permitting light curves and phase plots to be created, viewed in tabular form, and filtered. Period search and model creation are supported. Wavelet-based time-frequency analysis permits change in period over time to be investigated. Data can be loaded from the AAVSO International Database or files of various formats. VStar's feature set can be expanded via plug-ins, for example, to read Kepler mission data. This article explores VStar's beginnings from a conversation with Arne Henden in 2008 to its development since 2009 in the context of the AAVSO's Citizen Sky Project. Science examples are provided and anticipated future directions are outlined.
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) A modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind-farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions and, (iii) A hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe their proper behaviors to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a
Sparse matrix transform for fast projection to reduced dimension
Theiler, James P; Cao, Guangzhi; Bouman, Charles A
2010-01-01
We investigate three algorithms that use the sparse matrix transform (SMT) to produce variance-maximizing linear projections to a lower-dimensional space. The SMT expresses the projection as a sequence of Givens rotations and this enables computationally efficient implementation of the projection operator. The baseline algorithm uses the SMT to directly approximate the optimal solution that is given by principal components analysis (PCA). A variant of the baseline begins with a standard SMT solution, but prunes the sequence of Givens rotations to only include those that contribute to the variance maximization. Finally, a simpler and faster third algorithm is introduced; this also estimates the projection operator with a sequence of Givens rotations, but in this case, the rotations are chosen to optimize a criterion that more directly expresses the dimension reduction criterion.
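The efficiency claim above, that a projection expressed as a sequence of Givens rotations is cheap to apply, can be illustrated directly. This is a generic sketch; the SMT's variance-maximizing rotation-selection step (the hard part) is not shown:

```python
import numpy as np

def apply_givens_sequence(x, rotations):
    """Apply a sequence of Givens rotations (i, j, theta) to a vector.

    Each rotation touches only two coordinates, so k rotations cost
    O(k) operations instead of the O(d*p) of a dense projection
    matrix; this sparsity is the computational advantage the SMT
    exploits. Keeping the first p coordinates of the result then
    realizes the reduced-dimension projection."""
    y = np.asarray(x, dtype=float).copy()
    for i, j, theta in rotations:
        c, s = np.cos(theta), np.sin(theta)
        yi, yj = y[i], y[j]
        # plane rotation in coordinates (i, j)
        y[i], y[j] = c * yi - s * yj, s * yi + c * yj
    return y
```

Because each step is orthogonal, the full sequence preserves the vector norm, which is easy to check numerically.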
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; McClain, Charles R.
2002-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project. The SIMBIOS Science Team Principal Investigators' (PIs) original contributions to this report are in chapters four and above. The purpose of these contributions is to describe the current research status of the SIMBIOS-NRA-96 funded research. The contributions are published as submitted, with the exception of minor edits to correct obvious grammatical or clerical errors.
NASA Technical Reports Server (NTRS)
Bryant, Rodney (Compiler); Dillon, Jennifer (Compiler); Grewe, George (Compiler); Mcmorrow, Jim (Compiler); Melton, Craig (Compiler); Rainey, Gerald (Compiler); Rinko, John (Compiler); Singh, David (Compiler); Yen, Tzu-Liang (Compiler)
1990-01-01
A design for a manned Mars mission, PROJECT EXODUS, is presented. PROJECT EXODUS incorporates the design of a hypersonic waverider, cargo ship and NIMF (nuclear rocket using indigenous Martian fuel) shuttle lander to safely carry out a three to five month mission on the surface of Mars. The cargo ship transports return fuel, return engine, surface life support, NIMF shuttle, and the Mars base to low Mars orbit (LMO). The cargo ship is powered by a nuclear electric propulsion (NEP) system which allows the cargo ship to execute a spiral trajectory to Mars. The waverider transports ten astronauts to Mars and back. It is launched from the Space Station with propulsion provided by a chemical engine and a delta velocity of 9 km/sec. The waverider performs an aero-gravity assist maneuver through the atmosphere of Venus to obtain a deflection angle and increase in delta velocity. Once the waverider and cargo ship have docked, the astronauts will detach the landing cargo capsules and nuclear electric power plant and remotely pilot them to the surface. They will then descend to the surface aboard the NIMF shuttle. A dome base will be quickly constructed on the surface and the astronauts will conduct an exploratory mission for three to five months. They will return to Earth and dock with the Space Station using the waverider.
NASA Technical Reports Server (NTRS)
Dannenberg, K. K.; Henderson, A.; Lee, J.; Smith, G.; Stluka, E.
1984-01-01
PROJECT EXPLORER is a program that will fly student-developed experiments onboard the Space Shuttle in NASA's Get-Away Special (GAS) containers. The program is co-sponsored by the Alabama Space and Rocket Center, the Alabama-Mississippi Section of the American Institute of Aeronautics and Astronautics, and Alabama A&M University, and requires extensive support by the University of Alabama in Huntsville. A unique feature of this project is the demonstration of transmissions to ground stations on amateur radio frequencies in the English language. Experiments Nos. 1, 2, and 3 use the microgravity of space flight to study the solidification of lead-antimony and aluminum-copper alloys, the growth of potassium-tetracyanoplatinate hydrate crystals in an aqueous solution, and the germination of radish seeds. Flight results will be compared with Earth-based data. Experiment No. 4 features radio transmission and will also provide timing for the start of all other experiments. A microprocessor will obtain real-time data from all experiments as well as temperature and pressure measurements taken inside the canister. These data will be transmitted on previously announced amateur radio frequencies after they have been converted into the English language by a digitalker for general reception.
Loyal, Rebecca E.
2015-07-14
The objective of the Portunus Project is to create large, automated offshore ports that will increase the pace and scale of international trade. Additionally, these ports would increase the number of U.S. domestic trade vessels needed, as the imported goods would need to be transported from these offshore platforms to land-based ports such as Boston, Los Angeles, and Newark. Currently, domestic trade in the United States can only be conducted by vessels that abide by the Merchant Marine Act of 1920 – also referred to as the Jones Act. The Jones Act stipulates that vessels involved in domestic trade must be U.S. owned, U.S. built, and manned by a crew made up of U.S. citizens. The Portunus Project would increase the number of Jones Act vessels needed, which raises an interesting economic concern. Are Jones Act ships more expensive to operate than foreign vessels? Would it be more economically efficient to modify the Jones Act and allow vessels manned by foreign crews to engage in U.S. domestic trade? While opposition to altering the Jones Act is strong, it is important to consider the possibility that ship-owners who employ foreign crews will lobby for the chance to enter a growing domestic trade market. Their success would mean potential job loss for thousands of Americans currently employed in maritime trade.
A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm
NASA Astrophysics Data System (ADS)
Lei, Wang; Xiaotong, Du; Xiaoyin, Shao
2007-07-01
Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the soft-field effect in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable original reconstructed image in the region of the optimized solution aggregate. Then, the SIRT algorithm is used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm has been further modified to improve the calculation speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm, and the time consumption of the new algorithm is less than 0.1 second, which satisfies the online requirements.
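The two-stage scheme, a Tikhonov-regularized solve for a stable initial image followed by SIRT refinement, can be sketched on a toy linearized problem. The matrix shape, `mu`, iteration count, and relaxation factor here are illustrative assumptions, not the paper's values:

```python
import numpy as np

def tikhonov_then_sirt(S, c, mu=0.1, sirt_iters=50, relax=0.5):
    """Hybrid sketch: Tikhonov-regularized least squares gives a
    stable starting image for the linearized ECT problem S g = c,
    and SIRT iterations then refine it. S is a (nonnegative,
    normalized) sensitivity matrix, c the capacitance vector."""
    S = np.asarray(S, float); c = np.asarray(c, float)
    # Stage 1: Tikhonov solution, g = (S^T S + mu I)^-1 S^T c
    n = S.shape[1]
    g = np.linalg.solve(S.T @ S + mu * np.eye(n), S.T @ c)
    # Stage 2: SIRT, residual redistributed with row/column weights
    row = S.sum(axis=1); col = S.sum(axis=0)
    row[row == 0] = 1.0; col[col == 0] = 1.0
    for _ in range(sirt_iters):
        g = g + relax * (S.T @ ((c - S @ g) / row)) / col
    return g
```

On a small consistent system the SIRT stage drives the Tikhonov bias toward the exact solution, mirroring the abstract's claim that the second stage improves the first stage's image.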
A general-purpose contact detection algorithm for nonlinear structural analysis codes
Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.
1993-05-01
A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
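The two-phase strategy described above, a coarse global search to find slave nodes near a master surface followed by a detailed proximity check, can be sketched in 2-D with a uniform-grid bin sort. This is a deliberate simplification: the report's algorithm handles moving surfaces, many element types, and segments longer than a grid cell, none of which is modeled here:

```python
from collections import defaultdict

def find_contact_candidates(nodes, segments, radius):
    """Two-phase contact detection sketch:
    (1) global search: bin slave nodes into a uniform grid with cell
        size `radius`, so each master segment only examines nearby
        cells rather than every node;
    (2) detailed check: exact node-to-segment distance test.
    Assumes segment length is comparable to `radius` (we search the
    cells around the segment midpoint only)."""
    grid = defaultdict(list)
    cell = lambda p: (int(p[0] // radius), int(p[1] // radius))
    for k, p in enumerate(nodes):
        grid[cell(p)].append(k)

    def seg_dist(p, a, b):
        # distance from point p to the closed segment a-b
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        L2 = dx * dx + dy * dy
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
        cx, cy = ax + t * dx, ay + t * dy
        return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

    hits = []
    for si, (a, b) in enumerate(segments):
        ci, cj = cell(((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0))
        for di in (-1, 0, 1):                 # neighbouring cells only
            for dj in (-1, 0, 1):
                for k in grid.get((ci + di, cj + dj), []):
                    if seg_dist(nodes[k], a, b) <= radius:
                        hits.append((k, si))
    return hits
```

The grid turns the O(nodes x segments) brute-force search into a near-linear pass, which is the point of the location strategy.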
A novel sequential algorithm for clutter and direct signal cancellation in passive bistatic radars
NASA Astrophysics Data System (ADS)
Ansari, Farzad; Taban, Mohammad Reza; Gazor, Saeed
2016-12-01
Cancellation of clutter and multipath is an important problem in passive bistatic radars. Some important recent algorithms such as the ECA, the SCA and the ECA-B project the received signals onto a subspace orthogonal to both clutter and pre-detected target subspaces. In this paper, we generalize the SCA algorithm and propose a novel sequential algorithm for clutter and multipath cancellation in passive radars. This proposed sequential cancellation batch (SCB) algorithm has lower complexity and requires less memory than the mentioned methods. The SCB algorithm can be employed for static and non-static clutter cancellation. The proposed algorithm is evaluated by computer simulation with realistic FM radio signals. Simulation results reveal that the SCB provides an admissible performance with lower computational complexity.
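The common idea behind the ECA/SCA family, projecting the surveillance channel onto the subspace orthogonal to delayed copies of the reference signal (direct path plus static clutter), can be sketched with a plain least-squares fit. The SCB's sequential batching and its complexity savings are not shown; this is only the projection step, under those assumptions:

```python
import numpy as np

def cancel_clutter(surv, ref, max_delay):
    """Remove direct-signal and static-clutter components from the
    surveillance signal by least-squares projection onto the span of
    the reference signal delayed by 0..max_delay samples, then
    keeping the orthogonal residual (ECA-style sketch; np.roll's
    wrap-around stands in for proper zero-padding)."""
    X = np.column_stack([np.roll(ref, d) for d in range(max_delay + 1)])
    coef, *_ = np.linalg.lstsq(X, surv, rcond=None)
    return surv - X @ coef                  # residual, orthogonal to clutter
```

If the surveillance signal is exactly a combination of delayed reference copies, the residual vanishes; target echoes with Doppler shift or longer delays survive the projection.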
Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei
2015-10-05
In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS.
An improved POCS super-resolution infrared image reconstruction algorithm based on visual mechanism
NASA Astrophysics Data System (ADS)
Liu, Jinsong; Dai, Shaosheng; Guo, Zhongyuan; Zhang, Dezhou
2016-09-01
The traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction algorithm can only produce reconstructed images with poor contrast, low signal-to-noise ratio and blurred edges. In order to overcome these disadvantages, an improved POCS SR infrared image reconstruction algorithm based on visual mechanism is proposed, which introduces a data consistency constraint with variable correction thresholds to highlight the target edges and filter out background noise; further, the algorithm introduces a contrast constraint, based on the resolving ability of human eyes, into the traditional algorithm, adaptively enhancing the contrast of the reconstructed image. The experimental results show that the improved POCS algorithm can acquire high quality infrared images whose contrast, average gradient and peak signal to noise ratio are improved several-fold compared with the traditional algorithm.
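The thresholded data-consistency projection at the heart of the POCS iteration described above can be sketched in 1-D. A fixed `delta` stands in for the paper's variable correction thresholds, and the visual-mechanism contrast constraint is omitted; the setup (box downsampling, known integer shifts) is an illustrative assumption:

```python
import numpy as np

def pocs_sr_1d(lr_obs, shifts, scale, hr_len, iters=30, delta=0.01):
    """1-D POCS super-resolution sketch: cycle through the
    low-resolution samples and project the high-resolution estimate
    onto each data-consistency set. Each LR sample is modeled as the
    mean of `scale` consecutive HR samples starting at its shift;
    residuals within `delta` are left uncorrected, which suppresses
    noise-driven updates."""
    hr = np.zeros(hr_len)
    for _ in range(iters):
        for shift, obs in zip(shifts, lr_obs):
            for i, y in enumerate(obs):
                lo = shift + i * scale
                block = hr[lo:lo + scale]
                r = y - block.mean()
                if abs(r) > delta:              # data-consistency set
                    hr[lo:lo + scale] += r      # orthogonal projection
    return hr
```

Because each correction is the orthogonal projection onto an affine set and the sets have a common point (the true scene), the cyclic iteration converges to an estimate consistent with every observation to within `delta`.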
A novel blinding digital watermark algorithm based on Lab color space
NASA Astrophysics Data System (ADS)
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
It is necessary for a blinding digital image watermark algorithm to extract the watermark information without any extra information except the watermarked image itself. But most of the current blinding watermark algorithms have the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blinding color image watermark algorithm based on Lab color space, which does not have the disadvantage mentioned above. This algorithm first marks the watermark region size and position by embedding some regular blocks called anchor points in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can be easily extracted even after the image has been cropped or scaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. This algorithm has already been used in a copyright protection project and works very well.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GAs) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
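The sequencing-with-precedence idea can be sketched with a toy GA: permutation chromosomes, total flowtime as the fitness measure, and a topological repair step so every offspring respects the precedence restrictions. This is a deliberately small illustration (single resource, acyclic precedence graph assumed), not Boeing's PPS/POIC system:

```python
import random

def ga_schedule(durations, precedence, pop=40, gens=120, seed=1):
    """Toy GA for job sequencing under precedence constraints.
    Chromosomes are job orders; fitness is total flowtime on one
    resource; one-point crossover offspring are repaired so each job
    appears after all of its predecessors (precedence must be a DAG)."""
    rng = random.Random(seed)
    jobs = list(range(len(durations)))

    def repair(order):
        # greedy topological repair: emit the first pending job
        # whose predecessors have all been emitted
        out, pending = [], list(order)
        while pending:
            for j in pending:
                if all(p in out for p in precedence.get(j, ())):
                    out.append(j); pending.remove(j); break
        return out

    def flowtime(order):
        t, total = 0, 0
        for j in order:
            t += durations[j]; total += t
        return total

    population = [repair(rng.sample(jobs, len(jobs))) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=flowtime)
        parents = population[: pop // 2]          # keep the better half
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(jobs))
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            children.append(repair(child))
        population = parents + children
    return min(population, key=flowtime)
```

Because the best individuals survive each generation, the search can be stopped at any time and still return the best schedule found so far, matching the anytime property noted in the abstract.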
Template based illumination compensation algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Li, Xiaoming; Jiang, Lianlian; Ma, Siwei; Zhao, Debin; Gao, Wen
2010-07-01
Recently, the multiview video coding (MVC) standard has been finalized as an extension of H.264/AVC by the Joint Video Team (JVT). In the Joint Multiview Video Model (JMVM) project for the standardization, illumination compensation (IC) is adopted as a useful tool. In this paper, a novel illumination compensation algorithm based on templates is proposed. The basic idea of the algorithm is that the illumination of the current block has a strong correlation with its adjacent template. Based on this idea, firstly a template based illumination compensation method is presented, and then a template model selection strategy is devised to improve the illumination compensation performance. The experimental results show that the proposed algorithm can improve the coding efficiency significantly.
Reconstruction algorithms for optoacoustic imaging based on fiber optic detectors
NASA Astrophysics Data System (ADS)
Lamela, Horacio; Díaz-Tendero, Gonzalo; Gutiérrez, Rebeca; Gallego, Daniel
2011-06-01
Optoacoustic Imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity and excellent resolution to overcome limitations of the current clinical modalities for detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Some approximate time-domain filtered back-projection reconstruction algorithms have also been reported to solve this problem. A wavelet-transform-based filtering implementation can be used to sharpen object boundaries while simultaneously preserving high contrast of the reconstructed objects. In this paper, several algorithms, based on Back Projection (BP) techniques, have been suggested to process OA images in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast with a wavelet-based filter.
Reconstruction from divergent ray projections
NASA Astrophysics Data System (ADS)
Sastry, C. S.; Singh, Santosh
2012-03-01
Despite major advances in x-ray sources, detector arrays, gantry mechanical design and special computer performance, computed tomography (CT) enjoys the filtered back projection (FBP) algorithm as its first choice for CT image reconstruction in commercial scanners [1]. Over the years, a lot of fundamental work has been done on sophisticated solutions to the inverse problem using different kinds of optimization techniques. Recent literature in applied mathematics is dominated by compressive sensing and/or sparse reconstruction techniques [2], [3]. Still there is a long way to go in translating these newly developed algorithms to the clinical environment; the reasons are not obvious and are seldom discussed [1]. Since filtered back projection is one of the most popular CT image reconstruction algorithms, it is natural to pursue improvements to the error estimates at the different steps performed in the filtered back projection. In this paper, we present a back projection formula for the reconstruction of divergent beam tomography with a unique convolution structure. Using the proposed approximate convolution structure, the approximation error analysis mathematically justifies that the reconstruction error is low for a suitable choice of parameters. In order to minimize the exposure time and possible distortions due to patient motion, the fan beam method of data collection is used. A rebinning transformation [4] is used to convert fan beam data into parallel beam data so that the well developed methods of image reconstruction for parallel beam geometry can be used. Due to the computational errors involved in the numerical process of rebinning, some degradation of the image is inevitable. However, to date very little work has been done on the reconstruction of fan beam tomography. There have been some recent results [5], [6] on wavelet reconstruction of divergent beam tomography. In this paper
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Comparison of heterogeneity quantification algorithms for brain SPECT perfusion images
2012-01-01
Background Several algorithms from the literature were compared with the original random walk (RW) algorithm for brain perfusion heterogeneity quantification purposes. Algorithms are compared on a set of 210 brain single photon emission computed tomography (SPECT) simulations and 40 patient exams. Methods Five algorithms were tested on numerical phantoms. The numerical anthropomorphic Zubal head phantom was used to generate 42 (6 × 7) different brain SPECT simulations. Seven diffuse cortical heterogeneity levels were simulated with an adjustable Gaussian noise function and six focal perfusion defect levels with temporoparietal (TP) defects. The phantoms were successively projected and smoothed with a Gaussian kernel with a full width at half maximum (FWHM) of 5 mm, and Poisson noise was added to the 64 projections. For each simulation, 5 Poisson noise realizations were performed, yielding a total of 210 datasets. The SPECT images were reconstructed using filtered back projection (Hamming filter: α = 0.5). The five algorithms or measures tested were the following: the coefficient of variation, the entropy and local entropy, fractal dimension (FD) (box counting and Fourier power spectrum methods), the gray-level co-occurrence matrix (GLCM), and the new RW. The heterogeneity discrimination power was obtained with a linear regression for each algorithm. This regression line is a mean function of the measure of heterogeneity compared to the different diffuse heterogeneity and focal defect levels generated in the phantoms. A greater slope denotes a larger separation between the levels of diffuse heterogeneity. The five algorithms were computed using 40 99mTc-ethyl-cysteinate-dimer (ECD) SPECT images of patients referred for memory impairment. Scans were blindly ranked by two physicians according to the level of heterogeneity, and a consensus was obtained. The rankings obtained by the algorithms were compared with the physicians' consensus ranking. Results The GLCM method
Entropic Lattice Boltzmann Algorithms for Turbulence
NASA Astrophysics Data System (ADS)
Vahala, George; Yepez, Jeffrey; Soe, Min; Vahala, Linda; Keating, Brian; Carter, Jonathan
2007-11-01
For turbulent flows in non-trivial geometry, the scaling of CFD codes (now necessarily non-pseudospectral) quickly saturates with the number of PEs. By projecting into a lattice kinetic phase space, the turbulent dynamics are simpler and much easier to solve, since the underlying kinetic equation has only local algebraic nonlinearities in the macroscopic variables with simple linear kinetic advection. To achieve arbitrarily high Reynolds number, a discrete H-theorem constraint is imposed on the collision operator, resulting in an entropic lattice Boltzmann (ELB) algorithm that is unconditionally stable and scales almost perfectly with the number of PEs on any supercomputer architecture. At this mesoscopic level, there are various kinetic lattices (ELB-27, ELB-19, ELB-15) which will recover the Navier-Stokes equation to leading order in the Chapman-Enskog asymptotics. We comment on the morphology of turbulence and its correlation to the rate of change of enstrophy, as well as simulations on 1600^3 grids.
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
A universal symmetry detection algorithm.
Maurer, Peter M
2015-01-01
Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.
Review of jet reconstruction algorithms
NASA Astrophysics Data System (ADS)
Atkin, Ryan
2015-10-01
Accurate jet reconstruction is necessary for understanding the link between the unobserved partons and the jets of observed collimated colourless particles the partons hadronise into. Understanding this link sheds light on the properties of these partons. A review of various common jet algorithms is presented, namely the Kt, Anti-Kt, Cambridge/Aachen, Iterative cones and the SIScone, highlighting their strengths and weaknesses. If one is interested in studying jets, the Anti-Kt algorithm is the best choice; however, if one's interest is in jet substructure, then the Cambridge/Aachen algorithm would be the best option.
Routing Algorithm Exploits Spatial Relations
NASA Technical Reports Server (NTRS)
Okino, Clayton; Jennings, Esther
2004-01-01
A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
Research on Quantum Algorithms at the Institute for Quantum Information
2009-10-17
...developed earlier by Aliferis, Gottesman, and Preskill to encompass leakage-reduction units, such as those based on quantum teleportation. They also... (Research on Quantum Algorithms at the Institute for Quantum Information, Grant No. W91INF-05-I-0294.) ... The central goals of our project are (1) to bring large-scale quantum computers closer to realization by proposing and analyzing new schemes for
Performance Comparison of Superresolution Array Processing Algorithms. Revised
2007-11-02
Performance Comparison of Superresolution Array Processing Algorithms. A.J. Barabell, J. Capon, D.F. DeLong, K.D. Senne (Group 44), J.R. Johnson (Group 96). Project report. ...adaptive superresolution direction finding and spatial nulling to support signal copy in the presence of strong cochannel interference. The need for such... ...superresolution array processing have their origin in spectral estimation for time series. Since the sampling of a function in time is analogous to
Supermultiplicative Speedups of Probabilistic Model-Building Genetic Algorithms
2009-02-01
...simulations. We (Todd Martinez (2005 MacArthur fellow), Duane Johnson, Kumara Sastry, and David E. Goldberg) have applied multiobjective GAs and model... Authors: David E. Goldberg, Kumara Sastry, Martin Pelikan. Supermultiplicative Speedups of Probabilistic Model-Building Genetic Algorithms, AFOSR Grant No. FA9550-06-1-0096, February 1, 2006 to November 30, 2008.
Exact and heuristic algorithms for weighted cluster editing.
Rahmann, Sven; Wittkop, Tobias; Baumbach, Jan; Martin, Marcel; Truss, Anke; Böcker, Sebastian
2007-01-01
Clustering objects according to given similarity or distance values is a ubiquitous problem in computational biology with diverse applications, e.g., in defining families of orthologous genes, or in the analysis of microarray experiments. While there exists a plenitude of methods, many of them produce clusterings that can be further improved. "Cleaning up" initial clusterings can be formalized as projecting a graph on the space of transitive graphs; it is also known as the cluster editing or cluster partitioning problem in the literature. In contrast to previous work on cluster editing, we allow arbitrary weights on the similarity graph. To solve the so-defined weighted transitive graph projection problem, we present (1) the first exact fixed-parameter algorithm, (2) a polynomial-time greedy algorithm that returns the optimal result on a well-defined subset of "close-to-transitive" graphs and works heuristically on other graphs, and (3) a fast heuristic that uses ideas similar to those from the Fruchterman-Reingold graph layout algorithm. We compare quality and running times of these algorithms on both artificial graphs and protein similarity graphs derived from the 66 organisms of the COG dataset.
This page provides information for Project Expo sites that were featured at the LMOP Conferences in 2013 and 2014. Project Expo sites were featured as being interested in identifying project partners for the development of an LFG energy project.
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning
Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.
2010-09-15
Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
Improving Algorithm for Automatic Spectra Processing
NASA Astrophysics Data System (ADS)
Rackovic, K.; Nikolic, S.; Kotrc, P.
2009-09-01
The computer program for automatic processing (flat-fielding) of a great number of solar spectra obtained with the horizontal heliospectrograph HSFA2 has been tested and improved. This program was developed at the Astronomical Institute of the Academy of Sciences of the Czech Republic in Ondřejov. An irregularity in its operation was discovered, i.e., the program did not work for some of the spectra. To find the cause of this error, an algorithm was developed and a program was written to examine the parallelism of the reference hairs crossing the spectral slit on records of solar spectra. Standard methods for data processing were applied: calculating and analyzing higher-order moments of the distribution of radiation intensity. Spectra with disturbed parallelism of the reference hairs were eliminated from further processing. To improve this algorithm for smoothing of spectra, isolation and removal of the harmonic produced by a sunspot using multiple elementary transformations of ordinates (Labrouste's transformations) are planned. This project was accomplished at the first summer astronomy practice of students of the Faculty of Mathematics, University of Belgrade, Serbia, in 2007 in Ondřejov.
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using the TDOA (Time Difference of Arrival) tracking algorithm. UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least square method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by the prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid in surveillance around the International Space Station (ISS).
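The TDOA equations mentioned above can be illustrated with a much simpler solver than the two-stage weighted least-squares method the presentation uses: the sketch below brute-forces a 2-D source position by minimizing squared range-difference residuals on a grid. All names, the anchor layout, and the unit propagation speed are illustrative assumptions, not details from the presentation:

```python
import math

def tdoa_locate(anchors, tdoas, c=1.0, span=5.0, step=0.05):
    """Brute-force 2-D TDOA localization sketch.
    anchors: [(x, y), ...]; tdoas[i] = arrival time at anchor i minus
    arrival time at anchor 0; c = propagation speed.
    Minimizes the sum of squared range-difference residuals on a grid."""
    ref = anchors[0]
    best, best_err = None, float("inf")
    steps = int(span / step)
    for ix in range(-steps, steps + 1):
        for iy in range(-steps, steps + 1):
            x, y = ix * step, iy * step
            d0 = math.hypot(x - ref[0], y - ref[1])  # range to reference anchor
            err = 0.0
            for (ax, ay), dt in zip(anchors[1:], tdoas[1:]):
                di = math.hypot(x - ax, y - ay)
                err += (di - d0 - c * dt) ** 2       # hyperbolic TDOA residual
            if err < best_err:
                best, best_err = (x, y), err
    return best

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
true = (1.0, 2.0)
dists = [math.hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
tdoas = [d - dists[0] for d in dists]  # noiseless TDOAs with c = 1
print(tdoa_locate(anchors, tdoas))  # close to the true position (1.0, 2.0)
```

A real solver replaces the grid with least squares on the same residuals; the grid version only serves to make the nonlinear TDOA equations concrete.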
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
2013-09-16
The purpose of the Project Grandmaster Application is to allow individuals to opt in and give the application access to data sources about their activities on social media sites. The application will cross-reference these data sources to build up a picture of each individual's activities, discussed either at present or in the past, and place this picture in reference to groups of all participants. The goal is to allow individuals to place themselves in the collective and to understand how their behavior patterns fit with the group, and potentially to find changes to make, such as activities they weren't already aware of or different groups of interest they might want to follow.
NASA Technical Reports Server (NTRS)
Kershaw, John
1990-01-01
The VIPER project has so far produced a formal specification of a 32 bit RISC microprocessor, an implementation of that chip in radiation-hard SOS technology, a partial proof of correctness of the implementation which is still being extended, and a large body of supporting software. The time has now come to consider what has been achieved and what directions should be pursued in the future. The most obvious lesson from the VIPER project was the time and effort needed to use formal methods properly. Most of the problems arose in the interfaces between different formalisms, e.g., between the (informal) English description and the HOL spec, between the block-level spec in HOL and the equivalent in ELLA needed by the low-level CAD tools. These interfaces need to be made rigorous or (better) eliminated. VIPER 1A (the latest chip) is designed to operate in pairs, to give protection against breakdowns in service as well as design faults. We have come to regard redundancy and formal design methods as complementary, the one to guard against normal component failures and the other to provide insurance against the risk of the common-cause failures which bedevil reliability predictions. Any future VIPER chips will certainly need improved performance to keep up with increasingly demanding applications. We have a prototype design (not yet specified formally) which includes 32 and 64 bit multiply, instruction pre-fetch, more efficient interface timing, and a new instruction to allow a quick response to peripheral requests. Work is under way to specify this device in MIRANDA, and then to refine the spec into a block-level design by top-down transformations. When the refinement is complete, a relatively simple proof checker should be able to demonstrate its correctness. This paper is presented in viewgraph form.
Do You Understand Your Algorithms?
ERIC Educational Resources Information Center
Pickreign, Jamar; Rogers, Robert
2006-01-01
This article discusses relationships between the development of an understanding of algorithms and algebraic thinking. It also provides some sample activities for middle school teachers of mathematics to help promote students' algebraic thinking. (Contains 11 figures.)
Fibonacci Numbers and Computer Algorithms.
ERIC Educational Resources Information Center
Atkins, John; Geist, Robert
1987-01-01
The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
APL simulation of Grover's algorithm
NASA Astrophysics Data System (ADS)
Lipovaca, Samir
2012-02-01
Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N we need approximately N operations; Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for code writing and execution, to experiment with Grover's algorithm we simulate it using the APL programming language. APL is especially suited for this task: for example, to compute the Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices, we need only iterate one line of code N-1 times. An initial study indicates that the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches values of 0.999, with slight variations at higher decimal places.
Ace Project as a Project Management Tool
ERIC Educational Resources Information Center
Cline, Melinda; Guynes, Carl S.; Simard, Karine
2010-01-01
The primary challenge of project management is to achieve the project goals and objectives while adhering to project constraints--usually scope, quality, time and budget. The secondary challenge is to optimize the allocation and integration of resources necessary to meet pre-defined objectives. Project management software provides an active…
Project Success in Agile Development Software Projects
ERIC Educational Resources Information Center
Farlik, John T.
2016-01-01
Project success has multiple definitions in the scholarly literature. Research has shown that some scholars and practitioners define project success as the completion of a project within schedule and within budget. Others consider a successful project as one in which the customer is satisfied with the product. This quantitative study was conducted…
Project Information Packages Kit.
ERIC Educational Resources Information Center
RMC Research Corp., Mountain View, CA.
Presented are an overview booklet, a project selection guide, and six Project Information Packages (PIPs) for six exemplary projects serving underachieving students in grades k through 9. The overview booklet outlines the PIP projects and includes a chart of major project features. A project selection guide reviews the PIP history, PIP contents,…
The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.
2001-01-01
An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).
Updated core libraries of the ALPS project
NASA Astrophysics Data System (ADS)
Gaenko, A.; Antipov, A. E.; Carcassi, G.; Chen, T.; Chen, X.; Dong, Q.; Gamper, L.; Gukelberger, J.; Igarashi, R.; Iskakov, S.; Könz, M.; LeBlanc, J. P. F.; Levy, R.; Ma, P. N.; Paki, J. E.; Shinaoka, H.; Todo, S.; Troyer, M.; Gull, E.
2017-04-01
The open source ALPS (Algorithms and Libraries for Physics Simulations) project provides a collection of physics libraries and applications, with a focus on simulations of lattice models and strongly correlated systems. The libraries provide a convenient set of well-documented and reusable components for developing condensed matter physics simulation code, and the applications strive to make commonly used and proven computational algorithms available to a non-expert community. In this paper we present an updated and refactored version of the core ALPS libraries geared at the computational physics software development community, rewritten with focus on documentation, ease of installation, and software maintainability.
NASA Astrophysics Data System (ADS)
Rao, Sailesh K.; Kollath, T.
1986-07-01
In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd, to the present-day concern of systolic array
Genetic algorithms as discovery programs
Hilliard, M.R.; Liepins, G.
1986-01-01
Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.
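The selection, recombination, and mutation loop that the abstract describes can be sketched on the classic OneMax toy problem (maximize the number of 1 bits). All parameter values are illustrative, and nothing here is drawn from the Oak Ridge applications mentioned above:

```python
import random

def onemax_ga(bits=20, pop_size=40, generations=60, seed=1):
    """Minimal genetic-algorithm sketch on the OneMax toy problem
    (fitness = number of 1 bits): tournament selection, one-point
    crossover, and bit-flip mutation. Parameters are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]

    def tournament():
        # Binary tournament: pick two individuals at random, keep the fitter.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if sum(a) >= sum(b) else b

    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, bits)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            for i in range(bits):                # bit-flip mutation, rate 1/bits
                if rng.random() < 1.0 / bits:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=sum)

best = onemax_ga()
print(sum(best))  # near the optimum of 20
```

Real applications such as those cited above replace OneMax with a domain reward function and add apportionment-of-credit machinery; the loop structure stays the same.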
Inversion Algorithms for Geophysical Problems
1987-12-16
Inversion Algorithms for Geophysical Problems (U). Personal author: Paolo Lanzano. Final report. ... spectral density. ... Naval Research Laboratory, Washington, DC 20375-5000. NRL Memorandum Report 6138.
Label Ranking Algorithms: A Survey
NASA Astrophysics Data System (ADS)
Vembu, Shankar; Gärtner, Thomas
Label ranking is a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes several supervised learning problems, such as multiclass prediction, multilabel classification, and hierarchical classification. Unsurprisingly, there exists a plethora of label ranking algorithms in the literature due, in part, to this versatile nature of the problem. In this paper, we survey these algorithms.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
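For context, the standard predictive stochastic simulation algorithm that the retrodictive variant complements draws exponential waiting times from the total propensity and fires one reaction per step. A minimal sketch for a single decay reaction A -> 0 (the reaction, rate constant, and counts are illustrative, not from the paper):

```python
import math, random

def gillespie_decay(n0, k, t_max, rng):
    """Standard (predictive) Gillespie SSA for a single decay reaction
    A -> 0 with rate k per molecule. Returns the molecule count at t_max.
    This is the forward algorithm that the retrodictive variant complements."""
    t, n = 0.0, n0
    while n > 0:
        rate = k * n                             # total propensity
        t += -math.log(1.0 - rng.random()) / rate  # exponential waiting time
        if t > t_max:
            break
        n -= 1                                   # fire the decay reaction
    return n

rng = random.Random(0)
mean = sum(gillespie_decay(100, 0.1, 5.0, rng) for _ in range(2000)) / 2000
print(mean)  # close to 100 * exp(-0.5), about 60.7
```

The retrodictive algorithm of the paper runs the analogous inference backwards in time, from known final states toward likely initial states.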
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.
Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci
NASA Astrophysics Data System (ADS)
Kosmale, Miriam; Popp, Thomas
2016-04-01
Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records within this project. Each of these algorithms also provides uncertainty information at pixel level. In the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. A validation against ground-based AERONET measurements shows that the ensemble still correlates well, compared to the single algorithms. Annual mean maps show the global aerosol distribution, based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensembles for aerosol optical depth will be presented and discussed, validated against ground-based AERONET measurements. A higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.
Linear array implementation of the EM algorithm for PET image reconstruction
Rajan, K.; Patnaik, L.M.; Ramakrishna, J.
1995-08-01
The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution back projection algorithms. However, the PET image reconstruction based on the EM algorithm is computationally burdensome for today`s single processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In tis study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor systems. The parallel EM algorithm on a linear array topology using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PE`s) has been implemented. The performance of the EM algorithm on a 386/387 machine, IBM 6000 RISC workstation, and on the linear array system is discussed and compared. The results show that the computational speed performance of a linear array using 8 DSP chips as PE`s executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable with a larger number of PE`s. The architecture is not dependant on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
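The EM (MLEM) update that the article parallelizes multiplies the current image estimate by a back projection of measured-to-expected count ratios. A serial toy sketch of that update (the 2-pixel, 2-bin system matrix is illustrative, not from the article):

```python
def mlem(a, y, iterations=50):
    """Minimal MLEM (EM) sketch for emission tomography.
    a[i][j]: probability that an emission in pixel j is detected in bin i;
    y[i]: measured counts. Each iteration multiplies the current image by
    the back projection of the measured/expected count ratios."""
    n_bins, n_pix = len(a), len(a[0])
    img = [1.0] * n_pix                        # flat initial estimate
    sens = [sum(a[i][j] for i in range(n_bins)) for j in range(n_pix)]
    for _ in range(iterations):
        expected = [sum(a[i][j] * img[j] for j in range(n_pix))
                    for i in range(n_bins)]    # forward projection
        for j in range(n_pix):
            back = sum(a[i][j] * y[i] / expected[i] for i in range(n_bins))
            img[j] *= back / sens[j]           # multiplicative EM update
    return img

# Toy 2-pixel, 2-bin system whose exact solution is (2, 1).
a = [[1.0, 0.0], [0.5, 1.0]]
y = [2.0, 2.0]
print(mlem(a, y))  # converges toward [2.0, 1.0]
```

The forward projection over bins and the back projection over pixels are the independent loops that the linear DSP array in the article distributes across processing elements.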
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate offspring at the niche level by alternating between these two distributions; this, too, can potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
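The alternating Gaussian/Cauchy offspring generation can be illustrated in one dimension. This sketch omits the paper's niching, clustering, and adaptive local-search machinery entirely, and every parameter value and name is an illustrative assumption:

```python
import math, random

def simple_eda(fitness, n=30, generations=40, seed=2):
    """1-D estimation-of-distribution sketch: fit a mean/std model to the
    elite half of the population, then sample offspring alternately from
    Gaussian and Cauchy distributions, echoing the paper's alternating
    use of the two distributions (niching and local search omitted)."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    best = min(pop, key=fitness)
    for g in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) < fitness(best):
            best = pop[0]                      # track best-so-far
        elite = pop[: n // 2]
        mu = sum(elite) / len(elite)
        sd = max(1e-6, math.sqrt(sum((x - mu) ** 2 for x in elite) / len(elite)))
        if g % 2 == 0:   # Gaussian offspring: local exploitation
            pop = [rng.gauss(mu, sd) for _ in range(n)]
        else:            # Cauchy offspring: heavy tails aid exploration
            pop = [mu + sd * math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
    return best

best = simple_eda(lambda x: (x - 3.0) ** 2)
print(best)  # close to the minimizer x = 3
```

The inverse-CDF trick `tan(pi * (u - 0.5))` turns a uniform draw into a standard Cauchy sample, giving the heavy-tailed moves that the paper credits with better exploration.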
A novel fingerprint recognition algorithm based on VK-LLE
NASA Astrophysics Data System (ADS)
Luo, Jing; Lin, Shu-zhong; Ni, Jian-yun; Song, Li-mei
2009-07-01
Overcoming shift, rotation, and nonlinearity in fingerprint images is a challenging problem. After analyzing the shortcomings of current fingerprint recognition algorithms on shifted or rotated images, a manifold learning algorithm is introduced, and a fingerprint recognition algorithm based on locally linear embedding with variable neighbourhood k (VK-LLE) is proposed. Firstly, the approximate geodesic distance between any two points is computed by ISOMAP (isometric feature mapping), and the neighborhood of each point is then determined by the relationship between its local estimated geodesic distance matrix and its local Euclidean distance matrix. Secondly, the dimension of the fingerprint image is reduced by this nonlinear dimension-reduction method, yielding the best projected features of the original high-dimensional fingerprint data. The neighborhood size and embedding dimension are determined by analyzing how recognition accuracy changes with each. Finally, fingerprint recognition is accomplished with a Euclidean distance classifier. Experimental results on standard fingerprint datasets verify that the proposed algorithm is more robust to shifted, rotated, or nonlinearly distorted fingerprint images than the algorithm using standard LLE, and thus the method has practical value.
Evaluation of the VIIRS Land Algorithms at Land PEATE
NASA Technical Reports Server (NTRS)
Wolfe, Robert E.; Devadiga, Sadashiva; Ye, Gang; Masuoka, Edward J.; Schweiss, Robert J.
2010-01-01
The Land Product Evaluation and Algorithm Testing Element (Land PEATE), a component of the Science Data Segment of the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), is being developed at the NASA Goddard Space Flight Center (GSFC). The primary task of the Land PEATE is to assess the quality of the Visible Infrared Imaging Radiometer Suite (VIIRS) Land data products made by the Interface Data Processing System (IDPS) using the Operational (OPS) code during the NPP era and to recommend improvements to the algorithms in the IDPS OPS code. The Land PEATE uses a version of the MODIS Adaptive Processing System (MODAPS), NPPDAPS, that has been modified to produce products from the IDPS OPS code and software provided by the VIIRS Science Team, and relies on the MODIS Land Data Operational Product Evaluation (LDOPE) team for evaluation of the data records generated by the NPPDAPS. Land PEATE evaluates the algorithms by comparing data products generated using different versions of an algorithm and by comparing them to heritage products generated from a different instrument, such as MODIS, using various quality assessment tools developed at LDOPE. This paper describes the Land PEATE system, some of the approaches used by the Land PEATE for evaluating the VIIRS Land algorithms during the pre-launch period of the NPP mission, and the proposed plan for long-term monitoring of the quality of the VIIRS Land products post-launch.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective values), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
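The conventional SA loop described above, with Metropolis acceptance of worse moves and a shrinking search region and temperature, can be sketched in a few lines. The linear cooling and step-shrink schedules are illustrative choices, not the schedules used in RBSA.

```python
import math
import random

def anneal(objective, x0, step0=1.0, t0=1.0, iters=2000, seed=1):
    """Minimal 1-D simulated annealing: random moves, acceptance of
    worse moves with probability exp(-delta / T), and a search radius
    and temperature that both shrink as the run proceeds."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for i in range(iters):
        frac = 1.0 - i / iters                  # 1 -> 0 over the run
        step, temp = step0 * frac, t0 * frac + 1e-9
        cand = x + rng.uniform(-step, step)     # shrinking neighborhood
        fc = objective(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest
```

RBSA's recursive branching would launch many such loops over subregions of the parameter space in parallel.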
The Algebraic Nature of Students' Numerical Manipulation in the New Zealand Numeracy Project
ERIC Educational Resources Information Center
Irwin, Kathryn C.; Britt, Murray S.
2005-01-01
The New Zealand Ministry of Education has introduced a Numeracy Project for students aged 5-14 years in selected schools. The project encourages the adoption of flexible strategies for solving numerical problems, and discourages reliance on standard computational algorithms. One potential benefit of the project is that the methods students acquire…
Development and Testing of Data Mining Algorithms for Earth Observation
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
The new algorithms developed under this project included a principled procedure for classifying objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm, developed and implemented in TETRAD IV for time series, elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.
NASA Technical Reports Server (NTRS)
1990-01-01
NASA formally launched Project LASER (Learning About Science, Engineering and Research) in March 1990, a program designed to help teachers improve science and mathematics education and to provide hands-on experiences. It featured the first LASER Mobile Teacher Resource Center (MTRC), which is designed to reach educators all over the nation. NASA hopes to operate several MTRCs with funds provided by private industry. The mobile unit is a 22-ton tractor-trailer stocked with NASA educational publications and outfitted with six work stations. Each work station, which can accommodate two teachers at a time, has a computer providing access to NASA Spacelink. Each also has video recorders and photocopy/photographic equipment for the teachers' use. The MTRC is only one of the five major elements within LASER. The others are: a Space Technology Course, to promote integration of space science studies with traditional courses; the Volunteer Databank, in which NASA employees are encouraged to volunteer as tutors, instructors, etc.; Mobile Discovery Laboratories, which will carry simple laboratory equipment and computers to provide hands-on activities for students and demonstrations of classroom activities for teachers; and the Public Library Science Program, which will present library-based science and math programs.
A curve-filtered FDK (C-FDK) reconstruction algorithm for circular cone-beam CT.
Li, Liang; Xing, Yuxiang; Chen, Zhiqiang; Zhang, Li; Kang, Kejun
2011-01-01
Circular cone-beam CT is one of the most popular configurations in both medical and industrial applications, and the FDK algorithm is the most popular reconstruction method for it. However, with increasing cone angle the cone-beam artifacts associated with the FDK algorithm deteriorate, because the circular trajectory does not satisfy the data sufficiency condition. Along with an experimental evaluation and verification, this paper proposes a curve-filtered FDK (C-FDK) algorithm. First, cone-parallel projections are rebinned from the native cone-beam geometry in two separate directions; C-FDK rebins and filters projections along different curves than T-FDK in the central virtual detector plane. Numerical experiments are then performed to validate the effectiveness of the proposed algorithm by comparison with both FDK and T-FDK reconstruction. Without any extra trajectories supplementing the circular orbit, C-FDK yields a visible image-quality improvement.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring, and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. The first is projection using fixed-point arithmetic, which removes the floating point operations and reduces the processing time by operating only on integers. The second is replacement of the division inherent in projection with multiplication by the inverse. Computing the inverse, however, is itself an iterative operation, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
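The two modifications above, fixed-point arithmetic plus a division-free linearly approximated reciprocal, can be illustrated in miniature. The Q16.16 format and the interpolation endpoints are assumptions for the sketch; the paper's exact word widths and fit are not given in the abstract.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS            # 1.0 in Q16.16 fixed point

def to_fx(x):
    """Convert a float to Q16.16 fixed point."""
    return int(round(x * ONE))

def fx_mul(a, b):
    """Fixed-point multiply: widen, then rescale by shifting."""
    return (a * b) >> FRAC_BITS

def fx_recip_linear(z_fx, lo=1.0, hi=2.0):
    """Division-free reciprocal 1/z over [lo, hi] via a straight-line
    fit through (lo, 1/lo) and (hi, 1/hi): 1/z ~= m*z + c. Integer
    multiply and add only, mirroring the paper's replacement of
    division with multiplication by an approximated inverse."""
    slope = (1.0 / hi - 1.0 / lo) / (hi - lo)
    m = to_fx(slope)
    c = to_fx(1.0 / lo - slope * lo)
    return fx_mul(m, z_fx) + c
```

A projection kernel would precompute `m` and `c` once per range and apply only integer multiplies per pixel.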
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed three-dimensional imaging system that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many earlier qualitative evaluations of DBT used phantoms with unrealistic models and with background heterogeneity and noise that are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT using various breast phantoms, with validation performed using patient images. DBT was performed using a prototype unit optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation-maximization (EM) algorithm. Three types of breast phantom (homogeneous background, heterogeneous background, and anthropomorphic) were evaluated, and clinical images were also reconstructed using each algorithm. In-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed from reconstructed images of the anthropomorphic breast phantom, and the clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed with the EM algorithm were slightly higher than those obtained with the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single, often ad hoc, strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
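The taxonomy above maps onto standard probability combination rules. A minimal sketch of the disjunction and conjunction rules for the mutually exclusive, statistically independent, and maximum-overlap (fuzzy) cases:

```python
def or_independent(pa, pb):
    """P(A or B) for statistically independent assertions:
    inclusion-exclusion with P(A and B) = P(A)P(B)."""
    return pa + pb - pa * pb

def or_exclusive(pa, pb):
    """P(A or B) for mutually exclusive assertions: probabilities add."""
    return pa + pb

def and_independent(pa, pb):
    """P(A and B) for statistically independent assertions."""
    return pa * pb

def and_fuzzy(pa, pb):
    """Conjunction under maximum overlap within the state space,
    i.e. the fuzzy-logic min rule."""
    return min(pa, pb)
```

The pessimistic and optimistic modes in the abstract correspond to taking worst-case and best-case bounds over the unknown dependency between assertions.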
NASA Astrophysics Data System (ADS)
Andretta, Marina; Birgin, Ernesto; Martínez, J.
2010-01-01
A method for linearly constrained optimization which modifies and generalizes recent box-constrained optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are presented and discussed. Software supporting this paper is available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
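The core Spectral Projected Gradient iteration mentioned above can be sketched as follows: step along the negative gradient scaled by the Barzilai-Borwein (spectral) step length, then project back onto the feasible set. This monotone sketch omits the nonmonotone line search and the face-restricted internal iterations of the actual algorithm.

```python
import numpy as np

def spg_minimize(grad, project, x0, steps=200, alpha0=1.0):
    """Bare-bones spectral projected gradient iteration.

    grad:    gradient of the objective.
    project: Euclidean projection onto the feasible set
             (e.g. clipping, for box constraints).
    """
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = alpha0
    for _ in range(steps):
        x_new = project(x - alpha * g)          # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        # Barzilai-Borwein spectral step length; fall back if degenerate
        alpha = float(s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x
```

For box constraints, `project` is simply `np.clip`, which makes each iteration as cheap as an unconstrained gradient step.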
Final Report: Algorithms for Diffractive Microscopy
Elser, Veit
2010-10-08
The phenomenal coherence and brightness of x-ray free-electron laser light sources, such as the LCLS at SLAC, have the potential of revolutionizing the investigation of structure and dynamics in the nano-domain. However, this potential will go unrealized without a similar revolution in the way the data are analyzed. While it is true that the ambitious design parameters of the LCLS have been achieved, the prospects of realizing the most publicized goal of this instrument, the imaging of individual bio-particles, remain daunting. Even with 10{sup 12} photons per x-ray pulse, the feebleness of the scattering process represents a fundamental limit that no amount of engineering ingenuity can overcome. Large bio-molecules will scatter on the order of only 10{sup 3} photons per pulse into a detector with 10{sup 6} pixels; the diffraction "images" will be virtually indistinguishable from noise. Averaging such noisy signals over many pulses is not possible because the particle orientation cannot be controlled. Each noisy laser snapshot is thus confounded by the unknown viewpoint of the particle. Given the heavy DOE investment in LCLS and the profound technical challenges facing single-particle imaging, the final two years of this project have concentrated on this effort. We are happy to report that we succeeded in developing an extremely efficient algorithm that can reconstruct the shapes of particles at even the extremes of noise expected in future LCLS experiments with single bio-particles. Since this is the most important outcome of this project, the major part of this report documents this accomplishment. The theoretical techniques that were developed for the single-particle imaging project have proved useful in other imaging problems that are described at the end of the report.
NASA Technical Reports Server (NTRS)
Parker, Ray; Coan, Mary; Cryderman, Kate; Captain, Janine
2013-01-01
The RESOLVE project is a lunar prospecting mission whose primary goal is to characterize water and other volatiles in lunar regolith. The Lunar Advanced Volatiles Analysis (LAVA) subsystem is comprised of a fluid subsystem that transports flow to the gas chromatograph - mass spectrometer (GC-MS) instruments that characterize volatiles, and the Water Droplet Demonstration (WDD) that will capture and display water condensation in the gas stream. The LAVA Engineering Test Unit (ETU) is undergoing risk reduction testing this summer and fall within a vacuum chamber to understand and characterize component and integrated system performance. Testing of line heaters, printed circuit heaters, pressure transducers, temperature sensors, regulators, and valves in atmospheric and vacuum environments was performed. Test procedures were developed to guide experimental tests, and test reports to analyze and draw conclusions from the data. In addition, knowledge and experience were gained in preparing a vacuum chamber with fluid and electrical connections. Further testing will include integrated testing of the fluid subsystem with the gas supply system, near-infrared spectrometer, WDD, Sample Delivery System, and GC-MS in the vacuum chamber. This testing will provide hands-on exposure to a flight-forward spaceflight subsystem, the processes associated with testing equipment in a vacuum chamber, and experience working in a laboratory setting. Examples of specific analyses conducted include: pneumatic analysis to calculate the WDD's efficiency at extracting water vapor from the gas stream to form condensation; thermal analysis of the conduction and radiation along a line connecting two thermal masses; and proportional-integral-derivative (PID) heater control analysis. Since LAVA is a scientific subsystem, the near-infrared spectrometer and GC-MS instruments will be tested during the ETU testing phase.
Projection preconditioning for Lanczos-type methods
Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V.
1996-12-31
We show how auxiliary subspaces and related projectors may be used for preconditioning a nonsymmetric system of linear equations. It is shown that the system preconditioned in this way (or projected) is better conditioned than the original system, at least if the coefficient matrix of the system to be solved is symmetrizable. Two approaches for solving the projected system are outlined. The first involves straightforward computation of the projected matrix and subsequent use of a direct or iterative method. The second is projection preconditioning of a conjugate gradient-type solver; this approach is developed here in the context of biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed, and one of them is shown to be equivalent to using colorings. Some results of numerical experiments are reported.
Development of Algorithms for Nonlinear Physics on Type-II Quantum Computers
2007-07-01
Jan. 31, 2007 ... Quantum Lattice Algorithms for Nonlinear Physics: Optical Solitons and Bose-Einstein ... macroscopic nonlinear derivatives by local moments. Chapman-Enskog asymptotics will then, on projecting back into physical space, yield these nonlinear ... Entropic Lattice Boltzmann Model will be strongly pursued in future proposals. AFOSR FINAL REPORT "DEVELOPMENT OF ALGORITHMS FOR NONLINEAR
A Web-Based Library and Algorithm System for Satellite and Airborne Image Products
2011-06-28
Sequoia Scientific, Inc., and Dr. Paul Bissett at FERI, under other 6.1/6.2 program funding. ... A Web-Based Library and Algorithm System for ... of the spectrum matching approach to inverting hyperspectral imagery created by Drs. C. Mobley (Sequoia Scientific) and P. Bissett (FERI) ... algorithms developed by Sequoia Scientific and FERI. Testing and Implementation of Library: This project will result in the delivery of a WeoGeo
Web-Based Library and Algorithm System for Satellite and Airborne Image Products
2011-01-01
the spectrum matching approach to inverting hyperspectral imagery created by Drs. C. Mobley (Sequoia Scientific) and P. Bissett (FERI). ... matching algorithms developed by Sequoia Scientific and FERI. Testing and Implementation of Library: This project will result in the delivery of a ... transitioning VSW algorithms developed by Dr. Curtis D. Mobley at Sequoia Scientific, Inc., and Dr. Paul Bissett at FERI, under other 6.1/6.2 program funding.
2015-02-04
This project opens up a brand new area of research that fuses two separate subareas of game theory: algorithmic game theory ... and behavioral game theory. More specifically, game-theoretic algorithms have been deployed by several security agencies, allowing them to generate ... optimal randomized schedules against adversaries who may exploit predictability. However, one key challenge in applying game theory to solving real
Higher-Order, Space-Time Adaptive Finite Volume Methods: Algorithms, Analysis and Applications
Minion, Michael
2014-04-29
The four main goals outlined in the proposal for this project were: 1. Investigate the use of higher-order (in space and time) finite-volume methods for fluid flow problems. 2. Explore the embedding of iterative temporal methods within traditional block-structured AMR algorithms. 3. Develop parallel in time methods for ODEs and PDEs. 4. Work collaboratively with the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Lab towards incorporating new algorithms within existing DOE application codes.
NASA Astrophysics Data System (ADS)
Yerkes, Christopher R.; Webster, Eric D.
1994-06-01
Advanced algorithms for synthetic aperture radar (SAR) imaging have in the past required computing capabilities only available from high-performance special-purpose hardware. Such architectures have tended to have short life cycles relative to their development expense. Current-generation Massively Parallel Processors (MPPs) offer the high performance needed for such applications, with both a scalable architecture and a longer projected life cycle. In this paper we explore issues associated with implementing a SAR imaging algorithm on a mesh-configured MPP architecture.
Fourier Lucas-Kanade algorithm.
Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha
2013-06-01
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
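The identity behind point 2) above, that a whole filter bank collapses to a single diagonal Fourier-domain weighting, follows from Parseval's theorem. A sketch under the assumption of circular boundary conditions (the paper's exact windowing is not specified in the abstract):

```python
import numpy as np

def flk_weighted_error(source_patch, template, filters):
    """Fourier-domain weighted SSD equal to the summed SSD between the
    filter-bank responses of source and template.

    By Parseval, sum_f ||f * s - f * t||^2 (circular convolution)
    equals (1/n) * sum over frequencies of W * |S - T|^2, where W is
    the summed power spectra of the filters, i.e. a diagonal weight.
    """
    F_s = np.fft.fft2(source_patch)
    F_t = np.fft.fft2(template)
    # diagonal weighting: summed power spectra of the bank's filters
    W = sum(np.abs(np.fft.fft2(f, s=template.shape)) ** 2 for f in filters)
    diff = np.abs(F_s - F_t) ** 2
    return float((W * diff).sum()) / template.size   # Parseval scaling
```

This is why the FLK cost is invariant to the number of filters: `W` is precomputed once, and each iteration touches only one weighted difference.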
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and a neural-network algorithm designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
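The simplest of the three estimators, the straight-line fit from AGC reading and temperature to input power, can be sketched with an ordinary least-squares fit. The model form `power ~ c0 + c1*agc + c2*temp` is a generic stand-in; the paper's actual terms and coefficients are not given in the abstract.

```python
import numpy as np

def fit_power_estimator(agc, temp, power):
    """Least-squares linear estimator of SDR input power from the
    digital AGC reading and temperature readings (1-D arrays)."""
    X = np.column_stack([np.ones_like(agc), agc, temp])  # design matrix
    coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coeffs

def estimate_power(coeffs, agc, temp):
    """Apply the fitted model: power ~ c0 + c1*agc + c2*temp."""
    return coeffs[0] + coeffs[1] * agc + coeffs[2] * temp
```

The calibration data gathered before launch would supply the training triples; on orbit, only `estimate_power` runs.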
Metal detector depth estimation algorithms
NASA Astrophysics Data System (ADS)
Marble, Jay; McMichael, Ian
2009-05-01
This paper looks at depth estimation techniques using electromagnetic induction (EMI) metal detectors. Four algorithms are considered. The first utilizes a vertical gradient sensor configuration. The second is a dual frequency approach. The third makes use of dipole and quadrapole receiver configurations. The fourth looks at coils of different sizes. Each algorithm is described along with its associated sensor. Two figures of merit ultimately define algorithm/sensor performance. The first is the depth of penetration obtainable. (That is, the maximum detection depth obtainable.) This describes the performance of the method to achieve detection of deep targets. The second is the achievable statistical depth resolution. This resolution describes the precision with which depth can be estimated. In this paper depth of penetration and statistical depth resolution are qualitatively determined for each sensor/algorithm. A scientific method is used to make these assessments. A field test was conducted using 2 lanes with emplaced UXO. The first lane contains 155 shells at increasing depths from 0" to 48". The second is more realistic containing objects of varying size. The first lane is used for algorithm training purposes, while the second is used for testing. The metal detectors used in this study are the: Geonics EM61, Geophex GEM5, Minelab STMR II, and the Vallon VMV16.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
Parallel job-scheduling algorithms
Rodger, S.H.
1989-01-01
In this thesis, we consider solving job scheduling problems on the CREW PRAM model. We show how to adapt Cole's pipeline merge technique to yield several efficient parallel algorithms for a number of job scheduling problems and one optimal parallel algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and processing times, find a schedule that minimizes the maximum lateness of the jobs and allows preemption when the jobs are scheduled to run on one machine. In addition, we present the first NC algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and unit processing times, determine if there is a schedule of jobs on one machine, and calculate the schedule if it exists. We identify the notion of a canonical schedule, which is the type of schedule our algorithm computes if there is a schedule. Our algorithm runs in O((log n){sup 2}) time and uses O(n{sup 2}k{sup 2}) processors, where k is the minimum number of distinct offsets of release times or deadlines.
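The first scheduling problem above (release times, deadlines, preemption on one machine, minimize maximum lateness) is classically solved sequentially by preemptive earliest-deadline-first; the thesis parallelizes it. A sequential sketch for reference:

```python
import heapq

def edf_max_lateness(jobs):
    """Preemptive EDF on one machine, which minimizes maximum lateness
    L_max = max_i (finish_i - deadline_i) for jobs with release times.
    jobs: list of (release, deadline, processing) tuples."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    remaining = [p for _, _, p in jobs]
    finish = [0] * len(jobs)
    ready, t, k = [], 0, 0
    while k < len(jobs) or ready:
        if not ready:
            t = max(t, jobs[order[k]][0])       # idle until next release
        while k < len(jobs) and jobs[order[k]][0] <= t:
            i = order[k]
            heapq.heappush(ready, (jobs[i][1], i))  # keyed by deadline
            k += 1
        _, i = ready[0]                         # earliest-deadline job
        run = remaining[i]
        if k < len(jobs):                       # preempt at next release
            run = min(run, jobs[order[k]][0] - t)
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            heapq.heappop(ready)
            finish[i] = t
    return max(finish[i] - jobs[i][1] for i in range(len(jobs)))
```

Cole's pipeline-merge technique lets the thesis compute an equivalent (canonical) schedule in polylogarithmic parallel time rather than with this event loop.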
Using Alternative Multiplication Algorithms to "Offload" Cognition
ERIC Educational Resources Information Center
Jazby, Dan; Pearn, Cath
2015-01-01
When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…
Seamless Merging of Hypertext and Algorithm Animation
ERIC Educational Resources Information Center
Karavirta, Ville
2009-01-01
Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…
Biomedical Terminology Mapper for UML projects.
Thibault, Julien C; Frey, Lewis
2013-01-01
As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed up the tedious process of mapping local implementations to standard biomedical terminologies.
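A minimal sketch of the normalize-and-tokenize step, with a toy dictionary standing in for the UMLS-based lookup (the exact normalization rules and the dictionary contents are assumptions, not the paper's implementation):

```python
import re

def normalize_uml_name(name):
    """Split a UML class/attribute name into lowercase tokens:
    camelCase/PascalCase boundaries, underscores, and punctuation
    all act as separators."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name)
    spaced = re.sub(r"[_\W]+", " ", spaced)
    return [t.lower() for t in spaced.split() if t]

# toy stand-in for a UMLS-based dictionary (concept IDs are illustrative)
TOY_DICT = {"patient identifier": "C1269815"}

def map_to_concept(uml_name, dictionary=TOY_DICT):
    """Return the concept ID for a normalized name, or None on no match."""
    phrase = " ".join(normalize_uml_name(uml_name))
    return dictionary.get(phrase)
```

The real system would consult UMLS/NCI terminology services at the lookup step rather than an in-memory dictionary.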
Annotated Bibliography for the DEWPOINT project
Oehmen, Christopher S.
2009-04-21
This bibliography covers aspects of the Detection and Early Warning of Proliferation from Online INdicators of Threat (DEWPOINT) project including 1) data management and querying, 2) baseline and advanced methods for classifying free text, and 3) algorithms to achieve the ultimate goal of inferring intent from free text sources. Metrics for assessing the quality and correctness of classification are addressed in the second group. Data management and querying include methods for efficiently storing, indexing, searching, and organizing the data we expect to operate on within the DEWPOINT project.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
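Activity selection, one of the examples cited, illustrates the dominance idea: among conflicting candidates, the interval that finishes earliest dominates the rest, so the others can be pruned. A standard greedy sketch:

```python
def select_activities(intervals):
    """Greedy activity selection: sort by finish time and keep every
    interval that starts no earlier than the last accepted finish.
    intervals: list of (start, finish) pairs."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with chosen set
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The dominance relation justifies discarding, at each step, every candidate whose finish time exceeds the earliest one available.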
NASA Technical Reports Server (NTRS)
Parker, Ray O.
2012-01-01
The RESOLVE project is a lunar prospecting mission whose primary goal is to characterize water and other volatiles in lunar regolith. The Lunar Advanced Volatiles Analysis (LAVA) subsystem is comprised of a fluid subsystem that transports flow to the gas chromatograph- mass spectrometer (GC-MS) instruments that characterize volatiles and the Water Droplet Demonstration (WDD) that will capture and display water condensation in the gas stream. The LAVA Engineering Test Unit (ETU) is undergoing risk reduction testing this summer and fall within a vacuum chamber to understand and characterize component and integrated system performance. Ray will be assisting with component testing of line heaters, printed circuit heaters, pressure transducers, temperature sensors, regulators, and valves in atmospheric and vacuum environments. He will be developing procedures to guide these tests and test reports to analyze and draw conclusions from the data. In addition, he will gain experience with preparing a vacuum chamber with fluid and electrical connections. Further testing will include integrated testing of the fluid subsystem with the gas supply system, near-infrared spectrometer, WDD, Sample Delivery System, and GC-MS in the vacuum chamber. This testing will provide hands-on exposure to a flight forward spaceflight subsystem, the processes associated with testing equipment in a vacuum chamber, and experience working in a laboratory setting. Examples of specific analysis Ray will conduct include: pneumatic analysis to calculate the WDD's efficiency at extracting water vapor from the gas stream to form condensation; thermal analysis of the conduction and radiation along a line connecting two thermal masses; and proportional-integral-derivative (PID) heater control analysis. In this Research and Technology environment, Ray will be asked to problem solve real-time as issues arise. Since LAVA is a scientific subsystem, Ray will be utilizing his chemical engineering background to
NASA Technical Reports Server (NTRS)
1991-01-01
California Polytechnic State University's design project for the 1990-91 school year was the design of a close air support aircraft. There were eight design groups that participated and were given requests for proposals. These proposals contained mission specifications, particular performance and payload requirements, as well as the main design drivers. The mission specifications called for a single pilot weighing 225 lb with equipment. The design mission profile consisted of the following: (1) warm-up, taxi, take off, and accelerate to cruise speed; (2) dash at sea level at 500 knots to a point 250 nmi from take off; (3) combat phase, requiring two combat passes at 450 knots that each consist of a 360 deg turn and an energy increase of 4000 ft. - at each pass, half of air-to-surface ordnance is released; (4) dash at sea level at 500 knots 250 nmi back to base; and (5) land with 20 min of reserve fuel. The request for proposal also specified the following performance requirements with 50 percent internal fuel and standard stores: (1) the aircraft must be able to accelerate from Mach 0.3 to 0.5 at sea level in less than 20 sec; (2) required turn rates are 4.5 sustained g at 450 knots at sea level; (3) the aircraft must have a reattack time of 25 sec or less (reattack time was defined as the time between the first and second weapon drops); (4) the aircraft is allowed a maximum take off and landing ground roll of 2000 ft. The payload requirements were 20 Mk 82 general-purpose free-fall bombs and racks; 1 GAU-8A 30-mm cannon with 1350 rounds; and 2 AIM-9L Sidewinder missiles and racks. The main design drivers expressed in the request for proposal were that the aircraft should be survivable and maintainable. It must be able to operate in remote areas with little or no maintenance. Simplicity was considered the most important factor in achieving the former goal. In addition, the aircraft must be low cost both in acquisition and operation. The summaries of the aircraft
Direct dynamics simulations using Hessian-based predictor-corrector integration algorithms.
Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L
2007-01-28
In previous research [J. Chem. Phys. 111, 3800 (1999)] a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H2CO -> H2 + CO, O3 + C3H6, and F- + CH3OOH chemical reactions.
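The velocity-Verlet baseline used in the comparison can be sketched in a few lines (a generic one-dimensional implementation, not the VENUS/NWChem code):

```python
def velocity_verlet(x, v, accel, dt, steps):
    """Velocity-Verlet integration of dx/dt = v, dv/dt = accel(x).
    accel(x) returns the acceleration (force/mass) at position x.
    Returns the position trajectory and the final velocity."""
    a = accel(x)
    traj = [x]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt    # position update
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt        # velocity from averaged force
        a = a_new
        traj.append(x)
    return traj, v
```

Velocity-Verlet needs one force evaluation per step and conserves energy well over long runs, which is why it serves as the reference point for Hessian-based schemes that take larger steps per (more expensive) Hessian evaluation.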
Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos
2016-07-01
An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry it out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. Then, the SBX minimizes the objective function via chromosome recombination. In this algorithm, the adaptive procedure determines the search space via line position to obtain the minimum convergence. Thus, the chromosomes of vision parameters provide the minimization. The approach of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references and physical measurements. This procedure improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space. This is because the proposed adaptive algorithm avoids errors produced by missing references. Additionally, the three-dimensional vision is carried out based on the laser line position and vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of accuracy of binocular calibration, which is performed via traditional genetic algorithms.
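The SBX operator itself is standard and can be sketched as follows (a generic textbook form with distribution index eta; the paper's calibrated objective function is not reproduced):

```python
import random

def sbx_crossover(p1, p2, eta=2.0):
    """Simulated binary crossover (SBX) on real-valued chromosomes.
    eta controls the spread: larger eta keeps children nearer the
    parents.  Each gene pair produces two children symmetric about
    the parents' midpoint."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2
```

A useful invariant: for every gene, the two children average to the two parents' average, so SBX explores the search space without biasing the population mean.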
Optimizing connected component labeling algorithms
NASA Astrophysics Data System (ADS)
Wu, Kesheng; Otoo, Ekow; Shoshani, Arie
2005-04-01
This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among these neighbors to reduce the number of neighbors examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of the pointer based rooted trees speeds up the connected component labeling algorithms by a factor of 5 to 100 in our tests on random binary images.
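The second strategy, an array-based equivalence table in place of pointer-based rooted trees, can be illustrated with a simple two-pass labeling sketch (4-connectivity for brevity; the paper also treats the 8-connected case):

```python
def label_components(img):
    """Two-pass connected-component labeling (4-connectivity) using a
    flat array `parent` for label equivalences instead of pointer-based
    trees.  img: 2D list of 0/1; returns a 2D list of consecutive labels."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = [0]                                 # index 0 is the background

    def root(i):
        while parent[i] != i:                    # chase the equivalence array
            i = parent[i]
        return i

    for r in range(rows):
        for c in range(cols):
            if not img[r][c]:
                continue
            up = labels[r - 1][c] if r else 0
            left = labels[r][c - 1] if c else 0
            if up and left:
                ru, rl = root(up), root(left)
                labels[r][c] = min(ru, rl)
                parent[max(ru, rl)] = min(ru, rl)  # record equivalence
            elif up or left:
                labels[r][c] = root(up or left)
            else:
                parent.append(len(parent))         # provisional new label
                labels[r][c] = len(parent) - 1
    # second pass: flatten equivalences to consecutive final labels
    final = {}
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                rt = root(labels[r][c])
                labels[r][c] = final.setdefault(rt, len(final) + 1)
    return labels
```

Because the equivalence table is a plain integer array, the second pass can renumber roots consecutively in one sweep, which is the source of the consecutive final labels mentioned in the abstract.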
Learning with the ratchet algorithm.
Hush, D. R.; Scovel, James C.
2003-01-01
This paper presents a randomized algorithm called Ratchet that asymptotically minimizes (with probability 1) functions that satisfy a positive-linear-dependent (PLD) property. We establish the PLD property and a corresponding realization of Ratchet for a generalized loss criterion for both linear machines and linear classifiers. We describe several learning criteria that can be obtained as special cases of this generalized loss criterion, e.g. classification error, classification loss and weighted classification error. We also establish the PLD property and a corresponding realization of Ratchet for the Neyman-Pearson criterion for linear classifiers. Finally we show how, for linear classifiers, the Ratchet algorithm can be derived as a modification of the Pocket algorithm.
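For context, the Pocket algorithm that Ratchet modifies keeps the best weight vector seen during perceptron training; a minimal sketch (deterministic cycling through the data, for simplicity, rather than the randomized sampling of the original):

```python
def pocket_perceptron(X, y, epochs=20):
    """Pocket algorithm for a linear classifier: run plain perceptron
    updates, but keep 'in the pocket' the weight vector with the fewest
    training errors seen so far.
    X: list of feature vectors, y: labels in {-1, +1}."""
    d = len(X[0])
    w = [0.0] * (d + 1)                          # last entry is the bias

    def margin(w, x):
        return sum(wj * xj for wj, xj in zip(w, x)) + w[-1]

    def errors(w):
        return sum(1 for x, t in zip(X, y) if t * margin(w, x) <= 0)

    pocket, pocket_err = list(w), errors(w)
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * margin(w, x) <= 0:            # misclassified point
                for j in range(d):
                    w[j] += t * x[j]             # perceptron update
                w[-1] += t
                e = errors(w)
                if e < pocket_err:               # keep only improvements
                    pocket, pocket_err = list(w), e
    return pocket, pocket_err
```

The "only accept improvements" step is the ratcheting behavior: the pocket error count never increases, even though the working perceptron weights may wander.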