Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis
NASA Technical Reports Server (NTRS)
Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.
1999-01-01
This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.
Aquarius Salinity Retrieval Algorithm: Final Pre-Launch Version
NASA Technical Reports Server (NTRS)
Wentz, Frank J.; Le Vine, David M.
2011-01-01
This document provides the theoretical basis for the Aquarius salinity retrieval algorithm. The inputs to the algorithm are the Aquarius antenna temperature (T(sub A)) measurements along with a number of NCEP operational products and pre-computed tables of space radiation coming from the galaxy and sun. The output is sea-surface salinity and many intermediate variables required for the salinity calculation. This revision of the Algorithm Theoretical Basis Document (ATBD) is intended to be the final pre-launch version.
Towards improving the NASA standard soil moisture retrieval algorithm and product
NASA Astrophysics Data System (ADS)
Mladenova, I. E.; Jackson, T. J.; Njoku, E. G.; Bindlish, R.; Cosh, M. H.; Chan, S.
2013-12-01
Soil moisture mapping using passive microwave remote sensing techniques has proven to be one of the most effective ways of acquiring reliable global soil moisture information on a routine basis. An important step in this direction was the launch of the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) on NASA's Earth Observing System Aqua satellite. Along with the standard NASA algorithm and operational AMSR-E product, the easy access and availability of the AMSR-E data promoted the development and distribution of alternative retrieval algorithms and products. Several evaluation studies have demonstrated issues with the standard NASA AMSR-E product, such as a dampened temporal response and a limited range of the final retrievals, and have noted that the available global passive-based algorithms, even though based on the same electromagnetic principles, produce different results in terms of accuracy and temporal dynamics. Our goal is to identify the theoretical causes of the reduced sensitivity of the NASA AMSR-E product and to outline ways to improve the operational NASA algorithm, if possible. Properly identifying the underlying reasons for the above-mentioned features of the NASA AMSR-E product, and for the differences between the alternative algorithms, requires a careful examination of the theoretical basis of each approach, specifically the simplifying assumptions and parametrization approaches adopted by each algorithm to reduce the dimensionality of unknowns and characterize the observing system. Statistically based error analyses, which are useful and necessary, provide information on the relative accuracy of each product but give very little information on the theoretical causes, knowledge that is essential for algorithm improvement. Thus, we are currently examining the possibility of improving the standard NASA AMSR-E global soil moisture product by conducting a thorough theoretically based review of, and inter-comparisons between, several well-established global retrieval techniques. A detailed discussion focused on the theoretical basis of each approach and each algorithm's sensitivity to assumptions and parametrization approaches will be presented. USDA is an equal opportunity provider and employer.
NASA Astrophysics Data System (ADS)
Loughman, Robert; Bhartia, Pawan K.; Chen, Zhong; Xu, Philippe; Nyaku, Ernest; Taha, Ghassan
2018-05-01
The theoretical basis of the Ozone Mapping and Profiler Suite (OMPS) Limb Profiler (LP) Version 1 aerosol extinction retrieval algorithm is presented. The algorithm uses an assumed bimodal lognormal aerosol size distribution to retrieve aerosol extinction profiles at 675 nm from OMPS LP radiance measurements. A first-guess aerosol extinction profile is updated by iteration using the Chahine nonlinear relaxation method, based on comparisons between the measured radiance profile at 675 nm and the radiance profile calculated by the Gauss-Seidel limb-scattering (GSLS) radiative transfer model for a spherical-shell atmosphere. This algorithm is discussed in the context of previous limb-scattering aerosol extinction retrieval algorithms, and the most significant error sources are enumerated. The retrieval algorithm is limited primarily by uncertainty about the aerosol phase function. Horizontal variations in aerosol extinction, which violate the spherical-shell atmosphere assumed in the version 1 algorithm, may also limit the quality of the retrieved aerosol extinction profiles significantly.
The Algorithm Theoretical Basis Document for the GLAS Atmospheric Data Products
NASA Technical Reports Server (NTRS)
Palm, Stephen P.; Hart, William D.; Hlavka, Dennis L.; Welton, Ellsworth J.; Spinhirne, James D.
2012-01-01
The purpose of this document is to present a detailed description of the algorithm theoretical basis for each of the GLAS data products. This will be the final version of this document. The algorithms were initially designed and written based on the authors' prior experience with high altitude lidar data on systems such as the Cloud and Aerosol Lidar System (CALS) and the Cloud Physics Lidar (CPL), both of which fly on the NASA ER-2 high altitude aircraft. These lidar systems have been employed in many field experiments around the world and algorithms have been developed to analyze these data for a number of atmospheric parameters. CALS data have been analyzed for cloud top height, thin cloud optical depth, cirrus cloud emittance (Spinhirne and Hart, 1990) and boundary layer depth (Palm and Spinhirne, 1987, 1998). The successor to CALS, the CPL, has also been extensively deployed in field missions since 2000 including the validation of GLAS and CALIPSO. The CALS and early CPL data sets also served as the basis for the construction of simulated GLAS data sets which were then used to develop and test the GLAS analysis algorithms.
NASA Technical Reports Server (NTRS)
Brenner, Anita C.; Zwally, H. Jay; Bentley, Charles R.; Csatho, Bea M.; Harding, David J.; Hofton, Michelle A.; Minster, Jean-Bernard; Roberts, LeeAnne; Saba, Jack L.; Thomas, Robert H.;
2012-01-01
The primary purpose of the GLAS instrument is to detect ice elevation changes over time, which are used to derive changes in ice volume. Other objectives include measuring sea ice freeboard, ocean and land surface elevation, surface roughness, and canopy heights over land. This Algorithm Theoretical Basis Document (ATBD) describes the theory and implementation behind the algorithms used to produce the level 1B products for waveform parameters and global elevation and the level 2 products that are specific to ice sheet, sea ice, land, and ocean elevations respectively. These output products are defined in detail, along with the associated quality information and the constraints and assumptions used to derive them.
Zombie algorithms: a timesaving remote sensing systems engineering tool
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.; Powell, Dylan C.; Marley, Stephen
2008-08-01
In modern horror fiction, zombies are generally undead corpses brought back from the dead by supernatural or scientific means, and are rarely under anyone's direct control. They typically have very limited intelligence, and hunger for the flesh of the living [1]. Typical spectroradiometric or hyperspectral instruments provide calibrated radiances for a number of remote sensing algorithms. The algorithms typically must meet specified latency and availability requirements while yielding products at the required quality. These systems, whether research, operational, or a hybrid, are typically cost constrained. Complexity of the algorithms can be high, and may evolve and mature over time as sensor characterization changes, product validation occurs, and areas of scientific basis improvement are identified and completed. This suggests the need for a systems engineering process for algorithm maintenance that is agile, cost efficient, repeatable, and predictable. Experience on remote sensing science data systems suggests the benefits of "plug-n-play" concepts of operation. The concept, while intuitively simple, can be challenging to implement in practice. The use of zombie algorithms (empty shells that outwardly resemble the form, fit, and function of a "complete" algorithm without the implemented theoretical basis) provides ground system advantages equivalent to those obtained by integrating sensor engineering models onto the spacecraft bus. Combining this with a mature, repeatable process for incorporating the theoretical basis, or scientific core, into the "head" of the zombie algorithm, along with the associated scripting and registration, provides an easy "on ramp" for the rapid and low-risk integration of scientific applications into operational systems.
A new BP Fourier algorithm and its application in English teaching evaluation
NASA Astrophysics Data System (ADS)
Pei, Xuehui; Pei, Guixin
2017-08-01
The BP neural network algorithm has wide adaptability and accuracy when used in complicated system evaluation, but calculation defects such as slow convergence have limited its practical application. This paper tries to speed up the convergence of the BP neural network algorithm with Fourier basis functions and presents a new BP Fourier algorithm for complicated system evaluation. First, the shortcomings and working principle of the BP algorithm are analyzed for subsequent targeted improvement. Second, the presented BP Fourier algorithm adopts Fourier basis functions to simplify the calculation structure, designs a new transfer function between the input and output layers, and is analyzed theoretically to establish its efficiency. Finally, the presented algorithm is used to evaluate university English teaching; the application results show that the BP Fourier algorithm has better calculation efficiency and evaluation accuracy and can be used in practice for evaluating complicated systems.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the DFT measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the DFT matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising during reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can therefore improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
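As an illustration of the reconstruction step described above, the following minimal sketch forms a measurement matrix from rows of the discrete Fourier transform matrix, simulates bucket-detector measurements, and recovers the object with the pseudo-inverse. The object size, the sampling ratios, and the use of complex DFT rows (physically realized with phase-shifted cosine illumination, a detail omitted here) are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                      # toy object is n x n pixels (illustrative assumption)
npix = n * n

x = np.zeros((n, n))
x[4:12, 4:12] = 1.0         # simple bright-square object
x = x.ravel()

# DFT "measurement matrix": each row is one preset illumination pattern.
# (Physically the complex rows would be realized with phase-shifted cosine
# patterns; that detail is omitted in this sketch.)
F = np.fft.fft(np.eye(npix)) / np.sqrt(npix)

for m in (npix, npix // 2):                      # full and reduced sampling
    rows = rng.choice(npix, size=m, replace=False)
    phi = F[rows]
    y = phi @ x                                  # bucket-detector measurements
    x_hat = (np.linalg.pinv(phi) @ y).real       # pseudo-inverse reconstruction
    err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
    print(f"m = {m:4d} measurements -> relative error {err:.3e}")
```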
Gueddida, Saber; Yan, Zeyin; Kibalin, Iurii; Voufack, Ariste Bolivard; Claiser, Nicolas; Souhassou, Mohamed; Lecomte, Claude; Gillon, Béatrice; Gillet, Jean-Michel
2018-04-28
In this paper, we propose a simple cluster model with limited basis sets to reproduce the unpaired electron distributions in a YTiO3 ferromagnetic crystal. The spin-resolved one-electron-reduced density matrix is reconstructed simultaneously from theoretical magnetic structure factors and directional magnetic Compton profiles using our joint refinement algorithm. This algorithm is guided by the rescaling of basis functions and the adjustment of the spin population matrix. The resulting spin electron density in both position and momentum spaces from the joint refinement model is in agreement with theoretical and experimental results. Benefits brought from magnetic Compton profiles to the entire spin density matrix are illustrated. We studied the magnetic properties of the YTiO3 crystal along the Ti-O1-Ti bonding. We found that the basis functions are mostly rescaled by means of magnetic Compton profiles, while the molecular occupation numbers are mainly modified by the magnetic structure factors.
Pattern recognition: A basis for remote sensing data analysis
NASA Technical Reports Server (NTRS)
Swain, P. H.
1973-01-01
The theoretical basis for the pattern-recognition-oriented algorithms used in the multispectral data analysis software system is discussed. A model of a general pattern recognition system is presented. The receptor or sensor is usually a multispectral scanner. For each ground resolution element the receptor produces n numbers or measurements corresponding to the n channels of the scanner.
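The pattern-recognition basis described here is commonly realized as a per-pixel Gaussian maximum-likelihood classifier operating on the n channel measurements. The sketch below, with synthetic class statistics, illustrates that idea; it is a hedged illustration, not the historical multispectral analysis system itself.

```python
import numpy as np

# Per-pixel Gaussian maximum-likelihood classification of n-channel
# multispectral measurements (illustrative sketch with synthetic classes).
rng = np.random.default_rng(1)
n_channels, n_classes = 4, 3

# Training statistics per class: mean vector and covariance (synthetic here).
means = rng.uniform(0, 100, size=(n_classes, n_channels))
covs = np.array([np.diag(rng.uniform(1, 5, n_channels)) for _ in range(n_classes)])

def ml_classify(pixels):
    """Assign each pixel (row of n_channels measurements) to the class with
    the largest Gaussian log-likelihood."""
    scores = []
    for mu, cov in zip(means, covs):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        ll = -0.5 * (np.einsum("ij,jk,ik->i", diff, inv, diff) + logdet)
        scores.append(ll)
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Classify a few pixels drawn from class 1: the output labels should be 1.
test_pixels = rng.multivariate_normal(means[1], covs[1], size=5)
print(ml_classify(test_pixels))
```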
On the theoretical link between LLL-reduction and Lambda-decorrelation
NASA Astrophysics Data System (ADS)
Lannes, A.
2013-04-01
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515-534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda-decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved, the LLL-reduced basis of the LLL algorithm, or the Λ-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The Λ-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the Λ-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given Λ-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda-decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel Λ-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C^* conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93-101, 2012).
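The orthogonality-defect parameter used for such comparisons can be computed directly from a basis matrix. The sketch below uses one common definition (the product of the basis-vector norms divided by the lattice volume), which may differ in normalization from the measure defined in the paper; the example bases are illustrative.

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of the columns of B: the product of the column
    norms divided by the lattice volume sqrt(det(B^T B)). Equals 1 for an
    orthogonal basis and grows as the basis becomes more skewed."""
    norms = np.linalg.norm(B, axis=0)
    volume = np.sqrt(np.linalg.det(B.T @ B))
    return np.prod(norms) / volume

B_skewed = np.array([[1.0, 0.9], [0.0, 0.1]])     # nearly parallel columns
B_reduced = np.array([[1.0, -0.1], [0.0, 0.1]])   # same lattice, shorter basis
print(orthogonality_defect(B_skewed))             # ~9.1: poor basis
print(orthogonality_defect(B_reduced))            # ~1.4: closer to orthogonal
```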
NASA Technical Reports Server (NTRS)
Barker, John L.; Harnden, Joann M. K.; Montgomery, Harry; Anuta, Paul; Kvaran, Geir; Knight, ED; Bryant, Tom; Mckay, AL; Smid, Jon; Knowles, Dan, Jr.
1994-01-01
The EOS Moderate Resolution Imaging Spectrometer (MODIS) is being developed by NASA for flight on the Earth Observing System (EOS) series of satellites, the first of which (EOS-AM-1) is scheduled for launch in 1998. This document describes the algorithms, and their theoretical basis, for the MODIS Level 1B characterization, calibration, and geolocation processing, which must produce radiometrically, spectrally, and spatially calibrated data with sufficient accuracy that global change research programs can detect minute changes in biogeophysical parameters. The document first describes the geolocation algorithm, which determines the geodetic latitude, longitude, and elevation of each MODIS pixel and the geometric parameters of each observation (satellite zenith angle, satellite azimuth, range to the satellite, solar zenith angle, and solar azimuth). Next, the utilization of the MODIS onboard calibration sources, which consist of the Spectroradiometric Calibration Assembly (SRCA), Solar Diffuser (SD), Solar Diffuser Stability Monitor (SDSM), and the Blackbody (BB), is treated; characterization of these sources and integration of their measurements into the calibration process is described. Finally, the use of external sources, including the Moon, instrumented sites on the Earth (called vicarious calibration), and unsupervised normalization sites having invariant reflectance and emissive properties, is treated, as are the algorithms for generating the utility masks needed for scene-based calibration. Eight appendices are provided, covering instrument design and additional algorithm details.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
Estimation of leaf area index and its sunlit portion from DSCOVR EPIC data: Theoretical basis
Yang, Bin; Knyazikhin, Yuri; Mõttus, Matti; Rautiainen, Miina; Stenberg, Pauline; Yan, Lei; Chen, Chi; Yan, Kai; Choi, Sungho; Park, Taejin; Myneni, Ranga B.
2017-01-01
This paper presents the theoretical basis of the algorithm designed for the generation of leaf area index and diurnal course of its sunlit portion from NASA’s Earth Polychromatic Imaging Camera (EPIC) onboard NOAA’s Deep Space Climate Observatory (DSCOVR). The Look-up-Table (LUT) approach implemented in the MODIS operational LAI/FPAR algorithm is adopted. The LUT, which is the heart of the approach, has been significantly modified. First, its parameterization incorporates the canopy hot spot phenomenon and recent advances in the theory of canopy spectral invariants. This allows more accurate decoupling of the structural and radiometric components of the measured Bidirectional Reflectance Factor (BRF), improves scaling properties of the LUT and consequently simplifies adjustments of the algorithm for data spatial resolution and spectral band compositions. Second, the stochastic radiative transfer equations are used to generate the LUT for all biome types. The equations naturally account for radiative effects of the three-dimensional canopy structure on the BRF and allow for an accurate discrimination between sunlit and shaded leaf areas. Third, the LUT entries are measurable, i.e., they can be independently derived from both below canopy measurements of the transmitted and above canopy measurements of reflected radiation fields. This feature makes possible direct validation of the LUT, facilitates identification of its deficiencies and development of refinements. Analyses of field data on canopy structure and leaf optics collected at 18 sites in the Hyytiälä forest in the southern boreal zone of Finland and hyperspectral images acquired by the EO-1 Hyperion sensor support the theoretical basis. PMID:28867834
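To make the LUT retrieval structure concrete, the sketch below mimics the acceptance-and-averaging step of the MODIS-heritage approach: all candidate LAI values whose modeled reflectances match the measurement within an assumed uncertainty are accepted and averaged. The two-band "BRF model", band set, and uncertainty threshold are crude placeholders for illustration only and bear no relation to the stochastic radiative-transfer LUT described above.

```python
import numpy as np

lai_grid = np.linspace(0.0, 7.0, 71)          # candidate LAI values in the LUT

def toy_brf(lai):
    """Placeholder two-band (red, NIR) reflectance as a function of LAI.
    NOT the stochastic radiative-transfer model; illustration only."""
    red = 0.30 * np.exp(-0.9 * lai) + 0.03
    nir = 0.45 * (1.0 - np.exp(-0.55 * lai)) + 0.10
    return np.stack([red, nir], axis=-1)

lut = toy_brf(lai_grid)                        # precomputed LUT entries

def retrieve_lai(brf_meas, rel_unc=0.1):
    """Average all LAI entries whose LUT reflectances match the measurement
    to within the assumed relative uncertainty in every band."""
    ok = np.all(np.abs(lut - brf_meas) <= rel_unc * brf_meas, axis=1)
    return lai_grid[ok].mean() if ok.any() else np.nan

true_lai = 3.0
noise = 1 + 0.05 * np.random.default_rng(2).standard_normal(2)
measured = toy_brf(true_lai) * noise
print(f"retrieved LAI: {retrieve_lai(measured):.2f} (true {true_lai})")
```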
Sheng, Xi
2012-07-01
This thesis studies an automatic replenishment algorithm for the hospital medical-supplies supply chain. The mathematical model and algorithm for automatic replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm, and a partition algorithm. The automatic replenishment algorithm is shown to calculate medical-supplies distribution amounts automatically and to optimize the distribution scheme. It can be concluded that the inventory-theory model and algorithm, if applied in the medical-supplies circulation field, could provide theoretical and technological support for realizing automatic replenishment of medical supplies in the hospital supply chain.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1987-01-01
The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatial distributed computer environment is presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
Supervised learning of probability distributions by neural networks
NASA Technical Reports Server (NTRS)
Baum, Eric B.; Wilczek, Frank
1988-01-01
Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
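A minimal sketch of the modification described above: output neurons are treated as probabilities (sigmoid activations), and the weights follow the gradient of the log-likelihood (cross-entropy) rather than of the squared error. The single-layer network, toy data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs in [0,1]^2, target "probability" is 1 when x0 + x1 > 1.
X = rng.uniform(size=(200, 2))
t = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)
b = 0.0
eta = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    y = sigmoid(X @ w + b)                 # output neuron read as a probability
    # Gradient of the log-likelihood sum(t*log y + (1-t)*log(1-y)); for a
    # sigmoid output this reduces to the simple form (t - y) * x.
    grad_w = X.T @ (t - y) / len(X)
    grad_b = np.mean(t - y)
    w += eta * grad_w                       # ascend the log-likelihood
    b += eta * grad_b

print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == (t > 0.5)))
```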
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
The Algorithm Theoretical Basis Document for Level 1A Processing
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Hancock, David W., III
2012-01-01
The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudiarta, I. Wayan; Angraini, Lily Maysari, E-mail: lilyangraini@unram.ac.id
We have applied the finite difference time domain (FDTD) method with the supersymmetric quantum mechanics (SUSY-QM) procedure to determine excited energies of one-dimensional quantum systems. The theoretical basis of FDTD and SUSY-QM, a numerical algorithm, and an illustrative example for a particle in a one-dimensional square-well potential are given in this paper. It is shown that the numerical results are in excellent agreement with theoretical results. Numerical errors produced by the SUSY-QM procedure are due to errors in the estimation of the superpotentials and supersymmetric partner potentials.
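A compact numerical illustration of the FDTD/SUSY-QM idea, assuming ħ = m = 1 and a harmonic potential for numerical convenience rather than the square well treated in the paper: imaginary-time finite-difference relaxation yields the ground state, the superpotential built from it defines the SUSY partner potential, and a second relaxation on the partner potential returns the first excited energy.

```python
import numpy as np

# Imaginary-time FDTD + SUSY-QM sketch (hbar = m = 1). A harmonic potential
# V = x^2/2 is used for numerical convenience; its exact levels are 0.5, 1.5, ...
x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
dt = 0.2 * dx**2                      # explicit-scheme stability margin
V1 = 0.5 * x**2

def ground_state(V, steps=20000):
    """Relax a trial wavefunction in imaginary time and return (psi, energy)."""
    psi = np.exp(-x**2)               # arbitrary positive trial state
    for _ in range(steps):
        lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
        lap[0] = lap[-1] = 0.0        # crude hard-wall boundaries
        psi += dt * (0.5 * lap - V * psi)
        psi /= np.sqrt(np.sum(psi**2) * dx)
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    lap[0] = lap[-1] = 0.0
    energy = np.sum(psi * (-0.5 * lap + V * psi)) * dx
    return psi, energy

psi0, E0 = ground_state(V1)

# SUSY-QM step: superpotential W = -(1/sqrt 2) d(ln psi0)/dx and partner
# potential V2 = W^2 + W'/sqrt(2) + E0. The ground energy of V2 equals the
# first excited energy of V1.
W = -np.gradient(np.log(np.abs(psi0) + 1e-300), dx) / np.sqrt(2.0)
V2 = W**2 + np.gradient(W, dx) / np.sqrt(2.0) + E0

_, E1 = ground_state(V2)
print(f"E0 ~ {E0:.4f} (exact 0.5), E1 ~ {E1:.4f} (exact 1.5)")
```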
Kaliman, Ilya A; Krylov, Anna I
2017-04-30
A new hardware-agnostic contraction algorithm for tensors of arbitrary symmetry and sparsity is presented. The algorithm is implemented as a stand-alone open-source code libxm. This code is also integrated with general tensor library libtensor and with the Q-Chem quantum-chemistry package. An overview of the algorithm, its implementation, and benchmarks are presented. Similarly to other tensor software, the algorithm exploits efficient matrix multiplication libraries and assumes that tensors are stored in a block-tensor form. The distinguishing features of the algorithm are: (i) efficient repackaging of the individual blocks into large matrices and back, which affords efficient graphics processing unit (GPU)-enabled calculations without modifications of higher-level codes; (ii) fully asynchronous data transfer between disk storage and fast memory. The algorithm enables canonical all-electron coupled-cluster and equation-of-motion coupled-cluster calculations with single and double substitutions (CCSD and EOM-CCSD) with over 1000 basis functions on a single quad-GPU machine. We show that the algorithm exhibits predicted theoretical scaling for canonical CCSD calculations, O(N^6), irrespective of the data size on disk. © 2017 Wiley Periodicals, Inc.
Incorporation of quality updates for JPSS CGS Products
NASA Astrophysics Data System (ADS)
Cochran, S.; Grant, K. D.; Ibrahim, W.; Brueske, K. F.; Smit, P.
2016-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS) replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of the use of this approach to implement algorithm changes into operations will also be detailed.
Methods and Tools for Product Quality Maintenance in JPSS CGS
NASA Astrophysics Data System (ADS)
Cochran, S.; Smit, P.; Grant, K. D.; Jamilkowski, M. L.
2015-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS) replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of the use of this approach to implement algorithm changes into operations will also be detailed.
Group-theoretical approach to the construction of bases in 2^n-dimensional Hilbert space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, A.; Romero, J. L.; Klimov, A. B., E-mail: klimov@cencar.udg.mx
2011-06-15
We propose a systematic procedure to construct all the possible bases with definite factorization structure in 2^n-dimensional Hilbert space and discuss an algorithm for the determination of basis separability. The results are applied for classification of bases for an n-qubit system.
NASA Astrophysics Data System (ADS)
Bogusz, Michael
1993-01-01
The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well established aircraft antenna to antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.
A single VLSI chip for computing syndromes in the (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
A description of a single VLSI chip for computing syndromes in the (255, 223) Reed-Solomon decoder is presented. The architecture that leads to this single VLSI chip design makes use of the dual basis multiplication algorithm. The same architecture can be applied to design VLSI chips to compute various kinds of number theoretic transforms.
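For context, syndrome computation for a (255, 223) Reed-Solomon code reduces to evaluating the received polynomial at consecutive powers of the field generator; a conventional power-basis software sketch is given below. The dual-basis multiplication exploited by the VLSI architecture is not reproduced, and the primitive polynomial and root range used here are illustrative assumptions rather than the exact CCSDS representation.

```python
# Syndrome computation for a (255, 223) Reed-Solomon code over GF(2^8).
# Conventional power-basis software sketch; the chip described above uses a
# dual-basis multiplication algorithm, which is not shown. The field is
# generated here by x^8+x^4+x^3+x^2+1 (0x11D), an illustrative choice.

PRIM = 0x11D
EXP = [0] * 512
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply two GF(2^8) elements via the exp/log tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, n_parity=32):
    """S_j = r(alpha^j) for j = 1..n_parity, evaluated by Horner's rule.
    All-zero syndromes mean the received word is a valid codeword."""
    out = []
    for j in range(1, n_parity + 1):
        alpha_j = EXP[j]
        s = 0
        for byte in received:
            s = gf_mul(s, alpha_j) ^ byte
        out.append(s)
    return out

codeword = [0] * 255                        # trivially valid all-zero codeword
print(all(s == 0 for s in syndromes(codeword)))        # True: no errors
corrupted = codeword[:]
corrupted[10] ^= 0x5A
print(any(s != 0 for s in syndromes(corrupted)))       # True: error detected
```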
The Algorithm Theoretical Basis Document for Tidal Corrections
NASA Technical Reports Server (NTRS)
Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.
2012-01-01
This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides, which lead to deviations from an equilibrium surface. Since the effect of tides depends on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are referenced to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide. There are also long-period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e. the residual error after correction). All of these components are important for GLAS measurements over the ice sheets, since centimeter-level accuracy is required for surface elevation change detection. The effect of each tidal component is removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.
Infrared super-resolution imaging based on compressed sensing
NASA Astrophysics Data System (ADS)
Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei
2014-03-01
The theoretical basis of the traditional infrared super-resolution imaging method is the Nyquist sampling theorem. Reconstruction assumes that the relative positions of the infrared objects in the low-resolution image sequence remain fixed, and image restoration amounts to inverting an ill-posed problem without fixed rules. These constraints limit the super-resolution reconstruction capability, the algorithm's range of application, and the stability of the reconstruction. To address this, we propose a super-resolution reconstruction method based on compressed sensing. In this method, a Toeplitz matrix is selected as the measurement matrix and realized by a phase-mask method. A complementary matching pursuit algorithm is investigated and selected as the recovery algorithm. To accommodate moving targets and reduce imaging time, an area infrared focal plane array is used to acquire multiple measurements at one time. Theoretically, the method goes beyond the Nyquist sampling theorem and can greatly improve the spatial resolution of infrared images. The image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.
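A small numerical sketch of the measurement-and-recovery structure described above, using a random Toeplitz measurement matrix and standard orthogonal matching pursuit as a stand-in for the complementary matching pursuit algorithm of the paper; the signal length, measurement count, and sparsity level are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n, m, k = 128, 48, 5            # signal length, measurements, sparsity (toy sizes)

# Toeplitz measurement matrix built from one random generating sequence.
c = rng.standard_normal(m)
r = rng.standard_normal(n)
r[0] = c[0]
A = toeplitz(c, r) / np.sqrt(m)

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse scene
y = A @ x                                                      # measurements

def omp(A, y, k):
    """Standard orthogonal matching pursuit: greedily grow the support and
    re-fit the coefficients by least squares at every step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("support recovered:", set(np.nonzero(x)[0]) == set(np.nonzero(x_hat)[0]))
```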
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1974-01-01
An optimum powered explicit guidance algorithm capable of handling all space shuttle exoatmospheric maneuvers is presented. The theoretical and practical basis for the currently baselined space shuttle powered flight guidance equations and logic is documented. Detailed flow diagrams for implementing the steering computations for all shuttle phases, including powered return to launch site (RTLS) abort, are also presented. Derivation of the powered RTLS algorithm is provided, as well as detailed flow diagrams for implementing the option. The flow diagrams and equations are compatible with the current powered flight documentation.
Meteorological correction of optical beam refraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukin, V.P.; Melamud, A.E.; Mironov, V.L.
1986-02-01
At the present time laser reference systems (LRS's) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRS's constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm obtained permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.
NASA Technical Reports Server (NTRS)
Chen, Wei-Ting; Kahn, Ralph A.; Nelson, David; Yau, Kevin; Seinfeld, John H.
2008-01-01
The treatment of biomass burning (BB) carbonaceous particles in the Multiangle Imaging SpectroRadiometer (MISR) Standard Aerosol Retrieval Algorithm is assessed, and algorithm refinements are suggested, based on a theoretical sensitivity analysis and comparisons with near-coincident AERONET measurements at representative BB sites. Over the natural ranges of BB aerosol microphysical and optical properties observed in past field campaigns, patterns of retrieved Aerosol Optical Depth (AOD), particle size, and single scattering albedo (SSA) are evaluated. On the basis of the theoretical analysis, assuming total column AOD of 0.2, over a dark, uniform surface, MISR can distinguish two to three groups in each of size and SSA, except when the assumed atmospheric particles are significantly absorbing (mid-visible SSA approximately 0.84), or of medium sizes (mean radius approximately 0.13 µm); sensitivity to absorbing, medium-large size particles increases considerably when the assumed column AOD is raised to 0.5. MISR Research Aerosol Retrievals confirm the theoretical results, based on coincident AERONET inversions under BB-dominated conditions. When BB is externally mixed with dust in the atmosphere, dust optical model and surface reflection uncertainties, along with spatial variability, contribute to differences between the Research Retrievals and AERONET. These results suggest specific refinements to the MISR Standard Aerosol Algorithm complement of component particles and mixtures. They also highlight the importance for satellite aerosol retrievals of surface reflectance characterization, with accuracies that can be difficult to achieve with coupled surface-aerosol algorithms in some higher AOD situations.
Signal and noise modeling in confocal laser scanning fluorescence microscopy.
Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf E; Aach, Til
2012-01-01
Fluorescence confocal laser scanning microscopy (CLSM) has revolutionized imaging of subcellular structures in biomedical research by enabling the acquisition of 3D time-series of fluorescently-tagged proteins in living cells, hence forming the basis for an automated quantification of their morphological and dynamic characteristics. Due to the inherently weak fluorescence, CLSM images exhibit a low SNR. We present a novel model for the transfer of signal and noise in CLSM that is both theoretically sound as well as corroborated by a rigorous analysis of the pixel intensity statistics via measurement of the 3D noise power spectra, signal-dependence and distribution. Our model provides a better fit to the data than previously proposed models. Further, it forms the basis for (i) the simulation of the CLSM imaging process indispensable for the quantitative evaluation of CLSM image analysis algorithms, (ii) the application of Poisson denoising algorithms and (iii) the reconstruction of the fluorescence signal.
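A generic signal-dependent noise simulation in the spirit of such models is sketched below: Poisson photon noise scaled by a detector gain plus additive Gaussian readout noise. The gain, offset, and read-noise values are illustrative, and this is not the specific model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clsm(fluorescence, gain=2.0, offset=100.0, read_sigma=3.0):
    """Simulate detector output for an array of expected photon counts:
    shot (Poisson) noise scaled by a gain, plus offset and Gaussian read noise.
    Parameter values are illustrative assumptions."""
    photons = rng.poisson(fluorescence)
    return gain * photons + offset + rng.normal(0.0, read_sigma, fluorescence.shape)

signal = np.full((64, 64), 20.0)          # flat object emitting ~20 photons/voxel
img = simulate_clsm(signal)

# For this model the variance is signal dependent:
# Var = gain^2 * mean_photons + read_sigma^2.
print(img.mean(), img.var(), 2.0**2 * 20 + 3.0**2)
```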
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
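A minimal sketch of the QKLMS update described above: each new input either updates the coefficient of its closest existing center (when it falls within the quantization size) or is added as a new center. The Gaussian kernel width, step size, quantization size, and test function below are illustrative choices.

```python
import numpy as np

class QKLMS:
    """Quantized kernel least mean square (Gaussian kernel). New inputs within
    `quant_size` of an existing center update that center's coefficient
    instead of growing the dictionary."""

    def __init__(self, eta=0.5, quant_size=0.1, kernel_width=1.0):
        self.eta, self.quant_size, self.width = eta, quant_size, kernel_width
        self.centers, self.coeffs = [], []

    def _kernel(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * self.width ** 2))

    def predict(self, u):
        return sum(a * self._kernel(c, u) for a, c in zip(self.coeffs, self.centers))

    def update(self, u, d):
        e = d - self.predict(u)
        if self.centers:
            dists = [np.linalg.norm(u - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.quant_size:     # quantize: merge into closest center
                self.coeffs[j] += self.eta * e
                return e
        self.centers.append(np.asarray(u, dtype=float))
        self.coeffs.append(self.eta * e)
        return e

# Static function estimation example: learn y = sin(3x) from a noisy stream.
rng = np.random.default_rng(0)
f = QKLMS(eta=0.5, quant_size=0.05, kernel_width=0.3)
for _ in range(2000):
    u = rng.uniform(-1, 1, size=1)
    f.update(u, np.sin(3 * u[0]) + 0.01 * rng.standard_normal())

xs = np.linspace(-1, 1, 5)
print([round(f.predict(np.array([v])) - np.sin(3 * v), 3) for v in xs])
print("dictionary size:", len(f.centers))
```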
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved spatial resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known, sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline with preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program that contains continuous improvement while retaining its manageability.
Theoretical modeling of the vapor cavitation in dynamically loaded journal bearings
NASA Technical Reports Server (NTRS)
Brewe, D. E.
1985-01-01
A theoretical investigation is made of the evolution of a vapor bubble for a submerged journal bearing under dynamically loaded conditions using the Elrod algorithm. This method conserves mass throughout the computational domain. A comparison study is performed to determine some of the consequences of applying a nonconservative theory (pseudo-Gumbel boundary condition) to a dynamic problem. A complete dynamic cycle of a journal whirling in a circular path is chosen as the basis of comparison. Significant differences are observed in the load components near the end of the cycle. Further, good agreement with experiment is found for stationary and nonstationary cavitation.
NASA Astrophysics Data System (ADS)
Mladenova, I. E.; Jackson, T. J.; Bindlish, R.; Njoku, E. G.; Chan, S.; Cosh, M. H.
2012-12-01
We are currently evaluating potential improvements to the standard NASA global soil moisture product derived using observations acquired from the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E). A major component of this effort is a thorough review of the theoretical basis of available passive-based soil moisture retrieval algorithms suitable for operational implementation. Several agencies provide routine soil moisture products. Our research focuses on five well-established techniques that are capable of carrying out global retrieval using the same AMSR-E data set as the NASA approach (i.e. X-band brightness temperature data). In general, most passive-based algorithms include two major components: radiative transfer modeling, which provides the smooth surface reflectivity properties of the soil surface, and a complex dielectric constant model of the soil-water mixture. These two components are related through the Fresnel reflectivity equations. Furthermore, the land surface temperature, vegetation, roughness and soil properties need to be adequately accounted for in the radiative transfer and dielectric modeling. All of the available approaches we have examined follow the general data processing flow described above, however, the actual solutions as well as the final products can be very different. This is primarily a result of the assumptions, number of sensor variables utilized, the selected ancillary data sets and approaches used to account for the effect of the additional geophysical variables impacting the measured signal. The operational NASA AMSR-E-based retrievals have been shown to have a dampened temporal response and sensitivity range. Two possible approaches to addressing these issues are being evaluated: enhancing the theoretical basis of the existing algorithm, if feasible, or directly adjusting the dynamic range of the final soil moisture product. Both of these aspects are being actively investigated and will be discussed in our talk. Improving the quality and reliability of the global soil moisture product would result in greater acceptance and utilization in the related applications. USDA is an equal opportunity provider and employer.
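The shared forward-model structure described above (dielectric model, Fresnel smooth-surface reflectivity, vegetation-attenuated emission) is commonly expressed as the zeroth-order tau-omega model; the sketch below uses a simple empirical dielectric expression as a placeholder and illustrative parameter values, and is not any agency's operational configuration.

```python
import numpy as np

def soil_dielectric(sm):
    """Very simple empirical dielectric constant vs. volumetric soil moisture
    (placeholder; operational algorithms use Dobson/Mironov-type models)."""
    return 3.0 + 9.3 * sm + 146.0 * sm**2 - 76.7 * sm**3

def fresnel_reflectivity(eps, theta_deg):
    """Smooth-surface Fresnel power reflectivities (H, V) at incidence theta."""
    th = np.deg2rad(theta_deg)
    root = np.sqrt(eps - np.sin(th) ** 2)
    r_h = np.abs((np.cos(th) - root) / (np.cos(th) + root)) ** 2
    r_v = np.abs((eps * np.cos(th) - root) / (eps * np.cos(th) + root)) ** 2
    return r_h, r_v

def tau_omega_tb(sm, ts, tc, tau, omega=0.05, theta_deg=55.0):
    """Zeroth-order tau-omega brightness temperatures (H, V) for soil
    temperature ts, canopy temperature tc, vegetation optical depth tau and
    single-scattering albedo omega. Parameter values are illustrative."""
    gamma = np.exp(-tau / np.cos(np.deg2rad(theta_deg)))   # canopy transmissivity
    r_h, r_v = fresnel_reflectivity(soil_dielectric(sm), theta_deg)
    tb = []
    for r in (r_h, r_v):
        soil = ts * (1.0 - r) * gamma
        veg = tc * (1.0 - omega) * (1.0 - gamma) * (1.0 + r * gamma)
        tb.append(soil + veg)
    return tuple(tb)

for sm in (0.05, 0.20, 0.40):
    tb_h, tb_v = tau_omega_tb(sm, ts=295.0, tc=295.0, tau=0.1)
    print(f"sm={sm:.2f}  TB_H={tb_h:6.1f} K  TB_V={tb_v:6.1f} K")
```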
Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A
2001-05-01
The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods. Copyright 2001 Academic Press.
On the Hilbert-Huang Transform Theoretical Developments
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Patrick, David; Hestnes, Phyllis
2005-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as linearity, stationarity, and satisfaction of the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectrum analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a near-orthogonal adaptive basis, a basis that is derived from the data. The IMFs can be further analyzed for spectrum interpretation by the classical Hilbert Transform. A new engineering spectrum analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources, enhanced HHT synthesis, and a broader scope of HHT applications for signal processing.
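A deliberately minimal EMD sift is sketched below to make the mechanism concrete: the mean of cubic-spline envelopes through the local extrema is repeatedly subtracted, so the fastest oscillation is extracted first. It uses a fixed number of sifting iterations and no boundary treatment or stoppage-criterion refinements, and is not the HHT-DPS implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmax, argrelmin

def sift_imf(x, t, n_sifts=10):
    """Extract one IMF with a fixed number of sifting iterations: subtract the
    mean of the cubic-spline envelopes through the local maxima and minima."""
    h = x.copy()
    for _ in range(n_sifts):
        imax, imin = argrelmax(h)[0], argrelmin(h)[0]
        if len(imax) < 3 or len(imin) < 3:
            break
        upper = CubicSpline(t[imax], h[imax])(t)
        lower = CubicSpline(t[imin], h[imin])(t)
        h = h - 0.5 * (upper + lower)
    return h

def emd(x, t, max_imfs=4):
    """Peel off IMFs until the residual is a trend with too few extrema."""
    imfs, residual = [], x.copy()
    for _ in range(max_imfs):
        imf = sift_imf(residual, t)
        imfs.append(imf)
        residual = residual - imf
        if len(argrelmax(residual)[0]) < 3:
            break
    return imfs, residual

# Two-tone test: the fast 5 Hz component should be sifted out first.
t = np.linspace(0, 2, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)
imfs, res = emd(x, t)
print("IMF-1 correlation with the 5 Hz tone:",
      round(np.corrcoef(imfs[0], np.sin(2 * np.pi * 5 * t))[0, 1], 3))
```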
Polcari, J.
2013-08-16
The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
Design factors and considerations for a time-based flight management system
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Williams, D. H.; Sorensen, J. A.
1986-01-01
Recent NASA Langley Research Center research to develop a technology data base from which an advanced Flight Management System (FMS) design might evolve is reviewed. In particular, the generation of fixed range cruise/descent reference trajectories which meet predefined end conditions of altitude, speed, and time is addressed. Results on the design and theoretical basis of the trajectory generation algorithm are presented, followed by a brief discussion of a series of studies that are being conducted to determine the accuracy requirements of the aircraft and weather models resident in the trajectory generation algorithm. Finally, studies to investigate the interface requirements between the pilot and an advanced FMS are considered.
The airport gate assignment problem: a survey.
Bouras, Abdelghani; Ghaleb, Mageed A; Suryahatmaja, Umar S; Salem, Ahmed M
2014-01-01
The airport gate assignment problem (AGAP) is one of the most important problems operations managers face daily. Much research has been done to solve this problem and tackle its complexity. The objective of the task is to assign each flight (aircraft) to an available gate while maximizing both convenience to passengers and the operational efficiency of the airport. This objective requires a solution that provides the ability to change and update the gate assignment data on a real-time basis. In this paper, we survey the state of the art of these problems and the various methods to obtain the solution. Our survey covers both theoretical and real AGAP with descriptions of mathematical formulations and resolution methods such as exact algorithms, heuristic algorithms, and metaheuristic algorithms. We also provide a research trend that can inspire researchers about new problems in this area.
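For a flavor of the underlying assignment problem, the following greedy sketch places flights, in order of arrival, on the first gate that is free for their entire ground time; the real AGAP formulations surveyed in the paper add objectives such as walking distance, preferences, and robustness that this toy ignores.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    name: str
    arrival: int      # minutes after some reference time
    departure: int

def assign_gates(flights, n_gates):
    """Greedy assignment: each flight, in arrival order, takes the first gate
    that is free for its whole ground time; otherwise it goes to a remote stand."""
    gate_free_at = [0] * n_gates
    assignment = {}
    for f in sorted(flights, key=lambda f: f.arrival):
        for g in range(n_gates):
            if gate_free_at[g] <= f.arrival:
                assignment[f.name] = g
                gate_free_at[g] = f.departure
                break
        else:
            assignment[f.name] = None      # no gate free: remote stand / apron
    return assignment

flights = [Flight("AA10", 0, 45), Flight("BB20", 30, 90),
           Flight("CC30", 50, 120), Flight("DD40", 95, 150)]
print(assign_gates(flights, n_gates=2))
```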
A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.
Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong
2015-12-01
Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
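For reference, the structure of graph-regularized multiplicative updates can be illustrated with the standard GNMF rules (Cai et al. style) sketched below; the improved cost function and the specific update rules proposed in the paper are not reproduced here, and the toy data and graph are assumptions.

```python
import numpy as np

# Standard graph-regularized NMF multiplicative updates for the objective
# ||X - U V^T||_F^2 + lam * tr(V^T L V), with graph Laplacian L = D - W.
rng = np.random.default_rng(0)

def gnmf(X, W, rank=2, lam=1.0, iters=300, eps=1e-9):
    n_feat, n_samp = X.shape
    D = np.diag(W.sum(axis=1))
    U = rng.random((n_feat, rank))
    V = rng.random((n_samp, rank))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# Toy data: two blocks of samples and a block-structured affinity graph.
X = rng.random((30, 40))
X[:, 20:] += 2.0
W = np.zeros((40, 40))
W[:20, :20] = 1.0
W[20:, 20:] = 1.0
np.fill_diagonal(W, 0.0)

U, V = gnmf(X, W, rank=2, lam=0.5)
print("cluster labels:", V.argmax(axis=1))     # should roughly separate the blocks
```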
Algorithm theoretical basis for GEDI level-4A footprint above ground biomass density.
NASA Astrophysics Data System (ADS)
Kellner, J. R.; Armston, J.; Blair, J. B.; Duncanson, L.; Hancock, S.; Hofton, M. A.; Luthcke, S. B.; Marselis, S.; Tang, H.; Dubayah, R.
2017-12-01
The Global Ecosystem Dynamics Investigation is a NASA Earth-Venture-2 mission that will place a multi-beam waveform lidar instrument on the International Space Station. GEDI data will provide globally representative measurements of vertical height profiles (waveforms) and estimates of above ground carbon stocks throughout the planet's temperate and tropical regions. Here we describe the current algorithm theoretical basis for the L4A footprint above ground biomass data product. The L4A data product is above ground biomass density (AGBD, Mg · ha-1) at the scale of individual GEDI footprints (25 m diameter). Footprint AGBD is derived from statistical models that relate waveform height metrics to field-estimated above ground biomass. The field estimates are from long-term permanent plot inventories in which all free-standing woody plants greater than a diameter size threshold have been identified and mapped. We simulated GEDI waveforms from discrete-return airborne lidar data using the GEDI waveform simulator. We associated height metrics from simulated waveforms with field-estimated AGBD at 61 sites in temperate and tropical regions of North and South America, Europe, Africa, Asia and Australia. We evaluated the ability of empirical and physically-based regression and machine learning models to predict AGBD at the footprint level. Our analysis benchmarks the performance of these models in terms of site and region-specific accuracy and transferability using a globally comprehensive calibration and validation dataset.
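As a loose illustration only (the operational L4A models are developed and selected far more carefully, and the numbers below are invented), a log-log linear regression of field-estimated AGBD on a waveform relative-height metric such as RH98 captures the general form of such footprint-level statistical models:

```python
# Schematic footprint-level biomass model: ln(AGBD) = a + b * ln(RH98).
import numpy as np

rh98 = np.array([12.0, 18.5, 25.1, 31.4, 40.2])      # m (made-up values)
agbd = np.array([45.0, 110.0, 190.0, 280.0, 420.0])  # Mg/ha (made-up values)

b, a = np.polyfit(np.log(rh98), np.log(agbd), 1)     # slope, intercept
predict = lambda h: np.exp(a) * h ** b
print(predict(28.0))   # AGBD prediction for a footprint with RH98 = 28 m
```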
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are increasingly applied to library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms, with 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble-metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene were used to build a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara
2016-01-01
The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory is in most respects similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite-borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model, explain the SMAP calibration algorithm, including approximations, errors, and biases, provide all necessary equations for implementing the calibration algorithm, and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument, and Section 4 covers the forward models upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
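The full SMAP calibration chain is considerably more involved, but the core counts-to-temperature step can be illustrated with a generic two-point (hot/cold reference) linear calibration; all numbers below are invented:

```python
# Generic two-point radiometer calibration sketch (not the full SMAP
# algorithm): internal reference counts map raw counts to antenna
# temperature, assuming a linear receiver response.
def counts_to_ta(c, c_ref_cold, c_ref_hot, t_ref_cold, t_ref_hot):
    gain = (t_ref_hot - t_ref_cold) / (c_ref_hot - c_ref_cold)  # K per count
    offset = t_ref_cold - gain * c_ref_cold
    return gain * c + offset

print(counts_to_ta(c=15800, c_ref_cold=12000, c_ref_hot=20000,
                   t_ref_cold=80.0, t_ref_hot=300.0))  # -> 184.5 K
```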
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof-of-concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground-based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
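The basis-rotation step itself follows from the standard rotation of the first three Stokes parameters (Tv, Th, U) by an angle phi, which is why the cross-correlation term U must be measured; a minimal sketch:

```python
# Standard Stokes-parameter rotation underlying EPBR.
import numpy as np

def rotate_basis(tv, th, u, phi):
    c, s = np.cos(phi), np.sin(phi)
    tv_r = tv * c**2 + th * s**2 + u * s * c
    th_r = tv * s**2 + th * c**2 - u * s * c
    u_r = (th - tv) * np.sin(2 * phi) + u * np.cos(2 * phi)
    return tv_r, th_r, u_r

print(rotate_basis(250.0, 180.0, 10.0, np.deg2rad(15)))  # brightness temps in K
```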
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Rong, Mingzhe; Qiu, Juan; Liu, Dingxin; Su, Biao; Wu, Yi
A new type of algorithm for predicting the mechanical faults of a vacuum circuit breaker (VCB) based on an artificial neural network (ANN) is proposed in this paper. There are two types of mechanical faults in a VCB: operation mechanism faults and tripping circuit faults. An angle displacement sensor is used to measure the main axle angle displacement, which reflects the displacement of the moving contact, to obtain the state of the operation mechanism in the VCB, while a Hall current sensor is used to measure the trip coil current, which reflects the operation state of the tripping circuit. An ANN prediction algorithm based on a sliding time window is then proposed and successfully used to predict mechanical faults in a VCB. These results provide a theoretical basis for the realization of online monitoring and fault diagnosis of VCBs.
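A minimal sketch of the sliding-time-window idea, assuming a generic monitored waveform and a small scikit-learn regressor (the window length, network size, and anomaly test here are illustrative, not the paper's):

```python
# Sliding-window one-step-ahead prediction; large residuals can flag
# a developing mechanical fault in the monitored signal.
import numpy as np
from sklearn.neural_network import MLPRegressor

def windowed(x, w):
    X = np.array([x[i:i + w] for i in range(len(x) - w)])
    return X, x[w:]

t = np.linspace(0, 4 * np.pi, 400)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
X, y = windowed(signal, w=20)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y)
residual = np.abs(model.predict(X) - y)
print(residual.max())   # threshold this to flag anomalous behavior
```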
Linear Scaling Density Functional Calculations with Gaussian Orbitals
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing
2018-01-01
To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as research objects; the ranking obtained by the proposed method was 4# > 6# > 1# > 5# > 2# > 3#, in complete conformity with the theoretical ranking, indicating that the EM-PCA method is reliable and effective. The method can guide state comparisons among different units and support wind farm operational assessment.
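The entropy-weighting half of such a hybrid model can be sketched directly; the indicator matrix below is invented, and the PCA stage is omitted:

```python
# Entropy-weight step: indicators with more dispersion across turbines
# receive more weight in the comprehensive score.
import numpy as np

def entropy_weights(X):
    """X: (n_turbines, m_indicators), positive entries."""
    P = X / X.sum(axis=0)                                  # column-wise shares
    n = X.shape[0]
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)   # entropy per indicator
    d = 1.0 - e                                            # degree of divergence
    return d / d.sum()                                     # normalized weights

X = np.array([[0.92, 120.0, 3.1],
              [0.88, 150.0, 2.7],
              [0.95, 110.0, 3.4]])   # made-up operational indicators
print(entropy_weights(X))
```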
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Cess, Robert D.; Charlock, Thomas P.; Coakley, James A.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 1 provides both summarized and detailed overviews of the CERES Release 1 data analysis system. CERES will produce global shortwave and longwave radiative fluxes at the top of the atmosphere, at the surface, and within the atmosphere by using a combination of a large variety of measurements and models. The CERES processing system includes radiance observations from CERES scanning radiometers, cloud properties derived from coincident satellite imaging radiometers, temperature and humidity fields from meteorological analysis models, and high-temporal-resolution geostationary satellite radiances to account for unobserved times. CERES will provide a continuation of the ERBE record and the lowest-error climatology of consistent cloud properties and radiation fields. CERES will also substantially improve our knowledge of the Earth's surface radiation budget.
Decoding communities in networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection; the only difference is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second, and more importantly, we show that Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named the decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound accurately explains the performance achieved by the best existing community detection algorithms, telling us that little room for their improvement potentially remains.
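A loose, back-of-the-envelope illustration of this style of accounting (not the paper's exact bound): compare the information needed to specify the memberships with the total capacity of the noisy pairwise observations, modeled here as uses of a binary symmetric channel:

```python
# Crude coding-theoretic feasibility check for community detection.
import numpy as np

def bsc_capacity(eps):
    h = lambda p: 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)
    return 1.0 - h(eps)                       # bits per channel use

def detectable(n, q, eps):
    info_bits = n * np.log2(q)                # q-ary memberships of n nodes
    channel_bits = n * (n - 1) / 2 * bsc_capacity(eps)   # noisy parity bits
    return channel_bits >= info_bits          # necessary, not sufficient

print(detectable(n=1000, q=4, eps=0.45))
```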
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators rely on fluorescent bulbs that are not tunable and occupy considerable space inside quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation with minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only is storage space reduced, but the demand on detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and recover edge information well. To verify the performance of the algorithm, we analyze by simulation the reconstruction results of the TV-based algorithm under different coding modes, verifying the stability of the algorithm, and we compare it with typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: at low measurement rates it can quickly and accurately recover the target image.
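A minimal proximal-gradient sketch of TV-regularized recovery is given below; note the paper itself solves an augmented Lagrangian by the alternating direction method, whereas here an off-the-shelf TV denoiser stands in for the TV proximal step:

```python
# TV-regularized compressive sensing recovery: gradient step on the
# data-fit term, then a TV denoising step as an approximate proximal
# operator (a plug-and-play stand-in for the paper's ADMM solver).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_cs_recover(A, y, shape, step=0.1, tv_weight=0.1, iters=100):
    x = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ x.ravel() - y)).reshape(shape)   # data-fit gradient
        x = denoise_tv_chambolle(x - step * grad, weight=tv_weight)
    return x

rng = np.random.default_rng(0)
n = 16 * 16
A = rng.normal(size=(n // 4, n)) / np.sqrt(n // 4)   # 25% measurement rate
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0  # piecewise-constant image
y = A @ truth.ravel()
rec = tv_cs_recover(A, y, truth.shape)
print(np.abs(rec - truth).mean())
```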
Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu
2018-02-01
Despite tremendous efforts made over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem. The major reason is the difficulty of localizing the robot using only its onboard sensors. In this paper, a newly designed adaptive trajectory TC method is proposed for the NMR without measurements of its position, orientation, or velocity. The controller is designed on the basis of a novel algorithm that estimates the position and velocity of the robot online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm drives the TC errors to converge asymptotically to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented in a test-bed software program, and this computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Ladar range image denoising by a nonlocal probability statistics algorithm
NASA Astrophysics Data System (ADS)
Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi
2013-01-01
Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), whereas NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, the multitemplate order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
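A per-pixel sketch of the NLPS idea, assuming an interior pixel and integer gray levels (the patch and search sizes are illustrative):

```python
# NLPS-style pixel estimate: group similar patches by block matching,
# then take the mode of the group's gray-level histogram (the gray
# value with maximum probability) instead of the NLM weighted mean.
import numpy as np

def nlps_pixel(img, i, j, patch=3, search=7, n_similar=8):
    """Estimate pixel (i, j); assumes (i, j) is far enough from the border."""
    h = patch // 2
    ref = img[i-h:i+h+1, j-h:j+h+1]
    candidates = []
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            u, v = i + di, j + dj
            blk = img[u-h:u+h+1, v-h:v+h+1]
            candidates.append((np.sum((blk - ref) ** 2), img[u, v]))
    candidates.sort(key=lambda c: c[0])           # most similar blocks first
    group = np.array([g for _, g in candidates[:n_similar]], dtype=int)
    vals, counts = np.unique(group, return_counts=True)
    return vals[np.argmax(counts)]                # mode of the marginal PDF

img = np.random.default_rng(0).integers(0, 8, size=(64, 64))  # 8 gray levels
print(nlps_pixel(img, 20, 20))
```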
Collision detection for spacecraft proximity operations. Ph.D. Thesis - MIT
NASA Technical Reports Server (NTRS)
Vaughan, Robin M.
1987-01-01
The development of a new collision detection algorithm to be used when two spacecraft are operating in the same vicinity is described. The two spacecraft are modeled as unions of convex polyhedra, where the polyhedron resulting from the union may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. The algorithm determines if a collision is possible and, if so, predicts the time when the collision will take place. The theoretical basis for the new collision detection algorithm is the C-function formulation of the configuration space approach recently introduced by researchers in robotics. Three different types of C-functions are defined that model the contacts between the vertices, edges, and faces of the polyhedra representing the two spacecraft. The C-functions are shown to be transcendental functions of time for the assumed trajectory of the moving spacecraft. The capabilities of the new algorithm are demonstrated for several example cases.
NASA Technical Reports Server (NTRS)
Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.
2016-01-01
This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands 29, 31, and 32, and will be generated for data from the NASA-EOS AM and PM platforms. This is version 1.0 of the ATBD, and the goal is to maintain a 'living' version of this document, with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32, and from a physics-based day/night algorithm (Wan and Li 1997), which produces a 5-km (C4) and 6-km (C5) LST product and emissivity for seven MODIS bands: 20, 22, 23, 29, and 31-33.
Improved Algorithm For Finite-Field Normal-Basis Multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1989-01-01
Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.
Psychosocial Pain Management Moderation: The Limit, Activate, and Enhance Model.
Day, Melissa A; Ehde, Dawn M; Jensen, Mark P
2015-10-01
There is a growing emphasis in the pain literature on understanding the following second-order research questions: Why do psychosocial pain treatments work? For whom do various treatments work? This critical review summarizes research that addresses the latter question and proposes a moderation model to help guide future research. A theoretical moderation framework for matching individuals to specific psychosocial pain interventions has been lacking. However, several such frameworks have been proposed in the broad psychotherapy and implementation science literature. Drawing on these theories and adapting them specifically for psychosocial pain treatment, here we propose a Limit, Activate, and Enhance model of pain treatment moderation. This model is unique in that it includes algorithms not only for matching treatments on the basis of patient weaknesses but also for directing patients to interventions that build on their strengths. Critically, this model provides a basis for specific a priori hypothesis generation, and a selection of the possible hypotheses drawn from the model are proposed and discussed. Future research considerations are presented that could refine and expand the model based on theoretically driven empirical evidence. The Limit, Activate, and Enhance model presented here is a theoretically derived framework that provides an a priori basis for hypothesis generation regarding psychosocial pain treatment moderators. The model will advance moderation research via its unique focus on matching patients to specific treatments that (1) limit maladaptive responses, (2) activate adaptive responses, and (3) enhance treatment outcomes based on patient strengths and resources. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Abadjieva, Emilia; Abadjiev, Valentin
2017-09-01
This work builds on the main parts of the kinematic theory (theory of gearing) of spatial rack drives presented in Part 1 of this study. The applied theoretical approach to their synthesis, based on T. Olivier's second principle, is defined. A study of the geometric nature of the surface of action (and, correspondingly, the mesh region) of this class of transmissions is presented. Research software programs for the synthesis and visualization of these transmissions and their specific elements are developed on the basis of the given algorithms for synthesizing the elements of high kinematic joints (active tooth surfaces) with which the movable links of the studied gear systems are equipped.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the time stepping required by conventional time integration schemes. Eliminating time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently, attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
Gabor filter based fingerprint image enhancement
NASA Astrophysics Data System (ADS)
Wang, Jin-Xiang
2013-03-01
Fingerprint recognition has become the most reliable biometric technology due to its uniqueness and invariance, and it is among the most convenient and reliable techniques for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and the fingerprint preprocessing algorithm plays an important part in such systems. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As a key step in fingerprint identification, fingerprint image enhancement affects the accuracy of the whole system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The results show that the Gabor filter is effective for fingerprint image enhancement.
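The even-symmetric Gabor kernel at the heart of such enhancement is a Gaussian envelope modulating a cosine tuned to the local ridge orientation and frequency; a minimal sketch:

```python
# Even-symmetric Gabor kernel for ridge-oriented fingerprint enhancement.
import numpy as np

def gabor_kernel(size, theta, f, sigma_x=4.0, sigma_y=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate into ridge frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    return envelope * np.cos(2 * np.pi * f * xr)   # cosine at ridge frequency f

k = gabor_kernel(size=17, theta=np.pi / 4, f=0.1)
# Convolve k with each image block whose estimated local orientation is
# theta, e.g. with scipy.ndimage.convolve(block, k).
```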
On the Hilbert-Huang Transform Theoretical Foundation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Huang, Norden E.
2004-01-01
The Hilbert-Huang Transform (HHT) is a novel empirical method for spectrum analysis of nonlinear and nonstationary signals. The HHT is a recent development, and much remains to be done to establish the theoretical foundation of the HHT algorithms. This paper develops the theoretical foundation for the convergence of the HHT sifting algorithm and proves that the finest spectrum scale will always be the first generated by the HHT Empirical Mode Decomposition (EMD) algorithm. The theoretical foundation for cutting a set of extrema data points into two parts is also developed, which allows parallel signal processing for the computationally complex HHT sifting algorithm and its optimization in hardware.
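A bare-bones sketch of a single sifting pass (real EMD iterates the pass until a stopping criterion is met, then subtracts the extracted IMF and repeats on the residue):

```python
# One EMD sifting pass: spline-envelope the extrema and subtract the
# envelope mean; the fastest-changing component dominates the result.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return None                       # monotone residue: stop sifting
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - 0.5 * (upper + lower)      # candidate IMF after one pass

t = np.linspace(0, 1, 500)
x = np.sin(40 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)
h = sift_once(t, x)   # the fast 20 Hz component dominates the first IMF
```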
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
On Certain Theoretical Developments Underlying the Hilbert-Huang Transform
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis
2006-01-01
One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high-performance digital equivalent, the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as being linear and stationary and satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution of the nonlinear class of spectral analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in an unconstrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, data-derived (adaptive) basis. The IMFs can be further analyzed for spectrum content by using the classical Hilbert Transform. A new engineering spectral analysis tool using the HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest-changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources.
Rigler, E. Joshua
2017-04-26
A theoretical basis and prototype numerical algorithm are provided that decompose regular time series of geomagnetic observations into three components: secular variation, solar quiet, and disturbance. Respectively, these three components correspond roughly to slow changes in the Earth's internal magnetic field; periodic daily variations caused by quasi-stationary (with respect to the sun) electrical current systems in the Earth's magnetosphere; and episodic perturbations to the geomagnetic baseline that are typically driven by fluctuations in the solar wind, which interacts electromagnetically with the Earth's magnetosphere. In contrast to similar algorithms applied to geomagnetic data in the past, this one addresses the issue of real-time data acquisition directly by applying a time-causal exponential smoother with “seasonal corrections” to the data as soon as they become available.
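A minimal sketch of a time-causal exponential smoother with an additive seasonal correction, in the spirit described (the update gains and the exact recursion are assumptions, not the published algorithm):

```python
# Causal decomposition of a geomagnetic series into a slow baseline
# (secular variation), a daily periodic part (solar quiet), and the
# leftover residual (disturbance), Holt-Winters style.
import numpy as np

def decompose(x, period, alpha=0.01, gamma=0.05):
    baseline = x[0]
    season = np.zeros(period)            # one correction per phase of the day
    sv, sq, dist = [], [], []
    for k, xk in enumerate(x):
        p = k % period
        resid = xk - baseline - season[p]
        baseline += alpha * (xk - season[p] - baseline)   # secular variation
        season[p] += gamma * resid                        # solar quiet
        sv.append(baseline); sq.append(season[p]); dist.append(resid)
    return np.array(sv), np.array(sq), np.array(dist)
```

Because each update uses only past samples, the decomposition can run as data arrive, which is the real-time property emphasized in the abstract.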
The effect of S-wave arrival times on the accuracy of hypocenter estimation
Gomberg, J.S.; Shedlock, K.M.; Roecker, S.W.
1990-01-01
We have examined the theoretical basis behind some of the widely accepted "rules of thumb" for obtaining accurate hypocenter estimates that pertain to the use of S phases, and we illustrate, in a variety of ways, why and when these "rules" are applicable. Most methods used to determine earthquake hypocenters are based on iterative, linearized, least-squares algorithms. We examine the influence of S-phase arrival time data on such algorithms by using the program HYPOINVERSE with synthetic datasets. We conclude that a correctly timed S phase recorded within about 1.4 focal depths' distance from the epicenter can be a powerful constraint on focal depth. Furthermore, we demonstrate that even a single incorrectly timed S phase can result in depth estimates and associated measures of uncertainty that are significantly incorrect. - from Authors
NASA Astrophysics Data System (ADS)
Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.
2017-12-01
The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to the density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.
Integrand-level reduction of loop amplitudes by computational algebraic geometry methods
NASA Astrophysics Data System (ADS)
Zhang, Yang
2012-09-01
We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software using the output of the BasisDet package. The algorithm works in both D = 4 and D = 4 - 2ε dimensions, and we present some two- and three-loop examples of applications of this algorithm.
Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm
NASA Astrophysics Data System (ADS)
Backes, Werner; Wetzel, Susanne
In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today's multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread version and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.
Automated quantum operations in photonic qutrits
NASA Astrophysics Data System (ADS)
Borges, G. F.; Baldijão, R. D.; Condé, J. G. L.; Cabral, J. S.; Marques, B.; Terra Cunha, M.; Cabello, A.; Pádua, S.
2018-02-01
We report an experimental implementation of automated state transformations on spatial photonic qutrits following the theoretical proposal made by Baldijão et al. [Phys. Rev. A 96, 032329 (2017), 10.1103/PhysRevA.96.032329]. A qutrit state is simulated by using three Gaussian beams, and after the state operations, the transformed state is finally available in terms of the basis states. The state transformation setup uses a spatial light modulator and a calcite-based interferometer. The results reveal the usefulness of the operation method. The experimental data show good agreement with theoretical predictions, opening possibilities for explorations in higher dimensions and in a wide range of applications. This is a necessary step in qualifying spatial photonic qudits as a competitive setup for experimental research on the implementation of quantum algorithms that demand a large number of steps.
Remote sensing of chlorophyll concentration: State-of-the-art, 1975
NASA Technical Reports Server (NTRS)
Atwell, B. H.
1976-01-01
Remote measurement of the chlorophyll concentration of the world's oceans from satellite observations could be extremely useful for assessing productivity over large areas for which measurements by other means would be impractical. The basis of these measurements rests with the physics of the interaction of light with material dissolved and suspended in the water. It is theoretically possible to predict the nature of light upwelled from the ocean surface from a solution to the radiative transfer equation. Practically, however, this is difficult; Monte Carlo methods are presently thought to be the most viable way to treat the general theoretical problem. With restrictive assumptions about the nature of scattering, it is possible to construct simpler models. Algorithms developed to relate chlorophyll concentration (or some other parameter, e.g., Secchi depth) to the upwelled light spectrum are discussed.
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1980-01-01
The formulation basis for establishing the static or dynamic equilibrium configurations of finite element models of structures which may behave in the nonlinear range is provided. With both geometric and time-independent material nonlinearities included, the development is restricted to simple one- and two-dimensional finite elements, which are regarded as the basic elements for modeling full aircraft-like structures under crash conditions. Representations of a rigid link and an impenetrable contact plane are added to the deformation model so that any number of nodes of the finite element model may be connected by a rigid link or may contact the plane. Equilibrium configurations are derived as the stationary conditions of a potential function of the generalized nodal variables of the model. Minimization of the nonlinear potential function is achieved by using the best current variable metric update formula for unconstrained minimization. Powell's conjugate gradient algorithm, which offers very low storage requirements at some slight increase in the total number of calculations, is the alternative algorithm for extremely large-scale problems.
Determining the Basis of Homodesmotic Reactions of Cyclic Organic Compounds by Means of Graph Theory
NASA Astrophysics Data System (ADS)
Khursan, S. L.; Ismagilova, A. S.; Akhmetyanova, A. I.
2018-07-01
Comparative calculations based on the use of a homodesmotic reaction (HDR), an isodesmic process with the additional requirement of group balance, are used to analyze the thermochemical characteristics of cyclic organic compounds, exemplified by bicyclo[2.1.0]pentene-2. To avoid confusion in selecting HDRs, an algorithm is developed for determining the HDR basis, i.e., the set of all possible independent homodesmotic reactions. The algorithm for constructing the set of HDRs is based on an analysis and transformations of the bond graph of groups for the investigated chemical compound. The use of graph theory allows us to automate the procedure for deriving the basis of homodesmotic reactions and to obtain a visual geometric interpretation of the basis, which is important for subsequent physicochemical analysis. The energetics of bicyclo[2.1.0]pentene-2 is investigated using the proposed approach, and the independent basis of HDRs is found to include 19 formal transformations. Standard enthalpies for the test compound and the participants of the homodesmotic reactions are calculated using the G3 composite approach. Thermochemical analysis of the obtained data allows us to determine the standard enthalpy of formation of the bicycle (Δ_f H° = 336.4 kJ/mol) and the Δ_f H° values of a number of cyclic and acyclic alkenes and alkadienes that are products of the theoretical decomposition of the test compound. The proposed method is shown to be extremely effective in analyzing the effects of nonbonded interactions in the structure of organic molecules. The ring strain energy of the bicycle is calculated for the test compound: E_S = 295.2 ± 2.2 kJ/mol.
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of it involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested, based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected-value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
The GLAS Algorithm Theoretical Basis Document for Precision Attitude Determination (PAD)
NASA Technical Reports Server (NTRS)
Bae, Sungkoo; Smith, Noah; Schutz, Bob E.
2013-01-01
The Geoscience Laser Altimeter System (GLAS) was the sole instrument for NASA's Ice, Cloud and land Elevation Satellite (ICESat) laser altimetry mission. The primary purpose of the ICESat mission was to make ice sheet elevation measurements of the polar regions. Additional goals were to measure the global distribution of clouds and aerosols and to map sea ice, land topography, and vegetation. ICESat was the benchmark Earth Observing System (EOS) mission to be used to determine the mass balance of the ice sheets, as well as for providing cloud property information, especially for stratospheric clouds common over polar areas.
A generalized algorithm to design finite field normal basis multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1986-01-01
Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
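One property that makes normal bases attractive in hardware, and which underlies the Massey-Omura construction, is that squaring in GF(2^m) is a cyclic shift of the normal-basis coordinate vector; a tiny sketch:

```python
# In the normal basis {b, b^2, b^4, ..., b^(2^(m-1))} of GF(2^m),
# squaring maps b^(2^i) to b^(2^(i+1)) with indices taken mod m, so the
# coordinate vector simply rotates: no gates beyond wiring are needed.
def nb_square(coords):
    """coords[i] multiplies b^(2^i); squaring rotates the vector right."""
    return coords[-1:] + coords[:-1]

a = [1, 0, 1, 1, 0]          # an element of GF(2^5) in normal-basis coordinates
print(nb_square(a))          # [0, 1, 0, 1, 1]
```

Multiplication is the hard part, which is exactly what the product-function design described above addresses.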
Economic communication model set
NASA Astrophysics Data System (ADS)
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at investigating economic communications using agent-based models. An agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, being based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and a real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in its dynamics, and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
Ho, Kevin I-J; Leung, Chi-Sing; Sum, John
2010-06-01
In the last two decades, many online fault/noise injection algorithms have been developed to attain fault-tolerant neural networks. However, few theoretical works related to their convergence and objective functions have been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, the true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error; thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
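A sketch of one of the studied schemes, injecting multiplicative weight noise at each online update of the RBF output weights (the learning rate, noise level, and network sizes are illustrative):

```python
# Online LMS training of RBF output weights with multiplicative weight
# noise injected at every update (scheme 2 in the list above).
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def train_weight_noise(X, y, centers, width, lr=0.05, sigma=0.1, epochs=50):
    rng = np.random.default_rng(0)
    Phi = rbf_design(X, centers, width)
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        for phi, t in zip(Phi, y):
            w_noisy = w * (1 + sigma * rng.normal(size=w.shape))  # injected noise
            err = t - phi @ w_noisy          # error computed with noisy weights
            w += lr * err * phi              # LMS-style online step
    return w

X = np.linspace(-1, 1, 40)[:, None]
y = np.sin(np.pi * X[:, 0])
w = train_weight_noise(X, y, np.linspace(-1, 1, 9)[:, None], width=0.3)
```

The paper's result says this particular scheme minimizes the plain mean square error, so (unlike input-noise injection) it adds no fault-tolerance regularization.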
Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-03-19
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task for improving the visibility of various UAV images. This paper presents a spatially adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of this research is a novel degradation model for hazy UAV images that considers the wavelength of the light sources. In addition, the proposed transmission map provides a theoretical basis for differentiating visually important regions from others based on the turbidity and the merged classification results.
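Whatever the transmission estimate, the final restoration step of transmission-map dehazing inverts the haze imaging model I = J*t + A*(1 - t); a minimal sketch with a per-wavelength transmission, using invented numbers:

```python
# Invert the haze imaging model once a transmission map t and the
# atmospheric light A have been estimated (here both are invented).
import numpy as np

def restore(I, t, A, t_min=0.1):
    """I: hazy image (H, W, 3) in [0, 1]; t: transmission; A: airlight."""
    return (I - A) / np.maximum(t, t_min) + A   # clamp t to avoid blow-up

H = W = 4
I = np.full((H, W, 3), 0.7)                     # toy uniform hazy image
beta = np.array([0.9, 1.0, 1.2])                # wavelength-dependent scattering
t = np.exp(-beta * 1.5) * np.ones((H, W, 3))    # toy scene depth of 1.5
print(restore(I, t, A=np.array([0.8, 0.8, 0.8]))[0, 0])
```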
Preliminary Results from the GPS-Reflections Mediterranean Balloon Experiment (GPSR-MEBEX)
NASA Technical Reports Server (NTRS)
Garrison, James L.; Ruffini, Giulio; Rius, Antonio; Cardellach, Estelle; Masters, Dallas; Armatys, Michael; Zavorotny, Valery; Bauer, Frank H. (Technical Monitor)
2000-01-01
An experiment to collect bistatically scattered GPS signals from a balloon at 37 km altitude has been conducted. This experiment represented the highest altitude to date at which such signals were successfully recorded. The flight took place in August 1999 over the Mediterranean sea, between a launch in Sicily and recovery near Nerpio, a town in the Sierra de Segura, Albacete province of Huelva, Spain. Results from this experiment are presented, showing the waveform shape as compared to theoretical calculations. These results will be used to validate analytical models which form the basis of wind vector retrieval algorithms. These algorithms are already being validated from aircraft altitudes, but may be applied to data from future spaceborne GPS receivers. Surface wind data from radiosondes were used for comparison. This experiment was a cooperative project between NASA, the IEEC in Barcelona, and the University of Colorado at Boulder.
Ciesielski, Krzysztof Chris; Udupa, Jayaram K.
2011-01-01
In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well-defined continuous counterpart M_A, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models M_A and M_A' establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A'. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model M_A is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step toward this general goal, we prove here that the gradient-based thresholding model M_∇ is the asymptotic of the fuzzy connectedness segmentation algorithm of Udupa and Samarasekera used with the gradient-based affinity A_∇. We also argue that, in a sense, M_∇ is the asymptotic of the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this equivalence is also provided. PMID:21442014
NASA Astrophysics Data System (ADS)
Ma, Weiwei; Gong, Cailan; Hu, Yong; Li, Long; Meng, Peng
2015-10-01
Remote sensing technology has been broadly recognized for its convenience and efficiency in mapping vegetation, particularly in high-altitude and inaccessible areas that lack in-situ observations. In this study, Landsat Thematic Mapper (TM) images and Chinese environmental mitigation satellite CCD sensor (HJ-1 CCD) images, both at 30 m spatial resolution, were employed for identifying and monitoring vegetation types in an area of western China, the Qinghai Lake Watershed (QHLW). A decision classification tree (DCT) algorithm using multiple characteristics, including seasonal TM/HJ-1 CCD time-series data combined with a digital elevation model (DEM) dataset, and a supervised maximum likelihood classification (MLC) algorithm using a single-date TM image were applied to vegetation classification. The accuracy of the two algorithms was assessed using field observation data. Based on the vegetation classification maps produced, the DCT using multi-season data and geomorphologic parameters was found to be superior to the MLC algorithm using a single-date image, improving the overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT algorithm applied to TM/HJ-1 CCD time-series data and geomorphologic parameters appeared to be a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and the second class level (8 vegetation subclasses). The DCT algorithm using multiple characteristics may provide a theoretical basis and general approach to automatic extraction of vegetation types from remote sensing imagery over plateau areas.
Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin
2017-02-01
In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of the RHT is classified as a circle of confusion (CoC), and fitting a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of centroid computation demonstrate that the Gaussian fitting method has less centroid shift, or the shift grows at a slower pace, when image quality is reduced. In ACF tests, the optical instrument alignments reach an overall accuracy of 0.1 pixel with the application of a laser spot centroid-tracking program. Locating the crystal at different positions, the feasibility and accuracy of ACF calibration are further validated to a root-mean-square error of the calibration differences of 10^-6 to 10^-4 rad.
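The centroid step can be sketched directly: fit a 2D Gaussian profile to a synthetic CoC spot and read off the sub-pixel centroid (the spot parameters below are invented):

```python
# Sub-pixel centroid via 2D Gaussian fitting with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, off):
    x, y = xy
    return (amp * np.exp(-((x - x0)**2 / (2 * sx**2)
                           + (y - y0)**2 / (2 * sy**2))) + off).ravel()

y, x = np.mgrid[0:32, 0:32].astype(float)
spot = gauss2d((x, y), 1.0, 17.3, 14.6, 3.0, 3.0, 0.05).reshape(32, 32)
spot += 0.02 * np.random.default_rng(0).normal(size=spot.shape)  # sensor noise

p0 = (spot.max(), 16, 16, 2, 2, 0)                # rough initial guess
popt, _ = curve_fit(gauss2d, (x, y), spot.ravel(), p0=p0)
print(popt[1], popt[2])   # recovered centroid (x0, y0) to sub-pixel accuracy
```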
NASA Astrophysics Data System (ADS)
Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.
2003-03-01
Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, tissue and non-tissue regions of a digitized X-ray mammogram image are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparison with estimates provided by a radiologist employing previously established methods.
Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin
How to analyze and identify the characteristics of radiation sources and estimate the threat level by means of detection, interception, and localization has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the key points in solving this issue. Aiming at accurately extracting the individual characteristics of radiation sources in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for the individual characteristics of communication radiation sources, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension features of different complexity are used to describe the subtle characteristics of the signal and establish a feature database, and different broadcasting stations are then identified by grey relational theory. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even in an environment with an SNR of -10 dB, which provides an important theoretical basis for the accurate identification of subtle signal features at low SNR in the field of information confrontation.
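One common fractal-complexity feature is the box-counting dimension; the sketch below estimates it for a synthetic noisy signal trace. The paper combines several fractal-dimension features with grey relational classification, which this sketch does not reproduce.

```python
import numpy as np

def box_counting_dimension(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a 1D signal trace."""
    x = np.linspace(0, 1, signal.size)
    y = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    counts = []
    for s in scales:
        # Count the boxes of side 1/s that the normalized trace visits.
        boxes = set(zip((x * s).astype(int), (y * s).astype(int)))
        counts.append(len(boxes))
    # N(s) ~ s**D, so the slope of log N against log s estimates D.
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 4096)
noisy_signal = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
print("box-counting dimension:", box_counting_dimension(noisy_signal))
```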
Optimization Techniques for Analysis of Biological and Social Networks
2012-03-28
analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine... alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters... systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
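A minimal sketch of the EM (MLEM) coefficient update for a linear Poisson model, using decaying exponentials as spectral-analysis-style temporal basis functions. The paper's algorithm couples such temporal bases with MR-derived kernel spatial bases inside the tomographic reconstruction; this sketch shows only the temporal fitting step on a synthetic time-activity curve.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.1, 60.0, 40)                 # frame mid-times (min)
decay_rates = np.logspace(-2, 0, 8)            # spectral basis rate constants
B = np.exp(-np.outer(t, decay_rates))          # nonnegative temporal basis

c_true = np.zeros(8)
c_true[[2, 5]] = [40.0, 25.0]
y = rng.poisson(B @ c_true)                    # Poisson-noisy measurements

c = np.ones(8)                                 # positive initialization
for _ in range(500):
    # Multiplicative EM update for Poisson data y ~ Poisson(B @ c).
    c *= (B.T @ (y / np.maximum(B @ c, 1e-12))) / B.sum(axis=0)

print("fitted curve RMSE:", np.sqrt(np.mean((B @ c - B @ c_true) ** 2)))
```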
NASA Technical Reports Server (NTRS)
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterizations to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
Synthetic aperture radar image formation for the moving-target and near-field bistatic cases
NASA Astrophysics Data System (ADS)
Ding, Yu
This dissertation addresses topics in two areas of synthetic aperture radar (SAR) image formation: time-frequency based SAR imaging of moving targets and a fast backprojection (BP) algorithm for near-field bistatic SAR imaging. SAR imaging of a moving target is a challenging task due to the unknown motion of the target. We approach this problem in a theoretical way, by analyzing the Wigner-Ville distribution (WVD) based SAR imaging technique. We derive approximate closed-form expressions for the point-target response of the SAR imaging system, which quantify the image resolution, and show how the blurring in conventional SAR imaging can be eliminated, while the target shift still remains. Our analyses lead to accurate prediction of the target position in the reconstructed images. The derived expressions also enable us to further study additional aspects of WVD-based SAR imaging. Bistatic SAR imaging is more involved than the monostatic SAR case because of the separation of the transmitter and the receiver, and possibly the changing bistatic geometry. For near-field bistatic SAR imaging, we develop a novel fast BP algorithm, motivated by a newly proposed fast BP algorithm in computed tomography. First we show that the BP algorithm is the spatial-domain counterpart of the benchmark ω-k algorithm in bistatic SAR imaging, yet it avoids the frequency-domain interpolation in the ω-k algorithm, which may cause artifacts in the reconstructed image. We then derive the band-limited property for BP methods in both monostatic and bistatic SAR imaging, which is the basis for developing the fast BP algorithm. We compare our algorithm with other frequency-domain based algorithms, and show that it achieves better reconstructed image quality, while having the same computational complexity as that of the frequency-domain based algorithms.
NASA Technical Reports Server (NTRS)
Shutz, Bob E.; Urban, Timothy J.
2014-01-01
This ATBD summarizes (and links with other ATBDs) the elements used to obtain the geolocated GLAS laser spot location, with respect to the Earth Center of Mass. Because of the approach used, the reference frame used to express the geolocation is linked to the reference frame used for POD and PAD, which are related to the ITRF. The geolocated spot coordinates (which includes the elevation or height, with respect to an adopted reference ellipsoid) is the inferred position of the laser spot, since the spot location is not directly measured. This document also summarizes the GLAS operation time periods.
PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems
NASA Astrophysics Data System (ADS)
Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai
2017-09-01
In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
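To make the PRESS ingredient concrete: for any linear-in-the-parameters model (such as a NARX model with a fixed term set), the leave-one-out residuals follow from the hat matrix, so PRESS can be computed without refitting. The candidate regressors below are generic polynomial terms on synthetic data, not the paper's model terms.

```python
import numpy as np

def press(Phi, y):
    """Predicted residual sum of squares for the model y ~ Phi @ theta."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H = Phi @ np.linalg.pinv(Phi)           # hat matrix
    resid = y - Phi @ theta
    loo = resid / (1.0 - np.diag(H))        # leave-one-out residuals
    return np.sum(loo ** 2)

rng = np.random.default_rng(5)
u = rng.standard_normal(200)
y = 0.8 * np.roll(u, 1) - 0.3 * np.roll(u, 2) ** 2 \
    + 0.05 * rng.standard_normal(200)

# Compare a small model against one with an extra (useful) nonlinear term.
Phi1 = np.column_stack([np.roll(u, 1)])
Phi2 = np.column_stack([np.roll(u, 1), np.roll(u, 2) ** 2])
print("PRESS, 1-term model:", press(Phi1, y))
print("PRESS, 2-term model:", press(Phi2, y))   # should be much smaller
```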
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
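A minimal sketch of the plane-wave superposition principle: a 2D field is approximated by summing one-dimensional random waves over a fixed, finite set of directions, each with power-law amplitudes. The paper simulates each one-dimensional wave with its wavelet Monte Carlo method and distinguishes shear and acoustical components; the simple random-phase Fourier series below is a stand-in for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 16                                      # number of directions
n = 128                                     # grid size
xs = np.linspace(0, 1, n)
X, Y = np.meshgrid(xs, xs)

def random_1d_wave(s, n_modes=64, alpha=5 / 3):
    """Random-phase series with power-law amplitudes ~ k**(-(alpha+1)/2)."""
    k = np.arange(1, n_modes + 1)
    amps = k ** (-(alpha + 1) / 2) * rng.standard_normal(n_modes)
    phases = rng.uniform(0, 2 * np.pi, n_modes)
    return (amps[:, None] * np.cos(2 * np.pi * k[:, None] * s[None, :]
                                   + phases[:, None])).sum(axis=0)

field = np.zeros_like(X)
for theta in np.pi * np.arange(M) / M:      # evenly spaced directions
    s = (X * np.cos(theta) + Y * np.sin(theta)).ravel()
    field += random_1d_wave(s).reshape(X.shape)
field /= np.sqrt(M)                          # keep variance bounded in M
print("field std:", field.std())
```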
NASA Astrophysics Data System (ADS)
Ha, Jeongmok; Jeong, Hong
2016-07-01
This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of the DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrighi, Bill
2016-03-03
libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements two parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
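A minimal sketch of the incremental-SVD idea libROM implements: the thin SVD of the snapshots seen so far is updated one state vector at a time, so the full snapshot matrix is never stored. This follows the classic rank-one update scheme and omits libROM's parallelism and adaptive sampling.

```python
import numpy as np

def isvd_update(U, S, v, tol=1e-10):
    """Update the thin SVD (U, S) of the snapshots seen so far with column v."""
    proj = U.T @ v if U.size else np.zeros(0)
    resid = v - (U @ proj if U.size else 0.0)
    rnorm = np.linalg.norm(resid)
    k = S.size
    # Build the small (k+1) x (k+1) core matrix and re-diagonalize it.
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(S)
    K[:k, k] = proj
    K[k, k] = rnorm
    Uk, Sk, _ = np.linalg.svd(K)
    J = np.column_stack([U, resid / rnorm if rnorm > tol else np.zeros_like(v)])
    U_new = J @ Uk
    # Drop the new direction if the snapshot was (numerically) in the span.
    if rnorm <= tol:
        return U_new[:, :k], Sk[:k]
    return U_new, Sk

rng = np.random.default_rng(7)
A = rng.standard_normal((50, 8))            # 8 sampled state vectors
U, S = np.zeros((50, 0)), np.zeros(0)
for j in range(A.shape[1]):
    U, S = isvd_update(U, S, A[:, j])
print("max singular-value error:",
      np.abs(np.sort(S)[::-1] - np.linalg.svd(A, compute_uv=False)).max())
```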
NASA Astrophysics Data System (ADS)
Tavan, Paul; Schulten, Klaus
1980-03-01
A new, efficient algorithm for the evaluation of the matrix elements of the CI Hamiltonian in the basis of spin-coupled ν-fold excitations (over orthonormal orbitals) is developed for even electron systems. For this purpose we construct an orthonormal, spin-adapted CI basis in the framework of second quantization. As a prerequisite, spin and space parts of the fermion operators have to be separated; this makes it possible to introduce the representation theory of the permutation group. The ν-fold excitation operators are Serber spin-coupled products of particle-hole excitations. This construction is also designed for CI calculations from multireference (open-shell) states. The 2N-electron Hamiltonian is expanded in terms of spin-coupled particle-hole operators which map any ν-fold excitation on ν-, and ν±1-, and ν±2-fold excitations. For the calculation of the CI matrix this leaves one with only the evaluation of overlap matrix elements between spin-coupled excitations. This leads to a set of ten general matrix element formulas which contain Serber representation matrices of the permutation group Sν×Sν as parameters. Because of the Serber structure of the CI basis these group-theoretical parameters are kept to a minimum such that they can be stored readily in the central memory of a computer for ν ≤ 4 and even for higher excitations. As the computational effort required to obtain the CI matrix elements from the general formulas is very small, the algorithm presented appears to constitute for even electron systems a promising alternative to existing CI methods for multiply excited configurations, e.g., the unitary group approach. Our method makes possible the adaptation of spatial symmetries and the selection of any subset of configurations. The algorithm has been implemented in a computer program and tested extensively for ν ≤ 4 and singlet ground and excited states.
Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-01-01
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767
A-VCI: A flexible method to efficiently compute vibrational spectra
NASA Astrophysics Data System (ADS)
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-01
The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm^-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate computation to date; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm^-1 is the most accurate computation that exists today on such systems.
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Ground-based modeling and estimation of solar radiation must address the equation of time, the Sun-Earth distance equation, the solar declination, and the calculation of surface irradiance. Many studies have reported that these theoretical equations alone cannot estimate radiation accurately, so many authors apply corrections through field calibrations with pyranometers (solarimeters) or through satellite data, the latter being the weaker technique because it does not differentiate between radiative and radiant kinetic effects. Taking advantage of a properly calibrated ground weather station in the Susques Salar, Jujuy Province, Republic of Argentina, we modeled the variable in question through the following process: 1. theoretical modeling; 2. graphical study of the theoretical and measured data; 3. primary calibration adjustment of the data through hourly segmentation, horizontal shifts, and the addition of asymptotic constants; 4. scatter-plot analysis and contrast of the series. Based on these steps, the modeling produced the following results. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and a least-squares minimization between measured and modeled values (using the Excel Solver algorithm) yielded new asymptote values and the corresponding theoretical reformulation of the data; a monthly constant was added over the set time range (4:00 pm to 6:00 pm). Step four: the monthly correlations between measured and theoretical data for the modeling equation coefficients ranged from 0.7 to 0.9.
Computing border bases using mutant strategies
NASA Astrophysics Data System (ADS)
Ullah, E.; Abbas Khan, S.
2014-01-01
Border bases, a generalization of Gröbner bases, have actively been addressed during recent years due to their applicability to industrial problems. In cryptography and coding theory a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates us to develop optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower degree polynomials in the ideal. The mutant strategies aim to distinguish special lower degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm which is actually called the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide us space efficiency as well as time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.
NASA Technical Reports Server (NTRS)
Wielicki, B. A. (Principal Investigator); Barkstrom, B. R. (Principal Investigator); Charlock, T. P.; Baum, B. A.; Green, R. N.; Minnis, P.; Smith, G. L.; Coakley, J. A.; Randall, D. R.; Lee, R. B., III
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 2 details the techniques used to geolocate and calibrate the CERES scanning radiometer measurements of shortwave and longwave radiance to invert the radiances to top-of-the-atmosphere (TOA) and surface fluxes following the Earth Radiation Budget Experiment (ERBE) approach, and to average the fluxes over various time and spatial scales to produce an ERBE-like product. Spacecraft ephemeris and sensor telemetry are used with calibration coefficients to produce a chronologically ordered data product called bidirectional scan (BDS) radiances. A spatially organized instrument Earth scan product is developed for the cloud-processing subsystem. The ERBE-like inversion subsystem converts BDS radiances to unfiltered instantaneous TOA and surface fluxes. The TOA fluxes are determined by using established ERBE techniques. Hourly TOA fluxes are computed from the instantaneous values by using ERBE methods. Hourly surface fluxes are estimated from TOA fluxes by using simple parameterizations based on recent research. The averaging process produces daily, monthly-hourly, and monthly means of TOA and surface fluxes at various scales. This product provides a continuation of the ERBE record.
Robot Path Planning in Uncertain Environments: A Language Measure-theoretic Approach
2014-01-01
Paper DS-14-1028, to appear in the Special Issue on Stochastic Models, Control and Algorithms in Robotics, ASME Journal of Dynamic Systems, Measurement and Control. Robot Path Planning in Uncertain Environments: A Language Measure-theoretic Approach. Devesh K. Jha, Yue Li, Thomas A. Wettergren, Asok... algorithm, called ν⋆, that was formulated in the framework of probabilistic finite state automata (PFSA) and language measure from a control-theoretic...
NASA Technical Reports Server (NTRS)
Li, Wei; Saleeb, Atef F.
1995-01-01
This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of the report, we focus on the specific details of the numerical schemes, and associated computer algorithms, for the finite-element implementation of GVIPS and NAV models.
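As an illustration of the integrator structure described above, here is a minimal sketch of a backward-Euler update solved by Newton-Raphson with a backtracking line search. The material law is a simple scalar Perzyna-type viscoplastic model chosen for brevity, not the GVIPS or NAV models; all parameters are illustrative.

```python
import numpy as np

E, sigma_y, gamma, m = 200e3, 250.0, 1e-3, 2.0   # illustrative parameters

def residual(evp_new, evp_old, eps_tot, dt):
    """Backward-Euler residual for the viscoplastic strain update."""
    sigma = E * (eps_tot - evp_new)
    over = max(sigma / sigma_y - 1.0, 0.0)
    return evp_new - evp_old - dt * gamma * over ** m

def tangent(evp_new, eps_tot, dt):
    """Consistent (exact) derivative of the residual w.r.t. evp_new."""
    sigma = E * (eps_tot - evp_new)
    over = max(sigma / sigma_y - 1.0, 0.0)
    if over == 0.0:
        return 1.0
    return 1.0 + dt * gamma * m * over ** (m - 1) * (E / sigma_y)

def backward_euler_step(evp_old, eps_tot, dt, tol=1e-12):
    x = evp_old
    for _ in range(50):                        # Newton iterations
        r = residual(x, evp_old, eps_tot, dt)
        if abs(r) < tol:
            break
        dx = -r / tangent(x, eps_tot, dt)
        step = 1.0                             # backtracking line search
        while abs(residual(x + step * dx, evp_old, eps_tot, dt)) > abs(r):
            step *= 0.5
        x += step * dx
    return x

# Drive with a linearly increasing total strain and integrate implicitly.
evp, dt = 0.0, 0.1
for n in range(1, 101):
    evp = backward_euler_step(evp, eps_tot=2.0e-3 * n * dt, dt=dt)
print("final viscoplastic strain:", evp, "stress:", E * (2.0e-2 - evp))
```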
Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Petty, Grant W.
2005-01-01
This paper presents a new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measurements Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes Theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distribution are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
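To make the "complete posterior PDF" idea concrete, the sketch below combines a prior over rain rate with a Gaussian likelihood on a discrete rain-rate grid. The saturating forward model, noise level, and prior are invented for illustration and are not the paper's radiative transfer relations; the saturation is what weakens the observational constraint at high rain rates, qualitatively reproducing the bias effect discussed above.

```python
import numpy as np

rates = np.linspace(0.0, 50.0, 1001)             # rain-rate grid (mm/h)
prior = np.exp(-rates / 5.0)                     # heavier weight on light rain
prior /= prior.sum()

def tb_model(r):
    """Hypothetical brightness temperature saturating with rain rate."""
    return 180.0 + 100.0 * (1.0 - np.exp(-r / 10.0))

tb_obs, tb_sigma = 270.0, 3.0                    # observed Tb and noise (K)
likelihood = np.exp(-0.5 * ((tb_obs - tb_model(rates)) / tb_sigma) ** 2)

posterior = prior * likelihood                   # Bayes' theorem on the grid
posterior /= posterior.sum()

mean = np.sum(rates * posterior)
cdf = np.cumsum(posterior)
lo, hi = rates[np.searchsorted(cdf, 0.05)], rates[np.searchsorted(cdf, 0.95)]
print(f"posterior mean {mean:.1f} mm/h, 90% interval [{lo:.1f}, {hi:.1f}]")
```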
ICESat Science Investigator led Processing System (I-SIPS)
NASA Astrophysics Data System (ADS)
Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.
2003-12-01
The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF) and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the Planning, Scheduling and Data Management System that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data, control job flow, data distribution, and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works in an autonomous manner to ingest GLAS instrument data, distribute these data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, delivering the data to the SCF within hours after the initial instrument activation. The I-SIPS design philosophy gives this system a high potential for reuse in other science missions.
Taboo search algorithm for item assignment in synchronized zone automated order picking system
NASA Astrophysics Data System (ADS)
Wu, Yingying; Wu, Yaohua
2014-07-01
The idle time, which is part of the order fulfillment time, is determined by the number of items in a zone; therefore, the item assignment method affects picking efficiency. Previous studies focus only on balancing the number of kinds of items between different zones, not on the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective of this problem can be simplified from minimizing idle time to minimizing the idle factor. On this basis, the model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxed form of the parallel machine scheduling problem, which has been proven to be NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones with the 2-exchange algorithm. Finally, a simulation applying data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The result verifies the model and shows that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measure the idle time in synchronized zone automated order picking systems. The approach can improve picking efficiency significantly and can serve as a theoretical basis when optimizing synchronized automated order picking systems.
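A minimal sketch of a taboo search with 2-exchange moves for this kind of zone-assignment problem, using the maximum zone load as the simplest stand-in for the idle factor (the paper's actual objective); item weights, zone count, iteration budget, and tabu tenure are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
weights = rng.integers(1, 20, 30)                # items to assign
n_zones = 4

def max_load(assign):
    """Largest total weight assigned to any zone (idle-factor stand-in)."""
    return max(weights[assign == z].sum() for z in range(n_zones))

assign = rng.integers(0, n_zones, weights.size)  # random initial assignment
best_cost = max_load(assign)
tabu = {}                                        # move -> iteration it expires

for it in range(200):
    move, cost = None, np.inf
    # Best non-tabu 2-exchange: swap the zones of items i and j.
    for i in range(weights.size):
        for j in range(i + 1, weights.size):
            if assign[i] == assign[j] or tabu.get((i, j), 0) > it:
                continue
            assign[i], assign[j] = assign[j], assign[i]
            c = max_load(assign)
            assign[i], assign[j] = assign[j], assign[i]
            if c < cost:
                move, cost = (i, j), c
    if move is None:
        break
    i, j = move
    assign[i], assign[j] = assign[j], assign[i]  # apply the best move
    tabu[move] = it + 15                         # tabu tenure of 15 iterations
    best_cost = min(best_cost, cost)

print("best max zone load:", best_cost,
      "lower bound:", weights.sum() / n_zones)
```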
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassab, A.J.; Pollard, J.E.
An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. After all, a norm reflecting a curve-fitting method is found to more significantly affect estimation error reduction than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. 
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit considerably good agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data over an entire data set. (Abstract shortened by UMI.)
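As a complement to the description above, here is a minimal complete-data sketch of EM for PCA in the zero-noise limit (in the style of Roweis' EM-PCA); the thesis works with the full PPCA model, including noise variance and missing-data expectations, which this sketch omits. It only illustrates why the EM iteration recovers the same principal subspace that POD/SVD extracts.

```python
import numpy as np

rng = np.random.default_rng(9)
n_dim, n_snap, k = 200, 40, 5
# Synthetic snapshot matrix with low-rank structure plus small noise.
X = rng.standard_normal((n_dim, k)) @ rng.standard_normal((k, n_snap))
X += 0.01 * rng.standard_normal((n_dim, n_snap))
X -= X.mean(axis=1, keepdims=True)               # center the snapshots

W = rng.standard_normal((n_dim, k))              # random initial loadings
for _ in range(200):
    Z = np.linalg.solve(W.T @ W, W.T @ X)        # E-step: latent coordinates
    W = X @ Z.T @ np.linalg.inv(Z @ Z.T)         # M-step: update loadings

# Compare the spanned subspace with the leading POD (SVD) basis.
U = np.linalg.svd(X, full_matrices=False)[0][:, :k]
Q = np.linalg.qr(W)[0]
align = np.linalg.svd(U.T @ Q, compute_uv=False).min()
print("subspace alignment (1 = identical):", align)
```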
Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.
Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth
2018-01-01
Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.
A Theoretical Analysis of Why Hybrid Ensembles Work.
Hsu, Kuo-Wei
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use the mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting using different algorithms to accuracy gain. We also conduct experiments on classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
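A minimal sketch of the kind of hybrid ensemble analyzed above: half of the bootstrap-bagged base classifiers are decision trees and half are naive Bayes, combined by majority vote. The dataset and ensemble size are illustrative (scikit-learn's synthetic classification generator is assumed available).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for i in range(20):                  # bootstrap-bagged hybrid ensemble
    idx = rng.integers(0, len(X_tr), len(X_tr))
    clf = DecisionTreeClassifier(random_state=i) if i % 2 == 0 else GaussianNB()
    members.append(clf.fit(X_tr[idx], y_tr[idx]))

votes = np.array([m.predict(X_te) for m in members])
majority = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote (binary)
print("hybrid ensemble accuracy:", np.mean(majority == y_te))
```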
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Li, Wei
1995-01-01
This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present first part of the report, we focus on the theoretical developments, and discussions of the results of numerical-performance studies using the integration schemes for GVIPS and NAV models.
Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy
2010-12-01
In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.
NASA Astrophysics Data System (ADS)
Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang
2018-05-01
In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
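A minimal sketch of RBF-based spectrum interpolation: each wavelength bin of a synthetic "library" is interpolated over the (log Teff, log g) plane so that a spectrum can be evaluated at arbitrary stellar parameters. SciPy's RBFInterpolator stands in for the authors' MATLAB RBF-network code, and all library values below are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(10)
n_stars, n_wave = 150, 300
params = np.column_stack([rng.uniform(3.5, 4.7, n_stars),   # log Teff
                          rng.uniform(0.0, 5.0, n_stars)])  # log g
# Synthetic "library": a smooth function of the parameters per wavelength.
wave = np.linspace(400, 700, n_wave)
flux = (np.sin(np.outer(params[:, 0], wave / 80.0))
        + 0.1 * params[:, 1:2])                 # shape (n_stars, n_wave)

# Vector-valued RBF interpolant over the stellar-parameter plane.
rbf = RBFInterpolator(params, flux, kernel="thin_plate_spline")
star = np.array([[4.1, 2.5]])                   # query stellar parameters
spectrum = rbf(star)[0]                         # interpolated spectrum
print("interpolated spectrum shape:", spectrum.shape)
```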
Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA
NASA Astrophysics Data System (ADS)
Meyer, Christoph
2018-01-01
The integration of differential equations of Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automatize the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.
Geometric modeling of subcellular structures, organelles, and multiprotein complexes
Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface presentation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
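A minimal sketch of two of the geometric measurements mentioned above, on a Lagrangian triangle mesh: surface area as a sum of triangle areas, and surface-enclosed volume via the divergence theorem. The unit-cube test mesh makes the exact answers (6 and 1) easy to check.

```python
import numpy as np

def surface_area(V, F):
    """Sum of triangle areas for vertices V (n,3) and faces F (m,3)."""
    a = V[F[:, 1]] - V[F[:, 0]]
    b = V[F[:, 2]] - V[F[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

def enclosed_volume(V, F):
    """Signed tetrahedra against the origin; needs consistent winding."""
    v0, v1, v2 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    return np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

# Unit cube: 8 vertices, 12 triangles.
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
             dtype=float)
F = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
              [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
              [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
print("area:", surface_area(V, F), "volume:", enclosed_volume(V, F))
```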
Absolute configuration of (-)-myrtenal by vibrational circular dichroism.
Burgueño-Tapia, Eleuterio; Zepeda, L Gerardo; Joseph-Nathan, Pedro
2010-07-01
The VCD spectrum of the monoterpene (-)-myrtenal (1) was compared with theoretical spectra using ab initio density functional theory (DFT) calculations at the B3LYP/6-31G(d,p), B3LYP/6-31G+(d,p), B3LYP/6-311G+(d,p), B3LYP/DGDZVP, and B3PW91/DGTZVP levels of theory. Conformational analysis of 1 indicated that the lowest energy conformer was s-trans-C2-C10, which contributes more than 98.5% to the total conformational population regardless of the employed level of theory. The use of a recently developed confidence level algorithm demonstrated that VCD spectra calculated for the main conformer, using the indicated hybrid functionals and basis sets, gave no significant changes, from which it follows that B3LYP/DGDZVP calculations provide a superior balance between computer cost and VCD spectral accuracy. The DGDZVP basis set demanded around a quarter of the time required by the 6-311G+(d,p) basis set while providing similar results. The spectral comparison also provided evidence that the levorotatory enantiomer of myrtenal has the 1R absolute configuration.
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent relative error in estimating the trace and a lower reduced chi2 value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
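A minimal sketch of the LLS and WLLS estimators in the framework compared above, on one synthetic voxel: the standard log-linearized diffusion tensor design with signal-squared WLLS weights. Gradient directions, b-value, and noise level are illustrative, and the paper's full Newton NLS/CNLS machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
b = 1000.0                                       # b-value (s/mm^2)
g = rng.standard_normal((30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)    # unit gradient directions

D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])       # prolate tensor (mm^2/s)
S0 = 1000.0
S = S0 * np.exp(-b * np.einsum("ij,jk,ik->i", g, D_true, g))
S += rng.normal(0, 15.0, S.size)                 # noise (Gaussian surrogate)

# Design matrix for y = ln S: [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
X = np.column_stack([np.ones_like(S),
                     -b * g[:, 0]**2, -b * g[:, 1]**2, -b * g[:, 2]**2,
                     -2 * b * g[:, 0] * g[:, 1],
                     -2 * b * g[:, 0] * g[:, 2],
                     -2 * b * g[:, 1] * g[:, 2]])
y = np.log(S)

beta_lls = np.linalg.lstsq(X, y, rcond=None)[0]  # plain LLS
W = np.diag(S**2)                                # WLLS weights ~ signal^2
beta_wlls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("trace LLS :", beta_lls[1:4].sum(), " (true:", np.trace(D_true), ")")
print("trace WLLS:", beta_wlls[1:4].sum(), " (true:", np.trace(D_true), ")")
```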
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-07-22
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS), and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm that applies coupled estimation of clock bias and orbit error is proposed. The projection of orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effect of the residual orbit error on positioning accuracy can be weakened by an evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is further improved. Rovers obtain PPP results by receiving the broadcast ephemeris and the real-time satellite clock bias coupled with orbit error. With the proposed algorithm, the precise orbit products provided by GNSS analysis centers are no longer necessary. On the basis of the preceding theoretical analysis, a real-time PPP system was developed, and experiments were designed to verify the algorithm. Experimental results show that the newly proposed approach performs better than traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of rovers inside and outside the network are improved by 38.8% and 36.1%, respectively, and the PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before a global PPP service comes into operation.
NASA Astrophysics Data System (ADS)
Sitko, Rafał
2008-11-01
Knowledge of X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981) and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
NASA Astrophysics Data System (ADS)
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
An algorithm of improving speech emotional perception for hearing aid
NASA Astrophysics Data System (ADS)
Xi, Ji; Liang, Ruiyu; Fei, Xianju
2017-07-01
In this paper, a speech emotion recognition (SER) algorithm was proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian radial basis function (RBF) kernel, the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. Then, the obtained kernel function was used as the basis kernel of multiple kernel learning (MKL) with a slack variable to mitigate the over-fitting problem. However, the slack variable also brings error into the result. Therefore, a soft-margin MKL was proposed to balance the margin against the error. Moreover, an iterative algorithm was used to solve for the combination coefficients and the hyperplane equation. Experimental results show that the proposed algorithm can achieve an accuracy of 90% for five kinds of emotions, including happiness, sadness, anger, fear, and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
FLUT - A program for aeroelastic stability analysis. [of aircraft structures in subsonic flow
NASA Technical Reports Server (NTRS)
Johnson, E. H.
1977-01-01
A computer program (FLUT) that can be used to evaluate the aeroelastic stability of aircraft structures in subsonic flow is described. The algorithm synthesizes data from a structural vibration analysis with an unsteady aerodynamics analysis and then performs a complex eigenvalue analysis to assess the system stability. The theoretical basis of the program is discussed with special emphasis placed on some innovative techniques which improve the efficiency of the analysis. User information needed to efficiently and successfully utilize the program is provided. In addition to identifying the required input, the flow of the program execution and some possible sources of difficulty are included. The use of the program is demonstrated with a listing of the input and output for a simple example.
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2005-10-01
A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.
Improving the Numerical Stability of Fast Matrix Multiplication
Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...
2016-10-04
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
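For reference, one recursion level of Strassen's algorithm, the prototypical fast method discussed above, can be written as follows; the block sizes and the comparison against the classical product are illustrative.

```python
# One recursion level of Strassen's algorithm for even-sized square matrices,
# falling back to classical multiplication for the sub-blocks.
import numpy as np

def strassen_one_level(A, B):
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % 2 == 0
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
print(np.max(np.abs(strassen_one_level(A, B) - A @ B)))  # forward error vs. classical
```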
Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K
2018-06-01
Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.
Xu, Jin; Xu, Zhao-Xia; Lu, Ping; Guo, Rui; Yan, Hai-Xia; Xu, Wen-Jie; Wang, Yi-Qin; Xia, Chun-Ming
2016-11-01
To develop an effective Chinese Medicine (CM) diagnostic model of coronary heart disease (CHD) and to confirm the scientific validity of the CM theoretical basis from an algorithmic viewpoint. Four types of objective diagnostic data were collected from 835 CHD patients by using a self-developed CM inquiry scale for the diagnosis of heart problems, a tongue diagnosis instrument, a ZBOX-I pulse digital collection instrument, and a sound acquisition system. These diagnostic data were analyzed and a CM diagnostic model was established using a multi-label learning algorithm (REAL). REAL was employed to establish a Xin (Heart) qi deficiency, Xin yang deficiency, Xin yin deficiency, blood stasis, and phlegm five-card CM diagnostic model, which had recognition rates of 80.32%, 89.77%, 84.93%, 85.37%, and 69.90%, respectively. The multi-label learning method established using the four diagnostic models, based on mutual-information feature selection, yielded good recognition results. The characteristic model parameters were selected by maximizing the mutual information for each card type. The four diagnostic methods used to obtain information in CM, i.e., observation, auscultation and olfaction, inquiry, and pulse diagnosis, can be characterized by these parameters, which is consistent with CM theory.
Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seedahmed, Gamal H.
2006-09-01
Direct solutions are very attractive because they obviate the need for initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) establishes itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank deficient model. This rank deficient model leaves the DLT defined up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using homogeneous coordinates representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.
Noninvasive identification of the total peripheral resistance baroreflex
NASA Technical Reports Server (NTRS)
Mukkamala, Ramakrishna; Toska, Karin; Cohen, Richard J.
2003-01-01
We propose two identification algorithms for quantitating the total peripheral resistance (TPR) baroreflex, an important contributor to short-term arterial blood pressure (ABP) regulation. Each algorithm analyzes beat-to-beat fluctuations in ABP and cardiac output, which may both be obtained noninvasively in humans. For a theoretical evaluation, we applied both algorithms to a realistic cardiovascular model. The results contrasted with only one of the algorithms proving to be reliable. This algorithm was able to track changes in the static gains of both the arterial and cardiopulmonary TPR baroreflex. We then applied both algorithms to a preliminary set of human data and obtained contrasting results much like those obtained from the cardiovascular model, thereby making the theoretical evaluation results more meaningful. This study suggests that, with experimental testing, the reliable identification algorithm may provide a powerful, noninvasive means for quantitating the TPR baroreflex. This study also provides an example of the role that models can play in the development and initial evaluation of algorithms aimed at quantitating important physiological mechanisms.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps to TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.
Fast algorithm for computing complex number-theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix FFT algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
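The idea of exact circular convolution via a number-theoretic transform can be sketched as follows. This illustration works modulo the NTT-friendly prime 998244353 with a direct O(n²) transform; it is not the GF(q²) Mersenne-prime construction nor the high-radix FFT structure of the paper.

```python
# Circular convolution via a number-theoretic transform (illustrative only).
MOD = 998244353          # 119 * 2^23 + 1; 3 is a primitive root mod this prime

def ntt(a, invert=False):
    n = len(a)                                   # n must divide 2^23
    w = pow(3, (MOD - 1) // n, MOD)              # primitive n-th root of unity
    if invert:
        w = pow(w, MOD - 2, MOD)
    out = [sum(x * pow(w, i * j, MOD) for j, x in enumerate(a)) % MOD
           for i in range(n)]
    if invert:
        n_inv = pow(n, MOD - 2, MOD)
        out = [x * n_inv % MOD for x in out]
    return out

def circular_convolution(a, b):
    fa, fb = ntt(a), ntt(b)
    return ntt([x * y % MOD for x, y in zip(fa, fb)], invert=True)

print(circular_convolution([1, 2, 3, 4], [5, 6, 7, 8]))  # exact, no rounding error
```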
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, P. T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
NASA Astrophysics Data System (ADS)
Kurien, Binoy G.; Ashcom, Jonathan B.; Shah, Vinay N.; Rachlin, Yaron; Tarokh, Vahid
2017-01-01
Atmospheric turbulence presents a fundamental challenge to Fourier phase recovery in optical interferometry. Typical reconstruction algorithms employ Bayesian inference techniques which rely on prior knowledge of the scene under observation. In contrast, redundant spacing calibration (RSC) algorithms employ redundancy in the baselines of the interferometric array to directly expose the contribution of turbulence, thereby enabling phase recovery for targets of arbitrary and unknown complexity. Traditionally RSC algorithms have been applied directly to single-exposure measurements, which are reliable only at high photon flux in general. In scenarios of low photon flux, such as those arising in the observation of dim objects in space, one must instead rely on time-averaged, atmosphere-invariant quantities such as the bispectrum. In this paper, we develop a novel RSC-based algorithm for prior-less phase recovery in which we generalize the bispectrum to higher order atmosphere-invariants (n-spectra) for improved sensitivity. We provide a strategy for selection of a high-signal-to-noise ratio set of n-spectra using the graph-theoretic notion of the minimum cycle basis. We also discuss a key property of this set (wrap-invariance), which then enables reliable application of standard linear estimation techniques to recover the Fourier phases from the 2π-wrapped n-spectra phases. For validation, we analyse the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures, and corroborate this analysis with simulation results showing performance near an atmosphere-oracle Cramer-Rao bound. Lastly, we apply techniques from the field of compressed sensing to perform image reconstruction from the estimated complex visibilities.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
Spatial Statistics for Segmenting Histological Structures in H&E Stained Tissue Images.
Nguyen, Luong; Tosun, Akif Burak; Fine, Jeffrey L; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra
2017-07-01
Segmenting a broad class of histological structures in transmitted light and/or fluorescence-based images is a prerequisite for determining the pathological basis of cancer, elucidating spatial interactions between histological structures in tumor microenvironments (e.g., tumor infiltrating lymphocytes), facilitating precision medicine studies with deep molecular profiling, and providing an exploratory tool for pathologists. This paper focuses on segmenting histological structures in hematoxylin- and eosin-stained images of breast tissues, e.g., invasive carcinoma, carcinoma in situ, atypical and normal ducts, adipose tissue, and lymphocytes. We propose two graph-theoretic segmentation methods based on local spatial color and nuclei neighborhood statistics. For benchmarking, we curated a data set of 232 high-power field breast tissue images together with expertly annotated ground truth. To accurately model the preference for histological structures (ducts, vessels, tumor nets, adipose, etc.) over the remaining connective tissue and non-tissue areas in ground truth annotations, we propose a new region-based score for evaluating segmentation algorithms. We demonstrate the improvement of our proposed methods over the state-of-the-art algorithms in both region- and boundary-based performance measures.
NASA Technical Reports Server (NTRS)
Duda, James L.; Barth, Suzanna C
2005-01-01
The VIIRS sensor provides measurements for 22 Environmental Data Records (EDRs) addressing the atmosphere, ocean surface temperature, ocean color, land parameters, aerosols, imaging for clouds and ice, and more. That is, the VIIRS collects visible and infrared radiometric data of the Earth's atmosphere, ocean, and land surfaces. Data types include atmospheric, clouds, Earth radiation budget, land/water and sea surface temperature, ocean color, and low light imagery. This wide scope of measurements calls for the preparation of a multiplicity of Algorithm Theoretical Basis Documents (ATBDs), and, additionally, for intermediate products such as cloud mask, et al. Furthermore, the VIIRS interacts with three or more other sensors. This paper addresses selected and crucial elements of the process being used to convert and test an immense volume of a maturing and changing science code to the initial operational source code in preparation for launch of NPP. The integrity of the original science code is maintained and enhanced via baseline comparisons when re-hosted, in addition to multiple planned code performance reviews.
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, including the laser emission and pulse laser ranging algorithm. An improved pulse ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm, a fusion of the matched filter algorithm and the CFD algorithm, is implemented in an FPGA chip. Finally, a laser ranging experiment is carried out to compare the ranging performance of the improved algorithm with the matched filter algorithm and the CFD algorithm on the laser ranging hardware system. The test analysis demonstrates that the hardware system realizes high speed processing and high speed sampling data transmission, and that the improved algorithm achieves a distance ranging precision of 0.3 m, meeting the expected performance and consistent with the theoretical simulation.
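A hedged sketch of the constant fraction discrimination (CFD) step on a sampled return pulse is shown below; the pulse shape, sampling rate, delay, and fraction are assumed values, and the matched-filter stage and FPGA implementation are omitted.

```python
# Constant fraction discrimination on a sampled pulse: an attenuated copy is
# subtracted from a delayed copy, and the zero crossing of the bipolar result
# gives an amplitude-independent time mark.
import numpy as np

fs = 1e9                                        # 1 GS/s sampling (assumed)
t = np.arange(0, 200e-9, 1 / fs)
pulse = np.exp(-((t - 80e-9) / 10e-9) ** 2)     # Gaussian return pulse (assumed)

fraction, delay_samples = 0.5, 8
cfd = np.roll(pulse, delay_samples) - fraction * pulse

# Locate the negative-to-positive zero crossing on the leading edge, then refine
# it by linear interpolation between the two bracketing samples.
i = np.where((cfd[:-1] < 0) & (cfd[1:] >= 0))[0][0]
frac = -cfd[i] / (cfd[i + 1] - cfd[i])
t_mark = (i + frac) / fs
print(t_mark)                                   # timing point, independent of amplitude
```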
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are otherwise idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
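For context, the serial Thomas algorithm underlying the pipelined solver can be sketched as follows; the parallel pipelining and the scheduling of work during idle time are not shown.

```python
# Serial Thomas algorithm: forward elimination followed by back substitution.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
a = np.full(n, -1.0); a[0] = 0.0               # sub-diagonal (a[0] unused)
c = np.full(n, -1.0); c[-1] = 0.0              # super-diagonal (c[-1] unused)
b = np.full(n, 2.0)
d = np.ones(n)
print(thomas(a, b, c, d))
```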
Diffeomorphic demons: efficient non-parametric image registration.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2009-03-01
We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
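A minimal sketch of the additive demons iteration that this work builds on is given below; the diffeomorphic variant would replace the additive field update with a composition involving the exponential of the smoothed update. The test images and parameters are assumptions.

```python
# Additive demons registration: Thirion's force, smoothed (fluid-like) and
# accumulated into a displacement field.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(fixed, moving, iters=50, sigma=2.0):
    """Return a displacement field u such that moving(x + u(x)) ~ fixed(x)."""
    u = np.zeros((2,) + fixed.shape)                       # (dy, dx) field
    gy, gx = np.gradient(fixed)                            # fixed-image gradient
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(iters):
        warped = map_coordinates(moving, [yy + u[0], xx + u[1]], order=1)
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2
        scale = np.zeros_like(diff)                        # demons force magnitude
        np.divide(diff, denom, out=scale, where=denom > 1e-9)
        u[0] += gaussian_filter(-scale * gy, sigma)        # smoothed force
        u[1] += gaussian_filter(-scale * gx, sigma)        # increments
    return u

# Toy test: a smooth blob shifted by (+3, -2) pixels (assumed synthetic images).
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / 60.0)
moving = np.exp(-((yy - 35.0) ** 2 + (xx - 30.0) ** 2) / 60.0)
u = demons_register(fixed, moving)
print(u[0][28:37, 28:37].mean(), u[1][28:37, 28:37].mean())  # roughly +3 and -2
```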
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu
2017-05-01
In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm by using the physical concept of Radon transform. The main key concept of proposed theoretical algorithm is to evaluate the existence possibility of the acoustic source within a searching region by using the geometric distance between each sensor element of acoustic detector and the corresponding searching region denoted by grid. We derive the mathematical equation for the magnitude of the existence possibility which can be used for implementing a new proposed algorithm. We handle and derive mathematical equations of proposed algorithm for the one-dimensional sensing array case as well as two dimensional sensing array case too. A mathematical k-wave simulation data are used for comparing the image quality of the proposed algorithm with that of general conventional algorithm in which the FFT should be necessarily used. From the k-wave Matlab simulation results, we can prove the effectiveness of the proposed reconstruction algorithm.
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
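The flavour of a smoothness-regularized rank-one dictionary update can be sketched as follows; this is a generic power-method-style alternation with a second-difference penalty on the atom, not the authors' exact update rule, and the matrix sizes and regularization weight are assumptions.

```python
# One rank-one update of a dictionary atom d and its code row c from a residual
# matrix E, with temporal smoothness on d enforced by a second-difference penalty.
import numpy as np

def smooth_rank_one_update(E, lam=10.0, iters=20):
    T, N = E.shape                                   # time points x voxels (assumed)
    D2 = np.diff(np.eye(T), n=2, axis=0)             # second-difference operator
    A = np.eye(T) + lam * (D2.T @ D2)                # smoothing system matrix
    c = np.random.default_rng(0).standard_normal(N)  # initial code row
    for _ in range(iters):
        d = np.linalg.solve(A, E @ c)                # regularised atom update
        d /= np.linalg.norm(d) + 1e-12
        c = E.T @ d                                  # least-squares code update
    return d, c

E = np.random.default_rng(1).standard_normal((120, 500))
d, c = smooth_rank_one_update(E)
print(d.shape, c.shape)
```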
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from a LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide spectra matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptides assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through a SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
UAV Control on the Basis of 3D Landmark Bearing-Only Observations.
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-11-27
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
Computing Game-Theoretic Solutions for Security in the Medium Term
This project concerns the design of algorithms for computing game- theoretic solutions . (Game theory concerns how to act in a strategically optimal...way in environments with other agents who also seek to act optimally but have different , and possibly opposite, interests .) Such algorithms have...recently found application in a number of real-world security applications, including among others airport security, scheduling Federal Air Marshals, and
Information theoretic analysis of linear shift-invariant edge-detection operators
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2012-06-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influences by the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed one definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information theory based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise etc., that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
Wavelength routing beyond the standard graph coloring approach
NASA Astrophysics Data System (ADS)
Blankenhorn, Thomas
2004-04-01
When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.
Fast algorithm for bilinear transforms in optics
NASA Astrophysics Data System (ADS)
Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana
2000-10-01
The fast algorithm for calculating the bilinear transform in the optical system is proposed. This algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination. The algorithm is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with the theoretical results.
Calculation of 3D Coordinates of a Point on the Basis of a Stereoscopic System
NASA Astrophysics Data System (ADS)
Mussabayev, R. R.; Kalimoldayev, M. N.; Amirgaliyev, Ye. N.; Tairova, A. T.; Mussabayev, T. R.
2018-05-01
The solution of the three-dimensional (3D) coordinate calculation task for a material point is considered. Two flat images (a stereopair) corresponding to the left and right viewpoints of a 3D scene are used for this purpose. The stereopair is obtained using two cameras with parallel optical axes. Analytical formulas for calculating the 3D coordinates of a material point in the scene were obtained on the basis of an analysis of the optical and geometrical schemes of the stereoscopic system. The algorithmic and hardware realization of the method is presented in detail, together with a practical procedure recommended for determining the unknown parameters of the optical system. A series of experimental investigations was conducted to verify the theoretical results. In these experiments, minor inaccuracies were caused by spatial distortions in the optical system and by its discreteness. With a high-quality stereoscopic system, the remaining calculation inaccuracy allows the method to be applied to a wide range of practical tasks.
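The triangulation relations for such a parallel-axis rig reduce to the standard disparity formulas, sketched below; the focal length, baseline, and pixel coordinates are illustrative assumptions.

```python
# 3-D point recovery for a rectified, parallel-axis stereo rig.
def triangulate(xl, yl, xr, f, baseline):
    """Image coordinates (pixels, origin at principal point) -> camera-frame XYZ."""
    disparity = xl - xr                    # same image row in both views (rectified)
    Z = f * baseline / disparity           # depth from disparity
    X = xl * Z / f
    Y = yl * Z / f
    return X, Y, Z

# Example: f = 800 px, baseline = 0.12 m, point seen at (x_l, y_l) = (40, 10)
# in the left image and x_r = 25 in the right image (assumed values).
print(triangulate(40.0, 10.0, 25.0, f=800.0, baseline=0.12))
```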
Guidelines for VCCT-Based Interlaminar Fatigue and Progressive Failure Finite Element Analysis
NASA Technical Reports Server (NTRS)
Deobald, Lyle R.; Mabson, Gerald E.; Engelstad, Steve; Prabhakar, M.; Gurvich, Mark; Seneviratne, Waruna; Perera, Shenal; O'Brien, T. Kevin; Murri, Gretchen; Ratcliffe, James;
2017-01-01
This document is intended to detail the theoretical basis, equations, references and data that are necessary to enhance the functionality of commercially available Finite Element codes, with the objective of having functionality better suited for the aerospace industry in the area of composite structural analysis. The specific area of focus will be improvements to composite interlaminar fatigue and progressive interlaminar failure. Suggestions are biased towards codes that perform interlaminar Linear Elastic Fracture Mechanics (LEFM) using Virtual Crack Closure Technique (VCCT)-based algorithms [1,2]. All aspects of the science associated with composite interlaminar crack growth are not fully developed and the codes developed to predict this mode of failure must be programmed with sufficient flexibility to accommodate new functional relationships as the science matures.
Wang, S W; Li, M; Yang, H F; Zhao, Y J; Wang, Y; Liu, Y
2016-04-18
To compare the accuracy of the iterative closest point (ICP) algorithm, the Procrustes analysis (PA) algorithm, and a landmark-dependent method for constructing the mid-sagittal plane (MSP) from cone beam computed tomography (CBCT), and to provide a theoretical basis for establishing a coordinate system for CBCT images and for symmetry analysis. Ten patients were selected and scanned by CBCT before orthodontic treatment. The scan data were imported into Mimics 10.0 to reconstruct three-dimensional skulls, and the MSP of each skull was generated by the ICP algorithm, the PA algorithm, and the landmark-dependent method. MSP extraction by the ICP or PA algorithm involved three steps. First, each 3D skull was processed in the reverse engineering software Geomagic Studio 2012 to obtain its mirror skull. Then, the original skull and its mirror were registered, by the ICP algorithm in Geomagic Studio 2012 or by the PA algorithm in NX Imageware 11.0. Finally, the registered data were combined into a new dataset used to calculate the MSP of the original data in Geomagic Studio 2012. For the traditional landmark-dependent method, the mid-sagittal plane was determined by sella (S), nasion (N), and basion (Ba) in the software InVivoDental 5.0. The distances from 9 pairs of symmetric anatomical landmarks to the three sagittal planes were measured, and the differences in their absolute values were compared. The one-way ANOVA test was used to analyze the differences among the 3 MSPs, with pairwise comparison performed by the LSD method. The MSPs calculated by the three methods were all acceptable for clinical analysis, as judged from the frontal view. However, there were significant differences among the distances from the 9 pairs of symmetric anatomical landmarks to the MSPs (F=10.932, P=0.001). The LSD test showed no significant difference between the ICP algorithm and the landmark-dependent method (P=0.11), but a significant difference between the PA algorithm and the landmark-dependent method (P=0.01). The mid-sagittal plane of a 3D skull can therefore be generated on the basis of the ICP or PA algorithm, with no significant difference between the ICP algorithm and the landmark-dependent method. For subjects with no evident asymmetry, the ICP algorithm is feasible for clinical analysis.
Spectral Anonymization of Data
Lasko, Thomas A.; Vinterbo, Staal A.
2011-01-01
The goal of data anonymization is to allow the release of scientifically useful data in a form that protects the privacy of its subjects. This requires more than simply removing personal identifiers from the data, because an attacker can still use auxiliary information to infer sensitive individual information. Additional perturbation is necessary to prevent these inferences, and the challenge is to perturb the data in a way that preserves its analytic utility. No existing anonymization algorithm provides both perfect privacy protection and perfect analytic utility. We make the new observation that anonymization algorithms are not required to operate in the original vector-space basis of the data, and many algorithms can be improved by operating in a judiciously chosen alternate basis. A spectral basis derived from the data’s eigenvectors is one that can provide substantial improvement. We introduce the term spectral anonymization to refer to an algorithm that uses a spectral basis for anonymization, and we give two illustrative examples. We also propose new measures of privacy protection that are more general and more informative than existing measures, and a principled reference standard with which to define adequate privacy protection. PMID:21373375
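A toy sketch of the spectral idea, operating in the data's eigenvector basis rather than the original coordinate basis, might look as follows; it illustrates the basis change with a simple per-coordinate permutation and is not either of the paper's specific algorithms.

```python
# Perturb data in its spectral (eigenvector) basis: project, permute each
# spectral coordinate independently across records, and project back.
import numpy as np

def spectral_perturb(X, rng):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # spectral basis
    S = Xc @ Vt.T                                        # coordinates in that basis
    for j in range(S.shape[1]):
        S[:, j] = rng.permutation(S[:, j])               # independent per-coordinate shuffle
    return S @ Vt + X.mean(axis=0)

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=500)
X_anon = spectral_perturb(X, rng)
print(np.cov(X, rowvar=False).round(2))
print(np.cov(X_anon, rowvar=False).round(2))             # covariance approximately preserved
```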
A new fast algorithm for computing a complex number: Theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
Information Clustering Based on Fuzzy Multisets.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki
2003-01-01
Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…
Digital Sound Encryption with Logistic Map and Number Theoretic Transform
NASA Astrophysics Data System (ADS)
Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT
2018-03-01
Digital sound security has limits when encrypting in the frequency domain. A number theoretic transform based on the field GF(2^521 - 1) improves on and addresses that problem. The algorithm for this sound encryption is based on a combination of a chaos function and the number theoretic transform. The chaos function used in this paper is the logistic map. The trials and simulations are conducted using 5 different digital sound files in the WAV file format as test data, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space formed is very large, exceeding 10^469. The processing speed of the encryption algorithm is slightly affected by the number theoretic transform.
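The keystream side of such a scheme can be sketched with a logistic-map orbit quantised to bytes and XOR-ed with the audio samples; the parameters, the quantisation rule, and the omission of the number theoretic transform stage are simplifying assumptions of this illustration.

```python
# Logistic-map keystream XOR-ed with 8-bit audio samples (illustrative only).
import numpy as np

def logistic_keystream(n, x0=0.654321, r=3.99, burn_in=1000):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):                 # discard the transient of the orbit
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256          # quantise orbit value to a byte
    return out

samples = np.frombuffer(b"example 8-bit PCM audio payload", dtype=np.uint8)
key = logistic_keystream(samples.size)
cipher = samples ^ key
assert np.array_equal(cipher ^ key, samples)  # XOR keystream is self-inverting
print(cipher[:10])
```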
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
NASA Astrophysics Data System (ADS)
Bristow, Quentin
1990-01-01
Part one of this two-part study is concerned with the multiple coincidences in pulse trains from X-ray and gamma radiation detectors which are the cause of pulse pileup. A sequence of pulses with inter-arrival times less than tau, the resolving time of the pulse-height analysis system used to acquire spectra, is called a multiple pulse string. Such strings can be classified on the basis of the number of pulses they contain, or the number of resolving times they cover. The occurrence rates of such strings are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a NaI(Tl) scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Part two of the study is concerned with a theoretical analysis of pulse pileup and the development of a discrete correction algorithm, based on the use of a function to simulate the coincidence spectrum produced by partial sums of pulses. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern ADC's, were used to generate pileup spectra due to coincidences between two pulses, (1st order pileup) and three pulses (2nd order pileup), for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum which can be regarded as an impulse response for a particular pulse shape. The use of a flat spectrum (identical count rates in all channels) in the simulations, and in a parallel theoretical analysis, showed the 1st order pileup distorted the spectrum to a linear ramp with a pileup tail. The correction algorithm was successfully applied to correct entire spectra for 1st and 2nd order pileup; both those generated by Monte Carlo simulations and in addition some real spectra acquired with a laboratory multichannel analysis system.
French, Robert M; Glady, Yannick; Thibaut, Jean-Pierre
2017-08-01
In recent years, eyetracking has begun to be used to study the dynamics of analogy making. Numerous scanpath-comparison algorithms and machine-learning techniques are available that can be applied to the raw eyetracking data. We show how scanpath-comparison algorithms, combined with multidimensional scaling and a classification algorithm, can be used to resolve an outstanding question in analogy making, namely, whether or not children's and adults' strategies in solving analogy problems are different. (They are.) We show which of these scanpath-comparison algorithms is best suited to the kinds of analogy problems that have formed the basis of much analogy-making research over the years. Furthermore, we use machine-learning classification algorithms to examine the item-to-item saccade vectors making up these scanpaths. We show which of these algorithms best predicts, from very early on in a trial, on the basis of the frequency of various item-to-item saccades, whether a child or an adult is doing the problem. This type of analysis can also be used to predict, on the basis of the item-to-item saccade dynamics in the first third of a trial, whether or not a problem will be solved correctly.
A roadmap of clustering algorithms: finding a match for a biomedical application.
Andreopoulos, Bill; An, Aijun; Wang, Xiaogang; Schroeder, Michael
2009-05-01
Clustering is ubiquitously applied in bioinformatics with hierarchical clustering and k-means partitioning being the most popular methods. Numerous improvements of these two clustering methods have been introduced, as well as completely different approaches such as grid-based, density-based and model-based clustering. For improved bioinformatics analysis of data, it is important to match clusterings to the requirements of a biomedical application. In this article, we present a set of desirable clustering features that are used as evaluation criteria for clustering algorithms. We review 40 different clustering algorithms of all approaches and datatypes. We compare algorithms on the basis of desirable clustering features, and outline algorithms' benefits and drawbacks as a basis for matching them to biomedical applications.
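As a small illustration of matching clustering approaches to data, the sketch below runs a partitioning, a hierarchical, and a density-based method (three of the families named above) on the same synthetic data set and scores each against the known labels. The data set, parameter choices, and scoring metric are illustrative assumptions rather than part of the review.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)
algorithms = {
    "k-means (partitioning)": KMeans(n_clusters=3, n_init=10, random_state=0),
    "hierarchical (agglomerative)": AgglomerativeClustering(n_clusters=3),
    "density-based (DBSCAN)": DBSCAN(eps=1.0, min_samples=5),
}
for name, algo in algorithms.items():
    labels = algo.fit_predict(X)
    print(f"{name}: adjusted Rand index = {adjusted_rand_score(y, labels):.2f}")
```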
Satisfiability of logic programming based on radial basis function neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged
2014-07-10
In this paper, we propose a new technique to test the satisfiability of propositional logic programming and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent the propositional logic which has exactly three variables in each clause. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum squared error function is used to measure the activity of the two algorithms. We applied the developed technique with the recurrent radial basis function neural networks to represent the quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
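The passage above describes an RBF network whose hidden parameters come from K-means and whose output weights are then optimized. The sketch below follows that structure for a single illustrative 3-variable clause, but solves for the output weights by ordinary least squares rather than by the Prey-predator algorithm used in the paper; the clause, the number of hidden units, and the width heuristic are assumptions made for the example.

```python
import numpy as np
from itertools import product
from sklearn.cluster import KMeans

# truth table of one illustrative 3-variable clause: (x1 OR x2 OR NOT x3)
X = np.array(list(product([0.0, 1.0], repeat=3)))
y = np.array([1.0 if (a or b or not c) else 0.0 for a, b, c in X])

k = 4                                                        # number of hidden RBF units
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)  # centers from K-means
centers = km.cluster_centers_
widths = np.array([                                          # width = spread of each cluster (with a floor)
    max(np.mean(np.linalg.norm(X[km.labels_ == j] - centers[j], axis=1)), 0.5)
    for j in range(k)
])
H = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * widths ** 2))
w, *_ = np.linalg.lstsq(H, y, rcond=None)                    # output weights (least squares here)
print("mean squared error:", np.mean((H @ w - y) ** 2))
```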
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
NASA Astrophysics Data System (ADS)
Platonov, A.; Zaitsev, Ie.; Opalski, L. J.
2017-08-01
The paper presents an overview of the design methodology and results of experiments with a Prototype of highly efficient optimal adaptive feedback communication systems (AFCS), transmitting low-frequency analog signals without coding. The paper emphasizes the role of forward-transmitter saturation as the factor that blocked implementation of the theoretical results of the pioneering (1960s-1970s) and later research on FCS. A deeper analysis of the role of the statistical fitting condition in the adequate formulation and solution of the AFCS optimization task is given. The solution of the task, the optimal transmission/reception algorithms, is presented in a form useful for elaboration of the hardware/software Prototype. A notable particularity of the Prototype is the absence of encoding/decoding units, whose functions are realized by the adaptive pulse amplitude modulator (PAM) of the forward transmitter (FT) and the estimating/controlling algorithm in the receiver of the base station (BS). Experiments confirm that the Prototype transmits signals from FT to BS "perfectly": with a bit rate equal to the capacity of the system, and with limiting energy [J/bit] and spectral [bps/Hz] efficiency. Another experimentally confirmed and no less important particularity of AFCS is its capability to adjust the parameters of FT and BS to the characteristics of the application scenario and to maintain the ideal regime of transmission, including spectral-energy efficiency. AFCS adjustment can be made using BS estimates of the mean square error (MSE). The concluding part of the paper contains a discussion of the presented results, stressing the capability of AFCS to solve problems appearing in the development of dense wireless networks.
Development of tight-binding based GW algorithm and its computational implementation for graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majidi, Muhammad Aziz; NUSNNI-NanoCore, Department of Physics, National University of Singapore; Singapore Synchrotron Light Source
Graphene has been a hot subject of research in the last decade as it holds a promise for various applications. One interesting issue is whether or not graphene should be classified into a strongly or weakly correlated system, as the optical properties may change upon several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate the correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is to be constructed through a tight-binding based or similar models. This study includes theoretical formulation of the Green's function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
NASA Astrophysics Data System (ADS)
Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas
2016-04-01
One of the major obstacles in modern global gravity field modelling is the seamless combination of lower degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients, meaning that for both observation classes (resp. models) a spherical harmonic analysis is first done independently, solving dense normal equations (NEQ) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Obviously, such methods are unable to identify or eliminate effects such as spectral leakage due to band limitations of the models and the non-orthogonality of the spherical harmonic base functions. To counteract such problems, a combination of both models on the NEQ basis is desirable. Theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new favorable sparsity pattern that leads to an efficient computational solving method. Such a pattern can be found in the so-called kite structure (Bosch, 1993), which is achieved when applying the kite ordering to the stacked NEQ matrix. In a first step it is shown what is needed to obtain the kite NEQ system, how to solve it efficiently, and also how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is put on the presentation of special distributed algorithms which can solve those systems in parallel on an arbitrary number of processes and are therefore suitable for application on supercomputers (such as SuperMUC). Finally, some detailed problems are shown that occur when dealing with high degree spherical harmonic base functions (mostly due to instabilities of Legendre polynomials), and an appropriate solution for each is introduced.
Daub, Carsten O; Steuer, Ralf; Selbig, Joachim; Kloska, Sebastian
2004-01-01
Background The information theoretic concept of mutual information provides a general framework to evaluate dependencies between variables. In the context of the clustering of genes with similar patterns of expression it has been suggested as a general quantity of similarity to extend commonly used linear measures. Since mutual information is defined in terms of discrete variables, its application to continuous data requires the use of binning procedures, which can lead to significant numerical errors for datasets of small or moderate size. Results In this work, we propose a method for the numerical estimation of mutual information from continuous data. We investigate the characteristic properties arising from the application of our algorithm and show that our approach outperforms commonly used algorithms: The significance, as a measure of the power of distinction from random correlation, is significantly increased. This concept is subsequently illustrated on two large-scale gene expression datasets and the results are compared to those obtained using other similarity measures. A C++ source code of our algorithm is available for non-commercial use from kloska@scienion.de upon request. Conclusion The utilisation of mutual information as similarity measure enables the detection of non-linear correlations in gene expression datasets. Frequently applied linear correlation measures, which are often used on an ad-hoc basis without further justification, are thereby extended. PMID:15339346
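For readers who want the basic quantity behind the comparison above, the sketch below estimates mutual information from continuous data with the simplest equal-width binning; the paper's own estimator refines exactly this step, which is where naive binning introduces the numerical errors discussed. The bin count and test signals here are illustrative assumptions only.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Naive histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
print(mutual_information(x, x + 0.5 * rng.normal(size=2000)))  # dependent signals: clearly positive
print(mutual_information(x, rng.normal(size=2000)))            # independent signals: close to zero
```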
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
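A minimal one-dimensional sketch of the sampling-and-weighting idea described above is given below, assuming a Legendre (uniform-measure) polynomial family on [-1, 1]: samples are drawn from the arcsine (equilibrium) density and the least-squares problem is weighted by the reciprocal of the normalized Christoffel-function denominator. The target function, degrees, and sample count are illustrative assumptions, and the multivariate, pluripotential-theoretic machinery of the paper is not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

N, M = 20, 200                                  # basis size (degrees 0..N-1), number of samples
rng = np.random.default_rng(0)
x = np.cos(np.pi * rng.random(M))               # samples from the arcsine/equilibrium measure on [-1, 1]
f = np.exp(x) * np.sin(3 * x)                   # illustrative target function

def basis(pts):
    # Legendre polynomials orthonormal w.r.t. the uniform probability measure on [-1, 1]
    return np.stack([np.sqrt(2 * k + 1) * legendre.legval(pts, [0] * k + [1]) for k in range(N)], axis=1)

V = basis(x)
w = N / (V ** 2).sum(axis=1)                    # Christoffel-function weights
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)

xg = np.linspace(-1, 1, 1001)
print("max error:", np.max(np.abs(basis(xg) @ coef - np.exp(xg) * np.sin(3 * xg))))
```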
Efficient field-theoretic simulation of polymer solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106
2014-12-14
We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noël M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
Engineering Education as a Complex System
ERIC Educational Resources Information Center
Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim
2011-01-01
This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating…
QCCM Center for Quantum Algorithms
2008-10-17
This report covers research on quantum algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations (e.g., ...). Subject terms: quantum algorithms, quantum computing, fault-tolerant error correction. Related publication: A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, D. Thérien, "Algebraic results on quantum automata," Theory of Computing Systems 39 (2006).
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
A Thermo-Optic Propagation Modeling Capability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schrader, Karl; Akau, Ron
2014-10-01
A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data was collected to validate the structural, thermal, and optical modeling methods.
Determination of cloud parameters from infrared sounder data
NASA Technical Reports Server (NTRS)
Yeh, H.-Y. M.
1984-01-01
The World Climate Research Programme (WCRP) plan is concerned with the need to develop a uniform global cloud climatology as part of a broad research program on climate processes. The International Satellite Cloud Climatology Project (ISCCP) has been approved as the first project of the WCRP. The ISCCP has the basic objective to collect and analyze satellite radiance data to infer the global distribution of cloud radiative properties in order to improve the modeling of cloud effects on climate. Research is conducted to explore an algorithm for retrieving cloud properties by utilizing the available infrared sounder data from polar-orbiting satellites. A numerical method is developed for computing cloud top heights, amount, and emissivity on the basis of a parameterized infrared radiative transfer equation for cloudy atmospheres. Theoretical studies were carried out by considering a synthetic atmosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.
Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near minimal probability of error decision algorithm.
Evaluation and Prediction of Water Resources Based on AHP
NASA Astrophysics Data System (ADS)
Li, Shuai; Sun, Anqi
2017-01-01
Nowadays, the shortage of water resources is a growing threat. In order to address the problem of water resources being constrained by a variety of factors, this paper establishes a water resources evaluation index (WREI) model, which adopts fuzzy comprehensive evaluation (FCE) based on the analytic hierarchy process (AHP) algorithm. After considering the factors influencing water resources, we ignore secondary factors and arrange the main factors hierarchically by class, setting up a three-layer structure whose top layer is the WREI. The analytic hierarchy process (AHP) is first used to determine the weights, and fuzzy judgment is then used to evaluate the target; the combined use of the two algorithms reduces the subjective influence of AHP and overcomes the disadvantages of multi-level evaluation. To validate the model, we choose India as a target region. On the basis of the water resources evaluation index model, we use Matlab and combine grey prediction with linear prediction to discuss the ability to provide clean water in India and the trend of India's water resources over the next 15 years. The model, with theoretical support and practical significance, can provide reliable data support and a reference for plans to improve water quality.
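To make the two-stage procedure concrete, the sketch below derives AHP weights from a pairwise comparison matrix (principal eigenvector plus a consistency check) and then combines them with a fuzzy membership matrix in a simple fuzzy comprehensive evaluation. The three criteria, the comparison judgments, and the membership numbers are purely illustrative assumptions, not values from the Indian case study.

```python
import numpy as np

# hypothetical pairwise comparisons of three criteria (e.g. quantity, quality, use efficiency)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, idx].real)
w = w / w.sum()                                   # AHP weights from the principal eigenvector

lam_max = eigvals.real[idx]
CR = ((lam_max - 3) / (3 - 1)) / 0.58             # consistency ratio (RI = 0.58 for a 3x3 matrix)

# hypothetical fuzzy memberships of each criterion in the grades (good, fair, poor)
R = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5],
              [0.3, 0.4, 0.3]])
B = w @ R                                         # fuzzy comprehensive evaluation vector
print("weights:", np.round(w, 3), "CR:", round(CR, 3),
      "grade:", ["good", "fair", "poor"][int(np.argmax(B))])
```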
NASA Astrophysics Data System (ADS)
Ackley, Kendall; Eikenberry, Stephen; Klimenko, Sergey; LIGO Team
2017-01-01
We present a false-alarm rate for a joint detection of gravitational wave (GW) events and associated electromagnetic (EM) counterparts for Advanced LIGO and Virgo (LV) observations during the first years of operation. Using simulated GW events and their reconstructed probability skymaps, we tile over the error regions using sets of archival wide-field telescope survey images and recover the number of astrophysical transients to be expected during LV-EM followup. With the known GW event injection coordinates, we inject artificial electromagnetic (EM) sources at that site, based on theoretical and observational models, on a one-to-one basis. We calculate the EM false-alarm probability using an unsupervised machine learning algorithm based on shapelet analysis, which has been shown to be a strong discriminator between astrophysical transients and image artifacts while reducing the set of transients to be manually vetted by five orders of magnitude. We also show the performance of our method in context with other machine-learned transient classification and reduction algorithms, showing comparability without the need for a large set of training data, opening the possibility for next-generation telescopes to take advantage of this pipeline for LV-EM followup missions.
Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J
2012-10-01
The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
Consideration of online rainfall measurement and nowcasting for RTC of the combined sewage system.
Rouault, P; Schroeder, K; Pawlowsky-Reusing, E; Reimer, E
2008-01-01
In Berlin, Germany, the demand for enhanced protection of the environment and the growing economic pressure have led to an increased application of control concepts within the sewage system. A global control strategy to regulate the pumpage of the combined sewage system to the treatment plant was developed and evaluated in a theoretical study. The objective was to reduce CSO. In this paper an extension of the existing control algorithm by information from online rainfall measurement and radar nowcasting is described. The rainfall information is taken into account by two additive terms describing the predicted volume from rainfall runoff. On the basis of numerical simulation, the potential of these two complementary forecast terms in the global control algorithm to further reduce CSO is evaluated. The investigations are based on long-time simulations that are conducted with the dynamic flow routing model InfoWorks for three subcatchments of the Berlin drainage system. The results show that at the current Berlin system a CSO reduction of only 0.8% is possible. The effect of the forecast terms is limited by operational constraints. Limits are set both to the delivery from each individual pump station and to the total pumpage to the treatment plant.
Chou, Ting-Chao
2006-09-01
The median-effect equation derived from the mass-action law principle at equilibrium-steady state via mathematical induction and deduction for different reaction sequences and mechanisms and different types of inhibition has been shown to be the unified theory for the Michaelis-Menten equation, Hill equation, Henderson-Hasselbalch equation, and Scatchard equation. It is shown that dose and effect are interchangeable via defined parameters. This general equation for the single drug effect has been extended to the multiple drug effect equation for n drugs. These equations provide the theoretical basis for the combination index (CI)-isobologram equation that allows quantitative determination of drug interactions, where CI < 1, = 1, and > 1 indicate synergism, additive effect, and antagonism, respectively. Based on these algorithms, computer software has been developed to allow automated simulation of synergism and antagonism at all dose or effect levels. It displays the dose-effect curve, median-effect plot, combination index plot, isobologram, dose-reduction index plot, and polygonogram for in vitro or in vivo studies. This theoretical development, experimental design, and computerized data analysis have facilitated dose-effect analysis for single drug evaluation or carcinogen and radiation risk assessment, as well as for drug or other entity combinations in a vast field of disciplines of biomedical sciences. In this review, selected examples of applications are given, and step-by-step examples of experimental designs and real data analysis are also illustrated. The merging of the mass-action law principle with mathematical induction-deduction has been proven to be a unique and effective scientific method for general theory development. The median-effect principle and its mass-action law based computer software are gaining increased applications in biomedical sciences, from how to effectively evaluate a single compound or entity to how to beneficially use multiple drugs or modalities in combination therapies.
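As a numerical illustration of the combination index defined above, the sketch below inverts the median-effect equation, fa/fu = (D/Dm)^m, to find the single-drug doses that alone would produce a chosen affected fraction, and then forms CI = D1/Dx1 + D2/Dx2 (the mutually exclusive form, without the optional cross term). All drug parameters and doses are invented for the example and do not come from the review.

```python
import numpy as np

def dose_for_effect(fa, Dm, m):
    # median-effect equation fa/fu = (D/Dm)^m  =>  D = Dm * (fa/(1-fa))^(1/m)
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

Dm1, m1 = 1.0, 1.2          # hypothetical drug 1: median-effect dose and slope
Dm2, m2 = 5.0, 0.9          # hypothetical drug 2

fa = 0.5                    # affected fraction reached by the combination
D1, D2 = 0.4, 2.0           # doses actually used together to reach fa
CI = D1 / dose_for_effect(fa, Dm1, m1) + D2 / dose_for_effect(fa, Dm2, m2)
verdict = "synergism" if CI < 1 else ("additive effect" if np.isclose(CI, 1) else "antagonism")
print(f"CI = {CI:.2f} -> {verdict}")
```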
Insight into efficient image registration techniques and the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Malis, Ezio; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
As image registration becomes more and more central to many biomedical imaging applications, the efficiency of the algorithms becomes a key issue. Image registration is classically performed by optimizing a similarity criterion over a given spatial transformation space. Even if this problem is considered as almost solved for linear registration, we show in this paper that some tools that have recently been developed in the field of vision-based robot control can outperform classical solutions. The adequacy of these tools for linear image registration leads us to revisit non-linear registration and allows us to provide interesting theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage to the symmetric forces variant of the demons algorithm. We show that, on controlled experiments, this advantage is confirmed, and yields a faster convergence.
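The sketch below is a bare-bones 2-D version of the classical single-force demons update discussed above: at each iteration the displacement field is pushed along the fixed-image gradient in proportion to the intensity difference and then smoothed with a Gaussian as a diffusion-like regularization. It is only meant to show the structure of the iteration; the symmetric-forces variant analyzed in the paper, multi-resolution handling, and stopping criteria are omitted, and all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, iters=50, sigma=1.5):
    """Return a displacement field (uy, ux) such that moving(x + u) approximates fixed(x)."""
    uy = np.zeros_like(fixed, dtype=float)
    ux = np.zeros_like(fixed, dtype=float)
    yy, xx = np.meshgrid(np.arange(fixed.shape[0]), np.arange(fixed.shape[1]), indexing="ij")
    gy, gx = np.gradient(fixed.astype(float))          # fixed-image gradient ("passive" force)
    for _ in range(iters):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        diff = warped - fixed                          # intensity mismatch
        denom = gx ** 2 + gy ** 2 + diff ** 2
        denom[denom == 0] = 1.0
        uy = gaussian_filter(uy - diff * gy / denom, sigma)   # demons step + Gaussian regularization
        ux = gaussian_filter(ux - diff * gx / denom, sigma)
    return uy, ux

fixed = np.zeros((64, 64)); fixed[24:40, 24:40] = 1.0
moving = np.roll(fixed, 3, axis=1)                     # same square shifted right by 3 pixels
uy, ux = demons_2d(gaussian_filter(fixed, 1), gaussian_filter(moving, 1))
# ux should move toward +3 pixels inside the square region as the iteration proceeds
```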
Experimental and analytical study of secondary path variations in active engine mounts
NASA Astrophysics Data System (ADS)
Hausberg, Fabian; Scheiblegger, Christian; Pfeffer, Peter; Plöchl, Manfred; Hecker, Simon; Rupp, Markus
2015-03-01
Active engine mounts (AEMs) provide an effective solution to further improve the acoustic and vibrational comfort of passenger cars. Typically, adaptive feedforward control algorithms, e.g., the filtered-x-least-mean-squares (FxLMS) algorithm, are applied to cancel disturbing engine vibrations. These algorithms require an accurate estimate of the AEM active dynamic characteristics, also known as the secondary path, in order to guarantee control performance and stability. This paper focuses on the experimental and theoretical study of secondary path variations in AEMs. The impact of three major influences, namely nonlinearity, change of preload and component temperature, on the AEM active dynamic characteristics is experimentally analyzed. The obtained test results are theoretically investigated with a linear AEM model which incorporates an appropriate description for elastomeric components. A special experimental set-up extends the model validation of the active dynamic characteristics to higher frequencies up to 400 Hz. The theoretical and experimental results show that significant secondary path variations are merely observed in the frequency range of the AEM actuator's resonance frequency. These variations mainly result from the change of the component temperature. As the stability of the algorithm is primarily affected by the actuator's resonance frequency, the findings of this paper facilitate the design of AEMs with simpler adaptive feedforward algorithms. From a practical point of view it may further be concluded that algorithmic countermeasures against instability are only necessary in the frequency range of the AEM actuator's resonance frequency.
Information theoretic analysis of edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced into the process by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information theory based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene, and the parameters, such as sampling, additive noise etc., that define the image gathering system. The edge detection algorithm is regarded to have high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods. There is not a common tool that can be used to evaluate the performance of the different algorithms, and to give people a guide for selecting the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare the different edge detection operators in a common environment.
Parallel matrix multiplication on the Connection Machine
NASA Technical Reports Server (NTRS)
Tichy, Walter F.
1988-01-01
Matrix multiplication is a computation and communication intensive problem. Six parallel algorithms for matrix multiplication on the Connection Machine are presented and compared with respect to their performance and processor usage. For n by n matrices, the algorithms have theoretical running times of O(n^2 log n), O(n log n), O(n), and O(log n), and require n, n^2, n^2, and n^3 processors, respectively. With careful attention to communication patterns, the theoretically predicted runtimes can indeed be achieved in practice. The parallel algorithms illustrate the tradeoffs between performance, communication cost, and processor usage.
NASA Astrophysics Data System (ADS)
Drozd, Marek
2018-03-01
According to literature data, two structures of guanidine with nitrobenzoic acids are known. For guanidinium 4-nitrobenzoate, detailed studies of the X-ray structure and of the vibrational and theoretical properties have been performed. This compound was classified as a second harmonic generator with an efficiency 3.3 times that of KDP, the standard crystal. In contrast to the results mentioned above, for guanidinium 3-nitrobenzoate only a basic X-ray diffraction study has been performed. On the basis of the established crystallographic results, a detailed investigation of the geometry and vibrational properties was made using theoretical calculations. From these data the equilibrium geometry of the investigated molecule was established, and on this basis detailed computational studies of the vibrational properties were performed. The theoretical IR and Raman frequencies, intensities, and PED analysis are presented. Additionally, the NBO charges, HOMO and LUMO shapes, and NLO properties of the title crystal were calculated. On the basis of these results the crystal was classified as a second-order NLO generator with higher efficiency than the guanidinium 4-nitrobenzoate compound. The obtained data are compared with experimental crystallographic and vibrational results for a real crystal of guanidinium 3-nitrobenzoate. Additionally, the theoretical vibrational spectra are compared with literature calculations for the guanidinium 4-nitrobenzoate compound.
Dust-concentration measurement based on Mie scattering of a laser beam
Yu, Xiaoyu; Shi, Yunbo; Wang, Tian; Sun, Xu
2017-01-01
To realize automatic measurement of the concentration of dust particles in the air, a theory for dust concentration measurement was developed, and a system was designed to implement the dust concentration measurement method based on laser scattering. In the study, the principle of dust concentration detection using laser scattering is studied, and the detection basis of Mie scattering theory is determined. Through simulation, the influence of the incident laser wavelength, dust particle diameter, and refractive index of dust particles on the scattered light intensity distribution are obtained for determining the scattered light intensity curves of single suspended dust particles under different characteristic parameters. A genetic algorithm was used to study the inverse particle size distribution, and the reliability of the measurement system design is proven theoretically. The dust concentration detection system, which includes a laser system, computer circuitry, air flow system, and control system, was then implemented according to the parameters obtained from the theoretical analysis. The performance of the designed system was evaluated. Experimental results show that the system performance was stable and reliable, resulting in high-precision automatic dust concentration measurement with strong anti-interference ability. PMID:28767662
Exploring super-Gaussianity toward robust information-theoretical time delay estimation.
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee
2013-03-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
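The key closed form used above is the differential entropy of the univariate generalized Gaussian density p(x) ∝ exp(-(|x|/α)^β), which is h = 1/β + ln(2αΓ(1/β)/β) and reduces to the familiar Laplacian and Gaussian entropies at β = 1 and β = 2. The sketch below evaluates it at unit variance for a few shape parameters; it covers only the univariate formula, not the multivariate expressions or the TDE criterion itself.

```python
import numpy as np
from scipy.special import gamma

def ggd_entropy(alpha, beta):
    # differential entropy (nats) of p(x) = beta/(2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta)
    return 1.0 / beta + np.log(2.0 * alpha * gamma(1.0 / beta) / beta)

for beta in (1.0, 2.0, 8.0):                               # Laplacian, Gaussian, nearly uniform shapes
    alpha = np.sqrt(gamma(1.0 / beta) / gamma(3.0 / beta)) # scale giving unit variance
    print(f"beta = {beta}: h = {ggd_entropy(alpha, beta):.4f} nats")
```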
Using circuit theory to model connectivity in ecology, evolution, and conservation.
McRae, Brad H; Dickson, Brett G; Keitt, Timothy H; Shah, Viral B
2008-10-01
Connectivity among populations and habitats is important for a wide range of ecological processes. Understanding, preserving, and restoring connectivity in complex landscapes requires connectivity models and metrics that are reliable, efficient, and process based. We introduce a new class of ecological connectivity models based in electrical circuit theory. Although they have been applied in other disciplines, circuit-theoretic connectivity models are new to ecology. They offer distinct advantages over common analytic connectivity models, including a theoretical basis in random walk theory and an ability to evaluate contributions of multiple dispersal pathways. Resistance, current, and voltage calculated across graphs or raster grids can be related to ecological processes (such as individual movement and gene flow) that occur across large population networks or landscapes. Efficient algorithms can quickly solve networks with millions of nodes, or landscapes with millions of raster cells. Here we review basic circuit theory, discuss relationships between circuit and random walk theories, and describe applications in ecology, evolution, and conservation. We provide examples of how circuit models can be used to predict movement patterns and fates of random walkers in complex landscapes and to identify important habitat patches and movement corridors for conservation planning.
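The quantity that makes circuit theory attractive here, the effective resistance between two habitat patches, can be computed directly from the pseudoinverse of the graph Laplacian, as in the minimal sketch below. The four-patch conductance matrix is an invented toy landscape; real applications operate on raster grids with millions of cells and use sparse solvers instead of a dense pseudoinverse.

```python
import numpy as np

# conductances (1/resistance) between four hypothetical habitat patches
A = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
Lp = np.linalg.pinv(L)                    # Moore-Penrose pseudoinverse

def effective_resistance(i, j):
    # resistance distance: low values indicate many/strong parallel dispersal pathways
    return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

print(effective_resistance(0, 3), effective_resistance(0, 1))
```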
NASA Astrophysics Data System (ADS)
Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker
2017-08-01
Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Practical sliced configuration spaces for curved planar pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, E.
1999-01-01
In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.
The Evolution of Random Number Generation in MUVES
2017-01-01
This report documents the evolution of random number generation in MUVES, including the mathematical basis and statistical justification for the algorithms used in the code. The working code provided produces results identical to the current... Earlier generators had questionable numerical and statistical properties, and the development of the modern system is traced through software change requests.
NASA Astrophysics Data System (ADS)
Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.
2018-05-01
Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a-priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions. The key feature of this framework allows the system error bound between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to be less than an a-priori, user-defined worst-case performance bound, and hence, it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, will be developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 I/F, and minimum error handling. Pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "baseline code", was completed in January 2012. The baseline code includes the main module and eight basic sub-modules (Preparation module, Vertical Profile module, Classification module, SRT module, DSD module, Solver module, Input module, and Output module). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and Dual-frequency Precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. Pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007 and the near-real-time version operating at JAXA since 2007. The "baseline code" uses the current operational GSMaP code (V5.222), and its development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the database for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.
Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection
Liu, Wenfen
2017-01-01
Constrained spectral clustering (CSC) can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm via encoding landmark-based graph construction into a new CSC model and applying random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields similar results as its model size increases asymptotically; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed via the combination of our fast CSC algorithm and dimensionality reduction with random projection in the process of spectral ensemble clustering. We demonstrate, by presenting theoretical analysis and empirical results, that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the stage of consensus clustering, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering in which each point has its own weight. PMID:29312447
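The sketch below isolates one ingredient of the ensemble stage, dimensionality reduction by Gaussian random projection before k-means, and checks that clustering accuracy is approximately preserved; it is not the landmark-based constrained spectral clustering model itself, and the synthetic data and dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.random_projection import GaussianRandomProjection

X, y = make_blobs(n_samples=2000, n_features=500, centers=5, random_state=0)
Xp = GaussianRandomProjection(n_components=50, random_state=0).fit_transform(X)

for name, data in [("original 500-d", X), ("random-projected 50-d", Xp)]:
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(data)
    print(f"{name}: adjusted Rand index = {adjusted_rand_score(y, labels):.2f}")
```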
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that CMRH is the only method with a long-term recurrence that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
A Theoretical Basis for Adult Learning Facilitation: Review of Selected Articles
ERIC Educational Resources Information Center
Muneja, Mussa S.
2015-01-01
The aim of this paper is to synthesize a theoretical basis for adult learning facilitation in order to provide a valuable systematic resource in the field of adult education. The paper has reviewed 6 journal articles with topics ranging from theory of andragogy; the effect of globalization on adult learning; the contribution of Malcolm Knowles;…
A comparison of SAR ATR performance with information theoretic predictions
NASA Astrophysics Data System (ADS)
Blacknell, David
2003-09-01
Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.
Topological properties of the limited penetrable horizontal visibility graph family
NASA Astrophysics Data System (ADS)
Wang, Minggang; Vilela, André L. M.; Du, Ruijin; Zhao, Longfeng; Dong, Gaogao; Tian, Lixin; Stanley, H. Eugene
2018-05-01
The limited penetrable horizontal visibility graph algorithm was recently introduced to map time series into complex networks. In this work, we extend this algorithm to create a directed-limited penetrable horizontal visibility graph and an image-limited penetrable horizontal visibility graph. We define two algorithms and provide theoretical results on the topological properties of these graphs associated with different types of real-value series. We perform several numerical simulations to check the accuracy of our theoretical results. Finally, we present an application of the directed-limited penetrable horizontal visibility graph to measure real-value time series irreversibility and an application of the image-limited penetrable horizontal visibility graph that discriminates noise from chaos. We also propose a method to measure the systematic risk using the image-limited penetrable horizontal visibility graph, and the empirical results show the effectiveness of our proposed algorithms.
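The basic visibility rule behind these graphs can be illustrated with a short sketch; the assumptions (undirected output as an edge list, the penetrable parameter named rho) are ours, not the authors'.

```python
import numpy as np

def lphvg_edges(x, rho=1):
    """Limited penetrable horizontal visibility graph: a minimal sketch.

    Two time points i < j are linked if at most `rho` intermediate points
    reach the horizontal line min(x[i], x[j]); rho = 0 recovers the ordinary
    horizontal visibility graph.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            penetrations = int(np.sum(x[i + 1:j] >= min(x[i], x[j])))
            if penetrations <= rho:
                edges.append((i, j))
    return edges

# Example: a short series mapped to a network with one penetration allowed.
print(lphvg_edges([1.0, 3.0, 2.0, 4.0, 1.5], rho=1))
```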
An algorithm for the basis of the finite Fourier transform
NASA Technical Reports Server (NTRS)
Santhanam, Thalanayar S.
1995-01-01
The Finite Fourier Transformation matrix (F.F.T.) plays a central role in the formulation of quantum mechanics in a finite dimensional space studied by the author over the past couple of decades. An outstanding problem which still remains open is to find a complete basis for the F.F.T. In this paper we suggest a simple algorithm to find the eigenvectors of the F.F.T.
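For context only (this is not the author's algorithm), the eigenvalue structure that makes the basis problem nontrivial can be seen numerically: the finite Fourier transform matrix satisfies F^4 = I, so its eigenvalues are restricted to +1, -1, +i, -i and are degenerate, which leaves the eigenbasis non-unique. A small NumPy illustration, with our own choice of normalization and sign convention:

```python
import numpy as np

N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)   # finite Fourier transform matrix

print(np.allclose(np.linalg.matrix_power(F, 4), np.eye(N)))  # F^4 = I
vals, vecs = np.linalg.eig(F)
print(np.round(vals, 6))   # eigenvalues cluster at +1, -1, +i, -i (degenerate)
```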
Decryption of pure-position permutation algorithms.
Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang
2004-07-01
Pure position permutation image encryption algorithms, a commonly used class of image encryption methods and the subject of this work, are unfortunately fragile under known-plaintext attack. In view of this weakness, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; and then, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown.
A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem
NASA Astrophysics Data System (ADS)
Jäger, Gerold; Zhang, Weixiong
The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.
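The assignment-problem relaxation that such approaches build on is easy to sketch. The code below is an illustrative sketch only (it uses SciPy's assignment solver and omits the SAT-based subtour elimination that the actual algorithm relies on); it checks whether the optimal assignment on a directed graph already forms a single Hamiltonian cycle.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ap_relaxation_cycle(adj):
    """Assignment-problem relaxation of the directed Hamiltonian cycle problem.

    Edge cost 0, non-edge cost large, self-loops forbidden.  If the optimal
    assignment is one cycle through all vertices made only of real edges, it
    is a Hamiltonian cycle; otherwise it decomposes into subtours that a full
    solver must eliminate (e.g., with SAT constraints).
    """
    adj = np.asarray(adj, dtype=bool)
    n = adj.shape[0]
    BIG = 10**6
    cost = np.where(adj, 0, BIG)
    np.fill_diagonal(cost, BIG)
    rows, cols = linear_sum_assignment(cost)
    succ = dict(zip(rows.tolist(), cols.tolist()))
    tour, v = [0], succ[0]
    while v != 0:
        tour.append(v)
        v = succ[v]
    return len(tour) == n and cost[rows, cols].sum() == 0, tour
```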
ERIC Educational Resources Information Center
Christensen, Mette Krogh; Laursen, Dan Norgaard; Sorensen, Jan Kahr
2011-01-01
Background: The application of a social theory of learning and the notion of situated learning as a theoretical basis for understanding students' learning in PE is broadly recognised. Nevertheless, it is far more unusual for this theoretical approach to provide a basis for understanding learning processes in talent development in elite sport.…
Rani R, Hannah Jessie; Victoire T, Aruldoss Albert
2018-01-01
This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms in general tend to get trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computations are becoming popular due to their robust nature in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure with the differential search (DS) algorithm functionally integrated with the particle swarm optimization (PSO). To surmount the local trapping of the search procedure, a new population initialization scheme is proposed using a Logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case for wind speed prediction to expound the superiority of the proposed RBF training algorithm in terms of prediction accuracy.
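The Logistic chaotic initialization mentioned above is simple to reproduce. A minimal sketch is given below, assuming the fully chaotic parameter r = 4 and a seed away from the map's fixed points; all names and defaults are ours, not the authors'.

```python
import numpy as np

def logistic_chaotic_population(pop_size, dim, lower, upper, x0=0.7, r=4.0):
    """Population initialization from a Logistic chaotic sequence (sketch).

    Iterates x <- r * x * (1 - x) and maps each value into the search bounds,
    which is intended to spread the initial population more diversely than
    plain uniform sampling.
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    x = x0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for d in range(dim):
            x = r * x * (1.0 - x)
            pop[i, d] = lower[d] + x * (upper[d] - lower[d])
    return pop

# Example: 30 candidate RBF parameter vectors in [-1, 1]^10.
init = logistic_chaotic_population(30, 10, [-1.0] * 10, [1.0] * 10)
```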
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is whether and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at: discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed with particular attention paid to models nonlinear in the time interval θ.
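For reference, the Hamiltonian definition invoked above (scalar product of rates and generalized momenta) can be written, in the continuous limit and with our own notation, as

```latex
H \;=\; \sum_{i} p_{i}\,\dot{x}_{i} \;=\; \mathbf{p}\cdot\mathbf{f}(\mathbf{x},\mathbf{u}),
\qquad \dot{x}_{i} \;=\; f_{i}(\mathbf{x},\mathbf{u}),
```

with the discrete classes replacing the rates by difference quotients over the time interval θ.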
The Mucciardi-Gose Clustering Algorithm and Its Applications in Automatic Pattern Recognition.
A procedure known as the Mucciardi-Gose clustering algorithm, CLUSTR, for determining the geometrical or statistical relationships among groups of N...discussion of clustering algorithms is given; the particular advantages of the Mucciardi-Gose procedure are described. The mathematical basis for, and the
NASA Astrophysics Data System (ADS)
Lodhi, Ehtisham; Lodhi, Zeeshan; Noman Shafqat, Rana; Chen, Fieda
2017-07-01
Photovoltaic (PV) systems usually employ maximum power point tracking (MPPT) techniques to increase their efficiency. The performance of a PV system can be boosted by operating it at its peak power point, so that maximal power is delivered to the load. The efficiency of a PV system usually depends upon irradiance, temperature and array architecture. A PV array exhibits a non-linear V-I curve, and the maximum power point on the V-P curve varies with changing environmental conditions. MPPT methods guarantee that a PV module is regulated at the reference voltage so that the maximal output power is fully used. This paper gives an analysis of two widely employed MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance (INC). Their performance is evaluated and compared through theoretical analysis and digital simulation on the basis of response time and efficiency under varying irradiance and temperature conditions using Matlab/Simulink.
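A minimal sketch of the P&O rule compared in the paper is given below; the step size, function name, and state handling are our own illustrative choices, and the INC method is not shown.

```python
def perturb_and_observe(v_meas, p_meas, v_ref, state, step=0.01):
    """One iteration of the Perturb and Observe (P&O) MPPT rule: a sketch.

    Compares the present power with the previous one: if the last perturbation
    of the reference voltage increased power, keep perturbing in the same
    direction; otherwise reverse direction.  `state` holds (prev_v, prev_p).
    """
    prev_v, prev_p = state
    dP = p_meas - prev_p
    dV = v_meas - prev_v
    if dP == 0:
        pass                      # at (or near) the maximum power point
    elif (dP > 0) == (dV > 0):
        v_ref += step             # moving toward the MPP: keep direction
    else:
        v_ref -= step             # moved past the MPP: reverse direction
    return v_ref, (v_meas, p_meas)
```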
NASA Astrophysics Data System (ADS)
Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.
2017-08-01
We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and a quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.
A New Standard for Assessing the Performance of High Contrast Imaging Systems
NASA Astrophysics Data System (ADS)
Jensen-Clem, Rebecca; Mawet, Dimitri; Gomez Gonzalez, Carlos A.; Absil, Olivier; Belikov, Ruslan; Currie, Thayne; Kenworthy, Matthew A.; Marois, Christian; Mazoyer, Johan; Ruane, Garreth; Tanner, Angelle; Cantalloube, Faustine
2018-01-01
As planning for the next generation of high contrast imaging instruments (e.g., WFIRST, HabEx, and LUVOIR, TMT-PFI, EELT-EPICS) matures and second-generation ground-based extreme adaptive optics facilities (e.g., VLT-SPHERE, Gemini-GPI) finish their principal surveys, it is imperative that the performance of different designs, post-processing algorithms, observing strategies, and survey results be compared in a consistent, statistically robust framework. In this paper, we argue that the current industry standard for such comparisons—the contrast curve—falls short of this mandate. We propose a new figure of merit, the “performance map,” that incorporates three fundamental concepts in signal detection theory: the true positive fraction, the false positive fraction, and the detection threshold. By supplying a theoretical basis and recipe for generating the performance map, we hope to encourage the widespread adoption of this new metric across subfields in exoplanet imaging.
A broadband terahertz ultrathin multi-focus lens
He, Jingwen; Ye, Jiasheng; Wang, Xinke; Kan, Qiang; Zhang, Yan
2016-01-01
Ultrathin transmission metasurface devices are designed on the basis of the Yang-Gu amplitude-phase retrieval algorithm for focusing the terahertz (THz) radiation into four or nine spots with focal spacing of 2 or 3 mm at a frequency of 0.8 THz. The focal properties are experimentally investigated in detail, and the results agree well with the theoretical expectations. The designed THz multi-focus lens (TMFL) demonstrates a good focusing function over a broad frequency range from 0.3 to 1.1 THz. As a transmission-type metasurface device, the diffraction efficiency of the TMFL can be as high as 33.92% at the designed frequency. The imaging function of the TMFL is also demonstrated experimentally and clear images are obtained. The proposed method produces an ultrathin, low-cost, and broadband multi-focus lens for THz-band applications. PMID:27346430
Song, Qiang; Liu, Fang; Wen, Guanghui; Cao, Jinde; Yang, Xinsong
2017-04-24
This paper considers position-based consensus in a network of agents with double-integrator dynamics and directed topology. Two types of distributed observer algorithms are proposed to solve the consensus problem by utilizing continuous and intermittent position measurements, respectively, where each observer does not interact with any other observers. For the case of continuous communication between network agents, some convergence conditions are derived for reaching consensus in the network with a single constant delay or multiple time-varying delays on the basis of the eigenvalue analysis and the descriptor method. When the network agents can only obtain intermittent position data from local neighbors at discrete time instants, the consensus in the network without time delay or with nonuniform delays is investigated by using Wirtinger's inequality and the delayed-input approach. Numerical examples are given to illustrate the theoretical analysis.
Estimators of primary production for interpretation of remotely sensed data on ocean color
NASA Technical Reports Server (NTRS)
Platt, Trevor; Sathyendranath, Shubha
1993-01-01
The theoretical basis is explained for some commonly used estimators of daily primary production in a vertically uniform water column. These models are recast into a canonical form, with dimensionless arguments, to facilitate comparison with each other and with an analytic solution. The limitations of each model are examined. The values of the photoadaptation parameter I(k) observed in the ocean are analyzed, and I(k) is used as a scale to normalize the surface irradiance. The range of this scaled irradiance is presented. An equation is given for estimation of I(k) from recent light history. It is shown how the models for water column production can be adapted for estimation of the production in finite layers. The distinctions between model formulation, model implementation and model evaluation are discussed. Recommendations are given on the choice of algorithm for computation of daily production according to the degree of approximation acceptable in the result.
Robot path planning algorithm based on symbolic tags in dynamic environment
NASA Astrophysics Data System (ADS)
Vokhmintsev, A.; Timchenko, M.; Melnikov, A.; Kozko, A.; Makovetskii, A.
2017-09-01
The present work proposes new heuristic algorithms for path planning of a mobile robot in an unknown dynamic environment; the algorithms have theoretically proven estimates of computational complexity and have been validated on specific applied problems.
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors are different for different algorithms. Summaries on how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also compare theoretically the classification effectiveness of the discriminant functions from F-S, HTC and F-K with that of LDF and GMF, and between the linear-mapping-based algorithms and the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, each image consisting of 64 x 64 pixels.
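The pseudo-inverse shortcut for the underdetermined case can be made concrete with a small sketch. Assuming mean-centered training images stored as the columns of X (d pixels by n images, n much smaller than d), the Moore-Penrose pseudo-inverse of the large, singular pixel matrix X Xᵀ can be formed from the small, non-singular image correlation matrix XᵀX; the names and centering convention below are ours.

```python
import numpy as np

def pixel_matrix_pinv(X):
    """pinv(X @ X.T) via the small n x n image correlation matrix (sketch):
        pinv(X X^T) = X (X^T X)^{-2} X^T,
    valid when X (d x n, n << d) has full column rank."""
    G_inv = np.linalg.inv(X.T @ X)        # small n x n inverse
    return X @ G_inv @ G_inv @ X.T

# Quick check against the direct (and much more expensive) computation.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))        # 500 "pixels", 20 training images
assert np.allclose(pixel_matrix_pinv(X), np.linalg.pinv(X @ X.T))
```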
Kadiyala, Akhil; Kaur, Devinder; Kumar, Ashok
2013-02-01
The present study developed a novel approach to modeling indoor air quality (IAQ) of a public transportation bus by the development of hybrid genetic-algorithm-based neural networks (also known as evolutionary neural networks) with input variables optimized using regression trees, referred to as the GART approach. This study validated the applicability of the GART modeling approach in solving complex nonlinear systems by accurately predicting the monitored contaminants of carbon dioxide (CO2), carbon monoxide (CO), nitric oxide (NO), sulfur dioxide (SO2), 0.3-0.4 microm sized particle numbers, 0.4-0.5 microm sized particle numbers, particulate matter (PM) concentrations less than 1.0 microm (PM1.0), and PM concentrations less than 2.5 microm (PM2.5) inside a public transportation bus operating on 20% grade biodiesel in Toledo, OH. First, the important variables affecting each monitored in-bus contaminant were determined using regression trees. Second, the analysis of variance was used as a complementary sensitivity analysis to the regression tree results to determine a subset of statistically significant variables affecting each monitored in-bus contaminant. Finally, the identified subsets of statistically significant variables were used as inputs to develop three artificial neural network (ANN) models. The models developed were regression tree-based back-propagation network (BPN-RT), regression tree-based radial basis function network (RBFN-RT), and GART models. Performance measures were used to validate the predictive capacity of the developed IAQ models. The results from this approach were compared with the results obtained from using a theoretical approach and a generalized practicable approach to modeling IAQ that included the consideration of additional independent variables when developing the aforementioned ANN models. The hybrid GART models were able to capture the majority of the variance in the monitored in-bus contaminants. The genetic-algorithm-based neural network IAQ models outperformed the traditional ANN methods of the back-propagation and the radial basis function networks. The novelty of this research is the development of a novel approach to modeling vehicular indoor air quality by integration of the advanced methods of genetic algorithms, regression trees, and the analysis of variance for the monitored in-vehicle gaseous and particulate matter contaminants, and comparing the results obtained from using the developed approach with conventional artificial intelligence techniques of back propagation networks and radial basis function networks. This study validated the newly developed approach using holdout and threefold cross-validation methods. These results are of great interest to scientists, researchers, and the public in understanding the various aspects of modeling an indoor microenvironment. This methodology can easily be extended to other fields of study also.
NASA Astrophysics Data System (ADS)
Yusupov, L. R.; Klochkova, K. V.; Simonova, L. A.
2017-09-01
The paper presents a methodology for modeling the chemical composition of a composite material via a genetic algorithm in order to optimize the manufacturing process of products. The paper also presents algorithms for these methods, based on an intelligent system for vermicular graphite iron design.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
Theoretical Basis and Correct Explanation of the Periodic System: Review and Update
ERIC Educational Resources Information Center
Schwarz, W. H. Eugen; Rich, Ronald L.
2010-01-01
Long-standing questions on the theoretical basis of the periodic system have been answered in recent years. A specific type of periodicity is imposed on all elements by the main groups just before and after the noble gases. The upper "n"p shells of these elements are unique because of their stabilized energies and the large gaps to the next…
ERIC Educational Resources Information Center
All-Union Inst. for Scientific and Technical Information, Moscow (USSR).
Reports given before the Committee on "Research on the Theoretical Basis of Information" of the International Federation for Documentation (FID/RI) are presented unaltered and unabridged in English or in Russian -- the language of their presentation. Each report is accompanied by an English or Russian resume. Generally, only original…
A new parallel algorithm of MP2 energy calculations.
Ishimura, Kazuya; Pulay, Peter; Nagase, Shigeru
2006-03-01
A new parallel algorithm has been developed for second-order Møller-Plesset perturbation theory (MP2) energy calculations. Its main projected applications are for large molecules, for instance, for the calculation of dispersion interaction. Tests on a moderate number of processors (2-16) show that the program has high CPU and parallel efficiency. Timings are presented for two relatively large molecules, taxol (C(47)H(51)NO(14)) and luciferin (C(11)H(8)N(2)O(3)S(2)), the former with the 6-31G* and 6-311G** basis sets (1,032 and 1,484 basis functions, 164 correlated orbitals), and the latter with the aug-cc-pVDZ and aug-cc-pVTZ basis sets (530 and 1,198 basis functions, 46 correlated orbitals). An MP2 energy calculation on C(130)H(10) (1,970 basis functions, 265 correlated orbitals) completed in less than 2 h on 128 processors.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
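The overall recipe (diagonalize the candidate matrix and replace its spectrum by the nearest probability vector) is easy to sketch. The version below uses the standard O(d log d) sort-based simplex projection rather than the linear-time routine described in the paper, takes the 2-norm in the Frobenius sense, and uses our own function names.

```python
import numpy as np

def nearest_physical_state(mu):
    """Nearest density matrix (Frobenius norm) to a Hermitian, unit-trace matrix mu: a sketch.

    Diagonalizes mu and replaces its eigenvalues by the closest probability
    distribution in Euclidean distance (projection onto the simplex).
    """
    vals, vecs = np.linalg.eigh(mu)
    u = np.sort(vals)[::-1]                        # eigenvalues in descending order
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho_idx = np.max(np.nonzero(u - (css - 1.0) / idx > 0)[0])
    theta = (css[rho_idx] - 1.0) / (rho_idx + 1)
    lam = np.maximum(vals - theta, 0.0)            # projected (physical) eigenvalues
    return (vecs * lam) @ vecs.conj().T

# Example: project a noisy "density matrix" with a negative eigenvalue.
mu = np.diag([0.7, 0.4, -0.1])
rho = nearest_physical_state(mu)
print(np.round(np.linalg.eigvalsh(rho), 6), np.trace(rho))
```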
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus, the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by setting initial conditions of the visual communication system as constant, different edge detection algorithms could be evaluated. This analysis is normally limited to linear shift-invariant filters so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
Angle-of-Arrival Assisted GNSS Collaborative Positioning.
Huang, Bin; Yao, Zheng; Cui, Xiaowei; Lu, Mingquan
2016-06-20
For outdoor and global navigation satellite system (GNSS) challenged scenarios, collaborative positioning algorithms are proposed to fuse information from GNSS satellites and terrestrial wireless systems. This paper derives the Cramer-Rao lower bound (CRLB) and algorithms for the angle-of-arrival (AOA)-assisted GNSS collaborative positioning. Based on the CRLB model and collaborative positioning algorithms, theoretical analyses are performed to specify the effects of various factors on the accuracy of collaborative positioning, including the number of users, their distribution and AOA measurement accuracy. Besides, the influences of the relative location of the collaborative users are also discussed in order to choose appropriate neighboring users, which helps reduce computational complexity. Simulations and actual experiments are carried out with several GNSS receivers in different scenarios, and the results are consistent with theoretical analysis.
Using Mathematics to Make Computing on Encrypted Data Secure and Practical
2015-12-01
Keywords: (LLL) lattice basis reduction algorithm, G-Lattice, cryptography, security, Gentry-Szydlo algorithm, Ring-LWE. ... with symmetry be further developed, in order to quantify the security of lattice-based cryptography, including especially the security of homomorphic... the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography. The new algorithm of Lenstra and Silverberg
NASA Astrophysics Data System (ADS)
Li, H.; Yang, Y.; Yongming, D.; Cao, B.; Qinhuo, L.
2017-12-01
Land surface temperature (LST) is a key parameter for hydrological, meteorological, climatological and environmental studies. During the past decades, many efforts have been devoted to the establishment of methodology for retrieving the LST from remote sensing data and significant progress has been achieved. Many operational LST products have been generated using different remote sensing data. The MODIS LST product (MOD11) is one of the most commonly used LST products, which is produced using a generalized split-window algorithm. Many validation studies have shown that the MOD11 LST product agrees well with ground measurements over vegetated and inland water surfaces; however, large negative biases of up to 5 K are present over arid regions. In addition, the land surface emissivities of MOD11 are estimated by assigning fixed emissivities according to a land cover classification dataset, which may introduce large errors to the LST product due to misclassification of the land cover. Therefore, a new MODIS LST&E product (MOD21) is developed based on the temperature emissivity separation (TES) algorithm, and the water vapor scaling (WVS) method has also been incorporated into the MODIS TES algorithm for improving the accuracy of the atmospheric correction. The MOD21 product will be released with MODIS collection 6 Tier-2 land products in 2017. Because the MOD21 products are not yet available, the MODTES algorithm was implemented, including the TES and WVS methods, as detailed in the MOD21 Algorithm Theoretical Basis Document. The MOD21 and MOD11 C6 LST products are validated using ground measurements and ASTER LST products collected in an arid area of Northwest China during the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) experiment. In addition, lab emissivity spectra of four sand dunes in Northwest China are also used to validate the MOD21 and MOD11 emissivity products.
Automated Leaf Tracking using Multi-view Image Sequences of Maize Plants for Leaf-growth Monitoring
NASA Astrophysics Data System (ADS)
Das Choudhury, S.; Awada, T.; Samal, A.; Stoerger, V.; Bashyam, S.
2017-12-01
Extraction of phenotypes with botanical importance by analyzing plant image sequences has the desirable advantages of non-destructive temporal phenotypic measurements of a large number of plants with little or no manual intervention in a relatively short period of time. The health of a plant is best interpreted by the emergence timing and temporal growth of individual leaves. For automated leaf growth monitoring, it is essential to track each leaf throughout the life cycle of the plant. Plants are constantly changing organisms with increasing complexity in architecture due to variations in self-occlusions and phyllotaxy, i.e., arrangements of leaves around the stem. The leaf cross-overs pose challenges to accurately track each leaf using a single-view image sequence. Thus, we introduce a novel automated leaf tracking algorithm using a graph theoretic approach by multi-view image sequence analysis based on the determination of leaf-tips and leaf-junctions in the 3D space. The basis of the leaf tracking algorithm is that, in a maize plant, the leaves emerge bottom-up and the direction of leaf emergence strictly alternates. The algorithm involves labeling of the individual parts of a plant, i.e., leaves and stem, following graphical representation of the plant skeleton, i.e., one-pixel wide connected line obtained from the binary image. The length of the leaf is measured by the number of pixels in the leaf skeleton. To evaluate the performance of the algorithm, a benchmark dataset is indispensable. Thus, we publicly release University of Nebraska-Lincoln Component Plant Phenotyping dataset-2 (UNL-CPPD-2) consisting of images of 20 maize plants captured by the visible light camera of the Lemnatec Scanalyzer 3D high-throughput plant phenotyping facility once daily for 60 days from 10 different views. The dataset aims to facilitate the development and evaluation of leaf tracking algorithms and their uniform comparisons.
NASA Astrophysics Data System (ADS)
Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza
2012-06-01
It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. It is arguably true, nonetheless, that decision-level fusion is equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. Thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step for successful implementation of such decision-fusion systems. In this paper, we have presented the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances are analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms are selected and their applications are extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods and their performance is tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, i.e. cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide for fair performance comparison of the selected decision-fusion algorithms. For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported in this paper.
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. Presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation Illustrates theoretical concepts with reference to practical applications Provides insights into the design parameters of real systems
Anderson, I M; Bezdek, J C
1984-01-01
This paper introduces a new theory for the tangential deflection and curvature of plane discrete curves. Our theory applies to discrete data in either rectangular boundary coordinate or chain coded formats: its rationale is drawn from the statistical and geometric properties associated with the eigenvalue-eigenvector structure of sample covariance matrices. Specifically, we prove that the nonzero entry of the commutator of a pair of scatter matrices constructed from discrete arcs is related to the angle between their eigenspaces. And further, we show that this entry is, in certain limiting cases, also proportional to the analytical curvature of the plane curve from which the discrete data are drawn. These results lend a sound theoretical basis to the notions of discrete curvature and tangential deflection; and moreover, they provide a means for computationally efficient implementation of algorithms which use these ideas in various image processing contexts. As a concrete example, we develop the commutator vertex detection (CVD) algorithm, which identifies the location of vertices in shape data based on excessive cumulative tangential deflection; and we compare its performance to several well established corner detectors that utilize the alternative strategy of finding (approximate) curvature extrema.
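The commutator quantity at the core of the approach is straightforward to compute. The sketch below is only an illustration in the spirit of the CVD idea, not a reproduction of the published algorithm: the window size, thresholding, and normalization choices of the actual detector are omitted and the names are ours.

```python
import numpy as np

def scatter(points):
    """2x2 scatter matrix (unnormalized sample covariance) of a discrete arc."""
    P = np.asarray(points, dtype=float)
    P = P - P.mean(axis=0)
    return P.T @ P

def commutator_deflection(curve, window=5):
    """Illustrative deflection score based on the scatter-matrix commutator.

    For each interior point, form the scatter matrices of the discrete arcs
    before and after it and record the single independent entry of their
    commutator (which is antisymmetric for symmetric inputs); large
    magnitudes indicate strong deflection, i.e. candidate vertices.
    """
    curve = np.asarray(curve, dtype=float)
    scores = np.zeros(len(curve))
    for i in range(window, len(curve) - window):
        SA = scatter(curve[i - window:i + 1])
        SB = scatter(curve[i:i + window + 1])
        C = SA @ SB - SB @ SA
        scores[i] = abs(C[0, 1])
    return scores
```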
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract a rational multifractal nature.
Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing.
Sun, Tao; Jiang, Hao; Cheng, Lizhi
2017-08-25
The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka-Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.
17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria
Code of Federal Regulations, 2011 CFR
2011-04-01
... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...
17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria
Code of Federal Regulations, 2012 CFR
2012-04-01
... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-27
... provides a ``menu'' of matching algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different matching algorithms on a class-by-class basis. The menu includes, among other choices, the ultimate matching algorithm (``UMA''), as well as price-time...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... class-by-class basis which electronic allocation algorithm \\6\\ would apply for rotations. Currently Rule... opening price (with multiple quotes and orders being ranked in accordance with the allocation algorithm in... and quotes ranked in accordance with the allocation algorithm in effect for the class). Any remaining...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-18
... Change, as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms May 12, 2010... allocation algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different allocation algorithms on a class-by-class basis. The menu includes, among...
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Analyzing the BBOB results by means of benchmarking concepts.
Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C
2015-01-01
We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the "best" one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
NASA Technical Reports Server (NTRS)
Oza, Nikunji C.
2005-01-01
Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
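The online bagging half of the idea can be stated in a few lines: each arriving example is shown to each base model a Poisson(1)-distributed number of times, which approximates bootstrap resampling in a single pass over the data. A minimal sketch follows; the `update` method on the base models is a hypothetical stand-in for whatever incremental learner is used.

```python
import numpy as np

def online_bagging_update(models, x, y, rng=np.random.default_rng()):
    """One step of online bagging: a minimal sketch.

    Instead of drawing a bootstrap sample of the whole training set (batch
    bagging), the incoming example (x, y) is presented to every base model
    k ~ Poisson(1) times, mimicking the bootstrap in a single pass.
    """
    for model in models:
        k = rng.poisson(1.0)
        for _ in range(k):
            model.update(x, y)   # hypothetical incremental-learning method
    return models
```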
Hong, Xia
2006-07-01
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
Numerically Exact Calculation of Rovibrational Levels of Cl^-H_2O
NASA Astrophysics Data System (ADS)
Wang, Xiao-Gang; Carrington, Tucker
2014-06-01
Large amplitude vibrations of Van der Waals clusters are important because they reveal large regions of a potential energy surface (PES). To calculate spectra of Van der Waals clusters it is common to use an adiabatic approximation. When coupling between intra- and inter-molecular coordinates is important, non-adiabatic coupling cannot be neglected and it is therefore critical to develop and test theoretical methods that couple both types of coordinates. We have developed new product basis and contracted basis Lanczos methods for Van der Waals complexes and tested them by computing rovibrational energy levels of Cl^-H_2O. The new product basis is made of functions of the inter-monomer distance, Wigner functions that depend on Euler angles specifying the orientation of H_2O with respect to a frame attached to the inter-monomer Jacobi vector, basis functions for H_2O vibration, and Wigner functions that depend on Euler angles specifying the orientation of the inter-monomer Jacobi vector with respect to a space-fixed frame. An advantage of this product basis is that it can be used to make an efficient contracted basis by replacing the vibrational basis functions for the monomer with monomer vibrational wavefunctions. Due to weak coupling between intra- and inter-molecular coordinates, only a few tens of monomer vibrational wavefunctions are necessary. The validity of the two new methods is established by comparing energy levels with benchmark rovibrational levels obtained with polyspherical coordinates and spherical harmonic type basis functions. For all bases, product structure is exploited to calculate eigenvalues with the Lanczos algorithm. For Cl^-H_2O, we are able, for the first time, to compute accurate splittings due to tunnelling between the two equivalent C_s minima. We use the PES of Rheinecker and Bowman (RB). Our results are in good agreement with experiment for the five fundamental bands observed. J. Rheinecker and J. M. Bowman, J. Chem. Phys. 124, 131102 (2006); J. Rheinecker and J. M. Bowman, J. Chem. Phys. 125, 133206 (2006); S. Horvath, A. B. McCoy, B. M. Elliott, G. H. Weddle, J. R. Roscioli, and M. A. Johnson, J. Phys. Chem. A 114, 1556 (2010).
Meena, K; Muthu, K; Meenatchi, V; Rajasekar, M; Bhagavannarayana, G; Meenakshisundaram, S P
2014-04-24
Transparent optical quality single crystals of imidazolium L-tartrate (IMLT) were grown by conventional slow evaporation solution growth technique. Crystal structure of the as-grown IMLT was determined by single crystal X-ray diffraction analysis. Thermal analysis reveals the purity of the crystal and the sample is stable up to the melting point. Good transmittance in the visible region is observed and the band gap energy is estimated using diffuse reflectance data by the application of Kubelka-Munk algorithm. The powder X-ray diffraction study reveals the crystallinity of the as-grown crystal and it is compared with that of the experimental one. An additional peak in high resolution X-ray diffraction (HRXRD) indicates the presence of an internal structural low angle boundary. Second harmonic generation (SHG) activity of IMLT is significant as estimated by Kurtz and Perry powder technique. HOMO-LUMO energies and first-order molecular hyperpolarizability of IMLT have been evaluated using density functional theory (DFT) employing B3LYP functional and 6-31G(d,p) basis set. The optimized geometry closely resembles the ORTEP. The vibrational patterns present in the molecule are confirmed by FT-IR coinciding with theoretical patterns. Copyright © 2014 Elsevier B.V. All rights reserved.
Application of Improved APO Algorithm in Vulnerability Assessment and Reconstruction of Microgrid
NASA Astrophysics Data System (ADS)
Xie, Jili; Ma, Hailing
2018-01-01
Artificial Physics Optimization (APO) has good global search ability, avoids the premature convergence phenomenon seen in the PSO algorithm, and offers stable, fast convergence and robustness. On the basis of the vector-model APO, a reactive power optimization algorithm based on an improved APO algorithm is proposed that accounts for the static structure and dynamic operation characteristics of a microgrid. The simulation test is carried out on the IEEE 30-bus system, and the results show that the algorithm has better efficiency and accuracy compared with other optimization algorithms.
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
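The theoretical prediction used for the check is Weyl's law; for a two-dimensional domain of area A and perimeter L, the cumulative number of levels below wavenumber k is approximately (with the minus sign for Dirichlet and the plus sign for Neumann boundary conditions; notation ours):

```latex
N(k) \;\approx\; \frac{A}{4\pi}\,k^{2} \;\mp\; \frac{L}{4\pi}\,k \;+\; \mathcal{O}(1).
```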
A VLSI architecture for simplified arithmetic Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.
A Multipath Mitigation Algorithm for vehicle with Smart Antenna
NASA Astrophysics Data System (ADS)
Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang
2018-01-01
In this paper, an adaptive antenna array method is used to eliminate multipath interference at the GPS L1 frequency. The anti-multipath performance of the array is simulated and verified using the power inversion (PI) algorithm and the minimum variance distortionless response (MVDR) algorithm, the design is implemented on an FPGA, and an actual road test is carried out in a CBD environment. Theoretical analysis of the LCMV criterion and of the principles and characteristics of the PI and MVDR algorithms verifies that the anti-multipath performance of the MVDR algorithm is better than that of the PI algorithm. The results provide guidance and a reference for satellite navigation in vehicle engineering practice.
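For reference, the two beamformers compared above reduce to simple closed-form weight vectors once the array covariance matrix R has been estimated from snapshots. The sketch below is a hedged illustration; the steering-vector handling and normalization conventions are ours.

```python
import numpy as np

def pi_weights(R):
    """Power-inversion (PI) beamformer: a sketch.

    Minimizes output power subject to a fixed reference element (w[0] = 1),
    so strong interferers such as multipath are nulled without needing the
    satellite signal direction.
    """
    e1 = np.zeros(R.shape[0], dtype=R.dtype)
    e1[0] = 1.0
    w = np.linalg.solve(R, e1)
    return w / w[0]

def mvdr_weights(R, a):
    """MVDR beamformer: a sketch.

    Minimizes output power subject to a distortionless response toward the
    steering vector `a` (the known satellite direction), i.e. w^H a = 1.
    """
    Ra = np.linalg.solve(R, a)
    return Ra / (a.conj() @ Ra)

# R: array covariance matrix estimated from snapshots; a: steering vector.
```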
A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence
NASA Astrophysics Data System (ADS)
Liu, Hui; Jin, Cong
2017-03-01
In this paper, a novel image encryption algorithm based on quantum chaos is proposed. The keystreams are generated by the two-dimensional logistic map from the initial conditions and parameters. A keyed general Arnold scrambling algorithm is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. In order to obtain high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
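The Arnold scrambling stage is the easiest part to illustrate. The sketch below applies the classical cat map to a square image and is meant only as an illustration of position permutation; it does not reproduce the paper's keyed generalization, quantum-chaotic keystream, or folding diffusion.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Classical Arnold (cat map) scrambling of a square image: a sketch.

    Each pixel at (x, y) of an N x N channel is moved to
    ((x + y) mod N, (x + 2y) mod N); repeating the map a secret number of
    times (part of the key) permutes pixel positions without changing their
    values.  Works per channel for (N, N) or (N, N, 3) arrays.
    """
    img = np.asarray(img)
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```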
Graph Matching: Relax at Your Own Risk.
Lyzinski, Vince; Fishkind, Donniell E; Fiori, Marcelo; Vogelstein, Joshua T; Priebe, Carey E; Sapiro, Guillermo
2016-01-01
Graph matching - aligning a pair of graphs to minimize their edge disagreements - has received widespread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. This attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, thereby enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.
Variable screening via quantile partial correlation
Ma, Shujie; Tsai, Chih-Ling
2016-01-01
In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
Optimal Location through Distributed Algorithm to Avoid Energy Hole in Mobile Sink WSNs
Qing-hua, Li; Wei-hua, Gui; Zhi-gang, Chen
2014-01-01
In a multihop data-collection sensor network, nodes near the sink must relay data from remote nodes and thus dissipate energy much faster, suffering premature death. This phenomenon causes an energy hole near the sink, seriously damaging network performance. In this paper, we first compute, through theoretical analysis, the energy consumption of each node when the sink is set at any point in the network; we then propose an online distributed algorithm that adaptively adjusts the sink position based on the actual energy consumption of each node to achieve the maximum actual lifetime. Theoretical analysis and experimental results show that the proposed algorithms significantly improve the lifetime of the wireless sensor network: the residual energy remaining when the network dies is reduced by more than 30%, and the cost of moving the sink is relatively small. PMID:24895668
Anomaly clustering in hyperspectral images
NASA Astrophysics Data System (ADS)
Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.
2009-05-01
The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.
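
A minimal sketch of the spatial-plus-spectral clustering step might look as follows, with anomalous pixels as graph nodes, edges between spatially adjacent and spectrally similar pixels, and connected components as the object clusters. The spectral-angle similarity, the threshold and the function names are assumptions for illustration, not the authors' exact construction.

    import numpy as np
    import networkx as nx

    def cluster_anomalies(cube, mask, max_angle=0.1):
        # cube: (rows, cols, bands) hyperspectral image;
        # mask: boolean anomaly map (e.g. TAD output).
        # Pixels are joined when they are 8-neighbours AND spectrally similar
        # (spectral angle below max_angle, in radians).
        g = nx.Graph()
        pix = list(zip(*np.nonzero(mask)))
        g.add_nodes_from(pix)
        unit = cube / np.linalg.norm(cube, axis=2, keepdims=True)
        for (r, c) in pix:
            for dr, dc in [(0, 1), (1, -1), (1, 0), (1, 1)]:  # half of the 8-neighbourhood
                nb = (r + dr, c + dc)
                if nb in g:      # only anomalous, in-bounds neighbours are nodes
                    cos = np.clip(unit[r, c] @ unit[nb], -1.0, 1.0)
                    if np.arccos(cos) < max_angle:
                        g.add_edge((r, c), nb)
        return list(nx.connected_components(g))   # one set of pixels per object
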
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Lee, Seung-Woo; Lee, Chang-Woo; Jeong, Hyunseok
2015-01-01
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted by an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to 1. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in Xu (Nat Photonics 8:113, 2014). The result shows good agreement with our theoretical analysis, exhibiting behavior comparable to the best 'cooling' of the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show how the total number of iterations is governed by the gap between the two lowest eigenvalues and by an error tolerance, defined as the probability that the finally obtained system state is in an unexpected (i.e., not the lowest) eigenstate.
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
Experimental and Theoretical Basis for a Closed-Form Spectral BRDF Model
2015-09-17
Dissertation by Samuel D. Butler, BS, MS, Major, USAF (AFIT-ENP-DS-15-S-021), presented to the Faculty of the Graduate School of Engineering and Management, Air Force Institute of Technology, Air University. Committee chairman: Michael A. Marciniak, PhD.
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems, based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions; they have the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches, for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
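
The Bernstein basis underlying the construction is easy to state; the short sketch below evaluates the degree-n univariate basis and checks the partition-of-unity property that allows the basis functions to be read as fuzzy membership functions. Function names are illustrative.

    import numpy as np
    from math import comb

    def bernstein_basis(n, x):
        # B[i] = C(n, i) * x^i * (1-x)^(n-i) on [0, 1]; the functions are
        # nonnegative and sum to one at every x (partition of unity).
        x = np.asarray(x, dtype=float)
        return np.array([comb(n, i) * x**i * (1 - x)**(n - i)
                         for i in range(n + 1)])

    x = np.linspace(0, 1, 5)
    B = bernstein_basis(3, x)
    assert np.allclose(B.sum(axis=0), 1.0)
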
Cross Validation of Rain Drop Size Distribution between GPM and Ground Based Polarimetric Radar
NASA Astrophysics Data System (ADS)
Chandra, C. V.; Biswas, S.; Le, M.; Chen, H.
2017-12-01
Dual-frequency precipitation radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite has reflectivity measurements at two independent frequencies, Ku- and Ka-band. Dual-frequency retrieval algorithms have traditionally been developed through forward, backward, and recursive approaches. However, these algorithms suffer from the "dual-value" problem when retrieving medium volume diameter from the dual-frequency ratio (DFR) in the rain region. To this end, a hybrid method has been proposed to perform raindrop size distribution (DSD) retrieval for GPM using a linear constraint of DSD along the rain profile to avoid the "dual-value" problem (Le and Chandrasekar, 2015). In the current GPM level 2 algorithm (Iguchi et al. 2017 - Algorithm Theoretical Basis Document), the Solver module retrieves a vertical profile of drop size distribution from dual-frequency observations and path-integrated attenuations; the algorithm details can be found in Seto et al. (2013). On the other hand, ground-based polarimetric radars have long been used to estimate drop size distributions (e.g., Gorgucci et al. 2002). In addition, coincident GPM and ground-based observations have been cross validated using careful overpass analysis. In this paper, we perform cross validation of raindrop size distribution retrievals from three sources, namely the hybrid method, the standard products from the Solver module, and DSD retrievals from ground polarimetric radars. Results are presented from two NEXRAD radars located in Dallas-Fort Worth, Texas (the KFWS radar) and Melbourne, Florida (the KMLB radar). The results demonstrate the ability of DPR observations to produce DSD estimates, which can be used subsequently to generate global DSD maps. References: Seto, S., T. Iguchi, T. Oki, 2013: The basic performance of a precipitation retrieval algorithm for the Global Precipitation Measurement mission's single/dual-frequency radar measurements. IEEE Transactions on Geoscience and Remote Sensing, 51(12), 5239-5251. Gorgucci, E., Chandrasekar, V., Bringi, V. N., and Scarchilli, G.: Estimation of Raindrop Size Distribution Parameters from Polarimetric Radar Measurements, J. Atmos. Sci., 59, 2373-2384, doi:10.1175/1520-0469(2002)059<2373:EORSDP>2.0.CO;2, 2002.
A Comparison of Techniques To Find Mirrored Hosts on the WWW.
ERIC Educational Resources Information Center
Bharat, Krishna; Broder, Andrei; Dean, Jefferey; Henzinger, Monika R.
2000-01-01
Compares several "top-down" algorithms for identifying mirrored hosts on the Web. The algorithms operate on the basis of URL strings and linkage data: the type of information about Web pages easily available from Web proxies and crawlers. Results reveal that the best approach is a combination of five algorithms: on test data this…
The Behavioral and Neural Mechanisms Underlying the Tracking of Expertise
Boorman, Erie D.; O’Doherty, John P.; Adolphs, Ralph; Rangel, Antonio
2013-01-01
Evaluating the abilities of others is fundamental for successful economic and social behavior. We investigated the computational and neurobiological basis of ability tracking by designing an fMRI task that required participants to use and update estimates of both people's and algorithms' expertise through observation of their predictions. Behaviorally, we find a model-based algorithm characterized subject predictions better than several alternative models. Notably, when the agent's prediction was concordant rather than discordant with the subject's own likely prediction, participants credited people more than algorithms for correct predictions and penalized them less for incorrect predictions. Neurally, many components of the mentalizing network—medial prefrontal cortex, anterior cingulate gyrus, temporoparietal junction, and precuneus—represented or updated expertise beliefs about both people and algorithms. Moreover, activity in lateral orbitofrontal and medial prefrontal cortex reflected behavioral differences in learning about people and algorithms. These findings provide basic insights into the neural basis of social learning. PMID:24360551
Scheduling nursing personnel on a microcomputer.
Liao, C J; Kao, C Y
1997-01-01
Suggests that with the shortage of nursing personnel, hospital administrators have to pay more attention to the needs of nurses to retain and recruit them. Also asserts that improving nurses' schedules is one of the most economic ways for the hospital administration to create a better working environment for nurses. Develops an algorithm for scheduling nursing personnel. Contrary to the current hospital approach, which schedules nurses on a person-by-person basis, the proposed algorithm constructs schedules on a day-by-day basis. The algorithm has inherent flexibility in handling a variety of possible constraints and goals, similar to other non-cyclical approaches. But, unlike most other non-cyclical approaches, it can also generate a quality schedule in a short time on a microcomputer. The algorithm was coded in C language and run on a microcomputer. The developed software is currently implemented at a leading hospital in Taiwan. The response to the initial implementation is quite promising.
Patwary, Nurmohammed; Preza, Chrysanthe
2015-01-01
A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
Was Euclid an Unnecessarily Sophisticated Psychologist?
ERIC Educational Resources Information Center
Arabie, Phipps
1991-01-01
The current state of multidimensional scaling using the city-block metric is reviewed, with attention to (1) substantive and theoretical issues; (2) recent algorithmic developments and their implications for analysis; (3) isometries with other metrics; (4) links to graph-theoretic models; and (5) prospects for future development. (SLD)
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate), or absent otherwise (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric: the only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, it is proved here that the probability of requiring a large value of this parameter to obtain a solution for a random graph decreases exponentially with the parameter, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The experimental results obtained are in accordance with the theoretical expectations. PMID:23349711
Automatic Syllabification in English: A Comparison of Different Algorithms
ERIC Educational Resources Information Center
Marchand, Yannick; Adsett, Connie R.; Damper, Robert I.
2009-01-01
Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the…
NASA Astrophysics Data System (ADS)
Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.
2007-09-01
Most direction-of-arrival (DOA) estimation problems have been posed in a two-dimensional (2D) scenario where only the azimuth angle needs to be estimated, but various practical situations call for a three-dimensional treatment, making it important to estimate both azimuth and elevation angles with high accuracy and low complexity. We present the theoretical and practical issues of DOA estimation using the Approximate Maximum Likelihood (AML) algorithm in a 3D scenario and show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, for field measurements; processing of the measured data for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
Theoretical study of the XP3 (X = Al, B, Ga) clusters
NASA Astrophysics Data System (ADS)
Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.
2012-05-01
The lowest singlet and triplet states of AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlated consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared to existent experimental and theoretical data. Relative energies were obtained with single point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolating to the complete basis set (CBS) limit.
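
The abstract does not spell out its CBS extrapolation scheme; one common choice consistent with the basis-set sequence used is the two-point inverse-cube formula, given here for orientation only:

    E_X = E_{\mathrm{CBS}} + A\,X^{-3}
    \quad\Longrightarrow\quad
    E_{\mathrm{CBS}} = \frac{X^{3}E_{X} - Y^{3}E_{Y}}{X^{3} - Y^{3}},

with cardinal numbers such as X = 4 (aug-cc-pVQZ) and Y = 5 (aug-cc-pV5Z). Whether the paper applies such a formula to total energies or to correlation energies only is not stated in the abstract.
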
Analysis of the Explosive Internal Impact on the Barriers of Building Structures
NASA Astrophysics Data System (ADS)
Siwiński, Jarosław; Stolarski, Adam
2017-10-01
This work concerns structural safety with respect to hazards arising from the detonation of an explosive charge located inside a building. The algorithms proposed in the paper for determining the parameters of the overpressure wave produced by the detonation of clustered explosive charges form the basis for numerical simulation analyses. Determination of the maximum peak pressure at the front of an internal blast wave is presented on the basis of reflected-wave analysis; the time history of the internal explosion is described by the overpressure phase only. The analysis of the load caused by internal detonation was carried out under undisturbed standard-atmosphere conditions. A load-determination algorithm was developed that takes into account the geometrical characteristics of the building barriers and rooms as well as the parameters of the environment in which the detonation occurs, including the influence of venting surfaces (windows, doors, ventilation ducts) on the overpressure wave parameters. A modification of the method for determining explosive overpressure presented by Cormie, Smith and Mays (2009) is proposed, developed on the basis of a substitute-impulse analysis for multiple overpressure pulses. To account for the pressure distribution of explosive gases on the barrier surface, the relationship describing the variation of the explosive-gas pressure over time and space was modified, taking into account the varying angles of incidence of the pressure wave on the barrier and the distance from the explosive charge to any point on the barrier surface. Using the developed procedure, the overpressure time history was determined at selected measurement points of a reference room, and a comparative analysis of the determined loadings against experimental results and the theoretical results of other authors, taken from the work of Weerhiejm et al. (2012), was carried out.
VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1
1987-11-01
...synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73] ... A schedulability test based on the rate-monotonic algorithm, namely task-lumping [Sha 87], was necessary to calculate the theoretically expected schedulability ... Guide, Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W., Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment.
Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 2. Users manual
NASA Astrophysics Data System (ADS)
Medgyesimitschang, L. N.; Putnam, J. M.
1982-05-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation from off-surface and aperture antennas on finite-length open or closed bodies of arbitrary cross section. The near fields and antenna coupling on such bodies are computed. The theoretical development underlying the algorithm is described in Volume 1 of this report.
NASA Astrophysics Data System (ADS)
Frisch, Michael J.; Binkley, J. Stephen; Schaefer, Henry F., III
1984-08-01
The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F+H2→FH+H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol-1 of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
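
The splitting described in the abstract follows the generic ADMM template for l1-regularized least squares; a compact sketch of that template (for real-valued data, not the authors' STAP-specific filter) is:

    import numpy as np

    def admm_l1(A, b, lam, rho=1.0, iters=200):
        # min_x 0.5*||A x - b||^2 + lam*||z||_1  subject to  x = z.
        m, n = A.shape
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        for _ in range(iters):
            # x-update: ridge-like linear solve via the cached Cholesky factor
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            # z-update: soft thresholding (the prox of the l1 norm)
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
            # dual ascent on the consensus constraint
            u = u + x - z
        return z

For the complex-valued filter weights of STAP the soft-threshold step acts on magnitudes rather than signs, a detail omitted in this real-valued sketch.
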
An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing
NASA Astrophysics Data System (ADS)
Zhao, Yunji; Pei, Hailong
In a vision-based autonomous landing system for UAVs, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to resolve the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using CamShift; second, detect feature points in the acquired region using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis
2007-07-01
This paper presents an extended 3-D exact rebinning formula in Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data, such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of the full 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which exploits only the low-statistics nonoblique data; it significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-26
... allocation algorithm shall apply for COB and/or COA executions on a class-by-class basis, subject to certain conditions. Currently, as described in more detail below, the allocation algorithms for COB and COA default to the allocation algorithms in effect for a given options class. As proposed, the rule change would...
Research on Laser Marking Speed Optimization by Using Genetic Algorithm
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
Laser Marking Machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%. PMID:25955831
Target Coverage in Wireless Sensor Networks with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao
2016-01-01
Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
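
The joint-detection computation at the heart of the formulation, together with a simple greedy selection loop, can be sketched as follows; the paper's PSCA differs, so the greedy routine and all names are illustrative assumptions.

    import numpy as np

    def joint_detection_prob(p):
        # Probability that at least one of several independent sensors
        # detects the target, given per-sensor detection probabilities p.
        return 1.0 - np.prod(1.0 - np.asarray(p))

    def greedy_eps_cover(P, eps):
        # P[i, j]: detection probability of sensor i on target j.
        # Pick sensors until every target's miss probability is <= eps
        # (assumes a feasible instance; returns chosen sensor indices).
        S, T = P.shape
        miss = np.ones(T)                 # current miss probability per target
        avail = set(range(S))
        chosen = []
        while miss.max() > eps and avail:
            # sensor giving the largest drop in total miss probability
            i = max(avail, key=lambda s: (miss * P[s]).sum())
            avail.remove(i)
            chosen.append(i)
            miss = miss * (1.0 - P[i])
        return chosen
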
NASA Astrophysics Data System (ADS)
Niu, Yingli; Li, Wenqiang; Peng, Qian; Geng, Hua; Yi, Yuanping; Wang, Linjun; Nan, Guangjun; Wang, Dong; Shuai, Zhigang
2018-04-01
MOlecular MAterials Property Prediction Package (MOMAP) is a software toolkit for molecular materials property prediction. It focuses on luminescent properties and charge mobility properties. This article contains a brief descriptive introduction of key features, theoretical models and algorithms of the software, together with examples that illustrate the performance. First, we present the theoretical models and algorithms for molecular luminescent properties calculation, which includes the excited-state radiative/non-radiative decay rate constant and the optical spectra. Then, a multi-scale simulation approach and its algorithm for the molecular charge mobility are described. This approach is based on hopping model and combines with Kinetic Monte Carlo and molecular dynamics simulations, and it is especially applicable for describing a large category of organic semiconductors, whose inter-molecular electronic coupling is much smaller than intra-molecular charge reorganisation energy.
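
Hopping-model mobility calculations of this kind typically evaluate the inter-molecular transfer rate in the semiclassical Marcus limit, where the electronic coupling V_if and the reorganisation energy lambda named in the abstract enter as

    k_{if} = \frac{|V_{if}|^{2}}{\hbar}
             \sqrt{\frac{\pi}{\lambda k_{B} T}}
             \exp\!\left[ -\frac{(\Delta G_{if} + \lambda)^{2}}{4 \lambda k_{B} T} \right],

with the resulting rates fed to the Kinetic Monte Carlo walk. This is the standard textbook form given for orientation; it is not quoted from the MOMAP documentation.
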
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution-reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
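
For orientation, iterative hard thresholding with an extrapolation step has the following generic shape for min_x 0.5*||Ax - b||^2 + lam*||x||_0. The paper's EPIHT operates on wavelet-frame coefficients and adds a line search, so this is a simplified sketch with assumed names and a fixed momentum weight.

    import numpy as np

    def epiht(A, b, lam, step=None, beta=0.5, iters=300):
        # step: gradient step size, defaults to 1/||A||^2;
        # beta: extrapolation (momentum) weight.
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2
        thresh = np.sqrt(2.0 * lam * step)        # prox of step*lam*||.||_0
        x = np.zeros(A.shape[1]); x_old = x
        for _ in range(iters):
            y = x + beta * (x - x_old)            # extrapolation step
            g = y - step * (A.T @ (A @ y - b))    # gradient step, smooth part
            x_old, x = x, np.where(np.abs(g) > thresh, g, 0.0)  # hard threshold
        return x
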
Optimization of fiber grating couplers on SOI using advanced search algorithms.
Wohlfeil, Benjamin; Zimmermann, Lars; Petermann, Klaus
2014-06-01
A one-dimensional fiber grating coupler is derived from a waveguide with random etches using implementations of particle swarm and genetic algorithms. The resulting gratings yield a theoretical coupling loss as low as 1.1 dB and prompt clear design rules for the layout of highly efficient fiber grating couplers.
Measuring Disorientation Based on the Needleman-Wunsch Algorithm
ERIC Educational Resources Information Center
Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel
2015-01-01
This study offers a new method to measure navigation disorientation in web based systems which is powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
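
Since the measure rests on the Needleman-Wunsch score, a small self-contained implementation helps; here the two sequences stand for the expected and the observed navigation paths, and the scoring values are illustrative.

    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        # Global alignment score by dynamic programming over the score table F.
        n, m = len(a), len(b)
        F = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            F[i][0] = i * gap
        for j in range(1, m + 1):
            F[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                F[i][j] = max(F[i - 1][j - 1] + s,   # align the two symbols
                              F[i - 1][j] + gap,     # gap in b
                              F[i][j - 1] + gap)     # gap in a
        return F[n][m]

    # e.g. ideal navigation path vs. actual click sequence
    print(needleman_wunsch("ABCDE", "ABXDE"))   # -> 3
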
Sexuality Education for Young People: A Theoretically Integrated Approach from Australia
ERIC Educational Resources Information Center
Goldman, Juliette D. G.
2010-01-01
Background: Teachers of sexuality education can often be uncertain about what theoretical basis and pedagogical strategies to use in their teaching. Sexuality education programmes designed by teachers can often show few evident theoretical principles applied in their construction. Thus, there seems to be a dearth of evidence of ways…
Exact consideration of data redundancies for spiral cone-beam CT
NASA Astrophysics Data System (ADS)
Lauritsch, Guenter; Katsevich, Alexander; Hirsch, Michael
2004-05-01
In multi-slice spiral computed tomography (CT) there is an obvious trend toward adding more and more detector rows, with numerous goals: volume coverage, isotropic spatial resolution, and speed. Consequently, there will be a variety of scan protocols optimized for clinical applications. Flexibility in table feed requires consideration of data redundancies to ensure efficient detector usage. Until recently this was achieved only by approximate reconstruction algorithms; however, the increasing cone angles create a need for exact treatment of the cone-beam geometry. A new, exact and efficient 3-PI algorithm for considering three-fold data redundancies was derived from a general theoretical framework based on 3D Radon inversion using Grangeat's formula. The 3-PI algorithm possesses a structure as simple and efficient as that of the previously proposed 1-PI method for non-redundant data: filtering is one-dimensional, performed along lines with variable tilt on the detector. This talk deals with a thorough evaluation of the performance of the 3-PI algorithm in comparison to the 1-PI method. Image quality of the 3-PI algorithm is superior: the prominent spiral artifacts and other discretization artifacts are significantly reduced due to averaging effects when redundant data are taken into account, and the signal-to-noise ratio is increased. The computational expense is comparable even to that of approximate algorithms. The 3-PI algorithm thus proves its practicability for applications in medical imaging. Other exact n-PI methods for n-fold data redundancies (n odd) can be deduced from the same general theoretical framework.
ERIC Educational Resources Information Center
Akhmetova, Daniya Z.; Chelnokova, Tatyana A.; Morozova, Ilona G.
2017-01-01
The article is devoted to the scientific heritage of educators and psychologists of Russia in the first quarter of the twentieth century. The aim of the research is the identification of the most significant ideas of P. P. Blonsky, L. S. Vygotsky, V. P. Kacshenko, and S. T. Shatsky, which formed the theoretical and methodological basis of inclusive…
Investigations of quantum heuristics for optimization
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui
We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numerical investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.
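
A toy statevector simulation of the quantum approximate optimization algorithm for MaxCut shows the alternating cost and mixer layers; it is a dense, exponential-memory illustration for a few qubits, not the hardware-oriented constructions studied in this work.

    import numpy as np
    from itertools import product

    def qaoa_maxcut_expectation(edges, n, gammas, betas):
        # <C> for depth-p QAOA on MaxCut of an n-vertex graph.
        dim = 2 ** n
        # cost value C(z) = number of cut edges, for every basis bitstring z
        z = np.array(list(product([0, 1], repeat=n)))
        c = np.zeros(dim)
        for (u, v) in edges:
            c += z[:, u] ^ z[:, v]
        psi = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)   # |+...+>
        for gamma, beta in zip(gammas, betas):
            psi = np.exp(-1j * gamma * c) * psi                 # e^{-i gamma C}
            rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                           [-1j * np.sin(beta), np.cos(beta)]])  # e^{-i beta X}
            U = np.array([[1.0]])
            for _ in range(n):       # mixer factorizes: one RX per qubit
                U = np.kron(U, rx)
            psi = U @ psi
        return float(np.real(psi.conj() @ (c * psi)))

    # depth-1 QAOA on a 4-cycle (MaxCut optimum = 4)
    print(qaoa_maxcut_expectation([(0, 1), (1, 2), (2, 3), (3, 0)], 4,
                                  gammas=[0.6], betas=[0.4]))
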
On the physical basis of a theory of human thermoregulation.
NASA Technical Reports Server (NTRS)
Iberall, A. S.; Schindler, A. M.
1973-01-01
Theoretical study of the physical factors which are responsible for thermoregulation in nude resting humans in a physical steady state. The behavior of oxidative metabolism, evaporative and convective thermal fluxes, fluid heat transfer, internal and surface temperatures, and evaporative phase transitions is studied by physiological/physical modeling techniques. The modeling is based on the theories that the body has a vital core with autothermoregulation, that the vital core contracts longitudinally, that the temperature of peripheral regions and extremities decreases towards the ambient, and that a significant portion of the evaporative heat may be lost underneath the skin. A theoretical basis is derived for a consistent modeling of steady-state thermoregulation on the basis of these theories.
Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2008-01-01
We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field of view for the fewest number of readout channels. In this contribution we extend the theoretical correlated energy-position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, including realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that a full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few hundred eV, should be achievable for a fully optimized device.
Initial postbuckling analysis of elastoplastic thin-shell structures
NASA Technical Reports Server (NTRS)
Carnoy, E. G.; Panosyan, G.
1984-01-01
The design of thin shell structures with respect to elastoplastic buckling requires an extended analysis of the influence of initial imperfections. For conservative design, the most critical defect should be assumed with the maximum allowable magnitude. This defect is closely related to the initial postbuckling behavior. An algorithm is given for the quasi-static analysis of the postbuckling behavior of structures that exhibit multiple buckling points. The algorithm, based upon an energy criterion, allows the computation of the critical perturbation, which is employed for the definition of the critical defect. For computational efficiency, the algorithm uses the reduced basis technique with automatic update of the modal basis. The method is applied to the axisymmetric buckling of cylindrical shells under axial compression, and conclusions are given for future research.
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.
1985-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.
1983-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2m).
Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S
1985-08-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
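
Two properties recur in all three entries above: squaring in a normal-basis representation is a cyclic shift of the coordinate vector, and inversion reduces to squarings and multiplications via Fermat's little theorem, a^(-1) = a^(2^m - 2). A short Python sketch, with the field-specific Massey-Omura multiplier left as an abstract callable (its lambda-matrix depends on the chosen field) and the shift direction depending on coordinate convention:

    def nb_square(a):
        # In a normal basis {b, b^2, b^4, ..., b^(2^(m-1))}, squaring permutes
        # the coordinates cyclically -- the property the pipeline exploits.
        # a: tuple of m bits; rotation direction is a convention choice.
        return a[-1:] + a[:-1]

    def nb_inverse(a, multiply):
        # a^(2^m - 2) = a^2 * a^4 * ... * a^(2^(m-1)), so m-1 squarings and
        # m-2 multiplications suffice. `multiply` is a normal-basis
        # (e.g. Massey-Omura style) multiplier for the chosen field.
        m = len(a)
        sq = nb_square(a)          # a^2
        result = sq
        for _ in range(m - 2):     # accumulate a^4, a^8, ..., a^(2^(m-1))
            sq = nb_square(sq)
            result = multiply(result, sq)
        return result              # equals a^(-1) for nonzero a
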
NASA Astrophysics Data System (ADS)
Aharonov, Dorit
In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I leave out the developing topic of experimental realizations of the model, and neglect other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity because any realistic model will inevitably be subjected to such inaccuracies. I have tried to put all results in their context, asking what the implications for other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation on fundamental physical questions, such as the transition from quantum to classical physics.
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
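
The reduced-basis construction can be sketched in a few lines: sample the anticipated exponential modes, take the dominant right singular vectors, and convolve them with the measured input function; coefficients are then fit by least squares (the Moore-Penrose pseudoinverse). Grids, ranges and names below are illustrative, not the paper's settings.

    import numpy as np

    def reduced_temporal_basis(k_range, t, input_fn, n_keep=4):
        # One exponential decay mode exp(-k t) per row, over the anticipated
        # range of rate constants k.
        modes = np.exp(-np.outer(k_range, t))
        _, _, Vt = np.linalg.svd(modes, full_matrices=False)
        basis = Vt[:n_keep]                      # dominant orthogonal functions
        dt = t[1] - t[0]
        # convolve each basis function with the measured input function
        return np.array([np.convolve(f, input_fn)[:len(t)] * dt for f in basis])

    t = np.linspace(0.0, 60.0, 121)              # minutes
    ks = np.linspace(0.01, 1.0, 200)             # anticipated rate constants
    u = t * np.exp(-0.5 * t)                     # toy measured input function
    B = reduced_temporal_basis(ks, t, u)         # rows span the response space
    # coefficients for a measured tissue curve y: np.linalg.pinv(B.T) @ y
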
Algorithm for Calculating the Dissociation Constants of Ampholytes in Nonbuffer Systems
NASA Astrophysics Data System (ADS)
Lysova, S. S.; Skripnikova, T. A.; Zevatskii, Yu. E.
2018-05-01
An algorithm for calculating the dissociation constants of ampholytes in aqueous solutions is developed on the basis of spectrophotometric data in the UV and visible ranges without pH measurements of a medium and without buffer solutions. The proposed algorithm has been experimentally tested for five ampholytes of different strengths. The relative error of measuring dissociation constants is less than 5%.
Generalized gradient algorithm for trajectory optimization
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Bryson, A. E.; Slattery, R.
1990-01-01
The generalized gradient algorithm is presented and verified as a basis for the solution of trajectory optimization problems; it improves the performance index while reducing path and terminal equality-constraint violations. The algorithm is conveniently divided into two phases: the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.
The OpenGL visualization of the 2D parallel FDTD algorithm
NASA Astrophysics Data System (ADS)
Walendziuk, Wojciech
2005-02-01
This paper presents a visualization of a two-dimensional version of the parallel FDTD algorithm. The visualization module was created on the basis of the OpenGL graphics standard, using the GLUT interface. In addition, the work includes efficiency results for the parallel algorithm in the form of speedup charts.
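
For context, the serial core that such a visualization renders is the Yee update; a minimal two-dimensional TMz sketch in normalized units (vacuum, Courant factor 0.5, no parallel decomposition or OpenGL code) is:

    import numpy as np

    nx = ny = 100
    Ez = np.zeros((nx, ny))
    Hx = np.zeros((nx, ny - 1))      # staggered half a cell in y
    Hy = np.zeros((nx - 1, ny))      # staggered half a cell in x
    S = 0.5                          # Courant factor
    for step in range(200):
        Hx -= S * np.diff(Ez, axis=1)          # dEz/dy drives Hx
        Hy += S * np.diff(Ez, axis=0)          # dEz/dx drives Hy
        Ez[1:-1, 1:-1] += S * (np.diff(Hy, axis=0)[:, 1:-1]
                               - np.diff(Hx, axis=1)[1:-1, :])
        Ez[nx // 2, ny // 2] += np.sin(0.1 * step)   # soft point source
    # Ez is the field map a module like the one described would render.
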
Refined genetic algorithm -- Economic dispatch example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheble, G.B.; Brittig, K.
1995-02-01
A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information from prospective solutions to evaluate optimality; thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several techniques that enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
Algorithm for detection the QRS complexes based on support vector machine
NASA Astrophysics Data System (ADS)
Van, G. V.; Podmasteryev, K. V.
2017-11-01
The efficiency of computer ECG analysis depends on accurate detection of QRS complexes. This paper presents an algorithm for QRS complex detection based on a support vector machine (SVM). The proposed algorithm is evaluated on annotated standard databases such as the MIT-BIH Arrhythmia database, where the QRS detector obtained a sensitivity Se = 98.32% and specificity Sp = 95.46%. This algorithm can be used as the basis for software to diagnose the electrical activity of the heart.
Ab initio molecular simulations with numeric atom-centered orbitals
NASA Astrophysics Data System (ADS)
Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias
2009-11-01
We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLapack-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
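
A self-contained sketch of the parametric k-means idea for a univariate normal model: fit the parameters by maximum likelihood, simulate a large sample from the fitted model, and run ordinary k-means (here a plain Lloyd iteration) on the simulation. Function names and settings are illustrative.

    import numpy as np

    def kmeans_1d(x, k, iters=100, seed=0):
        # Plain Lloyd's algorithm in one dimension.
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, size=k, replace=False)
        for _ in range(iters):
            lab = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            centers = np.array([x[lab == j].mean() if np.any(lab == j)
                                else centers[j] for j in range(k)])
        return np.sort(centers)

    def parametric_kmeans_normal(data, k, n_sim=100_000, seed=0):
        # MLE fit of the normal model, then k-means on a large simulated sample;
        # the resulting centers estimate the k principal points.
        mu, sigma = np.mean(data), np.std(data)
        sim = np.random.default_rng(seed).normal(mu, sigma, n_sim)
        return kmeans_1d(sim, k)

    # the two principal points of N(0,1) sit at +-sqrt(2/pi) ~ +-0.798
    sample = np.random.default_rng(1).normal(0.0, 1.0, 500)
    print(parametric_kmeans_normal(sample, 2))
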
Influence Function Learning in Information Diffusion Networks.
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2014-06-01
Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data.
When I cut, you choose method implies intransitivity
NASA Astrophysics Data System (ADS)
Makowski, Marcin; Piotrowski, Edward W.
2014-12-01
There is a common belief that humans and many animals follow transitive inference (choosing A over C on the basis of knowing that A is better than B and B is better than C). Transitivity seems to be the essence of rational choice. We present a theoretical model of a repeated game in which the players make a choice between three goods (e.g. food). The rules of the game refer to the simple procedure of fair division among two players, known as the “I cut, you choose” mechanism which has been widely discussed in the literature. In this game one of the players has to make intransitive choices in order to achieve the optimal result (for him/her and his/her co-player). The point is that an intransitive choice can be rational. Previously, an increase in the significance of intransitive strategies was achieved by referring to models of quantum games. We show that relevant intransitive strategies also appear in the classic description of decision algorithms.
Imaging sensor constellation for tomographic chemical cloud mapping.
Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J
2009-04-01
A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.
High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
2012-01-01
The difficulties and limitations of small target detection methods for high-resolution remote sensing data have become a recent research focus. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characterized model of information perception in order to exploit its advantages for fast and accurate small target detection in complex, variable natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual-system method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.
Multi-reference approach to the calculation of photoelectron spectra including spin-orbit coupling.
Grell, Gilbert; Bokarev, Sergey I; Winter, Bernd; Seidel, Robert; Aziz, Emad F; Aziz, Saadullah G; Kühn, Oliver
2015-08-21
X-ray photoelectron spectra provide a wealth of information on the electronic structure. The extraction of molecular details requires adequate theoretical methods, which in case of transition metal complexes has to account for effects due to the multi-configurational and spin-mixed nature of the many-electron wave function. Here, the restricted active space self-consistent field method including spin-orbit coupling is used to cope with this challenge and to calculate valence- and core-level photoelectron spectra. The intensities are estimated within the frameworks of the Dyson orbital formalism and the sudden approximation. Thereby, we utilize an efficient computational algorithm that is based on a biorthonormal basis transformation. The approach is applied to the valence photoionization of the gas phase water molecule and to the core ionization spectrum of the [Fe(H2O)6](2+) complex. The results show good agreement with the experimental data obtained in this work, whereas the sudden approximation demonstrates distinct deviations from experiments.
An IPSO-SVM algorithm for security state prediction of mine production logistics system
NASA Astrophysics Data System (ADS)
Zhang, Yanliang; Lei, Junhui; Ma, Qiuli; Chen, Xin; Bi, Runfang
2017-06-01
A theoretical basis for regulating corporate security warnings and resources is provided in order to reveal the laws behind the security state of mine production logistics. Because the mine production logistics system is complex and its variables are difficult to acquire, a security status prediction model based on improved particle swarm optimization and a support vector machine (IPSO-SVM) is proposed in this paper. Firstly, through linear adjustment of the inertia weight and the learning weights, the convergence speed and search accuracy are enhanced to cope with the system's changing complexity and the difficulty of data acquisition. The improved particle swarm optimization (IPSO) is then introduced to resolve the problem of parameter selection in traditional support vector machines (SVM). At the same time, a security status index system is built to determine the classification standards for safety status. The feasibility and effectiveness of the method are finally verified using experimental results.
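As a rough illustration of the general IPSO-SVM idea (not the authors' implementation), the sketch below uses a particle swarm with a linearly decreasing inertia weight to search over SVM parameters (C, gamma), scoring each particle by cross-validated accuracy; scikit-learn, the search ranges and all constants are assumptions.

    # Minimal sketch of PSO-tuned SVM hyperparameters (not the paper's IPSO-SVM):
    # particles search over (log10 C, log10 gamma); fitness is cross-validated accuracy.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def fitness(params, X, y):
        C, gamma = 10.0 ** params
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    def pso_svm(X, y, n_particles=10, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # search box in log10 space
        pos = rng.uniform(lo, hi, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p, X, y) for p in pos])
        gbest = pbest[pbest_val.argmax()].copy()
        for t in range(iters):
            w = 0.9 - 0.5 * t / iters              # linearly decreasing inertia weight
            r1, r2 = rng.random((2, n_particles, 1))
            vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([fitness(p, X, y) for p in pos])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmax()].copy()
        return 10.0 ** gbest                        # best (C, gamma) found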
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.
Liu, Han; Wang, Lie; Zhao, Tuo
2015-08-01
We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O (1/ ϵ ), where ϵ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.
Artificial Immune System for Recognizing Patterns
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2005-01-01
A method of recognizing or classifying patterns is based on an artificial immune system (AIS), which includes an algorithm and a computational model of nonlinear dynamics inspired by the behavior of a biological immune system. The method has been proposed as the theoretical basis of the computational portion of a star-tracking system aboard a spacecraft. In that system, a newly acquired star image would be treated as an antigen that would be matched by an appropriate antibody (an entry in a star catalog). The method would enable rapid convergence, would afford robustness in the face of noise in the star sensors, would enable recognition of star images acquired in any sensor or spacecraft orientation, and would not make an excessive demand on the computational resources of a typical spacecraft. Going beyond the star-tracking application, the AIS-based pattern-recognition method is potentially applicable to pattern-recognition and -classification processes for diverse purposes -- for example, reconnaissance, detecting intruders, and mining data.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
An approach commonly used for calculating the retention time of a compound in GC starts from the thermodynamic properties ΔH, ΔS and ΔCp of the phase change (from mobile to stationary). Such properties can be estimated from experimental retention time data, which results in a non-linear regression problem for non-isothermal temperature programs. As shown in this work, the surface of the objective function (approximation error criterion) as a function of the thermodynamic parameters can be divided into three clearly defined regions, and the global optimum can be found in only one of them. The main contribution of this study is the development of an algorithm that distinguishes the different regions of the error surface and its use in the robust initialization of the estimation of the parameters ΔH, ΔS and ΔCp.
The guanidine and maleic acid (1:1) complex. The additional theoretical and experimental studies.
Drozd, Marek; Dudzic, Damian
2012-04-01
On the basis of experimental literature data, theoretical studies of the guanidinium-maleic acid complex were performed using the DFT method. In these studies the experimental X-ray data for two different forms of the investigated crystal were used. During the geometry optimization process only one equilibrium structure was found. On the basis of this result, the infrared spectrum of the single theoretical molecule was calculated. Potential energy distribution (PED) analysis provided clear-cut assignments of the observed bands. For the calculated molecule at the energy minimum, the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) were obtained and graphically illustrated, and the energy difference (gap) between the HOMO and LUMO was analyzed. Additionally, the nonlinear properties of this molecule were calculated, and the α and β (first- and second-order) hyperpolarizability values were obtained. On the basis of these results the title crystal was classified as a new second-order NLO generator.
Perceptual awareness and its neural basis: bridging experimental and theoretical paradigms
Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees
2014-01-01
Understanding consciousness is a major scientific challenge of our times, and perceptual awareness is an integral part of that challenge. This Theme Issue aims to provide a timely focus on crucial insights from leading scientists on perceptual awareness and its neural basis. The issue refers to key research questions and findings in perceptual awareness research and aims to be a catalyst for further research, by bringing together the state-of-the-art. It shows how bridges are being built between empirical and theoretical research and proposes new directions for the study of multisensory awareness and the role of the states of the body therein. In this introduction, we highlight crucial problems that have characterized the development of the study of perceptual awareness. We then provide an overview of major experimental and theoretical paradigms related to perceptual awareness and its neural basis. Finally, we present an overview of the Theme Issue, with reference to the contributed articles and their relationships. PMID:24639576
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency.
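The "adjustable relaxation coefficient" idea can be illustrated in a much simpler setting than the 3D Poisson-Nernst-Planck system: the sketch below runs successive over-relaxation on a 2D Poisson problem and reduces the relaxation factor whenever convergence stalls. It is a sketch of the general principle only, not the authors' scheme.

    # Illustrative only: SOR iteration for a 2D Poisson problem with an
    # adjustable relaxation coefficient (reduced if the residual grows).
    import numpy as np

    def poisson_sor(rhs, h, omega=1.8, tol=1e-8, max_iter=10_000):
        u = np.zeros_like(rhs)                 # zero Dirichlet boundary values
        prev_res = np.inf
        for _ in range(max_iter):
            res = 0.0
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    new = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                                  - h * h * rhs[i, j])
                    diff = new - u[i, j]
                    u[i, j] += omega * diff    # over-relaxed Gauss-Seidel update
                    res = max(res, abs(diff))
            if res < tol:
                break
            if res > prev_res:                 # adjust relaxation if convergence stalls
                omega = max(1.0, 0.9 * omega)
            prev_res = res
        return u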
Model of ballistic targets' dynamics used for trajectory tracking algorithms
NASA Astrophysics Data System (ADS)
Okoń-Fąfara, Marta; Kawalec, Adam; Witczak, Andrzej
2017-04-01
Only a few ballistic object tracking algorithms are known. To develop such algorithms and to test them further, it is necessary to implement a reasonably simple and reliable model of the objects' dynamics. The article presents the dynamics model of a tactical ballistic missile (TBM), including the three stages of flight: the boost stage and two passive stages - the ascending one and the descending one. Additionally, the procedure for transforming from the local coordinate system to the polar radar-oriented system and to the global system is presented. The prepared theoretical data may be used to determine the tracking algorithm parameters and for their further verification.
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization. PMID:26343660
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. These data were processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full frame mean value versus time verify the effectiveness of the system.
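The per-pixel decision chain described above (highpass filter, moving-average filter, absolute value, threshold, plus a full-frame mean trace) might be sketched as follows; the filter lengths and threshold are placeholders, not the values used in the study.

    # Sketch of the per-pixel temporal processing chain described above:
    # highpass filter -> moving-average filter -> absolute value -> threshold.
    # Filter lengths and threshold are illustrative, not the study's values.
    import numpy as np

    def leak_decision(frames, threshold=5.0, avg_len=8):
        # frames: array of shape (T, H, W), one intensity image per video frame
        highpass = np.diff(frames, axis=0)                 # first-difference highpass
        kernel = np.ones(avg_len) / avg_len
        smoothed = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 0, highpass)
        detections = np.abs(smoothed) > threshold          # binary leak/no-leak per pixel
        mean_trace = np.abs(smoothed).mean(axis=(1, 2))    # full-frame mean value vs time
        return detections, mean_trace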
[A new information technology for system diagnosis of functional activity of human organs].
Avshalumov, A Sh; Sudakov, K V; Filaretov, G F
2006-01-01
The goal of this work was to consider a new diagnostic technology based on analysis of objective information parameters of functional activity and interaction of normal and pathologically changed human organs. The technology is based on the use of very low power millimeter (EHF) radiation emitted by human body and other biological objects in the process of vital activity. The importance of consideration of the information aspect of vital activity from the standpoint of the theory of functional systems suggested by P. K. Anokhin is emphasized. The suggested information technology is theoretically substantiated. The capabilities of the suggested technology for diagnosis, as well as the difficulties of its practical implementation caused by very low power of electromagnetic fields generated by human body, are discussed. It is noted that only use of modern radiophysical equipment together with new software based on specially developed algorithms made it possible to construct a medical EHF diagnostic system for effective implementation of the suggested technology. The system structure, functions of its components, the examination procedure, and the form of representation of diagnostic information are described together with the specific features of applied software based on the principle of maximal objectivity of analysis and interpretation of the results of diagnosis on the basis of artificial intelligence algorithms. The diagnostic capabilities of the system are illustrated by several examples.
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
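For reference, one common realization of the constant modulus property mentioned above is the textbook constant-modulus algorithm (CMA) update; the sketch below shows that standard rule for a real-valued equalizer and is not necessarily one of the update algorithms derived in the report.

    # Textbook constant-modulus algorithm (CMA) update for a real-valued linear
    # equalizer/predictor; a minimal illustration of blind adaptation, not the
    # specific update rules derived in the report.
    import numpy as np

    def cma_equalizer(x, n_taps=8, mu=1e-3, R2=1.0):
        w = np.zeros(n_taps)
        w[0] = 1.0                       # centre-spike style initialization
        y_out = np.zeros(len(x))
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]    # most recent samples first
            y = w @ u
            y_out[n] = y
            # gradient of (y^2 - R2)^2 with respect to w, up to a constant factor
            w -= mu * (y * y - R2) * y * u
        return w, y_out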
The GLAS Algorithm Theoretical Basis Document for Precision Orbit Determination (POD)
NASA Technical Reports Server (NTRS)
Rim, Hyung Jin; Yoon, S. P.; Schultz, Bob E.
2013-01-01
The Geoscience Laser Altimeter System (GLAS) was the sole instrument for NASA's Ice, Cloud and land Elevation Satellite (ICESat) laser altimetry mission. The primary purpose of the ICESat mission was to make ice sheet elevation measurements of the polar regions. Additional goals were to measure the global distribution of clouds and aerosols and to map sea ice, land topography and vegetation. ICESat was the benchmark Earth Observing System (EOS) mission to be used to determine the mass balance of the ice sheets, as well as for providing cloud property information, especially for stratospheric clouds common over polar areas. The GLAS instrument operated from 2003 to 2009 and provided multi-year elevation data needed to determine changes in sea ice freeboard, land topography and vegetation around the globe, in addition to elevation changes of the Greenland and Antarctic ice sheets. This document describes the Precision Orbit Determination (POD) algorithm for the ICESat mission. The problem of determining an accurate ephemeris for an orbiting satellite involves estimating the position and velocity of the satellite from a sequence of observations. The ICESat/GLAS elevation measurements must be very accurately geolocated, combining precise orbit information with precision pointing information. The ICESat mission POD requirement states that the position of the instrument should be determined with an accuracy of 5 and 20 cm (1-sigma) in radial and horizontal components, respectively, to meet the science requirements for determining elevation change.
NASA Astrophysics Data System (ADS)
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker, based on the global positioning system (GPS) principle, is introduced in this paper. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model for detecting machine tool motion error with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require selecting an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. The calibration accuracy of the base station with the improved algorithm is compared with that obtained with the original analytical algorithm and with iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. Experiments further verify the feasibility and effectiveness of the improved algorithm. In addition, the influence of different machine tool motion areas on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result as a function of the condition number of the coefficient matrix, are analyzed.
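For context, the GPS-like point determination step (recovering a point's coordinates from distances to several calibrated base stations) can be posed as a small nonlinear least-squares problem, as in the sketch below; the station coordinates, distances and solver choice are illustrative only.

    # GPS-like multilateration sketch: recover a point's coordinates from measured
    # distances to calibrated base stations via nonlinear least squares.
    # Station positions and distances are placeholders for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    def locate_point(stations, distances, x0=None):
        stations = np.asarray(stations, dtype=float)     # shape (m, 3)
        distances = np.asarray(distances, dtype=float)   # shape (m,)
        if x0 is None:
            x0 = stations.mean(axis=0)                   # crude initial guess
        residuals = lambda p: np.linalg.norm(stations - p, axis=1) - distances
        return least_squares(residuals, x0).x            # estimated (x, y, z)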
Theoretical dissociation energies for ionic molecules
NASA Technical Reports Server (NTRS)
Langhoff, S. R.; Bauschlicher, C. W., Jr.; Partridge, H.
1986-01-01
Ab initio calculations at the self-consistent-field and singles plus doubles configuration-interaction level are used to determine accurate spectroscopic parameters for most of the alkali and alkaline-earth fluorides, chlorides, oxides, sulfides, hydroxides, and isocyanides. Numerical Hartree-Fock (NHF) calculations are performed on selected systems to ensure that the extended Slater basis sets employed for the diatomic systems are near the Hartree-Fock limit. Extended Gaussian basis sets of at least triple-zeta plus double polarization quality are employed for the triatomic systems. With this model, correlation effects are relatively small, but invariably increase the theoretical dissociation energies. The importance of correlating the electrons on both the anion and the metal is discussed. The theoretical dissociation energies are critically compared with the literature to rule out disparate experimental values. Theoretical ²Π - ²Σ⁺ energy separations are presented for the alkali oxides and sulfides.
Symbolic interactionism as a theoretical perspective for multiple method research.
Benzies, K M; Allen, M N
2001-02-01
Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.
Evaluation of registration, compression and classification algorithms. Volume 1: Results
NASA Technical Reports Server (NTRS)
Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.
1979-01-01
The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.
Theoretical Bounds of Direct Binary Search Halftoning.
Liao, Jan-Ray
2015-11-01
Direct binary search (DBS) produces the images of the best quality among half-toning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations try to find the binary states for each pixel to minimize the total squared perceived error. This error energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence has not yet been found. In this paper, we present a proof that shows the bound existed as conjectured under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel further away from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, and the swap operations are considered in the order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce half-toned images of the same quality as the previous algorithm.
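A brute-force sketch of the toggle operation discussed above is given below: each trial toggle is accepted only if it lowers the total squared perceived error. The Gaussian filter stands in for an HVS model, and the efficient incremental error update used in practical DBS implementations is deliberately omitted.

    # Brute-force sketch of the DBS toggle operation: accept a pixel toggle only if
    # it lowers the total squared perceived error. The Gaussian filter is a stand-in
    # for an HVS model, and the efficient incremental error update is omitted.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perceived_error_energy(halftone, continuous, sigma=1.5):
        err = gaussian_filter(halftone - continuous, sigma)
        return np.sum(err * err)

    def dbs_toggle_pass(halftone, continuous, sigma=1.5):
        # halftone: binary image with values in {0.0, 1.0}; continuous: grayscale in [0, 1]
        energy = perceived_error_energy(halftone, continuous, sigma)
        for idx in np.ndindex(halftone.shape):
            halftone[idx] = 1.0 - halftone[idx]            # trial toggle
            trial = perceived_error_energy(halftone, continuous, sigma)
            if trial < energy:
                energy = trial                             # keep the toggle
            else:
                halftone[idx] = 1.0 - halftone[idx]        # revert
        return halftone, energy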
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Forest Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy performance of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
Differentially Private Empirical Risk Minimization
Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.
2011-01-01
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
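A hedged sketch of the output-perturbation idea for an L2-regularized linear classifier is shown below: train normally, then add noise whose scale shrinks with the sample size n, the regularization strength and the privacy parameter ε. The noise calibration follows the commonly quoted 2/(nλε) scale for 1-Lipschitz losses, but the constants and the use of scikit-learn here should be treated as illustrative rather than as the paper's exact mechanism.

    # Output-perturbation sketch for differentially private ERM: train an
    # L2-regularized linear classifier, then add random noise to its weights.
    # The noise scale 2/(n * lam * eps) follows the commonly quoted calibration
    # for 1-Lipschitz losses; treat the constants here as illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def private_logistic_weights(X, y, eps=1.0, lam=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # C = 1/(n * lam) matches the (1/n) * loss + (lam/2)||w||^2 objective
        clf = LogisticRegression(C=1.0 / (n * lam)).fit(X, y)
        w = clf.coef_.ravel()
        # Noise with norm drawn from Gamma(d, scale) and a uniformly random direction
        scale = 2.0 / (n * lam * eps)
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)
        noise = rng.gamma(shape=d, scale=scale) * direction
        return w + noise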
Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.
Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin
2005-09-15
We present here an algorithm for computing stable, well-defined localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by diagonalization of two matrices with dimension the size of the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space as a minimal basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while our orbitals are orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
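A rough sketch of the surrogate step common to such methods (not COBRA or ConstrLMSRBF themselves) is given below: fit RBF surrogates to the evaluated objective and constraint values, then pick the next candidate among random trial points that the surrogates predict to be feasible, falling back to minimizing predicted constraint violation when no candidate is predicted feasible. SciPy's RBFInterpolator is used as a stand-in surrogate.

    # Rough sketch of one surrogate-assisted iteration for constrained expensive
    # black-box optimization; not COBRA itself, RBFInterpolator is a stand-in model.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def propose_next_point(X, f_vals, g_vals, bounds, n_cand=2000, seed=0):
        # X: (n, d) evaluated points; f_vals: (n,) objective values;
        # g_vals: (n, m) constraint values, with g <= 0 meaning feasible.
        rng = np.random.default_rng(seed)
        f_surr = RBFInterpolator(X, f_vals)
        g_surr = RBFInterpolator(X, g_vals)
        lo, hi = bounds
        cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
        feas = np.all(g_surr(cand) <= 0.0, axis=1)
        if not feas.any():                     # phase 1: reduce predicted constraint violation
            viol = np.clip(g_surr(cand), 0.0, None).sum(axis=1)
            return cand[viol.argmin()]
        cand = cand[feas]                      # phase 2: minimize predicted objective over feasible set
        return cand[f_surr(cand).argmin()]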
NASA Technical Reports Server (NTRS)
Henderson, R. A.; Schrag, R. L.
1986-01-01
A summary of modeling the electrical system aspects of a coil and metal target configuration resembling a practical electro-impulse deicing (EIDI) installation, and a simple circuit for providing energy to the coil, was presented. The model was developed in sufficient theoretical detail to allow the generation of computer algorithms for the current in the coil, the magnetic induction on both surfaces of the target, the force between the coil and target, and the impulse delivered to the target. These algorithms were applied to a specific prototype EIDI test system for which the current, magnetic fields near the target surfaces, and impulse were previously measured.
An assessment of support vector machines for land cover classification
Huang, C.; Davis, L.S.; Townshend, J.R.G.
2002-01-01
The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
Neural networks and MIMD-multiprocessors
NASA Technical Reports Server (NTRS)
Vanhala, Jukka; Kaski, Kimmo
1990-01-01
Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
ERIC Educational Resources Information Center
Schulz, Andreas
2018-01-01
Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.
Two hybrid compaction algorithms for the layout optimization problem.
Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn
2007-01-01
In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches out-perform it in terms of both solution quality and execution time.
Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Shatokhina, Iuliia; Obereder, Andreas; Ramlau, Ronny
2014-08-01
We present a fast wavefront reconstruction algorithm developed for an extreme adaptive optics system equipped with a pyramid wavefront sensor on a 42m telescope. The method is called the Preprocessed Cumulative Reconstructor with domain decomposition (P-CuReD). The algorithm is based on the theoretical relationship between pyramid and Shack-Hartmann wavefront sensor data. The algorithm consists of two consecutive steps - a data preprocessing, and an application of the CuReD algorithm, which is a fast method for wavefront reconstruction from Shack-Hartmann sensor data. The closed loop simulation results show that the P-CuReD method provides the same reconstruction quality and is significantly faster than an MVM.
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms.
A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs.
Zheng, Yu; Yang, Yang; Chen, Wu
2017-06-25
In this paper, a novel range compression algorithm for enhancing range resolutions of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, firstly range compression is carried out by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results for suppressing side lobes to obtain a final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
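The two steps described above, correlation with the synchronized direct reference followed by spectrum equalization, might be sketched in the frequency domain as follows; the inverse-magnitude equalization weight and the regularization constant are generic assumptions, not the paper's exact formulation.

    # Sketch of the two range-compression steps described above: matched-filter
    # correlation of the reflected signal with the direct reference, then a simple
    # spectral equalization to suppress side lobes. The equalization weight
    # (regularized inverse reference magnitude) is generic and illustrative.
    import numpy as np

    def range_compress(reflected, direct, eps=1e-3):
        n = len(reflected) + len(direct) - 1
        R = np.fft.fft(reflected, n)
        D = np.fft.fft(direct, n)
        corr_spec = R * np.conj(D)                         # correlation in the frequency domain
        weight = 1.0 / (np.abs(D) + eps * np.abs(D).max()) # spectrum equalization weight
        return np.fft.ifft(corr_spec * weight)             # range-compressed signal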
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms--Subspace Learning Algorithm (SLA). Modification of the algorithm is based on Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On a faster time scale PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons will compete for fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it will be briefly analyzed how (or why) time-oriented hierarchical method can be used for transformation of any of the existing neural network PSA method, into PCA method.
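For context, the Subspace Learning Algorithm that the paper modifies can be written as the symmetric Oja-type update in the sketch below, where the columns of W converge to an orthonormal basis of the principal subspace; the time-oriented hierarchical modification itself is not reproduced here, and the learning rate and epoch count are arbitrary.

    # Symmetric subspace learning (Oja/SLA-type) update: the columns of W converge
    # to an orthonormal basis of the principal subspace of the input data. The
    # time-oriented hierarchical modification proposed in the paper is not shown.
    import numpy as np

    def sla_fit(X, n_components, eta=0.01, n_epochs=20, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W, _ = np.linalg.qr(rng.normal(size=(d, n_components)))   # random orthonormal start
        for _ in range(n_epochs):
            for x in X:
                y = W.T @ x                                        # outputs of the linear neurons
                W += eta * (np.outer(x, y) - W @ np.outer(y, y))   # subspace learning rule
        return W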
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.
2018-06-01
Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
Organizational Socialization: A Social Learning Interpretation
1982-02-01
approaches to socialization, they lack a clear theoretical basis for understanding and application. This paper proposes a social learning theoretical framework. Particular attention is given to the relevancy that modeling and self-control can have for organizational socialization. Specific examples of
Enhancements to AERMOD’s Building Downwash Algorithms based on Wind Tunnel and Embedded-LES Modeling
This presentation presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD’s building pre-processor to better represent elongated buildings in oblique wind...
Tactical Approaches for Making a Successful Satellite Passive Microwave ESDR
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Gotberg, J.; Long, D. G.; Paget, A. C.
2014-12-01
Our NASA MEaSUREs project is producing a new, enhanced resolution gridded Earth System Data Record for the entire satellite passive microwave (SMMR, SSM/I-SSMIS and AMSR-E) time series. Our project goals are twofold: to produce a well-documented, consistently processed, high-quality historical record at higher spatial resolutions than have previously been available, and to transition the production software to the NSIDC DAAC for ongoing processing after our project completion. In support of these goals, our distributed team at BYU and NSIDC faces project coordination challenges to produce a high-quality data set that our user community will accept as a replacement for the currently available historical versions of these data. We work closely with our DAAC liaison on format specifications, data and metadata plans, and project progress. In order for the user community to understand and support our project, we have solicited a team of Early Adopters who are reviewing and evaluating a prototype version of the data. Early Adopter feedback will be critical input to our final data content and format decisions. For algorithm transparency and accountability, we have released an Algorithm Theoretical Basis Document (ATBD) and detailed supporting technical documentation, with rationale for all algorithm implementation decisions. For distributed team management, we are using collaborative tools for software revision control and issue tracking. For reliably transitioning a research-quality image reconstruction software system to production-quality software suitable for use at the DAAC, we have adopted continuous integration methods for running automated regression testing. Our presentation will summarize both advantages and challenges of each of these tactics in ensuring production of a successful ESDR and an enduring production software system.
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
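As a concrete reference point for the kind of greedy algorithm discussed above, here is a minimal Python sketch of classic activity selection (an illustration, not the authors' synthesis framework). The earliest-finish-time rule works precisely because of a dominance relation: of two feasible partial schedules over the activities considered so far, the one that finishes earlier dominates, since anything compatible with the later finish is also compatible with the earlier one.

```python
def select_activities(activities):
    """activities: list of (start, finish) pairs; returns a maximum
    subset of pairwise non-overlapping activities."""
    chosen = []
    last_finish = float("-inf")
    # Sort by finish time; greedily keep each compatible activity.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```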
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower, disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
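The workhorse primitive behind most I/O-efficient algorithms is external merge sort: sort memory-sized runs, spill them to disk, then k-way merge them so that every pass is a sequential scan rather than random access. A minimal Python sketch of the idea (not code from the thesis):

```python
# External merge sort in miniature: runs that fit in memory are sorted
# and spilled to temporary files, then merged in one streaming pass.
import heapq
import tempfile

def external_sort(values, run_size=100_000):
    runs, buf = [], []

    def spill():
        buf.sort()
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(f"{v}\n" for v in buf)
        f.seek(0)
        runs.append(f)
        buf.clear()

    for v in values:
        buf.append(v)
        if len(buf) >= run_size:
            spill()
    if buf:
        spill()
    # k-way merge of the sorted runs; heapq.merge streams lazily.
    streams = ((float(line) for line in f) for f in runs)
    return heapq.merge(*streams)

print(list(external_sort([5.0, 1.0, 4.0, 2.0, 3.0], run_size=2)))
```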
Structure and structure-preserving algorithms for plasma physics
NASA Astrophysics Data System (ADS)
Morrison, P. J.
2016-10-01
Conventional simulation studies of plasma physics are based on numerically solving the underpinning differential (or integro-differential) equations. Usual algorithms generally do not preserve the known geometric structure of the physical systems, such as the local energy-momentum conservation law, Casimir invariants, and the symplectic structure (Poincaré invariants). As a consequence, numerical errors may accumulate coherently with time, and long-term simulation results may be unreliable. Recently, a series of geometric algorithms that preserve the geometric structures resulting from the Hamiltonian and action principle (HAP) form of theoretical models in plasma physics have been developed by several authors. The superiority of these geometric algorithms has been demonstrated with many test cases. For example, symplectic integrators for guiding-center dynamics have been constructed to preserve the noncanonical symplectic structure and bound the energy-momentum errors for all simulation time-steps; variational and symplectic algorithms have been discovered and successfully applied to the Vlasov-Maxwell system, MHD, and other magnetofluid equations as well. Hamiltonian truncations of the full Vlasov-Maxwell system have opened the field of discrete gyrokinetics and led to the GEMPIC algorithm. The vision that future numerical capabilities in plasma physics should be based on structure-preserving geometric algorithms will be presented. It will be argued that the geometric consequences of HAP form and the resulting geometric algorithms suitable for plasma physics studies cannot be adapted from the existing mathematical literature but, rather, need to be discovered and worked out by theoretical plasma physicists. The talk will review existing HAP structures of plasma physics for a variety of models, and how they have been adapted for numerical implementation. Supported by DOE DE-FG02-04ER-54742.
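To make the contrast concrete, the following minimal sketch (not drawn from the talk itself) compares a non-geometric integrator with a symplectic one on a harmonic oscillator; the symplectic scheme keeps the energy error bounded instead of letting it accumulate coherently.

```python
import numpy as np

def explicit_euler(q, p, dt, steps, omega=1.0):
    # Non-geometric scheme: the energy drifts secularly with time.
    for _ in range(steps):
        q, p = q + dt * p, p - dt * omega**2 * q
    return q, p

def symplectic_euler(q, p, dt, steps, omega=1.0):
    # Geometric scheme: preserves the symplectic two-form, so the
    # energy error stays bounded for all simulation time-steps.
    for _ in range(steps):
        p = p - dt * omega**2 * q  # kick
        q = q + dt * p             # drift
    return q, p

energy = lambda q, p: 0.5 * (p**2 + q**2)
for method in (explicit_euler, symplectic_euler):
    q, p = method(1.0, 0.0, 0.01, 100_000)
    print(method.__name__, energy(q, p))  # exact value is 0.5
```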
Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.
Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao
2017-07-01
In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, six steps were performed. First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, these data sets were precompressed. Third, the P. ginseng DNA (ITS2) sequence codes were precompressed. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open-source data compression algorithm. Finally, the compressed data were encoded as a two-dimensional code called a quick response (QR) code. Through this conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and the DNA (ITS2) sequence code is greatly reduced. After GATC2Bytes processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via the filtration and digital merger compression algorithms; the overall compression ratio thus exceeds 99.36%. The capacity of the resulting QR code is about 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and the DNA (ITS2) sequence code can therefore form a QR code after data processing, and the QR code can serve as a reliable carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on two-dimensional codes.
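The reported 75% compression of the ITS2 sequence is exactly what 2-bit-per-base packing achieves (four nucleotides fit in one byte), so the idea behind GATC2Bytes can plausibly be sketched as follows. The actual byte layout of the published algorithm is not specified in the abstract, so this is an illustration of the principle, not the published format.

```python
# Hedged sketch of 2-bit nucleotide packing: four bases per byte,
# matching the 75% compression reported for the ITS2 sequence.
CODE = {"G": 0, "A": 1, "T": 2, "C": 3}
BASE = "GATC"

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))  # pad a trailing partial group
        out.append(b)
    return bytes(out)

def unpack(data, n):
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(b >> shift) & 0b11])
    return "".join(seq[:n])

s = "GATTACAGATC"
assert unpack(pack(s), len(s)) == s
print(len(s), "bases ->", len(pack(s)), "bytes")
```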
Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan
2016-01-01
An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), and formaldehyde (CH2O)) as well as carbon monoxide (CO). The algorithm is a key part of an E-nose system, which mainly comprises data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish indoor pollutant gases; two of its parameters need to be optimized. In order to improve the performance of the SVM, in other words, to achieve a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision weighting factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, it can be trapped by local optima and so cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues we propose an enhanced KH (EKH) to improve the global searching and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach. This guarantees that the krill population remains diverse in the early iterations and has good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and the genetic algorithm (GA)), and we find that EKH outperforms the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill herd algorithms in all E-nose application areas. PMID:27529247
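The abstract does not give the EKH update rules in full, so the sketch below keeps only the surrounding machinery: cross-validated recognition rate as the fitness of a (C, gamma) pair, with a plain random search standing in for the krill herd swarm. It assumes scikit-learn and hypothetical feature/label arrays X and y holding sensor responses and gas classes.

```python
# Hedged sketch: optimize the two SVM parameters (C, gamma) against
# cross-validated recognition rate. A random search stands in for the
# EKH swarm, whose update rules are not reproduced here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(X, y, log_C, log_gamma):
    clf = SVC(C=10.0**log_C, gamma=10.0**log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()  # recognition rate

def optimize_svm(X, y, n_iter=50, rng=np.random.default_rng(0)):
    best, best_score = None, -np.inf
    for _ in range(n_iter):
        log_C, log_gamma = rng.uniform(-2, 4), rng.uniform(-5, 1)
        score = fitness(X, y, log_C, log_gamma)
        if score > best_score:
            best, best_score = (10.0**log_C, 10.0**log_gamma), score
    return best, best_score
```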
Land Surface Temperature Measurements from EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zheng-Ming
2004-01-01
This report summarizes the accomplishments made by the MODIS LST (Land-Surface Temperature) group at the University of California, Santa Barbara, under NASA contract. Version 1 of the MODIS Land-Surface Temperature Algorithm Theoretical Basis Document (ATBD) was reviewed in June 1994, version 2 in November 1994, version 3.1 in August 1996, and version 3.3 was updated in April 1999. Based on the ATBD, two LST algorithms were developed: one is the generalized split-window algorithm and the other is the physics-based day/night LST algorithm. These two LST algorithms were implemented into the production generation executive code (PGE 16) for the daily standard MODIS LST products at level-2 (MOD11_L2) and level-3 (MOD11A1 at 1 km resolution and MOD11B1 at 5 km resolution). PGE codes for the 8-day 1 km LST product (MOD11A2) and the daily, 8-day and monthly LST products on the 0.05 degree latitude/longitude climate model grid (CMG) were also delivered. Four to six field campaigns were conducted each year since 2000 to validate the daily LST products generated by PGE16 and the calibration accuracies of the MODIS TIR bands used for the LST/emissivity retrieval, from versions 2-4 of Terra MODIS data and versions 3-4 of Aqua MODIS data. Validation results from temperature-based and radiance-based methods indicate that the MODIS LST accuracy is better than 1 °C in most clear-sky cases in the range from -10 to 58 °C. One of the major lessons learned from multi-year temporal analysis of the consistent V4 daily Terra MODIS LST products in 2000-2003 over selected target areas, including lakes, snow/ice fields, and semi-arid sites, is that there are variable numbers of cloud-contaminated LSTs in the MODIS LST products, depending on surface elevation, land cover types, and atmospheric conditions. A cloud-screening scheme with constraints on spatial and temporal variations in LSTs was developed to remove cloud-contaminated LSTs. The 5 km LST product was indirectly validated through comparisons to the 1 km LST product. Twenty-three papers related to this LST research were published in journals over the last decade.
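For reference, the generalized split-window retrieval is a linear combination of the mean and the difference of the MODIS band 31/32 brightness temperatures, with emissivity-dependent weights. The sketch below shows only the functional form; the coefficient values are placeholders, since the operational coefficients are tabulated against view angle and column water vapor and are not reproduced here.

```python
# Functional form of the generalized split-window LST retrieval.
# The b0..b6 values are PLACEHOLDERS, not operational coefficients.
def split_window_lst(T31, T32, eps31, eps32,
                     b=(1.0, 1.0, 0.2, 0.1, 4.0, 10.0, -30.0)):
    b0, b1, b2, b3, b4, b5, b6 = b
    eps = 0.5 * (eps31 + eps32)   # mean band emissivity
    deps = eps31 - eps32          # band emissivity difference
    return (b0
            + (b1 + b2 * (1 - eps) / eps + b3 * deps / eps**2) * (T31 + T32) / 2
            + (b4 + b5 * (1 - eps) / eps + b6 * deps / eps**2) * (T31 - T32) / 2)
```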
An efficient parallel algorithm for the calculation of canonical MP2 energies.
Baker, Jon; Pulay, Peter
2002-09-01
We present the parallel version of a previous serial algorithm for the efficient calculation of canonical MP2 energies (Pulay, P.; Saebo, S.; Wolinski, K. Chem Phys Lett 2001, 344, 543). It is based on the Saebo-Almlöf direct-integral transformation, coupled with an efficient prescreening of the AO integrals. The parallel algorithm avoids synchronization delays by spawning a second set of slaves during the bin-sort prior to the second half-transformation. Results are presented for systems with up to 2000 basis functions. MP2 energies for molecules with 400-500 basis functions can be routinely calculated to microhartree accuracy on a small number of processors (6-8) in a matter of minutes with modern PC-based parallel computers. Copyright 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1150-1156, 2002
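The quantity being parallelized is the textbook closed-shell canonical MP2 energy. A dense-matrix sketch of the final contraction (with the direct integral transformation and AO prescreening that make the published algorithm efficient omitted) might look like this, assuming the MO-basis two-electron integrals are already available:

```python
# Closed-shell canonical MP2 energy from MO-basis two-electron
# integrals in chemists' notation (pq|rs). eps: orbital energies,
# nocc: number of doubly occupied orbitals.
import numpy as np

def mp2_energy(eri_mo, eps, nocc):
    o, v = slice(0, nocc), slice(nocc, None)
    iajb = eri_mo[o, v, o, v]               # the (ia|jb) block
    e_ia = eps[o, None] - eps[None, v]      # eps_i - eps_a
    denom = e_ia[:, :, None, None] + e_ia[None, None, :, :]
    # E = sum (ia|jb) * [2(ia|jb) - (ib|ja)] / (e_i + e_j - e_a - e_b)
    return np.einsum("iajb,iajb->",
                     iajb * (2 * iajb - iajb.swapaxes(1, 3)), 1 / denom)
```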
NASA Astrophysics Data System (ADS)
Asgari, Shadnaz
Recent developments in integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but they must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms is an efficient option for the defined problem, and as a result the necessity of developing a new algorithm becomes undeniable. In this dissertation, we present the theoretical and practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent-source DOA estimation in a limited reverberant environment. Then, we provide a least-squares (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramér-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. For each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied in a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.
2012-01-01
Background Quantitative trait loci (QTL) detection on a huge number of phenotypes, like eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One major outcome is the high number of QTLs detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was first analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two nonparametric tests (the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test) was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results A theoretical expression of the bias of the estimated QTL location was obtained for a backcross-type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the "no QTL" and "one QTL" hypotheses on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers, i.e. designs with low power. Practical recommendations for experimental designs for QTL detection in outbred populations are given on the basis of this bias quantification. Furthermore, an original algorithm is proposed to adjust the location of a QTL, obtained with interval mapping, which co-locates with a marker. Conclusions Therefore, one should be cautious when a QTL maps exactly at a marker location, especially under low power conditions. PMID:22520935
Theoretical Calculation and Validation of the Water Vapor Continuum Absorption
NASA Technical Reports Server (NTRS)
Ma, Qiancheng; Tipping, Richard H.
1998-01-01
The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first-principles basis of the formalism, which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multispectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals; and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance, which produces errors with a seasonal and latitude dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing is certainly climatologically significant and unacceptably large. While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, precludes tuning the empirical continuum models over the full spectral range of interest for remote sensing and climate applications. Thus, we propose to further develop and refine our existing far-wing formalism to provide an improved treatment applicable from the near-infrared through the microwave. Based on the results of this investigation, we will provide to the remote sensing/climate modeling community a practical and accurate tabulation of the continuum absorption covering the near-infrared through the microwave region of the spectrum for the range of temperatures and pressures of interest for atmospheric applications.
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there is substantial research on data processing with classical pattern matching and its improved algorithms, but little on frequent-pattern statistics of link-layer bit stream data. This paper adopts a frequent statistical method for link-layer bit stream data based on the AC-IM algorithm, because classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit stream data. The method increases the maximum jump distance of the pattern tree to the length of the shortest pattern string plus 3, without missing any matches. We first provide a theoretical analysis of the principle of the algorithm's construction; the experimental results then show that the algorithm adapts to the binary bit stream environment and extracts frequent sequences more accurately, with a clear effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm achieves a greater maximum jump distance and is less time-consuming.
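For orientation, a minimal classical Aho-Corasick counter over a binary alphabet is sketched below. This is the baseline that AC-IM improves upon; the AC-IM jump rules themselves are only outlined in the abstract and are not reproduced here.

```python
# Classical Aho-Corasick over a binary alphabet: build goto/fail/output
# structures, then count occurrences of every pattern in one pass.
from collections import deque

def build_ac(patterns):
    goto, fail, out = [{}], [0], [[]]
    for idx, pat in enumerate(patterns):
        s = 0
        for bit in pat:
            if bit not in goto[s]:
                goto.append({}); fail.append(0); out.append([])
                goto[s][bit] = len(goto) - 1
            s = goto[s][bit]
        out[s].append(idx)
    q = deque(goto[0].values())          # depth-1 nodes keep fail = 0
    while q:
        s = q.popleft()
        for bit, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and bit not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(bit, 0)
            out[t] += out[fail[t]]       # inherit suffix outputs
    return goto, fail, out

def count_matches(stream, patterns):
    goto, fail, out = build_ac(patterns)
    counts, s = [0] * len(patterns), 0
    for bit in stream:
        while s and bit not in goto[s]:
            s = fail[s]
        s = goto[s].get(bit, 0)
        for idx in out[s]:
            counts[idx] += 1
    return counts

print(count_matches("110101101", ["101", "11"]))  # -> [3, 2]
```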
Phase retrieval algorithm for JWST Flight and Testbed Telescope
NASA Astrophysics Data System (ADS)
Dean, Bruce H.; Aronstein, David L.; Smith, J. Scott; Shiri, Ron; Acton, D. Scott
2006-06-01
An image-based wavefront sensing and control algorithm for the James Webb Space Telescope (JWST) is presented. The algorithm heritage is discussed in addition to implications for algorithm performance dictated by NASA's Technology Readiness Level (TRL) 6. The algorithm uses feedback through an adaptive diversity function to avoid the need for phase-unwrapping post-processing steps. Algorithm results are demonstrated using JWST Testbed Telescope (TBT) commissioning data and the accuracy is assessed by comparison with interferometer results on a multi-wave phase aberration. Strategies for minimizing aliasing artifacts in the recovered phase are presented and orthogonal basis functions are implemented for representing wavefronts in irregular hexagonal apertures. Algorithm implementation on a parallel cluster of high-speed digital signal processors (DSPs) is also discussed.
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
NASA Astrophysics Data System (ADS)
Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.
2017-08-01
Energy-sensitive photon counting X-ray detectors provide energy-dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy-resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis materials whose spectrum is closest to the measurement, using a maximum likelihood criterion assuming a Poisson distribution of photon counts in each energy bin. The method was used with a ME100 linear-array spectrometric X-ray imager to decompose different plastic materials on a polyethylene and polyvinyl chloride basis. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent thicknesses makes it possible to process overlapped materials. Moreover, the method was tested with a three-material basis by adding gadolinium, whose K-edge is not accounted for by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. Although in principle two channels are sufficient, experimental measurements show that using a large number of channels significantly improves the accuracy of decomposition by reducing noise and systematic bias.
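A minimal sketch of the maximum-likelihood step is given below, assuming an idealized Beer-Lambert forward model per energy bin in place of the calibrated detector response the paper actually learns (SciPy assumed; all variable names are illustrative):

```python
# Poisson maximum-likelihood two-material decomposition. mu is a
# (n_bins, 2) array of linear attenuation coefficients for the PE/PVC
# basis, I0 the open-beam counts per bin, counts the measured spectrum.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(t, counts, I0, mu):
    lam = I0 * np.exp(-mu @ t)              # expected counts per bin
    return np.sum(lam - counts * np.log(lam))

def decompose(counts, I0, mu):
    res = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]),
                   args=(counts, I0, mu), bounds=[(0, None), (0, None)])
    return res.x                            # equivalent PE, PVC thicknesses
```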
Mathematical model and coordination algorithms for ensuring complex security of an organization
NASA Astrophysics Data System (ADS)
Novoseltsev, V. I.; Orlova, D. E.; Dubrovin, A. S.; Irkhin, V. P.
2018-03-01
A mathematical model of coordination for ensuring the complex security of an organization is considered. Using a random search method, three types of effective coordination algorithms, each matched to a level of security mismatch, are developed: a coordination algorithm in which the coordinator's instructions dominate; a coordination algorithm in which the performers' decisions dominate; and a coordination algorithm with parity between the interests of the coordinator and the performers. The convergence of these algorithms was assessed through a computational experiment. The described coordination algorithms possess convergence in the sense stated above, and the following regularity was revealed: the structurally simpler the algorithm, the fewer iterations it needs to converge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiaojun; Lei, Guangtsai; Pan, Guangwen
In this paper, the continuous operator is discretized into matrix form by Galerkin's procedure, using periodic Battle-Lemarie wavelets as basis/testing functions. The polynomial decomposition of wavelets is applied to the evaluation of matrix elements, which makes the computational effort of the matrix elements no more expensive than that of the method of moments (MoM) with conventional piecewise basis/testing functions. A new algorithm is developed employing the fast wavelet transform (FWT). Owing to the localization, cancellation, and orthogonality properties of wavelets, very sparse matrices have been obtained, which are then solved by the LSQR iterative method. This algorithm is also adaptive in that one can add at will finer wavelet bases in the regions where fields vary rapidly, without any damage to the system orthogonality of the wavelet basis functions. To demonstrate the effectiveness of the new algorithm, we applied it to the evaluation of frequency-dependent resistance and inductance matrices of multiple lossy transmission lines. Numerical results agree with previously published data and laboratory measurements. The valid frequency range of the boundary integral equation results has been extended by two to three decades in comparison with the traditional MoM approach. The new algorithm has been integrated into the computer-aided design tool MagiCAD, which is used for the design and simulation of high-speed digital systems and multichip modules.
Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.
Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R
2008-02-14
The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrodinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrodinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
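For context, one propagation step of the short iterative Lanczos scheme, whose errors the CSL method is said to roughly track, can be sketched as follows; the CSL conjugation trick itself is not shown.

```python
# One short-iterative-Lanczos step for a Hermitian H: build an
# m-dimensional Krylov basis, exponentiate the small tridiagonal
# projection, and assemble psi(t + dt).
import numpy as np
from scipy.linalg import expm

def lanczos_step(H, psi, dt, m=20):
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = psi / np.linalg.norm(psi)
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:       # Krylov space exhausted early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    small = expm(-1j * dt * T)[:, 0]  # propagate the first basis vector
    return np.linalg.norm(psi) * (V[:, :m] @ small)
```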
Research of centroiding algorithms for extended and elongated spot of sodium laser guide star
NASA Astrophysics Data System (ADS)
Shao, Yayun; Zhang, Yudong; Wei, Kai
2016-10-01
Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. However, the spot array obtained by Shack-Hartmann wavefront sensors (WFSs) becomes extended and elongated due to the thickness and finite size of the sodium LGS, which affects the accuracy of the wavefront reconstruction algorithm. In this paper, we compared three different centroiding algorithms, the Center-of-Gravity (CoG), weighted CoG (WCoG) and Intensity Weighted Centroid (IWC), as well as their accuracies for various extended and elongated spots. In addition, we compared the reconstructed image data from these three algorithms with theoretical results, and showed that WCoG and IWC are the best-performing algorithms for extended and elongated spots among those considered.
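In minimal numpy form, the three estimators compared look as follows; the intensity-squared weighting for IWC and the Gaussian window for WCoG follow common usage, since the abstract does not state the paper's exact parameter choices.

```python
import numpy as np

def cog(img):
    # Plain center of gravity of the spot image.
    y, x = np.indices(img.shape)
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

def wcog(img, x0, y0, sigma):
    # CoG with a Gaussian weighting window centered on a prior estimate.
    y, x = np.indices(img.shape)
    w = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return cog(img * w)

def iwc(img):
    # Intensity-weighted centroid: weight each pixel by I^2.
    return cog(img ** 2)
```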
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
Song, Jia; Zheng, Sisi; Nguyen, Nhung; Wang, Youjun; Zhou, Yubin; Lin, Kui
2017-10-03
Because phylogenetic inference is an important basis for answering many evolutionary problems, a large number of algorithms have been developed. Some of these algorithms have been improved by integrating gene evolution models with the expectation of accommodating the hierarchy of evolutionary processes. To the best of our knowledge, however, there is still no single unifying model or algorithm that can take all evolutionary processes into account through a stepwise or simultaneous method. On the basis of three existing phylogenetic inference algorithms, we built an integrated pipeline for inferring the evolutionary history of a given gene family; this pipeline can model gene sequence evolution, gene duplication-loss, gene transfer and multispecies coalescent processes. As a case study, we applied this pipeline to the STIMATE (TMEM110) gene family, which has recently been reported to play an important role in store-operated Ca2+ entry (SOCE) mediated by ORAI and STIM proteins. We inferred their phylogenetic trees in 69 sequenced chordate genomes. By integrating three tree reconstruction algorithms with diverse evolutionary models, a pipeline for inferring the evolutionary history of a gene family was developed, and its application was demonstrated.
Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis
NASA Astrophysics Data System (ADS)
Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg
2017-03-01
We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals, which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.
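The elimination of the virtual-orbital sums rests on the Laplace representation of the MP2 energy denominator, which factorizes the orbital indices; in LaTeX:

```latex
% Laplace representation of the MP2 denominator (valid because
% eps_a + eps_b - eps_i - eps_j > 0 for a gapped system):
\frac{1}{\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j}
  = \int_0^{\infty} e^{-(\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j)\tau}\,d\tau
  \approx \sum_{k=1}^{N_\tau} w_k\,
    e^{-(\varepsilon_a-\varepsilon_i)\tau_k}\,
    e^{-(\varepsilon_b-\varepsilon_j)\tau_k}.
```

The quadrature turns each denominator into a product of per-index exponentials, which is what allows the virtual-orbital summations to be absorbed into fast Fourier transforms in the plane wave basis.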
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
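The conventional MLEM update used for the coefficient estimation can be written for a generic linear Poisson model; when the object is re-parameterized by patch-based basis vectors, the system matrix is composed with the dictionary matrix. A minimal sketch, with A and y as illustrative dense stand-ins for the scanner model and measured counts:

```python
# MLEM for y ~ Poisson(A @ x): multiplicative update
# x <- x * (A^T (y / (A x))) / (A^T 1).
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)  # measured / expected counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```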
The Best m-Term Approximation and Greedy Algorithms
1997-01-01
In the paper [DKT], for a given basis {ψ_I}, the Greedy Algorithm G_p is defined as follows: expand f = Σ_I c_I(f) ψ_I, set c_I(f)_p = ||c_I(f) ψ_I||_p, and approximate f by the m terms with the largest values of c_I(f)_p. [DKT] R. A. DeVore, S. V. Konyagin and V. V. Temlyakov, Hyperbolic Wavelet Approximation, to appear. [DL] R. DeVore and G. Lorentz.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.; Bassuony, M. A.
2013-03-01
This paper is concerned with spectral Galerkin algorithms for solving high even-order two-point boundary value problems in one dimension subject to homogeneous and nonhomogeneous boundary conditions. The proposed algorithms are extended to solve two-dimensional high even-order differential equations. The key to the efficiency of these algorithms is to construct compact combinations of Chebyshev polynomials of the third and fourth kinds as basis functions. The algorithms lead to linear systems with specially structured matrices that can be efficiently inverted. Numerical examples are included to demonstrate the validity and applicability of the proposed algorithms, and some comparisons with other methods are made.
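For reference, the third- and fourth-kind polynomials share the standard Chebyshev recurrence and differ only in the degree-one seed (per Mason and Handscomb); a minimal evaluation sketch:

```python
# Chebyshev polynomials of the third (V_n) and fourth (W_n) kinds:
# same recurrence as T_n, seeds V_1(x) = 2x - 1 and W_1(x) = 2x + 1.
def cheb_third_fourth(n, x):
    V, W = [1.0, 2 * x - 1], [1.0, 2 * x + 1]
    for _ in range(2, n + 1):
        V.append(2 * x * V[-1] - V[-2])
        W.append(2 * x * W[-1] - W[-2])
    return V[n], W[n]

# Sanity check at x = 1: V_n(1) = 1 and W_n(1) = 2n + 1.
print([cheb_third_fourth(n, 1.0) for n in range(4)])
```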
NASA Technical Reports Server (NTRS)
Shultz, Christopher J.; Carey, Lawrence D.; Schultz, Elise V.; Stano, Geoffrey T.; Blakeslee, Richard J.; Goodman, Steven J.
2014-01-01
The presence and rates of total lightning are both correlated to and physically dependent upon storm updraft strength, mixed-phase precipitation volume and the size of the charging zone. The updraft modulates the ingredients necessary for electrification within a thunderstorm, and it also plays a critical role in the development of severe and hazardous weather. Therefore, utilizing this relationship, the monitoring of lightning rates and jumps provides an additional piece of information on the evolution of a thunderstorm, more often than not at higher temporal resolution than current operational radar systems. This correlation is the basis for the total lightning jump algorithm that has been developed in recent years. Currently, the lightning jump algorithm is being tested in two separate but important efforts. Schultz et al. (2014; AMS 10th Satellite Symposium) are exploring the transition of the algorithm from its research-based formulation to a fully objective algorithm that includes storm tracking, Geostationary Lightning Mapper (GLM) proxy data and the lightning jump algorithm. Chronis et al. (2014; this conference) provide context for the transition to current operational forecasting using lightning mapping array based products. What remains missing, however, is an end-to-end physical and dynamical basis for relating lightning rates to severe storm manifestation, so that the forecaster has a reason beyond simple correlation to utilize the lightning jump algorithm within their severe storm conceptual models. Therefore, the physical basis for the lightning jump algorithm in relation to severe storm dynamics and microphysics is a key component that must be further explored. Many radar studies have examined flash rates and their relation to updraft strength, updraft volume, precipitation-sized ice mass, etc.; however, their relation specifically to lightning jumps is fragmented within the literature. Thus, the goal of this study is to use multiple-Doppler techniques to resolve the physical and dynamical storm characteristics specifically around the time of the lightning jump. This information will help forecasters anticipate lightning jump occurrence, or even be of use in determining future characteristics of a given storm (e.g., development of a mesocyclone, downdraft, or hail signature on radar), providing additional lead time and confidence in the severe storm warning paradigm.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods, such as first order perturbation theory or Monte Carlo sampling, Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time, since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15-20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
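A one-dimensional NISP projection illustrates the basic coefficient estimate underlying such methods; the adaptive sparse-grid and basis-selection machinery of FANISP is not reproduced in this hedged sketch.

```python
# 1-D NISP: project a response f(xi), xi ~ N(0,1), onto probabilists'
# Hermite polynomials He_k using Gauss-Hermite quadrature.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def nisp_coefficients(f, order, n_quad=40):
    x, w = hermegauss(n_quad)            # nodes/weights for exp(-x^2/2)
    w = w / np.sqrt(2 * np.pi)           # normalize to the N(0,1) density
    coeffs = []
    for k in range(order + 1):
        Hk = hermeval(x, [0] * k + [1])  # He_k at the quadrature nodes
        norm = float(math.factorial(k))  # E[He_k^2] = k!
        coeffs.append(np.sum(w * f(x) * Hk) / norm)
    return np.array(coeffs)

print(nisp_coefficients(lambda xi: xi**2 + xi, order=3))  # ~ [1, 1, 1, 0]
```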
Fast and Exact Continuous Collision Detection with Bernstein Sign Classification
Tang, Min; Tong, Ruofeng; Wang, Zhendong; Manocha, Dinesh
2014-01-01
We present fast algorithms to perform accurate CCD queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bézier curves and reduces the problem to evaluating signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. Our algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits. PMID:25568589
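The Bernstein form underpins such sign tests: de Casteljau evaluation is numerically stable, and the convex-hull property means uniform coefficient signs certify the polynomial's sign on the whole interval. A minimal sketch; the paper's exact arithmetic filters are not reproduced.

```python
def de_casteljau(coeffs, t):
    # Evaluate a polynomial in Bernstein form on [0, 1] at t.
    b = list(coeffs)
    for r in range(1, len(b)):
        for i in range(len(b) - r):
            b[i] = (1 - t) * b[i] + t * b[i + 1]
    return b[0]

def sign_certificate(coeffs):
    # Same sign for all Bernstein coefficients => same sign on [0, 1].
    if all(c > 0 for c in coeffs): return +1
    if all(c < 0 for c in coeffs): return -1
    return 0  # inconclusive: a full algorithm subdivides and recurses

print(de_casteljau([1.0, -2.0, 3.0], 0.5))  # quadratic in Bernstein form
```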
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained. Its performance is compared to that of other existing methods for financial option valuation. In the third and final problem, the TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new-generation passenger vehicles. These uncertainties are analyzed in order to determine the conditions in which new-generation vehicles can compete with established technologies.
Test of the Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1997-01-01
The algorithm-development activities at USF during the second half of 1997 have concentrated on data collection and theoretical modeling. Six abstracts were submitted for presentation at the AGU conference in San Diego, California during February 9-13, 1998. Four papers were submitted to JGR and Applied Optics for publication.
NASA Astrophysics Data System (ADS)
Chernyavskiy, Andrey; Khamitov, Kamil; Teplov, Alexey; Voevodin, Vadim; Voevodin, Vladimir
2016-10-01
In recent years, quantum information technologies (QIT) have shown great development; however, their implementation faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part includes features such as sequential and parallel complexity, macro structure, and a visual information graph. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation.
Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics
NASA Technical Reports Server (NTRS)
Rohrs, C. E.; Valavani, L.; Athans, M.; Stein, G.
1985-01-01
This paper examines the robustness properties of existing adaptive control algorithms to unmodeled plant high-frequency dynamics and unmeasurable output disturbances. It is demonstrated that there exist two infinite-gain operators in the nonlinear dynamic system which determines the time-evolution of output and parameter errors. The pragmatic implications of the existence of such infinite-gain operators is that: (1) sinusoidal reference inputs at specific frequencies and/or (2) sinusoidal output disturbances at any frequency (including dc), can cause the loop gain to increase without bound, thereby exciting the unmodeled high-frequency dynamics, and yielding an unstable control system. Hence, it is concluded that existing adaptive control algorithms as they are presented in the literature referenced in this paper, cannot be used with confidence in practical designs where the plant contains unmodeled dynamics because instability is likely to result. Further understanding is required to ascertain how the currently implemented adaptive systems differ from the theoretical systems studied here and how further theoretical development can improve the robustness of adaptive controllers.
Single-snapshot DOA estimation by using Compressed Sensing
NASA Astrophysics Data System (ADS)
Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin
2014-12-01
This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS), i.e., the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-Based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES), are analyzed, and their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
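As a hedged illustration of the ℓ1 route, the sketch below builds a steering-vector grid for a uniform linear array and solves the LASSO problem with plain ISTA; it is a generic stand-in, and the paper's specific solvers (LASSO, smoothed-ℓ0, SPICE, IAA-APES) are not reproduced.

```python
# Single-snapshot sparse DOA: min ||y - A s||^2 / 2 + lam * ||s||_1,
# solved with the iterative shrinkage-thresholding algorithm (ISTA).
import numpy as np

def steering_grid(n_sensors, angles_deg, d=0.5):
    # Half-wavelength-spaced uniform linear array steering vectors.
    k = np.arange(n_sensors)[:, None]
    sin_t = np.sin(np.deg2rad(angles_deg))[None, :]
    return np.exp(2j * np.pi * d * k * sin_t)

def ista_doa(y, A, lam=0.1, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = s + (A.conj().T @ (y - A @ s)) / L
        # Complex soft threshold: shrink magnitudes, keep phases.
        s = np.maximum(np.abs(g) - lam / L, 0) * np.exp(1j * np.angle(g))
    return np.abs(s)                         # peaks mark the DOAs
```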
The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.
1982-01-01
Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual-basis (255, 223) RS code over a Galois field is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
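The appeal of the bit-serial approach is that field multiplication reduces to shifts and XORs. The generic GF(2^8) multiply below illustrates this; the modulus 0x11D is a common choice used here for illustration only, and the paper's dual-basis formulation and its field polynomial are not reproduced.

```python
# Shift-and-XOR multiplication in GF(2^8), the field arithmetic an RS
# encoder needs. poly = 0x11D encodes x^8 + x^4 + x^3 + x^2 + 1 and is
# an illustrative assumption, not necessarily the paper's polynomial.
def gf256_mul(a, b, poly=0x11D):
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a        # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly     # reduce modulo the field polynomial
    return p

print(hex(gf256_mul(0x53, 0xCA)))
```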
NASA Astrophysics Data System (ADS)
Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.
2011-11-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role in diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires automatic quantitation of HRMAS 1H signals. However, for several metabolites, the chemical shift values of proton groups may differ slightly according to the micro-environment in the tissue or cells, in particular its pH. This hampers the accurate estimation of metabolite concentrations, mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are no longer correct. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
Phase unwrapping in three dimensions with application to InSAR time series.
Hooper, Andrew; Zebker, Howard A
2007-09-01
The problem of phase unwrapping in two dimensions has been studied extensively in the past two decades, but the three-dimensional (3D) problem has so far received relatively little attention. We develop here a theoretical framework for 3D phase unwrapping and also describe two algorithms for implementation, both of which can be applied to synthetic aperture radar interferometry (InSAR) time series. We test the algorithms on simulated data and find both give more accurate results than a two-dimensional algorithm. When applied to actual InSAR time series, we find good agreement both between the algorithms and with ground truth.
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x, x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
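For reference, the baseline random-walk Metropolis step that the paper's adaptive scheme modifies can be sketched as follows; the product-distribution update itself is not shown.

```python
# Random-walk Metropolis for a target density known up to a constant:
# propose symmetrically, accept with probability min(1, pi(x')/pi(x)).
import numpy as np

def metropolis(log_pi, x0, n_steps, step=0.5, rng=np.random.default_rng(0)):
    x, samples = np.asarray(x0, dtype=float), []
    lp = log_pi(x)
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        lp_prop = log_pi(prop)
        if np.log(rng.random()) < lp_prop - lp:         # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a standard 2-D Gaussian.
chain = metropolis(lambda z: -0.5 * np.sum(z**2), np.zeros(2), 5000)
print(chain.mean(axis=0), chain.std(axis=0))
```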
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index to measure the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints in system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
Institute for Defense Analysis. Annual Report 1995.
1995-01-01
staff have been involved in the community-wide development of MPI as well as in its application to specific NSA problems. 35 Parallel Groebner ...Basis Code — Symbolic Computing on Parallel Machines The Groebner basis method is a set of algorithms for reformulating very complex algebraic expres
Observation of quantum criticality with ultracold atoms in optical lattices
NASA Astrophysics Data System (ADS)
Zhang, Xibo
As biological problems become more complex and data grow at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two such fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address them. One fundamental question in the study of chromosome evolution is whether rearrangement breakpoints occur at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon and present analyses that support the more recently proposed fragile breakage model, as opposed to the conventional random breakage models, for chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architectures, comparative genomics, and evolutionary genomics. Previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on the A-Bruijn graph framework that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes, and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework. A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach and apply the model to the detection of allele-specific methylation.
A comparison of VLSI architecture of finite field multipliers using dual, normal or standard basis
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
Three different finite field multipliers are presented: (1) a dual basis multiplier due to Berlekamp; (2) a Massey-Omura normal basis multiplier; and (3) the Scott-Tavares-Peppard standard basis multiplier. These algorithms are chosen because each has its own distinct features which apply most suitably in different areas. Finally, they are implemented on silicon chips with nitride metal oxide semiconductor technology so that the multiplier most desirable for very large scale integration implementations can readily be ascertained.
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms are presented to enhance infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement in the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs
Zheng, Yu; Yang, Yang; Chen, Wu
2017-01-01
In this paper, a novel range compression algorithm for enhancing range resolutions of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, firstly range compression is carried out by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results for suppressing side lobes to obtain a final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm. PMID:28672830
Control Coordination of Multiple Agents Through Decision Theoretic and Economic Methods
2003-02-01
...investigated the design of test data for benchmarking such optimization algorithms. Our other research on combinatorial auctions included I...average combination rule. We exemplified these theoretical results with experiments on stock market data, demonstrating how ensembles of classifiers can
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such a loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. The numerical experiments implemented on synthetic examples and a real data set also support our theoretical results.
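A minimal sketch of the general setting: gradient descent in an RKHS with a robust, Welsch-type windowed loss and early stopping as the regularizer. The Gaussian kernel, step size, choice of σ, and iteration count are illustrative assumptions, not the estimator analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, width=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def robust_kernel_gd(X, y, sigma=1.0, step=0.5, n_iter=50):
    """Gradient descent on a Welsch-type robust loss
    l(r) = sigma^2 * (1 - exp(-r^2 / (2 sigma^2))) over an RKHS;
    early stopping (small n_iter) acts as the regularizer."""
    K = gaussian_kernel(X, X)
    n = len(y)
    alpha = np.zeros(n)                    # f = sum_j alpha_j k(x_j, .)
    for _ in range(n_iter):
        r = K @ alpha - y                  # residuals
        weights = np.exp(-r ** 2 / (2 * sigma ** 2))
        grad = (weights * r) / n           # derivative of the windowed loss
        alpha -= step * grad               # functional gradient step
    return alpha

# Toy usage with an outlier-contaminated sample.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)
y[:5] += 5.0                               # gross outliers
alpha = robust_kernel_gd(X, y)
```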
Optimization and experimental realization of the quantum permutation algorithm
NASA Astrophysics Data System (ADS)
Yalçınkaya, I.; Gedik, Z.
2017-12-01
The quantum permutation algorithm provides computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n-qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n-qubit case that the algorithm can be simplified so that it requires only O(n) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability, which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case with a considerable success probability by taking advantage of our simplified scheme.
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.
Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng
2017-09-08
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
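A minimal sketch of a dynamic-programming seam search on a single-channel energy map, the textbook formulation with a three-neighbor recursion and backtracking; it is not the stereo dual-channel energy accumulation and traversal strategy proposed in the paper, and the pixel-difference energy used below is an illustrative assumption.

```python
import numpy as np

def find_seam(energy):
    """Classic single-channel DP seam: for each row, return the column of the
    minimum-cost vertical path through the energy map (e.g., an overlap-region
    difference image in stitching)."""
    rows, cols = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, rows):
        left = np.r_[np.inf, cost[r - 1, :-1]]
        up = cost[r - 1]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, up), right)
    # Backtrack from the cheapest bottom-row cell.
    seam = np.zeros(rows, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(rows - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam

# Toy usage on a random difference image standing in for the overlap region.
overlap_diff = np.abs(np.random.rand(100, 80) - np.random.rand(100, 80))
print(find_seam(overlap_diff)[:5])
```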
Advanced Concepts Theory Annual Report 1983.
1984-05-18
variety of theoretical models, tools, and computational strategies to understand, guide, and predict the behavior of high brightness, laboratory x-ray... theoretical models must treat hard and soft x-ray emission from different electron configurations with K, L, and M shells, and they must include... theoretical effort has been devoted to elucidating the effects of opacity on the numerical results, providing a basis for comprehending the trends which appear in the...
Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko
2017-07-01
Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1977-01-01
The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
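A minimal sketch of the same pipeline using modern library routines: triangulate the scattered (x, y) sites, then interpolate over the triangles with a C1-continuous piecewise-cubic scheme. This uses SciPy's Delaunay triangulation and Clough-Tocher interpolant as a stand-in, not Lawson's original algorithm; the test function is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))          # scattered (x, y) data sites
vals = np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])

tri = Delaunay(pts)                              # triangular grid over the domain
interp = CloughTocher2DInterpolator(tri, vals)   # piecewise-cubic, C1 surface

print(interp(0.5, 0.5))                          # smooth value at a query point
```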
Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.
2016-01-01
Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693
NASA Technical Reports Server (NTRS)
Kao, M. H.; Bodenheimer, R. E.
1976-01-01
The tse computer's capability of achieving image congruence between temporal and multiple images with misregistration due to rotational differences is reported. The coordinate transformations are obtained, and a general algorithm is devised to perform image rotation using tse operations very efficiently. The details of this algorithm as well as its theoretical implications are presented. Step-by-step procedures of image registration are described in detail. Numerous examples are also employed to demonstrate the correctness and effectiveness of the algorithm, and conclusions and recommendations are made.
Formation of the predicted training parameters in the form of a discrete information stream
NASA Astrophysics Data System (ADS)
Smolentseva, T. E.; Sumin, V. I.; Zolnikov, V. K.; Lavlinsky, V. V.
2018-03-01
In this work, the training process is considered in the form of a discrete information stream. At each stage of the process, the portions of training information and the quality of their assimilation are analysed. Individual characteristics and the trainee's response to each portion of information in the corresponding sections are determined. A training control algorithm with a predicted number of control checks of the trainee is considered, which allows determining what control action should be applied to the trainee. On the basis of this algorithm, a vector of probabilities that elements of the training information remain unlearned is obtained. As a result of the research, an algorithm for forming the predicted training parameters is developed. The training duration obtained experimentally is compared with the duration predicted on this basis, and a conclusion is drawn about the efficiency of forming the predicted training parameters. A software complex is developed on the basis of the individual parameter values obtained from experiments with each trainee; it allows individual characteristics to be calculated, ratings to be formed, and changes in the training parameters to be monitored.
A combined emitter threat assessment method based on ICW-RCM
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing
2017-08-01
Considering that traditional emitter threat assessment methods have difficulty reflecting the degree of target threat intuitively and suffer from deficiencies in real-time performance and complexity, an emitter combined threat assessment algorithm based on ICW-RCM (an improved combination weighting method, ICW, combined with the radar chart method, RCM) is proposed. Coarse sorting is integrated with fine sorting in the combined assessment: emitter threat levels are first sorted roughly according to radar operation mode, reducing the task priority of low-threat emitters; emitters with the same radar operation mode are then sorted on the basis of ICW-RCM, and the final emitter threat assessment is obtained through the combined coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical emitter threat assessment method based on CW-RCM, the proposed algorithm is visually intuitive and works quickly with lower complexity.
Perioperative fluid therapy: defining a clinical algorithm between insufficient and excessive.
Strunden, Mike S; Tank, Sascha; Kerner, Thoralf
2016-12-01
In the perioperative scenario, adequate fluid and volume therapy is a challenging task. Despite improved knowledge on the physiology of the vascular barrier function and its respective pathophysiologic disturbances during the perioperative process, clear-cut therapeutic principles are difficult to implement. Neglecting the physiologic basis of the vascular barrier and the cardiovascular system, numerous studies proclaiming different approaches to fluid and volume therapy do not provide a rationale, as various surgical and patient risk groups, and different fluid regimens combined with varying hemodynamic measures and variable algorithms, led to conflicting results. This review refers to the physiologic basis and answers questions inseparably conjoined to a rational approach to perioperative fluid and volume therapy: Why does fluid get lost from the vasculature perioperatively? Whereto does it get lost? Based on current findings and rational considerations, which fluid replacement algorithm could be implemented into clinical routine?
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
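In general terms (a schematic, not the AFQMC-specific estimator), the force on nucleus I combines a Hellmann-Feynman term with a Pulay correction that appears whenever the basis functions move with the atomic positions:

```latex
\mathbf{F}_I \;=\; -\frac{\mathrm{d}E}{\mathrm{d}\mathbf{R}_I}
\;=\; -\Big\langle \Psi \Big|\, \frac{\partial \hat{H}}{\partial \mathbf{R}_I} \,\Big| \Psi \Big\rangle
\;-\; 2\,\mathrm{Re}\,\Big\langle \frac{\partial \Psi}{\partial \mathbf{R}_I} \Big|\, \hat{H} - E \,\Big| \Psi \Big\rangle .
```

The second (Pulay) term vanishes for a complete basis or one that does not depend on the atomic positions, which is why it must be retained when atom-centered orbital basis sets are used.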
Soft-output decoding algorithms in iterative decoding of turbo codes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.
1996-01-01
In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables, and two further approximations (linear and threshold), with a very small penalty, to eliminate the need for lookup tables are proposed.
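The lookup table in log-domain MAP decoding is typically the correction term of the max* (Jacobian logarithm) operation; a sketch of the exact form and of linear and threshold approximations of the kind mentioned above is given below. The specific constants are illustrative assumptions, not the values proposed in the article.

```python
import numpy as np

def max_star_exact(a, b):
    """max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_star_linear(a, b, slope=0.25, offset=0.7):
    """Linear approximation of the correction term (illustrative constants)."""
    corr = max(offset - slope * abs(a - b), 0.0)
    return max(a, b) + corr

def max_star_threshold(a, b, thresh=1.5, corr=0.35):
    """Threshold approximation: a constant correction below a threshold."""
    return max(a, b) + (corr if abs(a - b) < thresh else 0.0)

for f in (max_star_exact, max_star_linear, max_star_threshold):
    print(f.__name__, round(f(1.0, 0.2), 3))
```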
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from the differential evolution and the artificial bee colony algorithms. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
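A minimal sketch of the standard bat algorithm core loop (Yang's frequency, velocity, and position updates with a local random walk) applied to a generic objective; the multilevel-thresholding objective itself (e.g., a between-class variance over candidate thresholds) and the hybrid DE/ABC modifications of the paper are not included, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_minimize(objective, dim, n_bats=20, n_iter=200,
                 f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5,
                 lower=0.0, upper=255.0):
    """Standard bat algorithm core loop; a sketch, not the improved hybrid
    variant described in the paper."""
    x = rng.uniform(lower, upper, size=(n_bats, dim))
    v = np.zeros((n_bats, dim))
    fitness = np.array([objective(b) for b in x])
    best = x[np.argmin(fitness)].copy()
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lower, upper)
            if rng.random() > pulse_rate:        # local random walk around the best bat
                cand = np.clip(best + 0.01 * (upper - lower) * rng.standard_normal(dim),
                               lower, upper)
            f_cand = objective(cand)
            if f_cand < fitness[i] and rng.random() < loudness:
                x[i], fitness[i] = cand, f_cand
            if f_cand < objective(best):
                best = cand.copy()
    return best

# Toy usage: minimize a separable quadratic over a threshold-like range.
print(bat_minimize(lambda t: np.sum((t - 100.0) ** 2), dim=3))
```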
SAR Processing Based On Two-Dimensional Transfer Function
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung; Jin, Michael Y.; Curlander, John C.
1994-01-01
The exact transfer function (ETF) is a two-dimensional transfer function that constitutes the basis of an improved frequency-domain-convolution algorithm for processing synthetic-aperture-radar (SAR) data. The ETF incorporates terms that account for the Doppler effect of the motion of the radar relative to the scanned ground area and for the antenna squint angle. An algorithm based on the ETF outperforms others.
Comparison of the MPP with other supercomputers for LANDSAT data processing
NASA Technical Reports Server (NTRS)
Ozga, Martin
1987-01-01
The massively parallel processor is compared to the CRAY X-MP and the CYBER-205 for LANDSAT data processing. The maximum likelihood classification algorithm is the basis for comparison since this algorithm is simple to implement and vectorizes very well. The algorithm was implemented on all three machines and tested by classifying the same full scene of LANDSAT multispectral scan data. Timings are compared as well as features of the machines and available software.
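A minimal per-pixel Gaussian maximum likelihood classifier of the kind used for multispectral classification; class statistics come from labeled training pixels, and the heavy, easily vectorized per-pixel arithmetic is what maps well onto the machines compared. The band count, class priors (taken as equal), and toy data are illustrative assumptions.

```python
import numpy as np

def train_ml(train_pixels, labels):
    """Estimate per-class mean vectors and covariance matrices."""
    stats = {}
    for c in np.unique(labels):
        X = train_pixels[labels == c]
        stats[c] = (X.mean(axis=0), np.cov(X, rowvar=False))
    return stats

def classify_ml(pixels, stats):
    """Assign each pixel (row of band values) to the class with the highest
    Gaussian log-likelihood; fully vectorized over pixels."""
    scores, classes = [], sorted(stats)
    for c in classes:
        mu, cov = stats[c]
        diff = pixels - mu
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (maha + logdet))
    return np.array(classes)[np.argmax(scores, axis=0)]

# Toy usage with 4 spectral bands and 2 classes.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(50, 5, (100, 4)), rng.normal(80, 5, (100, 4))])
labels = np.r_[np.zeros(100, int), np.ones(100, int)]
print(classify_ml(rng.normal(80, 5, (5, 4)), train_ml(train, labels)))
```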
Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine
2009-03-05
In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is then boosted by the Fuzzy Min-Max technique.
Optimization Techniques for Clustering,Connectivity, and Flow Problems in Complex Networks
2012-10-01
discrete optimization and for analysis of performance of algorithm portfolios; introducing a metaheuristic framework of variable objective search that...The results of empirical evaluation of the proposed algorithm are also included. 1.3 Theoretical analysis of heuristics and designing new metaheuristic ...analysis of heuristics for inapproximable problems and designing new metaheuristic approaches for the problems of interest; (IV) Developing new models
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze...allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with...There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
The K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as a large amount of sample computation and a strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (quick k-nearest neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity calculation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples to improve the performance of the algorithm.
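A minimal sketch of the two ideas in sequence: first shrink the training set to a small number of representative samples per class (k-means centroids are used here as a simple stand-in for CURE's representatives), then answer queries by a k-NN vote against the reduced set only. The reduction method, representative count, and toy data are illustrative assumptions, not the QKNN implementation.

```python
import numpy as np

def reduce_samples(X, y, reps_per_class=10, n_iter=20, seed=0):
    """Replace each class by a few representative points (k-means centroids
    as a simple stand-in for CURE's representatives)."""
    rng = np.random.default_rng(seed)
    reps, rep_labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        centers = Xc[rng.choice(len(Xc), reps_per_class, replace=False)]
        for _ in range(n_iter):
            assign = np.argmin(((Xc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(reps_per_class):
                if np.any(assign == j):
                    centers[j] = Xc[assign == j].mean(axis=0)
        reps.append(centers)
        rep_labels.append(np.full(reps_per_class, c))
    return np.vstack(reps), np.concatenate(rep_labels)

def qknn_predict(x, reps, rep_labels, k=3):
    """k-NN vote against the reduced representative set only."""
    d = ((reps - x) ** 2).sum(axis=1)
    nearest = rep_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy usage: two well-separated classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.r_[np.zeros(200, int), np.ones(200, int)]
reps, rl = reduce_samples(X, y)
print(qknn_predict(np.array([3.8, 4.2]), reps, rl))   # -> 1
```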
Evidence-based ergonomics. A comparison of Japanese and American office layouts.
Noro, Kageyu; Fujimaki, Goroh; Kishi, Shinsuke
2003-01-01
There is a variety of alternatives in office layouts. Yet the theoretical basis and criteria for predicting how well these layouts accommodate employees are poorly understood. The objective of this study was to evaluate criteria for selecting office layouts. Intensive computer workers worked in simulated office layouts in a controlled experimental laboratory. Eye movement measures indicate that knowledge work requires both concentration and interaction. Findings pointed to one layout as providing optimum balance between these 2 requirements. Recommendations for establishing a theoretical basis and design criteria for selecting office layouts based on work style are suggested.
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.
Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart
2010-01-01
Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, the Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm, are compared and investigated in three different imaging scenarios: brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case-dependent; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance the security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
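A minimal sketch of key-stream image encryption driven by a chaotic sequence: the chaotic state is iterated, quantized to bytes, and XORed with the pixels. A logistic map is used here as a simple stand-in for the fractional-order hyperchaotic Lorenz generator, and the permutation/diffusion stages of a full scheme are omitted; the map parameters acting as the key are illustrative assumptions.

```python
import numpy as np

def chaotic_keystream(n, x0=0.3567, r=3.99, burn_in=1000):
    """Byte key stream from a logistic map (illustrative stand-in for the
    fractional-order hyperchaotic Lorenz generator); x0 and r act as the key."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    stream = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        stream[i] = int(x * 256) % 256
    return stream

def encrypt(image, key=(0.3567, 3.99)):
    ks = chaotic_keystream(image.size, *key).reshape(image.shape)
    return image ^ ks                 # XOR confusion; decryption is identical

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)   # same key stream restores the image
```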
Study on some useful Operators for Graph-theoretic Image Processing
NASA Astrophysics Data System (ADS)
Moghani, Ali; Nasiri, Parviz
2010-11-01
In this paper, we describe a human-perception-based approach to pixel color segmentation, which is applied to color reconstruction by a numerical method associated with graph-theoretic image processing algorithms that typically operate in grayscale. Fuzzy sets defined on the hue, saturation, and value components of the HSV color space provide a fuzzy logic model that aims to follow the human intuition of color classification.
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
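A minimal majorization-minimization sketch with the sparse basis fixed to the identity, so no reconstruction step is needed: a concave log penalty is majorized by a weighted l1 term at each iterate, and every MM step reduces to reweighted soft thresholding that keeps only near-impulsive samples. The penalty, its parameters, and the toy signal are illustrative assumptions, not the feature-preserving objective of the paper.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mm_sparse_extract(signal, lam=0.3, eps=0.05, n_iter=30):
    """MM for min_x 0.5*||signal - x||^2 + lam * sum_i log(eps + |x_i|):
    at each iterate the concave log penalty is majorized by a weighted l1
    term, so every MM step is a reweighted soft-thresholding."""
    x = signal.copy()
    for _ in range(n_iter):
        weights = lam / (eps + np.abs(x))    # l1 weights of the majorizing surrogate
        x = soft_threshold(signal, weights)  # exact minimizer of the surrogate
    return x

# Toy usage: periodic fault impulses buried in noise.
rng = np.random.default_rng(0)
clean = np.zeros(2000)
clean[::250] = 3.0                           # transient impulses
noisy = clean + 0.3 * rng.standard_normal(2000)
sparse = mm_sparse_extract(noisy)
print(np.flatnonzero(sparse)[:8])            # surviving samples cluster at the impulses
```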
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Kang, Kwan Hyoung
2012-04-01
Contact angle (CA) hysteresis is important in many natural and engineering wetting processes, but predicting it numerically is difficult. We developed an algorithm that considers CA hysteresis when analyzing the motion of the contact line (CL). This algorithm employs feedback control of the CA, which decelerates the CL speed to make the CL stationary in the hysteretic range of CA; one control coefficient should be heuristically determined depending on the characteristic time of the simulated system. The algorithm requires embedding only a simple additional routine with little modification of a code which considers the dynamic CA. The method is non-iterative and explicit, and also has less computational load than other algorithms. For a drop hanging on a wire, the proposed algorithm accurately predicts the theoretical equilibrium CA. For a drop impacting on a dry surface, the results of the proposed algorithm agree well with experimental results, including the intermittent occurrence of pinning of the CL. The proposed algorithm is as accurate as other algorithms, but faster.
Separation analysis, a tool for analyzing multigrid algorithms
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1995-01-01
The separation of vectors by multigrid (MG) algorithms is applied to the study of convergence and to the prediction of the performance of MG algorithms. The separation operator for a two-level cycle algorithm is derived. It is used to analyze the efficiency of the cycle when mixing of eigenvectors occurs. In particular cases the separation analysis reduces to Fourier-type analysis. The separation operator of a two-level cycle for a Schrödinger eigenvalue problem is derived and analyzed in a Fourier basis. Separation analysis gives information on how to choose relaxations and inter-level transfers. Separation analysis is a tool for analyzing and designing algorithms, and for optimizing their performance.
The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration
NASA Astrophysics Data System (ADS)
Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.
2017-03-01
In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and study the key technology of the planned QiTai radio Telescope (QTT), the receiver group of XAO studied a GPU (Graphics Processing Unit) based real-time FRB searching algorithm, which was developed from the original FRB searching algorithm based on the CPU (Central Processing Unit), and built the FRB real-time searching system. The comparison of the GPU system and the CPU system shows that, on the basis of ensuring the accuracy of the search, the speed of the GPU-accelerated algorithm is improved by 35-45 times compared with the CPU algorithm.
Francis, Jill J; O'Connor, Denise; Curran, Janet
2012-04-24
Behaviour change is key to increasing the uptake of evidence into healthcare practice. Designing behaviour-change interventions first requires problem analysis, ideally informed by theory. Yet the large number of partly overlapping theories of behaviour makes it difficult to select the most appropriate theory. The need for an overarching theoretical framework of behaviour change was addressed in research in which 128 explanatory constructs from 33 theories of behaviour were identified and grouped. The resulting Theoretical Domains Framework (TDF) appears to be a helpful basis for investigating implementation problems. Research groups in several countries have conducted TDF-based studies. It seems timely to bring together the experience of these teams in a thematic series to demonstrate further applications and to report key developments. This overview article describes the TDF, provides a brief critique of the framework, and introduces this thematic series. In a brief review to assess the extent of TDF-based research, we identified 133 papers that cite the framework. Of these, 17 used the TDF as the basis for empirical studies to explore health professionals' behaviour. The identified papers provide evidence of the impact of the TDF on implementation research. Two major strengths of the framework are its theoretical coverage and its capacity to elicit beliefs that could signify key mediators of behaviour change. The TDF provides a useful conceptual basis for assessing implementation problems, designing interventions to enhance healthcare practice, and understanding behaviour-change processes. We discuss limitations and research challenges and introduce papers in this series.
A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac
2012-11-01
In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. As opposed to the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings by showing concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, known to produce uniform sampling.
Collegial Activity Learning between Heterogeneous Sensors.
Feuz, Kyle D; Cook, Diane J
2017-11-01
Activity recognition algorithms have matured and become more ubiquitous in recent years. However, these algorithms are typically customized for a particular sensor platform. In this paper we introduce PECO, a Personalized activity ECOsystem, that transfers learned activity information seamlessly between sensor platforms in real time so that any available sensor can continue to track activities without requiring its own extensive labeled training data. We introduce a multi-view transfer learning algorithm that facilitates this information handoff between sensor platforms and provide theoretical performance bounds for the algorithm. In addition, we empirically evaluate PECO using datasets that utilize heterogeneous sensor platforms to perform activity recognition. These results indicate that not only can activity recognition algorithms transfer important information to new sensor platforms, but any number of platforms can work together as colleagues to boost performance.
Physical and Dynamical Linkages between Lightning Jumps and Storm Conceptual Models
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Lawrence D.; Schultz, Elise V.; Blakeslee, Richard J.; Goodman, Steven J.
2014-01-01
The presence and rates of total lightning are both correlated to and physically dependent upon storm updraft strength, mixed phase precipitation volume and the size of the charging zone. The updraft modulates the ingredients necessary for electrification within a thunderstorm, while the updraft also plays a critical role in the development of severe and hazardous weather. Therefore utilizing this relationship, the monitoring of lightning rates and jumps provides an additional piece of information on the evolution of a thunderstorm, more often than not, at higher temporal resolution than current operational radar systems. This correlation is the basis for the total lightning jump algorithm that has been developed in recent years. Currently, the lightning jump algorithm is being tested in two separate but important efforts. Schultz et al. (2014; this conference) is exploring the transition of the algorithm from its research based formulation to a fully objective algorithm that includes storm tracking, Geostationary Lightning Mapper (GLM) Proxy data and the lightning jump algorithm. Chronis et al. (2014; this conference) provides context for the transition to current operational forecasting using lightning mapping array based products. However, what remains is an end-to-end physical and dynamical basis for coupling total lightning flash rates to severe storm manifestation, so the forecaster has a reason beyond simple correlation to utilize the lightning jump algorithm within their severe storm conceptual models. Therefore, the physical basis for the lightning jump algorithm in relation to severe storm dynamics and microphysics is a key component that must be further explored. Many radar studies have examined flash rates and their relationship to updraft strength, updraft volume, precipitation-sized ice mass, etc.; however, their relationship specifically to lightning jumps is fragmented within the literature. Thus the goal of this study is to use multiple Doppler and polarimetric radar techniques to resolve the physical and dynamical storm characteristics specifically around the time of the lightning jump. This information will help forecasters anticipate lightning jump occurrence, or even be of use to determine future characteristics of a given storm (e.g., development of a mesocyclone, downdraft, or hail signature on radar), providing additional lead time/confidence in the severe storm warning paradigm.
Radial basis function neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Dhawan, Atam P.
1993-01-01
This paper presents a brief report on the application of Radial Basis Function Neural Networks (RBFNN) to the prediction of sensor values for fault detection and diagnosis of the Space Shuttle's Main Engines (SSME). The location of the Radial Basis Function (RBF) node centers was determined with a K-means clustering algorithm. A neighborhood operation about these center points was used to determine the variances of the individual processing nodes.
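A minimal sketch of this construction: k-means picks the RBF node centers, each node's width comes from the spread of the points assigned to it (a simple neighborhood operation), and the output layer is a linear least-squares fit. The kernel form, width rule, and toy two-channel data are illustrative assumptions; the SSME sensor data and network sizes of the paper are not reproduced.

```python
import numpy as np

def fit_rbf(X, y, n_centers=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(n_iter):                       # plain k-means for the node centers
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(n_centers):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    # Width of each node from the spread of the points assigned to it.
    widths = np.array([np.sqrt(((X[assign == j] - centers[j]) ** 2).sum(-1).mean())
                       if np.any(assign == j) else 1.0 for j in range(n_centers)])
    widths = np.maximum(widths, 1e-3)
    Phi = np.exp(-((X[:, None] - centers[None]) ** 2).sum(-1) / (2 * widths ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
    return centers, widths, w

def predict_rbf(Xq, centers, widths, w):
    Phi = np.exp(-((Xq[:, None] - centers[None]) ** 2).sum(-1) / (2 * widths ** 2))
    return Phi @ w

# Toy usage: predict a sensor-like signal from two input channels.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]
c, s, w = fit_rbf(X, y)
print(predict_rbf(X[:3], c, s, w))
```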
Semantic super networks: A case analysis of Wikipedia papers
NASA Astrophysics Data System (ADS)
Kostyuchenko, Evgeny; Lebedeva, Taisiya; Goritov, Alexander
2017-11-01
An algorithm for constructing super-large semantic networks has been developed in the current work. The algorithm was tested using the "Cosmos" category of the Internet encyclopedia "Wikipedia" as an example. During the implementation, a parser for the syntax analysis of Wikipedia pages was developed, and a graph based on the list of articles and categories was formed. On the basis of the analysis of the obtained graph, algorithms for finding domains of high connectivity in a graph were proposed and tested. Algorithms for constructing a domain based on the number of links and on the number of articles in the current subject area are considered. The shortcomings of these algorithms are shown and explained, and an algorithm based on their joint use is developed. The possibility of applying the combined algorithm for obtaining the final domain is shown. The problem of instability of the resulting domain was discovered when starting the algorithm from two neighboring vertices belonging to the domain.
Using L-M BP Algorithm to Forecast the 305 Days Production of First-Breed Dairy
NASA Astrophysics Data System (ADS)
Wei, Xiaoli; Qi, Guoqiang; Shen, Weizheng; Jian, Sun
Aiming at the shortcomings of the conventional BP algorithm, a BP neural network improved by the L-M algorithm is put forward. On the basis of this network, a prediction model for the 305-day milk production was set up. Traditional methods must spend at least 305 days to obtain these data, but this model can forecast a first-breed dairy cow's 305-day milk production 215 days ahead. The validity of the improved BP neural network predictive model was validated through the experiments.
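A compact statement of the update rule that distinguishes L-M training from plain backpropagation (assuming L-M denotes the Levenberg-Marquardt method, as is standard for improved BP networks): with J the Jacobian of the network errors e with respect to the weight vector w, and mu a damping factor,

```latex
\Delta \mathbf{w} \;=\; -\,\bigl(\mathbf{J}^{\mathsf T}\mathbf{J} + \mu \mathbf{I}\bigr)^{-1} \mathbf{J}^{\mathsf T}\mathbf{e}.
```

Large mu gives small, gradient-descent-like steps, while small mu approaches the Gauss-Newton step; mu is typically adapted after each epoch depending on whether the training error decreased.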
Generalized ocean color inversion model for retrieving marine inherent optical properties.
Werdell, P Jeremy; Franz, Bryan A; Bailey, Sean W; Feldman, Gene C; Boss, Emmanuel; Brando, Vittorio E; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J; Lee, ZhongPing; Loisel, Hubert; Maritorena, Stéphane; Mélin, Fréderic; Moore, Timothy S; Smyth, Timothy J; Antoine, David; Devred, Emmanuel; d'Andon, Odile Hembise Fanton; Mangin, Antoine
2013-04-01
Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
A hybrid approach to protect palmprint templates.
Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding
2014-01-01
Biometric template protection is indispensable to protect personal privacy in large-scale deployment of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on these three points. A heterogeneous space is designed for properly combining random projection and fuzzy vault in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses and demonstrate the effectiveness of the proposed hybrid approach.
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
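As an illustration of the matrix-inversion-free idea described above, the following sketch computes frequency-domain MVDR weights by solving R w = d with a plain conjugate-gradient loop and then applying the distortionless normalization. It is a generic CG/MVDR sketch, not the MICCG or SICCG algorithms themselves; R is the Hermitian positive-definite spatial covariance at one frequency bin (e.g. estimated from snapshots as X @ X.conj().T / K) and d is the steering vector.

```python
import numpy as np

def mvdr_weights_cg(R, d, n_iter=50, tol=1e-10):
    """Solve R w = d by conjugate gradients (no explicit or implicit inverse of R),
    then normalize so that w^H d = 1 (the MVDR distortionless constraint)."""
    w = np.zeros_like(d, dtype=complex)
    r = d - R @ w
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs_old / np.vdot(p, Rp).real
        w = w + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return w / np.vdot(d, w)   # w_mvdr = R^{-1} d / (d^H R^{-1} d)
```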
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
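For readers who want the E-step/M-step structure in front of them, here is a bare-bones EM for a two-component one-dimensional Gaussian mixture; it is only a generic illustration of the algorithm being extended, not the authors' partitioned EM for missing data and measurement error.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture (generic textbook version)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```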
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Queue and stack sorting algorithm optimization and performance analysis
NASA Astrophysics Data System (ADS)
Qian, Mingzhu; Wang, Xiaobao
2018-04-01
Sorting is one of the basic operations in a wide variety of software, and data structures courses cover many kinds of sorting algorithms; the performance of the sorting algorithm is directly related to the efficiency of the software. Much research continues to optimize these algorithms for greater efficiency. Here the authors further study a sorting algorithm that combines a queue with a stack, alternating between the storage properties of the two structures and thereby avoiding the large number of exchange or move operations required by traditional sorts. Building on existing work, the method is improved and optimized with a focus on reducing time complexity, and the time complexity, space complexity, and stability of the algorithm are also studied. The experimental results show that the improvement is effective and that the optimized algorithm is more practical.
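A minimal sketch of the alternating queue/stack idea might look like the following: the stack holds the sorted run, and the queue's tail is used as temporary parking while each new element is inserted. This is an illustrative insertion-sort variant, not necessarily the authors' exact algorithm.

```python
from collections import deque

def queue_stack_sort(items):
    """Insertion-sort the contents of a queue, using a stack to hold the sorted run."""
    queue = deque(items)
    stack = []
    for _ in range(len(queue)):
        x = queue.popleft()
        moved = 0
        while stack and stack[-1] > x:   # park larger sorted items on the queue tail
            queue.append(stack.pop())
            moved += 1
        stack.append(x)
        for _ in range(moved):           # bring the parked items back onto the stack
            stack.append(queue.pop())
    return stack                          # bottom-to-top is ascending order

# Example: queue_stack_sort([3, 1, 2]) returns [1, 2, 3].
```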
On the "Optimal" Choice of Trial Functions for Modelling Potential Fields
NASA Astrophysics Data System (ADS)
Michel, Volker
2015-04-01
There are many trial functions (e.g. on the sphere) available which can be used for the modelling of a potential field. Among them are orthogonal polynomials such as spherical harmonics and radial basis functions such as spline or wavelet basis functions. Their pros and cons have been widely discussed in the last decades. We present an algorithm, the Regularized Functional Matching Pursuit (RFMP), which is able to choose trial functions of different kinds in order to combine them to a stable approximation of a potential field. One main advantage of the RFMP is that the constructed approximation inherits the advantages of the different basis systems. By including spherical harmonics, coarse global structures can be represented in a sparse way. However, the additional use of spline basis functions allows a stable handling of scattered data grids. Furthermore, the inclusion of wavelets and scaling functions yields a multiscale analysis of the potential. In addition, ill-posed inverse problems (like a downward continuation or the inverse gravimetric problem) can be regularized with the algorithm. We show some numerical examples to demonstrate the possibilities which the RFMP provides.
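The greedy selection step at the heart of a matching-pursuit scheme can be sketched as follows; the dictionary columns may mix sampled spherical harmonics, splines and wavelets. This simplified version omits the regularization term that the RFMP adds for ill-posed problems.

```python
import numpy as np

def matching_pursuit(y, D, n_iter=20):
    """Greedy matching pursuit: at each step pick the dictionary column most
    correlated with the residual and update the corresponding coefficient."""
    residual = np.asarray(y, dtype=float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))          # best-matching trial function
        step = corr[k] / (D[:, k] @ D[:, k])
        coef[k] += step
        residual -= step * D[:, k]
    return coef
```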
A "Networked-Hutong Siwei of Critiques" for Critical Teacher Education
ERIC Educational Resources Information Center
Qi, Jing
2014-01-01
This paper offers a conceptual basis for refashioning the formulation of critical teacher education. It argues that current critical teacher education is uncritically constructed upon key theoretical departures from critical theories. Drawing on Boltanski's critique of critical theories, the paper examines the ways these theoretical departures…
The Theoretical Basis of the Effective School Improvement Model (ESI)
ERIC Educational Resources Information Center
Scheerens, Jaap; Demeuse, Marc
2005-01-01
This article describes the process of theoretical reflection that preceded the development and empirical verification of a model of "effective school improvement". The focus is on basic mechanisms that could be seen as underlying "getting things in motion" and change in education systems. Four mechanisms are distinguished:…
Design 2000: Theory-Based Design Models of the Future.
ERIC Educational Resources Information Center
Richey, Rita C.
The influence of theory on instructional-design models of the future is explored on the basis of the theoretical developments of today. Anticipated model changes are expected to result from disparate theoretical thinking in areas such as chaos theory, constructivism, situated learning, cognitive-learning theory, and general systems theory.…
Motoric Aids to Perceptual Training. The Slow Learner Series.
ERIC Educational Resources Information Center
Chaney, Clara M.; Kephart, Newell C.
Written from a developmental viewpoint, this book for parents and teachers presents both a theoretical orientation and perceptual motor activities for training children with learning disabilities, both the brain injured and the retarded. The theoretical basis for training generalized motor responses is considered in terms of motor perceptual…
[Theoretical construction in the sociology of health: a reflection on its trajectory].
Nunes, Everardo Duarte
2014-04-01
The scope of this paper is to reflect on the theoretical construction in the constitution of the sociology of health, still called medical sociology in some countries. Two main ideas constitute the basis for this: interdisciplinarity and the degree of articulation in the fields of medicine and sociology. We sought to establish a dialogue with some dimensions - macro/micro, structure/action - that constitute the basis for understanding medicine/health in relation to the social/sociological dimension. The main aspects of these dimensions are initially presented. Straus' two medical sociologies and the theory/application impasses are then addressed, as well as the dilemmas of the sociology of medicine in the 1960s and 1970s. From these analyses the theoretical production before 1970 is placed as a counterpoint. Lastly, the sociology of health is seen in the general context of sociology, which underwent a fragmentation process from 1970 with effects in all subfields of the social sciences. This process involves a rethinking of the theoretical issues in a broadened spectrum of possibilities. The 1980s are highlighted when theoretical issues in the sociology of health are reinvigorated and the issue of interdisciplinarity is once again addressed.
Theoretical Investigation Leading to Energy Storage in Atomic and Molecular Systems
1990-12-01
can be calculated in a single run. (j) Non-gradient optimization of basis function exponents is possible. The source code can be modified to carry... basis. The 10s3p/5s3p basis consists of the 9s/4s contraction of Siegbahn and Liu (Reference 91) augmented by a diffuse s-type function (exponent... vibrational modes. Introduction of diffuse basis functions and optimization of the d-orbital exponents have a small but important effect on the
Halftoning and Image Processing Algorithms
1999-02-01
screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ...image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to...image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our
NASA Astrophysics Data System (ADS)
Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich
Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Abduallah, Yasser; Turki, Turki; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel; Wang, Jason T L
2017-01-01
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool.
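A serial sketch of the information-theoretic scoring step is shown below: mutual information is estimated for every gene pair from binned time-series expression values and thresholded to propose edges. The MapReduce distribution of this computation, which is the paper's contribution, is not reproduced here; the bin count and threshold are arbitrary illustration values.

```python
import numpy as np
from itertools import combinations

def mutual_info(x, y, bins=8):
    """Plug-in mutual information (nats) from binned samples of two time series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def infer_grn(expr, threshold=0.3):
    """Score every gene pair and keep edges above a threshold.
    expr: array of shape (n_genes, n_timepoints)."""
    edges = []
    for i, j in combinations(range(expr.shape[0]), 2):
        mi = mutual_info(expr[i], expr[j])
        if mi > threshold:
            edges.append((i, j, mi))
    return edges
```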
Song, Xiaojun; Ta, Dean; Wang, Weiqi
2011-10-01
The parameters of ultrasonic guided waves (GWs) are very sensitive to mechanical and structural changes in long cortical bones. However, it is a challenge to obtain the group velocity and other parameters of GWs because of the presence of mixed multiple modes. This paper proposes a blind identification algorithm using the joint approximate diagonalization of eigen-matrices (JADE) and applies it to the separation of superimposed GWs in long bones. For the simulation case, the velocity of the single mode was calculated after separation. A strong agreement was obtained between the estimated velocity and the theoretical expectation. For the experiments in bovine long bones, by using the calculated velocity and a theoretical model, the cortical thickness (CTh) was obtained. For comparison with the JADE approach, an adaptive Gaussian chirplet time-frequency (ACGTF) method was also used to estimate the CTh. The results showed that the mean error of the CTh acquired by the JADE approach was 4.3%, which was smaller than that of the ACGTF method (13.6%). This suggested that the JADE algorithm may be used to separate the superimposed GWs and that the JADE algorithm could potentially be used to evaluate long bones. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
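As a rough illustration of blind separation of superimposed modes from multi-channel recordings, the sketch below uses scikit-learn's FastICA as a readily available stand-in for the JADE joint-diagonalization step; the array shape and mode count are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_guided_wave_modes(signals, n_modes=2):
    """Blind separation of superimposed guided-wave modes.
    signals: array of shape (n_channels, n_samples)."""
    ica = FastICA(n_components=n_modes, random_state=0)
    sources = ica.fit_transform(signals.T).T   # one separated mode per row
    return sources
```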
Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models
NASA Astrophysics Data System (ADS)
Shen, C.; Xia, J.; Mi, B.
2016-12-01
A successful inversion relies on exact forward modeling methods. It is a key step to accurately calculate multi-mode dispersion curves of a given model in high-frequency surface-wave (Rayleigh wave and Love wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. the Thomson-Haskell algorithm, the Knopoff algorithm, the fast vector-transfer algorithm, and so on) fail to be consistent with the dispersion spectrum at high frequencies. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer. This phenomenon conflicts with the characteristics of surface waves and results in an erroneous inverted model. By comparing the theoretical dispersion curves with simulated dispersion energy, we proposed a direct and essential solution to accurately compute surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward modeling technique, we can achieve correct inversion for these types of models. Several synthetic datasets proved the effectiveness of our method.
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
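Of the three algorithms mentioned, the leaf-removal procedure is the simplest to write down; a sketch on a dictionary-of-sets adjacency structure is given below. Vertices left in the returned residual graph form the "core" on which leaf removal alone gives no guarantee.

```python
def leaf_removal_cover(adj):
    """Leaf-removal heuristic for minimum vertex cover: while a degree-1 vertex
    exists, put its unique neighbour into the cover and delete both vertices."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    cover = set()
    leaves = [v for v in adj if len(adj[v]) == 1]
    while leaves:
        v = leaves.pop()
        if v not in adj or len(adj[v]) != 1:
            continue
        (u,) = adj[v]
        cover.add(u)
        for w in adj[u]:                 # removing u may create new leaves
            adj[w].discard(u)
            if len(adj[w]) == 1:
                leaves.append(w)
        del adj[u], adj[v]
    return cover, adj                     # cover plus the uncovered residual core

# Example: leaf_removal_cover({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}})
# returns a cover of size 2 for the path 0-1-2-3.
```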
Calculation of Temperature Rise in Calorimetry.
ERIC Educational Resources Information Center
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
Health management system for rocket engines
NASA Technical Reports Server (NTRS)
Nemeth, Edward
1990-01-01
The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.
NASA Astrophysics Data System (ADS)
Wang, J.; Samms, T.; Meier, C.; Simmons, L.; Miller, D.; Bathke, D.
2005-12-01
Spatial evapotranspiration (ET) is usually estimated with the Surface Energy Balance Algorithm for Land. The average accuracy of the algorithm is 85% on a daily basis and 95% on a seasonal basis. However, the accuracy of the algorithm varies from 67% to 95% for instantaneous ET estimates and, as reported in 18 studies, from 70% to 98% for 1- to 10-day ET estimates. There is a need to understand the sensitivity of the ET calculation with respect to the algorithm variables and equations. With an increased understanding, information can be developed to improve the algorithm and to better identify the key variables and equations. A Modified Surface Energy Balance Algorithm for Land (MSEBAL) was developed and validated with data from a pecan orchard and an alfalfa field. The MSEBAL uses ground reflectance and temperature data from ASTER sensors along with humidity, wind speed, and solar radiation data from a local weather station. MSEBAL outputs hourly and daily ET with 90 m by 90 m resolution. A sensitivity analysis was conducted for MSEBAL on the ET calculation. In order to observe the sensitivity of the calculation to a particular variable, the value of that variable was changed while holding the other variables fixed. The key variables and equations to which the ET calculation is most sensitive were determined in this study. http://weather.nmsu.edu/pecans/SEBALFolder/San%20Francisco%20AGU%20meeting/ASensitivityAnalysisonMSE
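The one-at-a-time perturbation scheme described above can be expressed generically as follows; the model callable and its input names are placeholders, not the actual MSEBAL variables.

```python
def one_at_a_time_sensitivity(model, base_inputs, frac=0.1):
    """Perturb each input by +/-frac while holding the others at their base values
    and report the normalized sensitivity of the model output (e.g. daily ET)."""
    base = model(**base_inputs)
    sens = {}
    for name, value in base_inputs.items():
        hi = model(**{**base_inputs, name: value * (1 + frac)})
        lo = model(**{**base_inputs, name: value * (1 - frac)})
        sens[name] = (hi - lo) / (2 * frac * base)   # relative change per relative perturbation
    return sens
```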
Validation of energy-weighted algorithm for radiation portal monitor using plastic scintillator.
Lee, Hyun Cheol; Shin, Wook-Geun; Park, Hyo Jun; Yoo, Do Hyun; Choi, Chang-Il; Park, Chang-Su; Kim, Hong-Suk; Min, Chul Hee
2016-01-01
To prevent illicit trafficking of radionuclides, radiation portal monitor (RPM) systems employing plastic scintillators have been used in ports and airports. However, their poor energy resolution makes the discrimination of radioactive materials inaccurate. In this study, an energy-weighted algorithm was validated for identifying (133)Ba, (22)Na, (137)Cs, and (60)Co using a plastic scintillator. The Compton edges of the energy spectra were converted to peaks based on the algorithm. The peaks deviate from the theoretical Compton edge by at most 6%. Copyright © 2015 Elsevier Ltd. All rights reserved.
User's Manual for the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Cheatwood, F. McNeil
1996-01-01
This user's manual provides detailed instructions for the installation and application of version 4.1 of the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA). LAURA provides simulation of flow fields in thermochemical nonequilibrium around vehicles traveling at hypersonic velocities through the atmosphere. Earlier versions of LAURA were predominantly research codes, and they had minimal (or no) documentation. This manual describes UNIX-based utilities for customizing the code for special applications that also minimize system resource requirements. The algorithm is reviewed, and the various program options are related to specific equations and variables in the theoretical development.
An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.
Zou, Hui; Zou, Zhihong; Wang, Xiaojing
2015-11-12
The increasing volume and complexity of data arising from uncertain environments are today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering scheme to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the iteration margin cannot be calculated. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied to water quality analysis of the Haihe River (China) using data obtained by the monitoring network over a period of eight years (2006-2013) with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
NASA Technical Reports Server (NTRS)
Madyastha, Raghavendra K.; Aazhang, Behnaam; Henson, Troy F.; Huxhold, Wendy L.
1992-01-01
This paper addresses the issue of applying a globally convergent optimization algorithm to the training of multilayer perceptrons, a class of artificial neural networks. The multilayer perceptrons are trained towards the solution of two highly nonlinear problems: (1) signal detection in a multi-user communication network, and (2) solving the inverse kinematics for a robotic manipulator. The research is motivated by the fact that a multilayer perceptron is theoretically capable of approximating any nonlinear function to within a specified accuracy. The algorithm employed in this study combines the merits of two well known optimization algorithms, the conjugate gradient and trust region algorithms. Its performance is compared to that of a widely used algorithm, the backpropagation algorithm, which is essentially a gradient-based algorithm and hence slow to converge. The performances of the two algorithms are compared in terms of convergence rate. Furthermore, in the case of the signal detection problem, performances are also benchmarked by the decision boundaries drawn as well as the probability of error obtained in either case.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
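A compact sketch of the one-step local linear approximation for a SCAD-penalized linear model is given below: the SCAD derivative evaluated at an initial estimate supplies the weights of a single weighted-lasso solve, implemented here with naive coordinate descent. The constant a = 3.7 and the solver details are standard textbook choices, not taken from the paper.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty, used as the LLA weight at |b_init|."""
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_lasso_cd(X, y, w, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y.copy()                               # residual y - Xb (b starts at 0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                # partial residual without feature j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_ss[j]
            r -= X[:, j] * b[j]
    return b

def one_step_lla(X, y, lam, b_init):
    """One-step LLA: a single weighted-lasso solve started from an initial
    estimate (e.g. the plain lasso), as described in the abstract."""
    w = scad_deriv(np.abs(b_init), lam)
    return weighted_lasso_cd(X, y, w)
```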
Statistical mechanics of the vertex-cover problem
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2003-10-01
We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-01-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
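The final step described above, reading the minimal memory off the longest path of a weighted graph, reduces to a standard dynamic program when the noncommutativity graph is acyclic; a sketch under that assumption is shown below.

```python
from collections import defaultdict, deque

def longest_path_weight(edges):
    """Weight of the longest path in a weighted DAG (list of (u, v, w) edges),
    computed by relaxing edges in topological order."""
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes |= {u, v}
    order, queue = [], deque(n for n in nodes if indeg[n] == 0)
    while queue:                      # Kahn topological sort
        u = queue.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {n: 0 for n in nodes}
    for u in order:                   # longest-path relaxation
        for v, w in graph[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return max(dist.values()) if dist else 0
```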
Python for Information Theoretic Analysis of Neural Data
Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano
2008-01-01
Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
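As a small illustration of the limited-sampling-bias issue mentioned above, the sketch below contrasts the plug-in entropy estimate with the simple Miller-Madow correction; it is a generic example, not code from the toolbox the authors describe.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (maximum-likelihood) entropy estimate in bits from response counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def miller_madow_entropy(counts):
    """Miller-Madow bias correction: add (K-1)/(2N) nats, i.e. (K-1)/(2N ln 2) bits,
    one of the simpler corrections for limited sampling."""
    n = counts.sum()
    k = (counts > 0).sum()
    return plugin_entropy(counts) + (k - 1) / (2 * n * np.log(2))
```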
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.
2014-01-01
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685
Mining relational paths in integrated biomedical data.
He, Bing; Tang, Jie; Ding, Ying; Wang, Huijun; Sun, Yuyin; Shin, Jae Hong; Chen, Bin; Moorthy, Ganesh; Qiu, Judy; Desai, Pankaj; Wild, David J
2011-01-01
Much life science and biology research requires an understanding of complex relationships between biological entities (genes, compounds, pathways, diseases, and so on). There is a wealth of data on such relationships in publicly available datasets and publications, but these sources are overlapping and distributed so that finding pertinent relational data is increasingly difficult. Whilst most public datasets have associated tools for searching, there is a lack of searching methods that can cross data sources and that in particular search not only based on the biological entities themselves but also on the relationships between them. In this paper, we demonstrate how graph-theoretic algorithms for mining relational paths can be used together with a previous integrative data resource we developed called Chem2Bio2RDF to extract new biological insights about the relationships between such entities. In particular, we use these methods to investigate the genetic basis of side-effects of thiazolidinedione drugs, and in particular make a hypothesis for the recently discovered cardiac side-effects of Rosiglitazone (Avandia) and a prediction for Pioglitazone which is backed up by recent clinical studies.
Influence Function Learning in Information Diffusion Networks
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data. PMID:25973445
Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.
2016-01-01
Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. Rather than a single Pc value, this calculation approach produces an entire probability density function representing the range of possible Pc values given the uncertainties in the inputs, bringing CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from exercising this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from or upgraded to a serious risk designation on the basis of consideration of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.
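A Monte Carlo sketch of the general idea, turning fixed inputs into a distribution of Pc values, is given below; the two-dimensional encounter-plane setup, the log-normal covariance scale factor, and the uniform hard-body-radius factor are illustrative assumptions, not the CARA project's actual uncertainty construct.

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_2d(mu, cov, hbr, n=20_000):
    """Monte Carlo probability of collision in the encounter plane:
    fraction of sampled miss vectors falling inside the hard-body radius."""
    pts = rng.multivariate_normal(mu, cov, size=n)
    return float((np.linalg.norm(pts, axis=1) < hbr).mean())

def pc_distribution(mu, cov, hbr, n_draws=200):
    """Propagate assumed input uncertainties into a distribution of Pc values."""
    pcs = []
    for _ in range(n_draws):
        scale = rng.lognormal(mean=0.0, sigma=0.3)   # assumed covariance realism factor
        hbr_i = hbr * rng.uniform(0.8, 1.2)          # assumed hard-body-radius uncertainty
        pcs.append(pc_2d(mu, scale * cov, hbr_i))
    return np.array(pcs)                              # empirical PDF of Pc
```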
Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan
2013-08-22
Within the framework of constrained density functional theory (CDFT), the diabatic or charge localized states of electron transfer (ET) have been constructed. Based on the diabatic states, inner reorganization energy λin has been directly calculated. For solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm(-1) for TCNE/TCNE(-) and 5939 cm(-1) for TTF/TTF(+) reactions, agreeing well with available experimental results of 7250 cm(-1) and 5810 cm(-1), respectively.
The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network.
Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng
2017-05-30
The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built, and the system parameters are identified using forgetting factor recursive least squares estimation (FFRLS). Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm offer high real-time performance and robustness in path tracking control. This provides a theoretical basis for intelligent vehicle autonomous navigation tracking control, and lays the foundation for vertical and lateral coupling control.
The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network
Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng
2017-01-01
The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built, and the system parameters are identified using forgetting factor recursive least squares estimation (FFRLS). Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm offer high real-time performance and robustness in path tracking control. This provides a theoretical basis for intelligent vehicle autonomous navigation tracking control, and lays the foundation for vertical and lateral coupling control. PMID:28556817
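The parameter-identification step described in both versions of this abstract, forgetting-factor recursive least squares, can be sketched generically as follows; the regressor layout for a second-order model (phi_k = [-y_{k-1}, -y_{k-2}, u_{k-1}, u_{k-2}]) and the forgetting factor value are assumptions, not the authors' exact code.

```python
import numpy as np

def ffrls(phi_rows, y, lam=0.98, delta=1e3):
    """Forgetting-factor recursive least squares.
    phi_rows: (n_samples, n_params) regressor rows; y: measured outputs.
    Returns the parameter estimate minimizing the exponentially weighted error."""
    n_params = phi_rows.shape[1]
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)          # large initial covariance
    for phi, yk in zip(phi_rows, y):
        phi = phi.reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
        theta = theta + k.flatten() * (yk - float(phi.T @ theta))
        P = (P - k @ phi.T @ P) / lam
    return theta
```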
On the remote sensing of cloud properties from satellite infrared sounder data
NASA Technical Reports Server (NTRS)
Yeh, H. Y. M.
1984-01-01
A method for remote sensing of cloud parameters using infrared sounder data has been developed on the basis of the parameterized infrared transfer equation applicable to cloudy atmospheres. The method is utilized for the retrieval of cloud height, amount, and emissivity in the 11 micrometer region. Numerical analyses and retrieval experiments have been carried out utilizing synthetic sounder data for the theoretical study. The sensitivity of the numerical procedures to measurement and instrument errors is also examined. The retrieved results are physically discussed and numerically compared with the model atmospheres. Comparisons reveal that the recovered cloud parameters agree reasonably well with the pre-assumed values. However, for cases when relatively thin clouds and/or small cloud fractional cover within a field of view are present, the recovered cloud parameters show considerable fluctuations. Experiments on the proposed algorithm are carried out utilizing High Resolution Infrared Sounder (HIRS/2) data from NOAA 6 and TIROS-N. Results of the experiments show reasonably good comparisons with the surface reports and GOES satellite images.
NASA Astrophysics Data System (ADS)
Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools graphical user interface, b) the CVIPtools C library, and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis; the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.
Derivation of the Freundlich Adsorption Isotherm from Kinetics
ERIC Educational Resources Information Center
Skopp, Joseph
2009-01-01
The Freundlich adsorption isotherm is a useful description of adsorption phenomena. It is frequently presented as an empirical equation with little theoretical basis. In fact, a variety of derivations exist. Here a new derivation is presented using the concepts of fractal reaction kinetics. This derivation provides an alternative basis for…
A global optimization perspective on molecular clusters.
Marques, J M C; Pereira, F B; Llanio-Trujillo, J L; Abreu, P E; Albertí, M; Aguilar, A; Pirani, F; Bartolomei, M
2017-04-28
Although there is a long history behind the idea of chemical structure, this is a key concept that continues to challenge chemists. Chemical structure is fundamental to understanding most of the properties of matter and its knowledge for complex systems requires the use of state-of-the-art techniques, either experimental or theoretical. From the theoretical viewpoint, one needs to establish the interaction potential among the atoms or molecules of the system, which contains all the information regarding the energy landscape, and employ optimization algorithms to discover the relevant stationary points. In particular, global optimization methods are of major importance to search for the low-energy structures of molecular aggregates. We review the application of global optimization techniques to several molecular clusters; some new results are also reported. Emphasis is given to evolutionary algorithms and their application in the study of the microsolvation of alkali-metal and Ca 2+ ions with various types of solvents. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).
A global optimization perspective on molecular clusters
Pereira, F. B.; Llanio-Trujillo, J. L.; Abreu, P. E.; Albertí, M.; Aguilar, A.; Pirani, F.; Bartolomei, M.
2017-01-01
Although there is a long history behind the idea of chemical structure, this is a key concept that continues to challenge chemists. Chemical structure is fundamental to understanding most of the properties of matter and its knowledge for complex systems requires the use of state-of-the-art techniques, either experimental or theoretical. From the theoretical viewpoint, one needs to establish the interaction potential among the atoms or molecules of the system, which contains all the information regarding the energy landscape, and employ optimization algorithms to discover the relevant stationary points. In particular, global optimization methods are of major importance to search for the low-energy structures of molecular aggregates. We review the application of global optimization techniques to several molecular clusters; some new results are also reported. Emphasis is given to evolutionary algorithms and their application in the study of the microsolvation of alkali-metal and Ca2+ ions with various types of solvents. This article is part of the themed issue ‘Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces’. PMID:28320902
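To make the evolutionary-algorithm angle concrete, here is a bare-bones (mu+lambda) search for a low-energy Lennard-Jones cluster; real cluster genetic algorithms add crossover operators and local relaxation, so this is only an illustrative toy, not one of the methods reviewed in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def lj_energy(x):
    """Lennard-Jones energy (epsilon = sigma = 1) of a cluster; x has shape (n_atoms, 3)."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    r6 = (1.0 / d[iu]) ** 6
    return float(np.sum(4.0 * (r6 ** 2 - r6)))

def evolve_cluster(n_atoms=7, pop=20, gens=200, sigma=0.1):
    """Gaussian-mutate each parent and keep the lowest-energy half of parents+children."""
    popu = [rng.uniform(-1, 1, (n_atoms, 3)) for _ in range(pop)]
    for _ in range(gens):
        children = [p + rng.normal(0, sigma, p.shape) for p in popu]
        popu = sorted(popu + children, key=lj_energy)[:pop]
    return popu[0], lj_energy(popu[0])
```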
Managing and capturing the physics of robotic systems
NASA Astrophysics Data System (ADS)
Werfel, Justin
Algorithmic and other theoretical analyses of robotic systems often use a discretized or otherwise idealized framework, while the real world is continuous-valued and noisy. This disconnect can make theoretical work sometimes problematic to apply successfully to real-world systems. One approach to bridging the separation can be to design hardware to take advantage of simple physical effects mechanically, in order to guide elements into a desired set of discrete attracting states. As a result, the system behavior can effectively approximate a discretized formalism, so that proofs based on an idealization remain directly relevant, while control can be made simpler. It is important to note, conversely, that such an approach does not make a physical instantiation unnecessary nor a purely theoretical treatment sufficient. Experiments with hardware in practice always reveal physical effects not originally accounted for in simulation or analytic modeling, which lead to unanticipated results and require nontrivial modifications to control algorithms in order to achieve desired outcomes. I will discuss these points in the context of swarm robotic systems recently developed at the Self-Organizing Systems Research Group at Harvard.
NASA Astrophysics Data System (ADS)
Cheng, Li; Shen, Zuochun; Lu, Jianye; Gao, Huide; Lü, Zhiwei
2005-11-01
Dissociation energies, ionization potentials and electron affinities of three perfluoroalkyl iodides, CF3I, C2F5I, and i-C3F7I, are calculated accurately with B3LYP, MPn (n = 2-4), QCISD, QCISD(T), CCSD, and CCSD(T) methods. Calculations are performed using the large-core correlation-consistent pseudopotential basis set (SDB-aug-cc-pVTZ) for the iodine atom. In all energy calculations, the zero-point vibration energy is corrected, and the basis set superposition error is corrected by the counterpoise method in the calculation of dissociation energy. Theoretical results are compared with the experimental values.
Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.
Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang
2015-01-01
Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers - K-means, MSK-means and support vector machine (SVM) - are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means, and 0.7% higher accuracy than the SVM.
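The multi-scale initialization idea can be sketched as follows, assuming the EEG feature vectors are ordered in time so that block-averaging rows gives a coarse-scale view; scikit-learn's KMeans is used for both passes. This is an interpretation of the description above, not the authors' exact scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def msk_means(features, k=2, scale=4):
    """Initialize K-means centroids from a coarse-scale (block-averaged) version of
    the feature matrix, then refine the clustering at full scale.
    features: array of shape (n_epochs, n_features), epochs assumed time-ordered."""
    n = (len(features) // scale) * scale
    coarse = features[:n].reshape(-1, scale, features.shape[1]).mean(axis=1)
    init = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coarse).cluster_centers_
    return KMeans(n_clusters=k, init=init, n_init=1).fit(features).labels_
```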
A linear recurrent kernel online learning algorithm with sparse updates.
Fan, Haijin; Song, Qing
2014-02-01
In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output. This makes the past information reusable for updating of the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch on or off learning the recurrent information according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set as zero when the training violates the derived convergence conditions, which makes the algorithm updating process sparse. Theoretical analyses of the weight convergence are presented and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Analysis and improvement of the quantum image matching
NASA Astrophysics Data System (ADS)
Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin
2017-11-01
We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, it may contain an error: after the area between two images is matched, only the pixel at the upper-left corner of the matched area plays a part in the following steps. That is, the algorithm matches only one pixel rather than an area. If more than one pixel in the big image equals the pixel at the upper-left corner of the small image, the algorithm randomly measures one of them, which causes the error. In this paper, an improved version is presented that takes full advantage of the whole matched area to locate a small image within a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm but still far lower than that of the classical algorithm, so the improved algorithm remains efficient.
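The flaw and its fix are easiest to see in a classical analogue: matching only the upper-left pixel is ambiguous whenever that pixel value recurs elsewhere in the big image, whereas requiring the whole window to match is not. The sketch below is purely classical and only illustrates that logic; the actual algorithms operate on quantum image representations and quantum measurement.

```python
import numpy as np

def match_corner_pixel(big, small):
    """Flawed criterion: every position whose pixel equals the
    small image's upper-left pixel is a candidate match."""
    return list(zip(*np.where(big == small[0, 0])))

def match_whole_area(big, small):
    """Improved criterion: the entire small image must match."""
    H, W = big.shape
    h, w = small.shape
    return [(i, j)
            for i in range(H - h + 1)
            for j in range(W - w + 1)
            if np.array_equal(big[i:i + h, j:j + w], small)]

big = np.array([[1, 2, 1], [3, 1, 2], [1, 3, 1]])
small = np.array([[1, 2], [3, 1]])
print(match_corner_pixel(big, small))  # many spurious candidates
print(match_whole_area(big, small))    # [(0, 0)] only
```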
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are applicable only to isotropic networks, and therefore adapts well to complex deployment environments. The algorithm consists of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, training information between the nodes of the given network is collected. In the modeling stage, a model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its own location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm adapts to different topological environments at low computational cost and achieves high accuracy without requiring complex parameter settings.
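The modeling stage describes a regularized extreme learning machine: a fixed random hidden layer maps hop-count vectors to features, and only the output weights are learned, via a ridge-regression solve. The sketch below is a generic RELM of this kind; the hidden-layer size, activation, and regularization constant are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def train_relm(hops, dists, n_hidden=50, lam=1e-3, seed=0):
    """Fit hop-count -> distance model with a regularized ELM.
    hops:  (n_samples, n_anchors) hop-count vectors
    dists: (n_samples, n_anchors) known physical distances"""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(hops.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                   # random biases
    H = np.tanh(hops @ W + b)                       # hidden activations
    # ridge solution: beta = (H^T H + lam I)^-1 H^T dists
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden),
                           H.T @ dists)
    return W, b, beta

def predict_distances(hops, W, b, beta):
    return np.tanh(hops @ W + b) @ beta
```

Each unlocalized node would then feed its own hop-count vector through the trained model and pass the predicted anchor distances to a standard multilateration step, which is what allows the estimation to run in a distributed manner.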
NASA Astrophysics Data System (ADS)
Wieferink, Jürgen; Krüger, Peter; Pollmann, Johannes
2006-11-01
We present an algorithm for DFT calculations employing Gaussian basis sets for the wave function and a Fourier basis for the potential representation. In particular, a numerically very efficient calculation of the local potential matrix elements and the charge density is described. Special emphasis is placed on the consequences of periodicity and the explicit k-vector dependence. The algorithm is tested against more straightforward ones for the adsorption of ethylene on the silicon-rich SiC(001)-(3×2) surface, clearly revealing its substantial advantages: a complete self-consistency cycle is sped up by roughly one order of magnitude, since the calculation of the matrix elements and of the charge density is accelerated by factors of 10 and 80, respectively, compared with a straightforward calculation. Our results for C2H4:SiC(001)-(3×2) show that ethylene molecules preferentially adsorb in on-top positions above Si dimers on the substrate surface, saturating both dimer dangling bonds per unit cell. In addition, a twist of the molecules around a surface-perpendicular axis is slightly favored energetically, similar to the case of a complete monolayer of ethylene adsorbed on the Si(001)-(2×1) surface.
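In generic form, the efficiency gain rests on evaluating the local potential matrix elements in reciprocal space, where the plane-wave factors between Gaussians are analytic. The expression below is the standard mixed Gaussian/Fourier form consistent with the abstract, not necessarily the paper's exact working equation:

```latex
% Local potential matrix elements in a mixed Gaussian/Fourier
% scheme: expand V in plane waves up to a cutoff G_cut and sum
% analytic plane-wave matrix elements between Bloch-symmetrized
% Gaussian orbitals chi_mu^k:
V_{\mu\nu}(\mathbf{k}) = \sum_{|\mathbf{G}| < G_{\mathrm{cut}}}
  \tilde{V}(\mathbf{G})\,
  \bigl\langle \chi^{\mathbf{k}}_{\mu} \bigm| e^{i\mathbf{G}\cdot\mathbf{r}}
  \bigm| \chi^{\mathbf{k}}_{\nu} \bigr\rangle
% Each bracket is analytic because the product of two Gaussians
% is again a Gaussian, whose Fourier transform is Gaussian, so no
% real-space quadrature is needed for the local potential.
```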