Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers that yields maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking for adaptive coding.
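The per-block scaling described above can be sketched numerically. This is a minimal illustration of the mechanism only; the quantization matrix and multiplier values are arbitrary, and the paper's perceptual error model (contrast sensitivity, light adaptation, masking) is not implemented:

```python
import numpy as np

def quantize_block(dct_block, qmatrix, multiplier):
    """Quantize one 8x8 block of DCT coefficients with the block's scaled matrix."""
    return np.round(dct_block / (qmatrix * multiplier))

def dequantize_block(qblock, qmatrix, multiplier):
    """Invert the quantization; the residual versus the original block is the
    quantization error that a perceptual model would then weight."""
    return qblock * (qmatrix * multiplier)
```

A perceptual optimizer would then search over the multipliers to equalize a masking-adjusted error metric across blocks, trading bitrate against visually flat error.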
NASA Astrophysics Data System (ADS)
Odinokov, S. B.; Petrov, A. V.
1995-10-01
Mathematical models of components of a vector-matrix optoelectronic multiplier are considered. Perturbing factors influencing a real optoelectronic system (noise and errors of radiation sources and detectors, nonlinearity of an analogue-digital converter, nonideal optical systems) are taken into account. Analytic expressions are obtained relating the precision of such a multiplier to the probability of an error amounting to one bit, to the parameters describing the quality of the multiplier components, and to the quality of the optical system of the processor. Various methods of increasing the dynamic range of a multiplier are considered at the technical systems level.
Andreoli, Daria; Volpe, Giorgio; Popoff, Sébastien; Katz, Ori; Grésillon, Samuel; Gigan, Sylvain
2015-01-01
We present a method to measure the spectrally-resolved transmission matrix of a multiply scattering medium, thus allowing for the deterministic spatiospectral control of a broadband light source by means of wavefront shaping. As a demonstration, we show how the medium can be used to selectively focus one or many spectral components of a femtosecond pulse, and how it can be turned into a controllable dispersive optical element to spatially separate different spectral components to arbitrary positions. PMID:25965944
A general parallel sparse-blocked matrix multiply for linear scaling SCF theory
NASA Astrophysics Data System (ADS)
Challacombe, Matt
2000-06-01
A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with "separation". Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.
Integrated optic vector-matrix multiplier
Watts, Michael R [Albuquerque, NM
2011-09-27
A vector-matrix multiplier is disclosed which uses N different wavelengths of light that are modulated with amplitudes representing elements of an N×1 vector and combined to form an input wavelength-division multiplexed (WDM) light stream. The input WDM light stream is split into N streamlets from which each wavelength of the light is individually coupled out and modulated for a second time using an input signal representing elements of an M×N matrix, and is then coupled into an output waveguide for each streamlet to form an output WDM light stream which is detected to generate a product of the vector and matrix. The vector-matrix multiplier can be formed as an integrated optical circuit using either waveguide amplitude modulators or ring resonator amplitude modulators.
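Numerically, the two modulation stages implement an ordinary vector-matrix product. The sketch below is a conceptual model of the signal flow only, ignoring the optics entirely; the function name and loop structure are illustrative:

```python
import numpy as np

def wdm_vector_matrix_product(M, x):
    """Model of the two-stage scheme: wavelength n enters carrying amplitude
    x[n]; streamlet m modulates it a second time by M[m, n]; the detector for
    streamlet m sums the contributions, yielding (M @ x)[m]."""
    out = np.zeros(M.shape[0])
    for m in range(M.shape[0]):          # one streamlet per matrix row
        for n in range(M.shape[1]):      # one wavelength per vector element
            out[m] += M[m, n] * x[n]     # second modulation, then detection
    return out
```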
Design and experimental verification for optical module of optical vector-matrix multiplier.
Zhu, Weiwei; Zhang, Lei; Lu, Yangyang; Zhou, Ping; Yang, Lin
2013-06-20
Optical computing is a new method to implement signal processing functions. The multiplication of a vector by a matrix is an important arithmetic algorithm in the signal processing domain. The optical vector-matrix multiplier (OVMM) is an optoelectronic system to carry out this operation, which consists of an electronic module and an optical module. In this paper, we propose an optical module for OVMM. To eliminate cross talk and make full use of the optical elements, an elaborately designed structure that involves spherical lenses and cylindrical lenses is utilized in this optical system. The optical design software package ZEMAX is used to optimize the parameters and simulate the whole system. Finally, experimental data are obtained to evaluate the overall performance of the system. The results of both simulation and experiment indicate that the constructed system can successfully multiply a 16 × 16 matrix by a vector of dimension 16.
Covariance Matrix Estimation for Massive MIMO
NASA Astrophysics Data System (ADS)
Upadhya, Karthik; Vorobyov, Sergiy A.
2018-04-01
We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
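The key idea, that cross-correlating two channel estimates with independent noise removes the noise bias that plagues the auto-correlation, can be checked with a toy simulation. The real-valued model, dimensions, noise levels, and the i.i.d.-over-blocks assumption below are ours for illustration, not the paper's staggered-pilot setup:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 20000                            # antennas, coherence blocks
R = np.diag([2.0, 1.0, 0.5, 0.25])         # true (diagonal) channel covariance

h = rng.standard_normal((N, M)) * np.sqrt(np.diag(R))  # channel realizations
n1 = 0.5 * rng.standard_normal((N, M))                 # noise on pilot-1 estimate
n2 = 0.5 * rng.standard_normal((N, M))                 # independent noise on pilot-2 estimate
h1, h2 = h + n1, h + n2                                # the two channel estimates

R_hat = (h1.T @ h2) / N     # sample cross-correlation: noise terms average out
R_naive = (h1.T @ h1) / N   # auto-correlation: biased upward by noise power
```

The diagonal of `R_naive` is inflated by the estimation-noise power, while `R_hat` is asymptotically unbiased because the two noise realizations are independent.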
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu
2014-01-01
This paper studies the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics. It mainly involves matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and modeled as a convex optimization problem that minimizes a combination of the nuclear norm and the l1-norm. Meanwhile, we put forward a novel and effective algorithm based on augmented Lagrange multipliers to solve the problem exactly. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is dominant if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform the traditional methods, not only in the simultaneous removal of Gaussian and impulse noise and the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
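The augmented-Lagrange-multiplier approach to this kind of decomposition can be sketched as follows. This is the standard inexact-ALM robust PCA iteration (singular value shrinkage for the low-rank part, soft thresholding for the sparse part), not the authors' exact algorithm, and the parameter choices are illustrative:

```python
import numpy as np

def svd_shrink(X, tau):
    """Singular value soft thresholding: the prox operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_alm(D, lam=None, mu=1.0, iters=500):
    """Inexact ALM for D ~ L (low rank) + S (sparse): min ||L||_* + lam*||S||_1."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                 # Lagrange multiplier for D = L + S
    for _ in range(iters):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)  # soft threshold
        Y += mu * (D - L - S)            # dual ascent on the constraint
    return L, S
```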
New algorithms to compute the nearness symmetric solution of the matrix equation.
Peng, Zhen-Yun; Fang, Yang-Zhi; Xiao, Xian-Wei; Du, Dan-Dan
2016-01-01
In this paper we consider the nearness symmetric solution of the matrix equation AXB = C to a given matrix [Formula: see text] in the sense of the Frobenius norm. By discussing an equivalent form of the considered problem, we derive some necessary and sufficient conditions for the matrix [Formula: see text] to be a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we propose two iterative methods to compute the solution of the considered problem, and analyze the global convergence of the proposed algorithms. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (Appl Math Comput 160:763-777, 2005) and Peng (Int J Comput Math 87:1820-1830, 2010).
Multiplier Accounting of Indian Mining Industry: The Application
NASA Astrophysics Data System (ADS)
Hussain, Azhar; Karmakar, Netai Chandra
2017-10-01
In the previous paper (Hussain and Karmakar in Inst Eng India Ser, 2014. doi: 10.1007/s40033-014-0058-0), the concepts of input-output transaction matrix and multiplier were explained in detail. Input-output multipliers are indicators used for predicting the total impact on an economy due to changes in its industrial demand and output which is calculated using transaction matrix. The aim of this paper is to present an application of the concepts with respect to the mining industry, showing progress in different sectors of mining with time and explaining different outcomes from the results obtained. The analysis shows that a few mineral industries saw a significant growth in their multiplier values over the years.
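As a reminder of the computation the paper applies, a Type-I output multiplier is a column sum of the Leontief inverse built from the transaction matrix. A minimal sketch; the two-sector numbers in the test are invented for illustration, not taken from the Indian mining data:

```python
import numpy as np

def output_multipliers(transactions, gross_output):
    """Type-I output multipliers: column sums of the Leontief inverse.
    transactions[i, j] is the inter-industry flow from sector i to sector j;
    gross_output[j] is the gross output of sector j."""
    A = transactions / gross_output                    # technical coefficients (column-wise)
    L = np.linalg.inv(np.eye(len(gross_output)) - A)   # Leontief inverse (I - A)^{-1}
    return L.sum(axis=0)                               # total output change per unit demand
```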
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
LIU, Han; WANG, Lie; ZHAO, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
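An ADMM iteration of the general shape the abstract describes, alternating elementwise soft thresholding with a projection onto an eigenvalue floor, can be sketched as follows. This is our simplified variant: it penalizes all entries, including the diagonal, which the generalized thresholding estimator would typically treat differently:

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft thresholding: the prox operator of the l1 penalty."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def eig_project(A, eps):
    """Project a symmetric matrix onto the set with all eigenvalues >= eps."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

def sparse_pd_estimate(S, lam=0.1, eps=1e-3, rho=1.0, iters=300):
    """ADMM sketch: the X-step enforces the eigenvalue constraint, the Z-step
    enforces sparsity, and U is the scaled dual variable."""
    Z = S.copy()
    U = np.zeros_like(S)
    for _ in range(iters):
        X = eig_project((S + rho * (Z - U)) / (1.0 + rho), eps)
        Z = soft_threshold(X + U, lam / rho)
        U += X - Z
    return Z
```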
Distributed Matrix Completion: Applications to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown...computing the leading eigenvectors of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying...generalization of gossip algorithms for consensus. The algorithms outperform state-of-the-art methods in a communication-limited scenario. Positioning via
Closed-form integrator for the quaternion (Euler angle) kinematics equations
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A. (Inventor)
2000-01-01
The invention is embodied in a method of integrating kinematics equations for updating a set of vehicle attitude angles of a vehicle using 3-dimensional angular velocities of the vehicle, which includes computing an integrating factor matrix from quantities corresponding to the 3-dimensional angular velocities, computing a total integrated angular rate from the quantities corresponding to the 3-dimensional angular velocities, computing a state transition matrix as a sum of (a) a first complementary function of the total integrated angular rate and (b) the integrating factor matrix multiplied by a second complementary function of the total integrated angular rate, and updating the set of vehicle attitude angles using the state transition matrix. Preferably, the method further includes computing a quaternion vector from the quantities corresponding to the 3-dimensional angular velocities, in which case the updating of the set of vehicle attitude angles using the state transition matrix is carried out by (a) updating the quaternion vector by multiplying the quaternion vector by the state transition matrix to produce an updated quaternion vector and (b) computing an updated set of vehicle attitude angles from the updated quaternion vector. The first and second trigonometric functions are complementary, such as a sine and a cosine. The quantities corresponding to the 3-dimensional angular velocities include respective averages of the 3-dimensional angular velocities over plural time frames. The updating of the quaternion vector preserves the norm of the vector, whereby the updated set of vehicle attitude angles is virtually error-free.
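For constant body rates over a time step, the state transition matrix has the sine/cosine closed form alluded to above, and it preserves the quaternion norm because the rate matrix is skew-symmetric. A sketch under one common quaternion convention (scalar part first); the patent's exact conventions and averaging of rates may differ:

```python
import numpy as np

def omega_matrix(w):
    """4x4 skew-symmetric rate matrix for body rates w = (wx, wy, wz);
    it satisfies Omega @ Omega = -|w|^2 * I."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def update_quaternion(q, w, dt):
    """Closed-form, norm-preserving update: q_new = exp(Omega*dt/2) q, with
    exp(.) expanded as cos(theta) I + Omega sin(theta)/|w|, theta = |w| dt / 2."""
    wn = np.linalg.norm(w)
    if wn == 0.0:
        return q.copy()
    theta = 0.5 * wn * dt
    Phi = np.cos(theta) * np.eye(4) + omega_matrix(w) * (np.sin(theta) / wn)
    return Phi @ q
```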
Optical Data Processing for Missile Guidance.
1983-09-30
detector outputs are a. This light intensity multiplies the signal in the AG shifted down at a clock rate 1/Tq and if successive cell and At waves leave the...lolit matrix matrix matrix multiplier -ytem. of B. We thus input these later columns ofB into the input LE) array at successive times with their...converted to frequency and time/space by the results Bj, = B.+ I on two successive iterations k and k frequency-multiplexing unit in Fig. 5 as shown in Eq
Flexible multiply towpreg and method of production therefor
NASA Technical Reports Server (NTRS)
Muzzy, John D. (Inventor); Varughese, Babu (Inventor)
1992-01-01
This invention relates to an improved flexible towpreg and a method of production therefor. The improved flexible towpreg comprises a plurality of towpreg plies which comprise reinforcing filaments and matrix forming material; the reinforcing filaments being substantially wetout by the matrix forming material such that the towpreg plies are substantially void-free composite articles, and the towpreg plies having an average thickness less than about 100 microns. The method of production for the improved flexible towpreg comprises the steps of spreading the reinforcing filaments to expose individually substantially all of the reinforcing filaments; coating the reinforcing filaments with the matrix forming material in a manner causing interfacial adhesion of the matrix forming material to the reinforcing filaments; forming the towpreg plies by heating the matrix forming material contacting the reinforcing filaments until the matrix forming material liquefies and coats the reinforcing filaments; and cooling the towpreg plies in a manner such that substantial cohesion between neighboring towpreg plies is prevented until the matrix forming material solidifies.
The bilinear complexity and practical algorithms for matrix multiplication
NASA Astrophysics Data System (ADS)
Smirnov, A. V.
2013-12-01
A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the border rank of multiplying 3 × 3 matrices is improved, and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^2.7743).
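For context, the classical example of a bilinear algorithm that beats the naive bound is Strassen's scheme, which multiplies matrices split into 2 × 2 blocks with 7 block products instead of 8. This is the standard illustration, not the paper's new algorithm:

```python
import numpy as np

def strassen_2x2(A, B):
    """One level of Strassen recursion for even-sized square matrices:
    7 block multiplications instead of 8."""
    n = A.shape[0] // 2
    a, b, c, d = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    e, f, g, h = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    p1 = (a + d) @ (e + h)
    p2 = (c + d) @ e
    p3 = a @ (f - h)
    p4 = d @ (g - e)
    p5 = (a + b) @ h
    p6 = (c - a) @ (e + f)
    p7 = (b - d) @ (g + h)
    top = np.hstack([p1 + p4 - p5 + p7, p3 + p5])
    bot = np.hstack([p2 + p4, p1 - p2 + p3 + p6])
    return np.vstack([top, bot])
```

Applied recursively, the 7-multiplication count gives the O(n^log2(7)) ≈ O(n^2.807) exponent that later bilinear algorithms, including the one above, improve upon.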
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.; Collins, Stuart A., Jr.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm.
NASA Astrophysics Data System (ADS)
Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.
2017-06-01
The recent popularity of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are in consequence not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even in small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
Optical computation using residue arithmetic.
Huang, A; Tsunoda, Y; Goodman, J W; Ishihara, S
1979-01-15
Using residue arithmetic it is possible to perform additions, subtractions, multiplications, and polynomial evaluation without the necessity for carry operations. Calculations can, therefore, be performed in a fully parallel manner. Several different optical methods for performing residue arithmetic operations are described. A possible combination of such methods to form a matrix vector multiplier is considered. The potential advantages of optics in performing these kinds of operations are discussed.
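Residue arithmetic is easy to demonstrate in software: each number is held as its residues modulo pairwise coprime bases, multiplication is digit-wise with no carries between digits, and the Chinese Remainder Theorem recovers the result when it lies within the dynamic range. A sketch with arbitrarily chosen moduli:

```python
from math import prod

MODULI = (5, 7, 9)   # pairwise coprime; dynamic range = 5 * 7 * 9 = 315

def to_residues(x):
    """Encode an integer as its residue digits."""
    return tuple(x % m for m in MODULI)

def mul(r1, r2):
    """Carry-free, digit-wise multiplication of two residue representations."""
    return tuple((a * b) % m for a, b, m in zip(r1, r2, MODULI))

def from_residues(r):
    """Chinese Remainder Theorem reconstruction of the encoded integer."""
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)    # modular inverse of Mi mod m
    return x % M
```

Because each residue digit is computed independently, an optical implementation can evaluate all digits fully in parallel, which is the advantage the abstract highlights.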
Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.
Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M
2015-10-05
The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT based methods but offers better behavior in terms of aliasing and transparent boundary conditions, and allows the numbers of sampling points and the computational window sizes of the input and output planes to be chosen independently. This flexibility makes the method significantly faster than FFT based propagators when only a single point, as in Strehl metrics, or a limited number of points, as in power-in-the-bucket metrics, are needed in the output observation plane.
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
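The Lagrange-multiplier step for a primal problem of this shape has a familiar closed form when only the budget constraint is active: minimizing w'Σw/2 subject to 1'w = N gives Σw = k·1, hence w = N Σ⁻¹1 / (1'Σ⁻¹1). A sketch; the budget normalization and the generic Σ are our assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

def min_risk_weights(Sigma, budget=None):
    """Minimize w' Sigma w / 2 subject to sum(w) = budget via a Lagrange
    multiplier: stationarity gives Sigma w = k * 1, and the constraint fixes k."""
    N = Sigma.shape[0]
    if budget is None:
        budget = float(N)                # common normalization: sum(w) = N
    ones = np.ones(N)
    x = np.linalg.solve(Sigma, ones)     # Sigma^{-1} 1 (no explicit inverse)
    return budget * x / (ones @ x)
```

In the identical-variance case the paper analyzes, Σ is proportional to the identity and the solution reduces to equal weights.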
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podgorsak, A; Bednarek, D; Rudin, S
2016-06-15
Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charge-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
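The per-pixel thresholding described in Methods can be imitated with synthetic data: build the average (offset) and noise matrices from dark frames, then count a pixel when it exceeds its own threshold. The Gaussian noise model and all numbers below are hypothetical stand-ins, not measured EMCCD statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
darks = rng.normal(100.0, 5.0, size=(60, 8, 8))   # 60 dark fields, 8x8 detector
offset = darks.mean(axis=0)                        # average matrix: offset correction
sigma = darks.std(axis=0)                          # per-pixel noise level
threshold = 4.0 * sigma                            # individual counting thresholds

frame = rng.normal(100.0, 5.0, size=(8, 8))        # one counting frame
frame[3, 3] += 60.0                                # a single bright photon event
counts = ((frame - offset) > threshold).astype(int)
```

Summing many such binary count frames per projection gives the photon counting projections used for reconstruction.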
Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
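The spatial-averaging step that produces the structured estimate can be sketched directly: for an equispaced linear array the exact correlation matrix is Toeplitz, so each diagonal of the noisy estimate is replaced by its mean. This is a minimal illustration of that step only, not the full constrained-LMS algorithm:

```python
import numpy as np

def toeplitz_average(R):
    """Structured estimate for an equispaced linear array: replace every
    diagonal of the noisy correlation matrix by its average, enforcing the
    Toeplitz structure of the exact array correlation matrix."""
    n = R.shape[0]
    S = np.empty_like(R)
    for k in range(-(n - 1), n):                   # one offset per diagonal
        i = np.arange(max(0, -k), min(n, n - k))   # rows where (i, i+k) is valid
        S[i, i + k] = R[i, i + k].mean()
    return S
```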
Background recovery via motion-based robust principal component analysis with matrix factorization
NASA Astrophysics Data System (ADS)
Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping
2018-03-01
Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
NASA Astrophysics Data System (ADS)
Razgulin, A. V.; Sazonova, S. V.
2017-09-01
A novel statement of the Fourier filtering problem based on the use of matrix Fourier filters instead of conventional multiplier filters is considered. The basic properties of the matrix Fourier filtering for the filters in the Hilbert-Schmidt class are established. It is proved that the solutions with a finite energy to the periodic initial boundary value problem for the quasi-linear functional differential diffusion equation with the matrix Fourier filtering Lipschitz continuously depend on the filter. The problem of optimal matrix Fourier filtering is formulated, and its solvability for various classes of matrix Fourier filters is proved. It is proved that the objective functional is differentiable with respect to the matrix Fourier filter, and the convergence of a version of the gradient projection method is also proved.
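The distinction between a multiplier filter and a matrix Fourier filter is simply diagonal versus full action on the Fourier coefficients. A one-line discrete sketch of that distinction, our own illustration rather than the paper's continuous formulation:

```python
import numpy as np

def matrix_fourier_filter(u, K):
    """Filter a periodic signal by applying a full matrix K to its Fourier
    coefficients; a conventional multiplier filter is the special case of a
    diagonal K, which cannot mix different harmonics."""
    return np.fft.ifft(K @ np.fft.fft(u))
```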
Spectral analysis of the UFBG-based acousto-optical modulator in V-I transmission matrix formalism
NASA Astrophysics Data System (ADS)
Wu, Liang-Ying; Pei, Li; Liu, Chao; Wang, Yi-Qun; Weng, Si-Jun; Wang, Jian-Shuai
2014-11-01
In this study, the V-I transmission matrix formalism (V-I method) is proposed to analyze the spectrum characteristics of uniform fiber Bragg grating (FBG)-based acousto-optic modulators (UFBG-AOM). The simulation results demonstrate that both the amplitude of the acoustically induced strain and the frequency of the acoustic wave (AW) have an effect on the spectrum. Additionally, the wavelength spacing between the primary reflectivity peak and the secondary reflectivity peak is proportional to the acoustic frequency, with a ratio of 0.1425 nm/MHz. Meanwhile, we compare the amount of calculation. For an FBG whose period number is M, the V-I method requires 4 × (2M − 1) additions/subtractions, 8 × (2M − 1) multiplications/divisions and 2M exponent evaluations, which is almost a quarter of the cost of the multi-film method and the transfer matrix (TM) method. The detailed analysis indicates that, compared with the conventional multi-film method and transfer matrix (TM) method, the V-I method is faster and less complex.
Combined group ECC protection and subgroup parity protection
Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin
2013-06-18
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
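The matrix-product view of the scheme can be illustrated in a few lines of GF(2) arithmetic. The matrix below is a hypothetical example in which one column is the indicator of a subgroup, so the corresponding redundant bit is that subgroup's parity; it does not reproduce the patent's odd-weight permutation construction:

```python
import numpy as np

# Hypothetical 4-bit group with m = 3 redundant bits.  Column 2 of P is the
# indicator of the subgroup {bit 0, bit 1}, so redundant bit 2 is that
# subgroup's parity.
P = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]])

def redundant_bits(data):
    """Redundant bits as a GF(2) matrix product of the data bits with P."""
    return (np.asarray(data) @ P) % 2
```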
NASA Technical Reports Server (NTRS)
Tielking, John T.
1989-01-01
Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique based on the static shakedown theorem. Three numerical examples with up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and accuracy of the proposed algorithm.
Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach
NASA Astrophysics Data System (ADS)
Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun
2015-02-01
The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low-rank matrix factorization of the unknown images with sparsity constraints enforced on both coefficients and bases. The proposed model is solved via an alternating iterative scheme in which each subproblem is convex and is handled efficiently by the alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for this nonconvex problem relies on the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method over conventional methods.
High precision computing with charge domain devices and a pseudo-spectral method therefor
NASA Technical Reports Server (NTRS)
Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)
1997-01-01
The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative operator analytically as a matrix, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton equation and an adder connected to the outputs thereof and to the output of the detector array to drive the soliton equation.
Method and apparatus for optimized processing of sparse matrices
Taylor, Valerie E.
1993-01-01
A computer architecture for processing a sparse matrix is disclosed. The apparatus stores a value-row vector corresponding to the nonzero values of a sparse matrix. Each of the nonzero values is located at a defined row and column position in the matrix. The value-row vector includes a first vector containing the nonzero values together with delimiting characters indicating a transition from one column to another. The value-row vector also includes a second vector which defines the row position in the matrix of each nonzero value in the first vector, the column positions being marked by the delimiting characters in the first vector. The architecture also includes a circuit for detecting a delimiting character within the value-row vector. Matrix-vector multiplication is executed on the value-row vector by multiplying each value of the first vector by the corresponding element of the multiplicand vector to form a partial product which is added to the accumulating matrix-vector product.
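The storage scheme and multiply loop can be emulated in software. This is a hypothetical sketch, not the patented hardware: the names `build_value_row` and `spmv`, and the use of `None` as the delimiting character, are illustrative choices.

```python
import numpy as np

# Hypothetical software emulation of the value-row storage scheme:
# 'values' lists nonzeros column by column, with None as the delimiter
# marking a transition to the next column; 'rows' gives the row index
# of each nonzero value.
def build_value_row(A):
    values, rows = [], []
    n_rows, n_cols = A.shape
    for c in range(n_cols):
        for r in range(n_rows):
            if A[r, c] != 0:
                values.append(A[r, c])
                rows.append(r)
        values.append(None)   # delimiter: end of column c
        rows.append(-1)
    return values, rows

def spmv(values, rows, x, n_rows):
    y = np.zeros(n_rows)
    c = 0
    for v, r in zip(values, rows):
        if v is None:         # column transition detected
            c += 1
        else:
            y[r] += v * x[c]  # accumulate partial matrix-vector product
    return y
```

Only the nonzeros and one delimiter per column are stored, so the multiply visits each nonzero exactly once.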
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.
2011-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first-P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths; setting the two paths equal gives the variance for a single path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
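A small-scale, in-core sketch of this covariance pipeline is below. The real computation is blocked and out-of-core; the random `G`, the diagonal data covariance `Cd`, and the explicit inverse are stand-in simplifications.

```python
import numpy as np

# Toy stand-ins for the regularized tomography matrix G and an assumed
# diagonal data covariance Cd (the real matrices are far too large for memory).
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 10))
Cd = np.eye(50) * 0.01

GtG = G.T @ G                       # normal-equations matrix G^T G
L = np.linalg.cholesky(GtG)         # Cholesky factor: GtG = L L^T
GtG_inv = np.linalg.inv(GtG)        # explicit inverse here; the paper inverts
                                    # via the Cholesky factors, block by block

# Model covariance: Cm = (G^T G)^-1 G^T Cd G (G^T G)^-1
Cm = GtG_inv @ G.T @ Cd @ G @ GtG_inv

# Travel-time variance for a ray path with sensitivity vector g:
# integrate the model covariance along the path (here a discrete quadratic form).
g = rng.standard_normal(10)
var_tt = g @ Cm @ g
```

With two different sensitivity vectors g1 and g2, the same quadratic form g1 @ Cm @ g2 gives the travel-time covariance between the two paths.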
Combined group ECC protection and subgroup parity protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gara, Alan; Cheng, Dong; Heidelberger, Philip
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan
2017-03-31
Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has shown it to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature matrix and the other the expansion loading, from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor possesses a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively on simulated and real biological data.
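For context, the unconstrained factorization this model builds on can be sketched with the classic multiplicative-update NMF (Lee-Seung) rules; the sparsity and total-variation constraints and the ADMM solver described above are omitted here.

```python
import numpy as np

# Baseline multiplicative-update NMF for X ~ W H with X, W, H >= 0.
# W plays the role of the feature matrix, H the expansion loading.
def nmf(X, k, iters=500, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update loadings
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update features
    return W, H
```

The updates keep both factors elementwise non-negative and monotonically decrease the Frobenius reconstruction error.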
Calculation of Moment Matrix Elements for Bilinear Quadrilaterals and Higher-Order Basis Functions
2016-01-06
The methods are known as boundary integral equation (BIE) methods and the present study falls into this category. The numerical solution of the BIE is ... iterated integrals. The inner integral involves the product of the free-space Green's function for the Helmholtz equation multiplied by an appropriate ...
Pseudo-compressibility methods for the incompressible flow equations
NASA Technical Reports Server (NTRS)
Turkel, Eli; Arnone, A.
1993-01-01
Preconditioning methods to accelerate convergence to a steady state for the incompressible fluid dynamics equations are considered. The analysis relies on the inviscid equations. The preconditioning consists of a matrix multiplying the time derivatives. Thus the steady state of the preconditioned system is the same as the steady state of the original system. The method is compared to other types of pseudo-compressibility. For finite difference methods preconditioning can change and improve the steady state solutions. An application to viscous flow around a cascade with a non-periodic mesh is presented.
Fourier transform-wavefront reconstruction for the pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Quirós-Pacheco, Fernando; Correia, Carlos; Esposito, Simone
The application of Fourier-transform reconstruction techniques to the pyramid wavefront sensor has been investigated. A preliminary study based on end-to-end simulations of an adaptive optics system with ≈40x40 subapertures and actuators shows that the performance of the Fourier-transform reconstructor (FTR) is of the same order of magnitude as that obtained with a conventional matrix-vector multiply (MVM) method.
NASA Astrophysics Data System (ADS)
Comastri, S. A.; Perez, Liliana I.; Pérez, Gervasio D.; Bastida, K.; Martin, G.
2008-04-01
The wavefront aberration of any image-forming system and, in particular, of a human eye, is often expanded in Zernike modes, each mode being weighted by a coefficient that depends both on the image-forming components of the system and on the contour, size and centering of the pupil. In the present article, expanding the wavefront aberration up to 7th order, an analytical method is developed to compute a new set of Zernike coefficients for a contracted and/or displaced pupil in terms of an original set evaluated via ray tracing for a dilated and transversally arbitrarily displaced pupil. A transformation matrix of dimension 36×36 is obtained by multiplying the previously derived scaling-horizontal translation matrix by appropriate rotation matrices. Multiplying the original coefficients by this transformation matrix yields analytical formulas for each new coefficient; for the information concerning the wavefront aberration to be available, these formulas must be employed in cases in which the new pupil is contained in the original one. The use of these analytical formulas is exemplified by applying them to study the effect of pupil contraction and/or decentering in three situations: calculation of corneal aberrations of a keratoconic subject for the natural photopic pupil size and various decenterings; coma compensation by means of pupil shift in a fictitious system having only primary aberrations; and evaluation of the amount of astigmatism and coma of a hypothetical system originally having spherical aberration alone.
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
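A minimal SGP-style iteration for a nonnegatively constrained least-squares problem might look like the following. The fixed diagonal scaling and the simple monotone backtracking rule are simplifying assumptions, not the steplength and scaling selection rules analyzed in the paper.

```python
import numpy as np

# Scaled-gradient-projection sketch for min 0.5*||A x - b||^2 s.t. x >= 0.
# D is a (here fixed, diagonal) scaling matrix multiplying the gradient;
# alpha is the variable steplength, chosen by simple backtracking.
def sgp(A, b, x0, iters=200):
    x = x0.copy()
    D = np.ones_like(x)                              # diagonal scaling matrix
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                        # gradient of f
        alpha = 1.0
        while True:
            x_new = np.maximum(x - alpha * D * g, 0.0)   # project onto x >= 0
            if f(x_new) <= f(x) or alpha < 1e-12:
                break
            alpha *= 0.5                             # backtrack the steplength
        x = x_new
    return x
```

For a convex objective like this one, the iterates converge to a constrained minimizer, consistent with the convex-case result cited above.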
An indirect approach to the extensive calculation of relationship coefficients
Colleau, Jean-Jacques
2002-01-01
A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalugin, A. V., E-mail: Kalugin-AV@nrcki.ru; Tebin, V. V.
The specific features of calculating the effective multiplication factor by the Monte Carlo method for weakly coupled and non-asymptotic multiplying systems are discussed. Particular examples are considered, and practical recommendations are given on the detection and Monte Carlo calculation of systems typical of the numerical substantiation of nuclear safety in VVER fuel management problems. In particular, the choice of parameters for the batch mode, the method for normalizing the neutron batch, and the calculation and interpretation of the eigenvalue spectrum of the integral fission matrix are discussed.
ERIC Educational Resources Information Center
Adachi, Kohei
2009-01-01
In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…
Feature Extraction from Subband Brain Signals and Its Classification
NASA Astrophysics Data System (ADS)
Mukul, Manoj Kumar; Matsuno, Fumitoshi
This paper considers both the non-stationarity and the independence/uncorrelatedness criteria, along with the asymmetry ratio, over the electroencephalogram (EEG) signals and proposes a hybrid approach to signal preprocessing before feature extraction. A filter bank approach of the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals; it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. A post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the orthogonality and orthonormality criteria over the whitening matrix and the separating matrix, respectively, no longer hold. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between these two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition to select an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier in order to obtain the required asymmetry, extending the AMUSE algorithm in the subband domain. The desired subband is further subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are then subjected to a feature extraction (power spectral density) step followed by linear discriminant analysis (LDA).
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
Using Redundancy To Reduce Errors in Magnetometer Readings
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are curl B = 0 and div B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
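The correction step described above amounts to an equality-constrained least-squares solve. A generic sketch follows, assuming the three Maxwell-derived constraints have been assembled into a matrix C with C x = 0; the function name and the toy constraint in the usage are illustrative, not the article's actual constraint matrix.

```python
import numpy as np

# Given noisy readings y and a constraint matrix C (rows encode the
# constraints C x = 0), find corrected values x minimizing ||x - y||^2
# subject to C x = 0, by solving the KKT system in matrix-vector form.
def constrained_correction(y, C):
    n, m = len(y), C.shape[0]
    K = np.zeros((n + m, n + m))
    K[:n, :n] = 2.0 * np.eye(n)      # from d/dx of the squared-error term
    K[:n, n:] = C.T                  # Lagrange-multiplier coupling
    K[n:, :n] = C                    # the constraints themselves
    rhs = np.concatenate([2.0 * y, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)    # invert the KKT matrix
    return sol[:n]                   # corrected values (multipliers in sol[n:])
```

For example, with the single toy constraint that the corrected readings sum to zero, the solution is the readings minus their mean.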
NASA Astrophysics Data System (ADS)
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition either by the direct constraint elimination or by the Lagrange multiplier elimination methods. The macroscopic tangent operators are computed in an efficient way from a multiple right hand sides linear system whose left hand side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of vectors at the right hand side is equal to the number of the macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed using this factorized matrix at a reduced computational time.
A feedforward artificial neural network based on quantum effect vector-matrix multipliers.
Levy, H J; McGill, T C
1993-01-01
The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT.
Optical implementation of systolic array processing
NASA Technical Reports Server (NTRS)
Caulfield, H. J.; Rhodes, W. T.; Foster, M. J.; Horvitz, S.
1981-01-01
Algorithms for matrix vector multiplication are implemented using acousto-optic cells for multiplication and input data transfer and using charge-coupled device (CCD) detector arrays for accumulation and output of the results. No two-dimensional matrix mask is required; matrix changes are implemented electronically. A system for multiplying a 50 component nonnegative real vector by a 50 by 50 nonnegative real matrix is described. Modifications for bipolar real and complex valued processing are possible, as are extensions to matrix-matrix multiplication and multiplication of a vector by multiple matrices.
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption of unlabeled data, i.e., neighboring instances should also share a similar set of labels, and thus may underexploit the intrinsic structure of the data. In addition, solving the matrix completion problem can be inefficient. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness. To speed up the convergence of our model, we develop an efficient iterative algorithm which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
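Stripped of the manifold term, the core ADMM loop for nuclear-norm matrix completion can be sketched as follows; the names `svt` and `complete`, and the fixed penalty `rho`, are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

# Singular value thresholding: the proximal operator of the nuclear norm.
def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Simplified ADMM for min ||X||_* s.t. X agrees with the observed entries
# of M ('mask' marks the observed positions); the manifold/Laplacian
# regularizer described above is omitted.
def complete(M, mask, rho=1.0, iters=200):
    Z = M * mask
    U = np.zeros_like(M)
    for _ in range(iters):
        X = svt(Z - U, 1.0 / rho)        # nuclear-norm proximal step
        Z = X + U
        Z[mask] = M[mask]                # enforce the observed entries
        U += X - Z                       # dual (multiplier) update
    return Z
```

In the multi-label setting, M would be the joint feature/label matrix and the unobserved entries the test labels.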
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. We then introduce a tractable relaxation of our rank function, which leads to a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The Compressed Continuous Computation (C3) library enables continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, and integrating multidimensional functions.
An Application of Sylvester's Rank Inequality
ERIC Educational Resources Information Center
Kung, Sidney H.
2011-01-01
Using two well known criteria for the diagonalizability of a square matrix plus an extended form of Sylvester's Rank Inequality, the author presents a new condition for the diagonalization of a real matrix from which one can obtain the eigenvectors by simply multiplying some associated matrices without solving a linear system of simultaneous…
Initial Analysis of Transient Power Time Lag Due to Heterogeneity Within the TREAT Fuel Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
D.M. Wachs; A.X. Zabriskie, W.R. Marcum
2014-06-01
The topic Nuclear Safety encompasses a broad spectrum of focal areas within the nuclear industry; one specific aspect centers on the performance and integrity of nuclear fuel during a reactivity insertion accident (RIA). This specific accident has proven to be fundamentally difficult to characterize theoretically due to the numerous empirically driven characteristics that quantify the fuel and reactor performance. The Transient Reactor Test (TREAT) facility was designed and operated to better understand fuel behavior under extreme (i.e. accident) conditions; it was shut down in 1994. Recently, efforts have been underway to recommission the TREAT facility to continue testing of advanced accident-tolerant fuels (i.e. recently developed fuel concepts). To aid in the restart effort, new simulation tools are being used to investigate the behavior of nuclear fuels during the facility's transient events. This study focuses specifically on characterizing the modeled effects of fuel particles within the fuel matrix of the TREAT. The objective of this study was to (1) identify the impact of modeled heterogeneity within the fuel matrix during a transient event, and (2) demonstrate acceptable modeling processes for the purpose of TREAT safety analyses, specific to fuel matrix and particle size. Hypothetically, a fuel that is dominantly heterogeneous will demonstrate a clearly different temporal heating response from that of a modeled homogeneous fuel. This time difference is a result of the uniqueness of the thermal diffusivity within the fuel particle and fuel matrix. Using MOOSE/BISON to simulate the temperature time-lag effect of fuel particle diameter during a transient event, a comparison of the average graphite moderator temperature surrounding a spherical particle of fuel was made for both types of fuel simulations.
This comparison showed that at a given time and with a specific fuel particle diameter, the fuel particle (heterogeneous) simulation and the homogeneous simulation were related by a multiplier relative to the average moderator temperature. As time increases, the multiplier is comparable to the factor found in a previous analytical study from the literature. The implementation of this multiplier and the method of analysis may be employed to remove assumptions and increase fidelity for future research on the effect of fuel particles during transient events.
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied, and the accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
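As one concrete instance of such a derivative matrix, the well-known Chebyshev-point construction (one standard choice of collocation points, not necessarily the one used in this report) lets the approximate derivative be computed as a single matrix-vector multiply:

```python
import numpy as np

# Chebyshev differentiation matrix on N+1 Chebyshev points:
# D @ f_values approximates f' at the collocation points.
def cheb(N):
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x
```

For smooth functions the error of D @ f decays spectrally with N, illustrating the resolution behavior studied in the report.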
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-01-17
This library is an implementation of the Sparse Approximate Matrix Multiplication (SpAMM) algorithm. It provides a matrix data type and an approximate matrix product which exhibits linear-scaling computational complexity for matrices with decay. The product error and the performance of the multiply can be tuned by choosing an appropriate tolerance. The library can be compiled for serial execution or for parallel execution on shared-memory systems with an OpenMP-capable compiler.
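A minimal recursive sketch of the SpAMM idea follows (assuming square matrices with power-of-two dimensions; this is not the library's API). Sub-block products are skipped when the product of the blocks' norms falls below the tolerance, which is what yields linear scaling for matrices with decay.

```python
import numpy as np

# Recursive SpAMM-style approximate multiply: prune a sub-block product
# when ||A_blk||_F * ||B_blk||_F < tol; tol = 0 recovers the exact product.
def spamm(A, B, tol):
    n = A.shape[0]
    if np.linalg.norm(A) * np.linalg.norm(B) < tol:
        return np.zeros((n, B.shape[1]))      # block product deemed negligible
    if n <= 2:
        return A @ B                          # base case: exact small multiply
    h = n // 2
    w = B.shape[1] // 2
    C = np.zeros((n, B.shape[1]))
    for i in (0, 1):                          # standard blocked multiply,
        for j in (0, 1):                      # with pruning in each recursion
            for k in (0, 1):
                C[i*h:(i+1)*h, j*w:(j+1)*w] += spamm(
                    A[i*h:(i+1)*h, k*h:(k+1)*h],
                    B[k*h:(k+1)*h, j*w:(j+1)*w], tol)
    return C
```

Raising the tolerance trades accuracy for fewer recursive block products, mirroring the tunable error/performance trade-off described above.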
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
... positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to ... of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random ... sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for ...
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
Fabrication of tungsten wire reinforced nickel-base alloy composites
NASA Technical Reports Server (NTRS)
Brentnall, W. D.; Toth, I. J.
1974-01-01
Fabrication methods for tungsten fiber reinforced nickel-base superalloy composites were investigated. Three matrix alloys in pre-alloyed powder or rolled sheet form were evaluated in terms of fabricability into composite monotape and multi-ply forms. The utility of monotapes for fabricating more complex shapes was demonstrated. Preliminary 1093 °C (2000 °F) stress-rupture tests indicated that efficient utilization of fiber strength was achieved in composites fabricated by diffusion bonding processes. The fabrication of thermal fatigue specimens is also described.
Preconditioning and the limit to the incompressible flow equations
NASA Technical Reports Server (NTRS)
Turkel, E.; Fiterman, A.; Vanleer, B.
1993-01-01
The use of preconditioning methods to accelerate convergence to a steady state for both the incompressible and compressible fluid dynamic equations is considered, as is the relation between them for both the continuous problem and the finite difference approximation. The analysis relies on the inviscid equations. The preconditioning consists of a matrix multiplying the time derivatives; hence, the steady state of the preconditioned system is the same as the steady state of the original system. For finite difference methods the preconditioning can change and improve the steady state solutions. An application to flow around an airfoil is presented.
Unsteady combustion of solid propellants
NASA Astrophysics Data System (ADS)
Chung, T. J.; Kim, P. K.
The oscillatory motions of all field variables (pressure, temperature, velocity, density, and fuel fractions) in the flame zone of solid propellant rocket motors are calculated using the finite element method. The Arrhenius law with a single step forward chemical reaction is used. Effects of radiative heat transfer, impressed arbitrary acoustic wave incidence, and idealized mean flow velocities are also investigated. Boundary conditions are derived at the solid-gas interfaces and at the flame edges which are implemented via Lagrange multipliers. Perturbation expansions of all governing conservation equations up to and including the second order are carried out so that nonlinear oscillations may be accommodated. All excited frequencies are calculated by means of eigenvalue analyses, and the combustion response functions corresponding to these frequencies are determined. It is shown that the use of isoparametric finite elements, Gaussian quadrature integration, and the Lagrange multiplier boundary matrix scheme offers a convenient approach to two-dimensional calculations.
Hrabe, Nikolas W.; Heinl, Peter; Bordia, Rajendra K.; Körner, Carolin; Fernandes, Russell J.
2013-01-01
Regular 3D periodic porous Ti-6Al-4 V structures were fabricated by the selective electron beam melting method (EBM) over a range of relative densities (0.17–0.40) and pore sizes (500–1500 μm). Structures were seeded with human osteoblast-like cells (SAOS-2) and cultured for four weeks. Cells multiplied within these structures and extracellular matrix collagen content increased. Type I and type V collagens typically synthesized by osteoblasts were deposited in the newly formed matrix with time in culture. High magnification scanning electron microscopy revealed cells attached to surfaces on the interior of the structures with an increasingly fibrous matrix. The in-vitro results demonstrate that the novel EBM-processed porous structures, designed to address the effect of stress-shielding, are conducive to osteoblast attachment, proliferation and deposition of a collagenous matrix characteristic of bone. PMID:23869614
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation, which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
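The finite difference approximation of the Jacobian-vector product discussed above is commonly written as J(x)v ≈ (F(x + εv) − F(x))/ε. A minimal sketch follows; the step-size heuristic shown is one common choice, not necessarily the one used in the paper, and the residual function is a made-up example.

```python
import numpy as np

def jacvec_fd(F, x, v):
    """Approximate J(x) @ v by a forward difference, never forming the
    Jacobian explicitly (as in Jacobian-free Newton-Krylov solvers)."""
    eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x)) \
          / max(np.linalg.norm(v), 1e-30)
    return (F(x + eps * v) - F(x)) / eps

# a residual with a Jacobian we can write down, to check the approximation
def F(x):
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + 3.0 * x[1]])

x = np.array([0.5, 2.0])
v = np.array([1.0, -1.0])
J = np.array([[2.0 * x[0], 1.0],
              [np.cos(x[0]), 3.0]])    # analytic Jacobian at x
approx = jacvec_fd(F, x, v)
```

The approximation error is first order in ε, which is the inaccuracy the abstract refers to: too large a step truncates, too small a step amplifies roundoff in F.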
A FLUKA simulation of the KLOE electromagnetic calorimeter
NASA Astrophysics Data System (ADS)
Di Micco, B.; Branchini, P.; Ferrari, A.; Loffredo, S.; Passeri, A.; Patera, V.
2007-10-01
We present the simulation of the KLOE calorimeter with the FLUKA Monte Carlo program. The response of the detector to electromagnetic showers has been studied and compared with the publicly available KLOE data. The energy and time resolution of the electromagnetic clusters are in good agreement with the data. The simulation has also been used to study a possible improvement of the KLOE calorimeter using multianode photomultipliers. A HAMAMATSU R7600-M16 photomultiplier has been assembled in order to determine the whole cross-talk matrix, which has been included in the simulation. The cross-talk matrix takes into account the effects of a realistic photomultiplier's electronics and of its coupling to the active material. The performance of the modified readout has been compared to the usual KLOE configuration.
Analytical Energy Gradients for Excited-State Coupled-Cluster Methods
NASA Astrophysics Data System (ADS)
Wladyslawski, Mark; Nooijen, Marcel
The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. 
The Lagrange multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: a Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit equations for the wavefunction amplitudes, the Lagrange multipliers, and the analytical gradient via the perturbation-independent generalized Hellmann-Feynman effective density matrix. This systematic automated derivation procedure is applied to obtain the detailed gradient equations for the excitation energy (EE-), double ionization potential (DIP-), and double electron affinity (DEA-) similarity transformed equation-of-motion coupled-cluster singles-and-doubles (STEOM-CCSD) methods. In addition, the derivatives of the closed-shell-reference excitation energy (EE-), ionization potential (IP-), and electron affinity (EA-) equation-of-motion coupled-cluster singles-and-doubles (EOM-CCSD) methods are derived. Furthermore, the perturbative EOM-PT and STEOM-PT gradients are obtained. The algebraic derivative expressions for these dozen methods are all derived here uniformly through the automated Lagrange multiplier process and are expressed compactly in a chain-rule/intermediate-density formulation, which facilitates a unified modular implementation of analytic energy gradients for CCSD/PT-based electronic methods. The working equations for these analytical gradients are presented in full detail, and their factorization and implementation into an efficient computer code are discussed.
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
... and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and ... The reduction: First, let W ∈ R^{d×k} be a linear transformation such that M2(W, W) = W^T M2 W = I, where I is the k × k identity matrix (i.e., W whitens ... approximate the whitening matrix W ∈ R^{d×k} from the second-moment matrix M2 ∈ R^{d×d}. To do this, one first multiplies M2 by a random matrix R ∈ R^{d×k'} for some k' ≥ k.
Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication
Ballard, Grey; Druinsky, Alex; Knight, Nicholas; ...
2015-01-01
The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.
A robust direct-integration method for rotorcraft maneuver and periodic response
NASA Technical Reports Server (NTRS)
Panda, Brahmananda
1992-01-01
The Newmark-Beta method and the Newton-Raphson iteration scheme are combined to develop a direct-integration method for evaluating the maneuver and periodic-response expressions for rotorcraft. The method requires the generation of Jacobians and includes higher derivatives in the formulation of the geometric stiffness matrix to enhance the convergence of the system. The method leads to effective convergence with nonlinear structural dynamics and aerodynamic terms. Singularities in the matrices can be addressed with the method as they arise from a Lagrange multiplier approach for coupling equations with nonlinear constraints. The method is also shown to be general enough to handle singularities from quasisteady control-system models. The method is shown to be more general and robust than the similar 2GCHAS method for analyzing rotorcraft dynamics.
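The Newmark-Beta scheme at the heart of the method can be sketched for a linear single-degree-of-freedom system; the paper wraps Newton-Raphson iteration around such a step for the nonlinear rotorcraft terms. This linear version is illustrative only, with standard average-acceleration parameters assumed.

```python
import numpy as np

def newmark_sdof(m, c, k, F, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark-beta stepping for a linear SDOF system m*u'' + c*u' + k*u = F(t).
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration rule."""
    u, v = u0, v0
    a = (F(0.0) - c * v - k * u) / m          # consistent initial acceleration
    us = [u]
    for n in range(1, nsteps + 1):
        t = n * dt
        # effective stiffness and load for the implicit displacement update
        keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
        reff = (F(t)
                + m * (u / (beta * dt ** 2) + v / (beta * dt)
                       + (0.5 / beta - 1.0) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = reff / keff
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) \
                - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return np.array(us)

# undamped oscillator u'' + (2*pi)^2 u = 0, u(0) = 1: exact solution cos(2*pi*t)
w = 2.0 * np.pi
us = newmark_sdof(1.0, 0.0, w ** 2, lambda t: 0.0, 1.0, 0.0, 1e-3, 1000)
```

In the nonlinear setting, `u_new = reff / keff` becomes a Newton-Raphson solve with the Jacobians the abstract mentions, but the time-stepping skeleton is the same.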
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This has heavy tailed regions and can be used to describe a matrix pattern of l × m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with L_q regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
Modelling of Rigid-Body and Elastic Aircraft Dynamics for Flight Control Development.
1986-06-01
Subroutines: AMAT, MATSAV, AUGMENT, MINV, BMAT, MMULT, EVAL, RLPLOT, FASTCHG, STABDER. The subroutines are fairly well commented so that a person familiar with the theory ... performed as in a typical flutter solution. Subroutine BMAT computes the B matrix from the forcing function matrix Q; B is a function of dynamic ... and BMAT multiplies matrices. This is used to form the A and B matrices. Subroutine EVAL computes the eigenvalues of the A matrix ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, John
A measurement of the top quark mass in tt̄ → l + jets candidate events, obtained from pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector, is presented. The measurement approach is that of a matrix element method. For each candidate event, a two-dimensional likelihood is calculated in the top pole mass and a constant scale factor, 'JES', where JES multiplies the input particle jet momenta and is designed to account for the systematic uncertainty of the jet momentum reconstruction. As with all matrix element techniques, the method involves an integration using the Standard Model matrix element for tt̄ production and decay. However, the technique presented is unique in that the matrix element is modified to compensate for kinematic assumptions which are made to reduce computation time. Background events are dealt with through use of an event observable which distinguishes signal from background, as well as through a cut on the value of an event's maximum likelihood. Results are based on a 955 pb⁻¹ data sample, using events with a high-p_T lepton and exactly four high-energy jets, at least one of which is tagged as coming from a b quark; 149 events pass all the selection requirements. They find M_meas = 169.8 ± 2.3 (stat.) ± 1.4 (syst.) GeV/c².
Using Strassen's algorithm to accelerate the solution of linear systems
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lee, King; Simon, Horst D.
1990-01-01
Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
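Strassen's seven-multiplication recursion, with the fallback to a conventional multiply at small block sizes that practical implementations use to preserve performance, can be sketched as follows (power-of-two square matrices are assumed for brevity; the CRAY implementation described above handles arbitrary shapes):

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen's algorithm: 7 recursive half-size multiplies instead of 8,
    falling back to an ordinary product below the leaf size."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
C = strassen(A, B, leaf=32)
```

The temporaries M1..M7 are the scratch space the abstract refers to; reducing their footprint while keeping the leaf multiplies large is the central engineering trade-off.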
Ren, Xinxin; Liu, Jia; Zhang, Chengsen; Luo, Hai
2013-03-15
With the rapid development of ambient mass spectrometry, hybrid laser-based ambient ionization methods that can generate multiply charged ions of large biomolecules and also characterize small molecules with good signal-to-noise in both positive and negative ion modes are of particular interest. An ambient ionization method termed high-voltage-assisted laser desorption ionization (HALDI) is developed, in which a 1064 nm laser is used to desorb various liquid samples from a sample target biased at a high potential, without the need for an organic matrix. The pre-charged liquid samples are desorbed by the laser to form small charged droplets, which may undergo an electrospray-like ionization process to produce multiply charged ions of large biomolecules. Various samples including proteins, oligonucleotides (ODNs), drugs, whole milk and chicken eggs have been analyzed by HALDI-MS in both positive and negative ion mode with little or no sample preparation. In addition, HALDI can generate intense signals with better signal-to-noise in negative ion mode than laser desorption spray post-ionization (LDSPI) from the same samples, such as ODNs and some carboxylic-group-containing small drug molecules. HALDI-MS can directly analyze a variety of liquid samples including proteins, ODNs, pharmaceuticals and biological fluids in both positive and negative ion mode without the use of an organic matrix. This technique may be further developed into a useful tool for rapid analysis in many different fields such as the pharmaceutical, food, and biological sciences. Copyright © 2013 John Wiley & Sons, Ltd.
Fast and accurate matrix completion via truncated nuclear norm regularization.
Hu, Yao; Zhang, Debing; Ye, Jieping; Li, Xuelong; He, Xiaofei
2013-09-01
Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, which is given by the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
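The ADMM-based procedures above repeatedly apply singular value shrinkage. A minimal sketch of that building block follows; the full TNNR algorithms additionally exclude the largest singular values from the penalty and embed this operator in the ADMM updates, which are omitted here.

```python
import numpy as np

def svt(X, tau):
    """Singular value shrinkage: the proximal operator of tau * (nuclear norm).
    Each singular value is reduced by tau and clipped at zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 matrix
X_same = svt(X, 0.0)     # zero threshold changes nothing
X_zero = svt(X, 1e3)     # threshold above every singular value zeroes X
```

Because shrinkage penalizes every retained singular value equally, excluding the largest few (the "truncation") keeps the dominant rank structure intact, which is the paper's motivation.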
NASA Astrophysics Data System (ADS)
Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.
2017-09-01
In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× fewer matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold), where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem, and anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and the FMT reconstruction problem is converted into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance can be obtained directly from the anatomical image and is included in the forward modeling; one of the advantages is that we do not need to segment the anatomical image for the targets and background.
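The forward-model construction described, multiplying the sensitivity matrix by the kernel matrix and reconstructing kernel coefficients instead of concentrations, can be sketched on a toy problem. The sizes, the Gaussian kernel, and the least-squares solve below are illustrative stand-ins for the paper's finite element matrices and iterative reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_nodes = 40, 100
A = rng.standard_normal((n_meas, n_nodes))     # stand-in sensitivity matrix

# toy "anatomical" kernel: Gaussian similarity between 1-D node features
feat = np.linspace(0.0, 1.0, n_nodes)
K = np.exp(-(feat[:, None] - feat[None, :]) ** 2 / (2.0 * 0.05 ** 2))

alpha_true = rng.standard_normal(n_nodes)      # hidden kernel coefficients
x_true = K @ alpha_true                        # nodal fluorophore concentration
y = A @ x_true                                 # simulated measurements

AK = A @ K                                     # new system matrix: sensitivity * kernel
alpha = np.linalg.lstsq(AK, y, rcond=None)[0]  # stand-in for the iterative solve
x_rec = K @ alpha                              # concentration recovered at each node
```

The anatomy enters only through K, so similar nodes are encouraged to take similar values without any explicit segmentation of the anatomical image.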
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
A discrimination method for the detection of pneumonia using chest radiograph.
Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R
2010-03-01
This paper presents a statistical method for the detection of lobar pneumonia using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which is then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. This study recommends detecting pneumonia by constructing probability ellipsoids or discriminant functions using maximum energy and maximum column sum energy texture measures, for which misclassification probabilities were less than 0.15. 2009 Elsevier Ltd. All rights reserved.
Efficient optical nonlinear Langmuir-Blodgett films: roles of matrix molecules
NASA Astrophysics Data System (ADS)
Ma, Shihong; Lu, Xingze; Liu, Liying; Han, Kui; Wang, Wencheng; Zhang, Zhi-Ming
1996-10-01
A novel bi-fatty-chain amphiphilic molecule, nitrogencrown (NC), was adopted as an inert material for the fabrication of optically nonlinear Langmuir-Blodgett (LB) multilayers. Structural improvement in the Z-type mixed fullerene derivative (C60-Be)/NC LB multilayer samples was realized by insertion of the C60-Be molecules between the two hydrophobic chains of the NC molecules. The relatively large third-order susceptibility χ⁽³⁾_xxxx(−3ω; ω, ω, ω) = 2.9 × 10⁻¹⁹ m² V⁻² (or 2.1 × 10⁻¹¹ esu) was deduced by measuring third harmonic generation (THG) from the C60-Be samples. The second harmonic generation (SHG) intensity increased quadratically with the bilayer number (up to 116 bilayers) in Y-type hemicyanine (HEM)/NC interleaved LB multilayers, owing to improvement of the structural properties by insertion of the long hydrophobic tail of the HEM molecules between the two chains of the NC molecules. The second-order susceptibility χ⁽²⁾_zxx(−2ω; ω, ω) = 18 pm V⁻¹ (or 4.35 × 10⁻⁸ esu) was obtained by measuring SHG from the HEM samples. The NC molecule has attractive features as a matrix material in the fabrication of LB multilayers made from optically nonlinear materials with hydrophobic long tails or ball-like molecules.
NASA Astrophysics Data System (ADS)
Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.
2018-04-01
In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a fixed Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.
Rational calculation accuracy in acousto-optical matrix-vector processor
NASA Astrophysics Data System (ADS)
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computations in a comparatively small-size processor and the acceptable power consumption make the acousto-optic matrix-vector multiplier (AOMVM) attractive for processing large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. Reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and a reduction of the demands on its components.
Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A
2013-11-05
Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
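The load / load-and-splat / multiply-add pattern described above can be emulated in a short sketch. This is not the patented mechanism itself, just an illustrative NumPy model in which each element of the first operand is "splatted" (replicated) across a vector register and partial products are accumulated:

```python
import numpy as np

def splat_matmul(A, B):
    """Emulate the vector-load / load-and-splat / multiply-add pattern.

    Each row of the result is built by replicating one scalar element
    of A across a vector and accumulating partial products with rows
    of B, mimicking the described register-level data flow.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(m):
        acc = np.zeros(n)                  # accumulator "vector register"
        for p in range(k):
            splat = np.full(n, A[i, p])    # load-and-splat one element of A
            acc += splat * B[p]            # multiply-add of the partial product
        C[i] = acc
    return C
```

The result matches an ordinary matrix product; the point of the hardware pattern is that the inner update is a single fused vector operation.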
Momoh, Adeyiza O; Kamat, Ashish M; Butler, Charles E
2010-12-01
Pelvic floor reconstruction after pelvic exenteration is challenging, particularly with bacterial contamination and/or pelvic irradiation. Traditional regional myocutaneous flap options are not always available, especially in the multiply operated patient. Human acellular dermal matrix (HADM) confers several advantages and is associated with less morbidity when compared to synthetic mesh used in these compromised wound beds. We report a clinical case of an elderly patient with an anterior pelvic floor defect, who underwent successful reconstruction with a combination of human acellular dermal matrix and an omental flap. Copyright © 2010. Published by Elsevier Ltd.
Ryumin, Pavel; Cramer, Rainer
2018-07-12
New liquid atmospheric pressure (AP) matrix-assisted laser desorption/ionization (MALDI) matrices that produce predominantly multiply charged ions have been developed and evaluated with respect to their performance for peptide and protein analysis by mass spectrometry (MS). Both the chromophore and the viscous support liquid in these matrices were optimized for highest MS signal intensity, S/N values and maximum charge state. The best performance in both protein and peptide analysis was achieved employing light diols as matrix support liquids (e.g. ethylene glycol and propylene glycol). Investigating the influence of the chromophore, it was found that 2,5-dihydroxybenzoic acid resulted in a higher analyte ion signal intensity for the analysis of small peptides; however, larger molecules (>17 kDa) were undetectable. For larger molecules, a sample preparation based on α-cyano-4-hydroxycinnamic acid as the chromophore was developed and multiply protonated analytes with charge states of more than 50 were detected. Thus, for the first time it was possible to detect proteins as large as ∼80 kDa with MALDI MS with a high number of charge states, i.e. m/z values below 2000. Systematic investigations of various matrix support liquids have revealed a linear dependency between laser threshold energy and surface tension of the liquid MALDI sample. Copyright © 2018 Elsevier B.V. All rights reserved.
NASTRAN internal improvements for 1992 release
NASA Technical Reports Server (NTRS)
Chan, Gordon C.
1992-01-01
The 1992 NASTRAN release incorporates a number of improvements transparent to users. The NASTRAN executable was made smaller by 70 percent for the RISC-based Unix machines by linking NASTRAN into a single program, freeing some 33 megabytes of system disc space that can be used by NASTRAN for solving larger problems. Some basic matrix operations, such as forward-backward substitution (FBS), multiply-add (MPYAD), matrix transpose, and the fast eigensolution extraction routine (FEER), have been made more efficient by including new methods, new logic, new I/O techniques, and, in some cases, new subroutines. Some of the improvements lay the groundwork for system vectorization. These are basic finite element operations, used repeatedly in a finite element program such as NASTRAN, so any improvements on them translate into substantial cost and CPU time savings. NASTRAN performance on various computer platforms is also discussed.
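As a rough illustration of one of the kernels named above, here is a minimal forward-backward substitution (FBS) sketch for solving (L L^T) x = b given a Cholesky factor L. The looping structure is illustrative only, not NASTRAN's implementation:

```python
import numpy as np

def fbs(L, b):
    """Forward-backward substitution: solve (L L^T) x = b.

    Forward pass solves L y = b; backward pass solves L^T x = y.
    This is the repeated inner kernel that a solver like NASTRAN's
    FBS module spends much of its time in.
    """
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                            # forward: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                # backward: L^T x = y
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x
```

Production codes block and vectorize these loops; the savings cited in the abstract come from exactly such restructuring.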
Surface Snow Density of East Antarctica Derived from In-Situ Observations
NASA Astrophysics Data System (ADS)
Tian, Y.; Zhang, S.; Du, W.; Chen, J.; Xie, H.; Tong, X.; Li, R.
2018-04-01
Models based on physical principles or semi-empirical parameterizations have been used to compute the firn density, which is essential for the study of surface processes in the Antarctic ice sheet. However, parameterization of surface snow density is often challenged by the description of detailed local characteristics. In this study we propose to generate a surface density map for East Antarctica from all the field observations that are available. Considering that the observations are non-uniformly distributed around East Antarctica, obtained by different methods, and temporally inhomogeneous, the field observations are used to establish an initial density map with a grid size of 30 × 30 km2 in which the observations are averaged at a temporal scale of five years. We then construct an observation matrix with its columns as the map grids and rows as the temporal scale. If a site has an unknown density value for a period, we set it to 0 in the matrix. To capture the main spatial and temporal information of the surface snow density matrix, we adopt the Empirical Orthogonal Function (EOF) method to decompose the observation matrix and take only the first several lower-order modes, because these modes already contain most of the information in the observation matrix. The many zero entries in the matrix are handled with a matrix completion algorithm, from which we derive the time series of surface snow density at each observation site. Finally, we obtain the surface snow density by multiplying the modes, interpolated by kriging, with the corresponding amplitudes of the modes. A comparative analysis has been done between our surface snow density map and model results. The above details will be introduced in the paper.
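The EOF-plus-completion idea can be sketched under simplifying assumptions: here a hard-imputation loop with a truncated SVD stands in for both the EOF decomposition and the matrix completion step (function names and the iteration scheme are illustrative, not the authors' algorithm):

```python
import numpy as np

def eof_complete(M, mask, n_modes=2, n_iter=50):
    """Fill missing entries (mask == False) of an observation matrix.

    Alternate between a truncated-SVD (EOF) reconstruction using the
    leading modes and re-imposing the observed entries, so the gaps
    are gradually filled consistently with the dominant modes.
    """
    X = np.where(mask, M, 0.0)          # zeros stand in for missing values
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]  # leading modes
        X = np.where(mask, M, low)      # keep observations, update the gaps
    return X
```

For data that really are dominated by a few modes, this simple scheme recovers the missing entries well; the paper's pipeline additionally interpolates the spatial modes by kriging.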
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
Full-degrees-of-freedom frequency based substructuring
NASA Astrophysics Data System (ADS)
Drozg, Armin; Čepon, Gregor; Boltežar, Miha
2018-01-01
Dividing the whole system into multiple subsystems and a separate dynamic analysis is common practice in the field of structural dynamics. The substructuring process improves the computational efficiency and enables an effective realization of the local optimization, modal updating and sensitivity analyses. This paper focuses on frequency-based substructuring methods using experimentally obtained data. An efficient substructuring process has already been demonstrated using numerically obtained frequency-response functions (FRFs). However, the experimental process suffers from several difficulties, among which, many of them are related to the rotational degrees of freedom. Thus, several attempts have been made to measure, expand or combine numerical correction methods in order to obtain a complete response model. The proposed methods have numerous limitations and are not yet generally applicable. Therefore, in this paper an alternative approach based on experimentally obtained data only, is proposed. The force-excited part of the FRF matrix is measured with piezoelectric translational and rotational direct accelerometers. The incomplete moment-excited part of the FRF matrix is expanded, based on the modal model. The proposed procedure is integrated in a Lagrange Multiplier Frequency Based Substructuring method and demonstrated on a simple beam structure, where the connection coordinates are mainly associated with the rotational degrees of freedom.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take into account mechanical constraints. The resulting matrices then exhibit a saddle point structure and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information exploiting the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
Peng, Ivory X; Shiea, Jentaie; Ogorzalek Loo, Rachel R; Loo, Joseph A
2007-01-01
We have constructed an electrospray-assisted laser desorption/ionization (ELDI) source which utilizes a nitrogen laser pulse to desorb intact molecules from matrix-containing sample solution droplets, followed by electrospray ionization (ESI) post-ionization. The ELDI source is coupled to a quadrupole ion trap mass spectrometer and allows sampling under ambient conditions. Preliminary data showed that ELDI produces ESI-like multiply charged peptides and proteins up to 29 kDa carbonic anhydrase and 66 kDa bovine albumin from single-protein solutions, as well as from complex digest mixtures. The generated multiply charged polypeptides enable efficient tandem mass spectrometric (MS/MS)-based peptide sequencing. ELDI-MS/MS of protein digests and small intact proteins was performed both by collisionally activated dissociation (CAD) and by nozzle-skimmer dissociation (NSD). ELDI-MS/MS may be a useful tool for protein sequencing analysis and top-down proteomics study, and may complement matrix-assisted laser desorption/ionization (MALDI)-based measurements. Copyright (c) 2007 John Wiley & Sons, Ltd.
Garbarino, John R.; Taylor, Howard E.
1987-01-01
Inductively coupled plasma mass spectrometry is employed in the determination of Ni, Cu, Sr, Cd, Ba, Ti, and Pb in nonsaline, natural water samples by stable isotope dilution analysis. Hydrologic samples were directly analyzed without any unusual pretreatment. Interference effects related to overlapping isobars, formation of metal oxide and multiply charged ions, and matrix composition were identified and suitable methods of correction evaluated. A comparability study showed that single-element isotope dilution analysis was only marginally better than sequential multielement isotope dilution analysis. Accuracy and precision of the single-element method were determined on the basis of results obtained for standard reference materials. The instrumental technique was shown to be ideally suited for programs associated with certification of standard reference materials.
1980-07-01
WORK1, WORK2, ALOC, and FLAMB. The WORK1 array comprises a number of small arrays which have been read from input and will be utilized throughout the ... of the WORK2 array at least as large as the maximum of the two. The size is the same for both the ALOC and FLAMB arrays. The ALOC array stores the ... allocation matrix and the FLAMB array is used for the Lagrangian multiplier matrix. Their dimension should be set to 3 x NWPNS x NTGTS, where NTGTS is
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, the RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods both in detection performance and in computational time.
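A minimal sketch of RPCA via the inexact augmented Lagrange multiplier (IALM) scheme mentioned above. Parameter choices follow common defaults from the RPCA literature; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(D, n_iter=100, rho=1.2):
    """Inexact ALM for RPCA: split D into low-rank L plus sparse S."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard sparsity weight
    mu = 1.25 / np.linalg.norm(D, 2)        # common initialization
    Y = np.zeros_like(D)                    # Lagrange multiplier matrix
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)  # sparse update
        Y += mu * (D - L - S)               # multiplier update
        mu *= rho                           # penalty growth
    return L, S
```

On a synthetic low-rank-plus-sparse matrix this recovers both components accurately, which is the property RSRPCA relies on to separate background from anomalies.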
NASA Astrophysics Data System (ADS)
Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.
2012-12-01
Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
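The final step, summing the model covariance along both ray paths, can be sketched as follows. Unit ray sensitivities are assumed for brevity; a real implementation would weight each model node by the ray's path length through it:

```python
import numpy as np

def travel_time_covariance(C, path_a, path_b):
    """Covariance of two predicted travel times.

    Sum the model covariance over every pair of model nodes touched
    by the two rays (unit sensitivity per node assumed here).
    """
    idx_a = np.asarray(path_a)
    idx_b = np.asarray(path_b)
    return C[np.ix_(idx_a, idx_b)].sum()

def travel_time_uncertainty(C, path):
    """Set the two paths equal and take the square root."""
    return np.sqrt(travel_time_covariance(C, path, path))
```

With a positive semidefinite model covariance C, the variance of any single path is guaranteed nonnegative, so the square root is well defined.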
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem had been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrix sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
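The basic SGP iteration, stripped of the line search and with a fixed diagonal scaling for brevity, looks like the sketch below (illustrative only, not the authors' algorithm):

```python
import numpy as np

def sgp(f_grad, project, x0, n_iter=100, alpha=1e-2, scale=None):
    """Scaled gradient projection sketch.

    x_{k+1} = P( x_k - alpha * D * grad f(x_k) ), where D is the
    (here fixed, diagonal) scaling matrix and P the projection onto
    the feasible set.  The full method varies D and alpha per step.
    """
    x = x0.copy()
    D = np.ones_like(x) if scale is None else scale
    for _ in range(n_iter):
        x = project(x - alpha * D * f_grad(x))
    return x
```

On a simple convex test problem, minimizing ||x - c||^2 over the nonnegative orthant, the iterates converge to the projected solution max(c, 0):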
Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M
2014-01-01
Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
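Under the simplest reading of "time discounting and delayed gains", the no-net-loss condition already produces large multipliers. The closed form below is an assumed simplification for illustration, not the paper's model:

```python
def minimum_multiplier(discount_rate, delay_years):
    """Simplified no-net-loss bound (assumed form, for illustration).

    An immediate loss of 1 biodiversity unit must be matched by a
    gain of m units realized only after `delay_years`, discounted at
    `discount_rate`; equating present values gives m = (1 + r) ** t.
    """
    return (1.0 + discount_rate) ** delay_years
```

Even this crude bound supports the abstract's point: with a 5% discount rate and restoration gains emerging after 50 years, the minimum multiplier is already above 11, i.e. in the "order of dozens" rather than the low single-digit ratios common in practice.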
Acoustic response of a rectangular levitator with orifices
NASA Technical Reports Server (NTRS)
El-Raheb, Michael; Wagner, Paul
1990-01-01
The acoustic response of a rectangular cavity to speaker-generated excitation through waveguides terminating at orifices in the cavity walls is analyzed. To find the effects of orifices, acoustic pressure is expressed by eigenfunctions satisfying Neumann boundary conditions as well as by those satisfying Dirichlet ones. Some of the excess unknowns can be eliminated by point constraints set over the boundary, by appeal to Lagrange undetermined multipliers. The resulting transfer matrix must be further reduced by partial condensation to the order of a matrix describing unmixed boundary conditions. If the cavity is subjected to an axial temperature dependence, the transfer matrix is determined numerically.
Fast reconstruction of high-qubit-number quantum states via low-rate measurements
NASA Astrophysics Data System (ADS)
Li, K.; Zhang, J.; Cong, S.
2017-07-01
Due to the exponential complexity of the resources required by quantum state tomography (QST), people are interested in approaches towards identifying quantum states which require less effort and time. In this paper, we provide a tailored and efficient method for reconstructing mixed quantum states up to 12 (or even more) qubits from an incomplete set of observables subject to noises. Our method is applicable to any pure or nearly pure state ρ and can be extended to many states of interest in quantum information processing, such as a multiparticle entangled W state, Greenberger-Horne-Zeilinger states, and cluster states that are matrix product operators of low dimensions. The method applies the quantum density matrix constraints to a quantum compressive sensing optimization problem and exploits a modified quantum alternating direction multiplier method (quantum-ADMM) to accelerate the convergence. Our algorithm takes 8, 35, and 226 seconds, respectively, to reconstruct superposition-state density matrices of 10, 11, and 12 qubits with acceptable fidelity using less than 1% of expectation-value measurements. To our knowledge, this is the fastest realization achievable on a normal desktop computer. We further discuss applications of this method using experimental data of mixed states obtained in an ion trap experiment of up to 8 qubits.
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background and by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in the trend of UWB Radar-based moving target detection. Robust PCA models are criticized for being batch-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated, without changing its size, as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods demonstrate the robustness and efficiency of RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159
Husain, Muhammad Jami; Khondker, Bazlul Haque
2016-01-01
In Bangladesh, where tobacco use is pervasive, reducing tobacco use is economically beneficial. This paper uses the latest Bangladesh social accounting matrix (SAM) multiplier model to quantify the economy-wide impact of demand-driven changes in tobacco cultivation, bidi industries, and cigarette industries. First, we compute various income multiplier values (i.e. backward linkages) for all production activities in the economy to quantify the impact of changes in demand for the corresponding products on gross output for 86 activities, demand for 86 commodities, returns to four factors of production, and income for eight household groups. Next, we rank tobacco production activities by income multiplier values relative to other sectors. Finally, we present three hypothetical 'tobacco-free economy' scenarios by diverting demand from tobacco products into other sectors of the economy and quantifying the economy-wide impact. The simulation exercises with three different tobacco-free scenarios show that, compared to the baseline values, total sectoral output increases by 0.92%, 1.3%, and 0.75%. The corresponding increases in the total factor returns (i.e. GDP) are 1.57%, 1.75%, and 1.75%. Similarly, total household income increases by 1.40%, 1.58%, and 1.55%.
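The income-multiplier (backward linkage) computation for a SAM can be sketched with a Leontief-style accounting inverse. The two-account transaction matrix below is hypothetical, chosen only to make the arithmetic checkable:

```python
import numpy as np

def sam_multipliers(T, totals):
    """SAM accounting multipliers.

    Column-normalize the transaction matrix T by account totals to
    get the coefficient matrix A, then invert: M = (I - A)^-1.
    Column sums of M are the backward linkages ranked in the paper.
    """
    A = T / totals                          # divide each column by its total
    M = np.linalg.inv(np.eye(T.shape[0]) - A)
    return M, M.sum(axis=0)                 # multiplier matrix, backward linkages
```

Each backward linkage exceeds 1 because an extra unit of final demand for a sector generates that unit of output plus all the induced upstream output, which is the mechanism behind the simulated gains from diverting tobacco demand into other sectors.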
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
NASA Astrophysics Data System (ADS)
Nikitin, Anatoly G.; Karadzhov, Yuri
2011-07-01
We present a collection of matrix-valued shape invariant potentials which give rise to new exactly solvable problems of SUSY quantum mechanics. It includes all irreducible matrix superpotentials of the generic form W = kQ + (1/k)R + P, where k is a variable parameter, Q is the unit matrix multiplied by a real-valued function of the independent variable x, and P and R are Hermitian matrices depending on x. In particular, we recover the Pron'ko-Stroganov 'matrix Coulomb potential' and all known scalar shape invariant potentials of SUSY quantum mechanics. In addition, five new shape invariant potentials are presented. Three of them admit a dual shape invariance, i.e. the related Hamiltonians can be factorized using two non-equivalent superpotentials. We find the discrete spectrum and eigenvectors for the corresponding Schrödinger equations and prove that these eigenvectors are normalizable.
Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L
2017-10-01
Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known from the literature that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though the estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
VON Korff, Modest; Fink, Tobias; Sander, Thomas
2017-01-01
A new computational method is presented to extract disease patterns from heterogeneous and text-based data. For this study, 22 million PubMed records were mined for co-occurrences of gene name synonyms and disease MeSH terms. The resulting publication counts were transferred into a matrix Mdata. In this matrix, a disease was represented by a row and a gene by a column. Each field in the matrix represented the publication count for a co-occurring disease-gene pair. A second matrix with identical dimensions Mrelevance was derived from Mdata. To create Mrelevance the values from Mdata were normalized. The normalized values were multiplied by the column-wise calculated Gini coefficient. This multiplication resulted in a relevance estimator for every gene in relation to a disease. From Mrelevance the similarities between all row vectors were calculated. The resulting similarity matrix Srelevance related 5,000 diseases by the relevance estimators calculated for 15,000 genes. Three diseases were analyzed in detail for the validation of the disease patterns and the relevant genes. Cytoscape was used to visualize and to analyze Mrelevance and Srelevance together with the genes and diseases. Summarizing the results, it can be stated that the relevance estimator introduced here was able to detect valid disease patterns and to identify genes that encoded key proteins and potential targets for drug discovery projects.
Charge transfer collisions of Si^3+ with H at low energies
NASA Astrophysics Data System (ADS)
Joseph, D. C.; Gu, J. P.; Saha, B. C.
2009-11-01
Charge transfer between positively charged ions and atomic hydrogen is important not only in magnetically confined plasmas, where charge transfer between impurity ions and H atoms from the chamber walls influences the overall ionization balance and affects plasma cooling, but also in astrophysics, where it plays a key role in determining the properties of the observed gas. It also provides a recombination mechanism for multiply charged ions in X-ray-ionized astronomical environments. We report an investigation using the molecular-orbital close-coupling (MOCC) method, both quantum mechanically and semi-classically, in the adiabatic representation. Ab initio adiabatic potentials and coupling matrix elements, radial and angular, are calculated using the MRD-CI method. Comparison of our results with other theoretical as well as experimental findings will be discussed.
Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S
2014-03-11
Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special-purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Xeon Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to GPU and Phi systems. Double-precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second-order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations occurring on the accelerator and/or the host. For data transfers over PCIe, the GPU provides the best overall performance for data sizes up to 4096 MB, with consistent upload and download rates of 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
Determining Diagonal Branches in Mine Ventilation Networks
NASA Astrophysics Data System (ADS)
Krach, Andrzej
2014-12-01
The present paper discusses determining diagonal branches in a mine ventilation network by means of a method based on the relationship A⊗PT(k, l) = M, which states that the nodal-branch incidence matrix A, modulo-2 multiplied by the transposed path matrix PT(k, l) from node k to node l, yields a matrix M in which all the elements in rows k and l (corresponding to the start and end nodes) are 1, and the elements in all remaining rows are 0.

For a row of the matrix M to contain only "0" elements, the following condition must hold: after multiplying the elements of a row of the matrix A by the elements of a column of the matrix PT(k, l), i.e. by the elements of the corresponding row of the matrix P(k, l), the result row must contain either no "1" entries or an even number of them, as only such a number of "1" entries yields 0 when modulo-2 added. Since the rows of the matrix A correspond to the graph nodes, and the path nodes are of degree 2 (apart from the nodes k and l, which are of degree 1), the number of "1" elements in such a row must be 0 or 2. If, in turn, the rows k and l of the matrix M are to contain only "1" elements, the following condition must hold: after multiplying the elements of row k or l of the matrix A by the elements of a column of the matrix PT(k, l), the result row must contain an odd number of "1" entries, as only such a number of "1" entries yields 1 when modulo-2 added. Since the rows of the matrix A correspond to the graph nodes, and the path nodes k and l are of degree 1, the number of "1" elements in such a row must be 1.

The process of determining diagonal branches by means of this method was demonstrated using the example of a simple ventilation network with two upcast shafts and one downcast shaft.
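The relationship A⊗PT(k, l) = M can be checked numerically on a toy graph. The three-node network below is invented for illustration, not the ventilation example from the paper; entries are already reduced modulo 2.

```python
import numpy as np

# Toy directed graph: nodes 0..2, branches 0:(0->1), 1:(1->2), 2:(0->2).
# Incidence matrix A: rows = nodes, columns = branches; both the +1 for a
# start node and the -1 for an end node become 1 modulo 2.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])

# Path matrix P(k, l) for the single path 0 -> 1 -> 2 (k = 0, l = 2):
# one row per path, one column per branch, 1 if the branch lies on the path.
P = np.array([[1, 1, 0]])

# Modulo-2 product A (x) P^T.
M = (A @ P.T) % 2

# Rows k = 0 and l = 2 hold 1, the interior row holds 0, matching the
# property the method uses to detect diagonal branches.
```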
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei
2017-06-01
A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) has been presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
12 CFR 217.34 - OTC derivative contracts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... net present value of the amount of unpaid premiums. Table 1 to § 217.34—Conversion Factor Matrix for... single OTC derivative contract is the greater of the mark-to-fair value of the OTC derivative contract or... with a negative mark-to-fair value, is calculated by multiplying the notional principal amount of the...
NASA Astrophysics Data System (ADS)
Asakawa, Daiki; Mizuno, Hajime; Toyo'oka, Toshimasa
2017-12-01
The formation mechanisms of singly and multiply charged organophosphate metabolites by electrospray ionization (ESI) and their gas-phase stabilities were investigated. Metabolites containing multiple phosphate groups, such as adenosine 5'-diphosphate (ADP), adenosine 5'-triphosphate (ATP), and D-myo-inositol-1,4,5-trisphosphate (IP3), were observed as doubly deprotonated ions by negative-ion ESI mass spectrometry. Organophosphates with multiple negative charges were found to be unstable and often underwent loss of PO3-, although singly deprotonated analytes were stable. The presence of fragments due to the loss of PO3- in the negative-ion ESI mass spectra could result in the misinterpretation of analytical results. In contrast to ESI, matrix-assisted laser desorption ionization (MALDI) produced singly charged organophosphate metabolites with no associated fragmentation, since the singly charged anions are stable. The stability of an organophosphate metabolite in the gas phase strongly depends on its charge state. The fragmentations of multiply charged organophosphates were also investigated in detail through density functional theory calculations.
Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.
Hu, Yue; Allen, Genevera I
2015-12-01
Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.
Li, Tian-xue; Hu, Lang; Zhang, Meng-meng; Sun, Jian; Qiu, Yue; Rui, Jun-qian; Yang, Xing-hao
2014-01-01
There is a growing concern for the sensitive quantification of multiple components in herbal medicines (HMs) using advanced data acquisition methods. An improved and rugged UPLC-MS/MS method has been developed and validated for sensitive and rapid determination of multiple analytes from Tong-Xie-Yao-Fang (TXYF) decoction in three biological matrices (plasma/brain tissue/urine), using geniposide and formononetin as internal standards. After solid-phase extraction, chromatographic separation was performed on a C18 column using gradient elution. Quantifier and qualifier transitions were monitored using novel triggered dynamic multiple reaction monitoring (TdMRM) in the positive ionization mode. A significant improvement in peak symmetry and sensitivity was achieved in the TdMRM mode as compared to conventional MRM. The reproducibility (RSD%) was ≤7.9% with TdMRM transitions, while the values were 6.8-20.6% for MRM. Excellent linear calibration curves were obtained under TdMRM transitions over the tested concentration ranges. Intra- and inter-day precisions (RSD%) were ≤14.2%, and accuracies (RE%) ranged from -9.6% to 10.6%. The validation data for specificity, carryover, recovery, matrix effect and stability were within the required limits. The method was effectively applied to simultaneously detect and quantify 1 lactone, 2 monoterpene glucosides, 1 alkaloid, 5 flavonoids and 2 chromones in plasma, brain tissue and urine after oral administration of TXYF decoction. In conclusion, this new and reliable method is beneficial for quantification and confirmation assays of multiple components in complex biological samples. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Buchholz, Peter; Ciardo, Gianfranco; Donatelli, Susanna; Kemper, Peter
1997-01-01
We present a systematic discussion of algorithms to multiply a vector by a matrix expressed as the Kronecker product of sparse matrices, extending previous work in a unified notational framework. Then, we use our results to define new algorithms for the solution of large structured Markov models. In addition to a comprehensive overview of existing approaches, we give new results with respect to: (1) managing certain types of state-dependent behavior without incurring extra cost; (2) supporting both Jacobi-style and Gauss-Seidel-style methods by appropriate multiplication algorithms; (3) speeding up algorithms that consider probability vectors of size equal to the "actual" state space instead of the "potential" state space.
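The core primitive the paper builds on, multiplying a vector by a Kronecker product without ever forming it, can be sketched compactly. This is a generic textbook-style scheme under the assumption of square factor matrices, not the paper's notational framework; names are illustrative.

```python
import numpy as np

def kron_matvec(mats, x):
    """Compute kron(mats[0], ..., mats[-1]) @ x without forming the
    Kronecker product, by contracting one factor per tensor mode.
    Assumes square factor matrices; illustrative sketch only."""
    dims = [m.shape[0] for m in mats]
    y = x.reshape(dims)          # view x as a K-way tensor
    for i, m in enumerate(mats):
        # Contract factor i with mode i, then restore the axis order.
        y = np.moveaxis(np.tensordot(m, y, axes=([1], [i])), 0, i)
    return y.ravel()

# Check against the explicit Kronecker product on a small case.
rng = np.random.default_rng(0)
mats = [rng.random((2, 2)), rng.random((3, 3))]
x = rng.random(6)
y_fast = kron_matvec(mats, x)
y_ref = np.kron(mats[0], mats[1]) @ x
```

The memory and flop savings over forming the full product grow multiplicatively with the number of factors, which is what makes Kronecker-structured Markov solvers feasible for large state spaces.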
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis underlies the failure-detection and fault-management concept, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft supplies the measurements applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
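The flavor of a chi-squared failure-detection test can be sketched as follows. This is a hedged illustration of the general technique, not NASA's actual HI-FADS algorithm: the threshold, noise level, and single-worst-port heuristic are invented for the example.

```python
import numpy as np

def chi_squared_fault_check(residuals, sigma, threshold):
    """Generic chi-squared test: flag a system fault when the normalized
    sum of squared pressure residuals exceeds a threshold, then point at
    the port with the largest normalized residual. All parameters here
    are illustrative assumptions, not flight-qualified values."""
    z = residuals / sigma              # normalize by the expected noise level
    chi2 = float(np.sum(z ** 2))      # chi-squared statistic over all ports
    system_fault = chi2 > threshold
    worst_port = int(np.argmax(np.abs(z))) if system_fault else None
    return chi2, system_fault, worst_port

# Port 3 disagrees strongly with the aerodynamic-model fit:
res = np.array([0.1, -0.2, 0.15, 5.0, -0.1])
chi2, fault, port = chi_squared_fault_check(res, sigma=0.2, threshold=11.07)
```

In a fault-management mode like the one described above, the flagged port would be removed from the regression and the airdata state re-estimated from the remaining redundant orifices.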
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karabacak, Özkan, E-mail: ozkan2917@gmail.com; Department of Electronic Systems, Aalborg University, 9220 Aalborg East; Alikoç, Baran, E-mail: alikoc@itu.edu.tr
Motivated by the chaos suppression methods based on stabilizing an unstable periodic orbit, we study the stability of synchronized periodic orbits of coupled map systems when the period of the orbit is the same as the delay in the information transmission between coupled units. We show that the stability region of a synchronized periodic orbit is determined by the Floquet multiplier of the periodic orbit for the uncoupled map, the coupling constant, and the smallest and the largest Laplacian eigenvalue of the adjacency matrix. We prove that the stabilization of an unstable τ-periodic orbit via coupling with delay τ is possible only when the Floquet multiplier of the orbit is negative and the connection structure is not bipartite. For a given coupling structure, it is possible to find the values of the coupling strength that stabilize unstable periodic orbits. The most suitable connection topology for stabilization is found to be all-to-all coupling. On the other hand, a negative coupling constant may lead to destabilization of τ-periodic orbits that are stable for the uncoupled map. We provide examples of coupled logistic maps demonstrating the stabilization and destabilization of synchronized τ-periodic orbits as well as chaos suppression via stabilization of a synchronized τ-periodic orbit.
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
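The link mentioned above, relating the prox of a Schatten matrix norm to the prox of the corresponding vector norm applied to the singular values, is easy to sketch. The helper names are illustrative; the p = 1 case shown (soft-thresholding, giving singular-value thresholding for the nuclear norm) is a standard special case, not the paper's specific Hessian-based regularizer.

```python
import numpy as np

def prox_schatten(X, lam, prox_vector):
    """Proximal map of a Schatten norm of order p: apply the prox of the
    corresponding l_p vector norm to the singular values and rebuild the
    matrix. `prox_vector` may be any vector-norm prox; sketch only."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_vector(s, lam)) @ Vt

def soft_threshold(s, lam):
    """l_1 prox; plugged in here, it yields singular-value thresholding,
    i.e. the prox of the Schatten-1 (nuclear) norm."""
    return np.maximum(s - lam, 0.0)

X = np.array([[3.0, 0.0],
              [0.0, 1.0]])
Y = prox_schatten(X, lam=0.5, prox_vector=soft_threshold)
```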
Collaborative sparse priors for multi-view ATR
NASA Astrophysics Data System (ADS)
Li, Xuelu; Monga, Vishal
2018-04-01
Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches used the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, the coefficients corresponding to its identifying class are expected to be active, while the others should be zero, i.e. the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity-inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can: (1) enable improved accuracy when multiple views rather than a single one are invoked, and (2) outperform state-of-the-art alternatives, particularly when training imagery is limited.
Husain, Muhammad Jami; Khondker, Bazlul Haque
2017-01-01
In Bangladesh, where tobacco use is pervasive, reducing tobacco use is economically beneficial. This paper uses the latest Bangladesh social accounting matrix (SAM) multiplier model to quantify the economy-wide impact of demand-driven changes in tobacco cultivation, bidi industries, and cigarette industries. First, we compute various income multiplier values (i.e. backward linkages) for all production activities in the economy to quantify the impact of changes in demand for the corresponding products on gross output for 86 activities, demand for 86 commodities, returns to four factors of production, and income for eight household groups. Next, we rank tobacco production activities by income multiplier values relative to other sectors. Finally, we present three hypothetical ‘tobacco-free economy’ scenarios by diverting demand from tobacco products into other sectors of the economy and quantifying the economy-wide impact. The simulation exercises with three different tobacco-free scenarios show that, compared to the baseline values, total sectoral output increases by 0.92%, 1.3%, and 0.75%. The corresponding increases in the total factor returns (i.e. GDP) are 1.57%, 1.75%, and 1.75%. Similarly, total household income increases by 1.40%, 1.58%, and 1.55%. PMID:28845091
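The multiplier computation behind the first step can be sketched with a standard SAM accounting-multiplier identity: given a matrix A of average expenditure propensities for the endogenous accounts, M = (I - A)^(-1), and a column sum of M gives the economy-wide income multiplier (backward linkage) of a unit demand injection into that account. The 3-account matrix below is invented for illustration and is not Bangladesh data; the paper's 86-activity model may differ in detail.

```python
import numpy as np

# Hypothetical average expenditure propensities among 3 endogenous accounts.
A = np.array([[0.0, 0.2, 0.3],
              [0.4, 0.0, 0.2],
              [0.3, 0.3, 0.0]])

# Accounting multiplier matrix: M = (I - A)^(-1).
M = np.linalg.inv(np.eye(3) - A)

# Backward linkage per account: column sums of M.
multipliers = M.sum(axis=0)

# Ranking accounts by multiplier value mirrors the paper's second step.
ranking = np.argsort(multipliers)[::-1]
```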
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.
2017-07-03
The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute-intensive application such as AIMD is a good candidate to leverage this processing power. In this paper, we focus on adding thread-level parallelism to the plane-wave DFT methodology implemented in NWChem. Through careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results for the complete AIMD simulation on a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.
Formation of multiply charged ions from large molecules using massive-cluster impact.
Mahoney, J F; Cornett, D S; Lee, T D
1994-05-01
Massive-cluster impact is demonstrated to be an effective ionization technique for the mass analysis of proteins as large as 17 kDa. The design of the cluster source permits coupling to both magnetic-sector and quadrupole mass spectrometers. Mass spectra are characterized by the almost total absence of chemical background and a predominance of multiply charged ions formed from a 100% glycerol matrix. The number of charge states produced by the technique is observed to range from +3 to +9 for chicken egg lysozyme (14,310 Da). The lower m/z values provided by the higher charge states increase the effective mass range of analyses performed with conventional ionization by fast-atom bombardment or liquid secondary-ion mass spectrometry.
Iterative color-multiplexed, electro-optical processor.
Psaltis, D; Casasent, D; Carlotto, M
1979-11-01
A noncoherent optical vector-matrix multiplier using a linear LED source array and a linear P-I-N photodiode detector array has been combined with a 1-D adder in a feedback loop. The resultant iterative optical processor and its use in solving simultaneous linear equations are described. Operation on complex data is provided by a novel color-multiplexing system.
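The kind of iteration such a feedback processor implements can be sketched digitally: solving Ax = b by a scheme in which each step needs only one matrix-vector product (the optical stage) plus an addition (the 1-D adder). The Richardson iteration below is a generic stand-in for the processor's update rule, with an invented matrix and step size; it is not a model of the specific hardware.

```python
import numpy as np

# Richardson iteration: x_{k+1} = x_k + w * (b - A @ x_k).
# Each pass needs one vector-matrix multiply and one add, matching the
# multiplier-plus-adder feedback loop described above.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

w = 0.2          # relaxation factor; must satisfy 0 < w < 2/lambda_max(A)
x = np.zeros(2)
for _ in range(200):
    x = x + w * (b - A @ x)
# x now approximates the solution of A x = b.
```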
12 CFR 327.9 - Assessment risk categories and pricing methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and a weighted average of CAMELS component ratings will be multiplied by a corresponding pricing... CAMELS component ratings is created by multiplying each component by the following percentages and adding... CAMELS Component Rating 1.095 * Ratios are expressed as percentages. ** Multipliers are rounded to three...
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
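The low-rank recovery step can be illustrated with a compact robust-PCA solver based on inexact augmented Lagrange multipliers. This is a textbook sketch of the general technique, not the authors' implementation; the data, parameter schedule, and names are all illustrative.

```python
import numpy as np

def rpca_ialm(D, lam=None, iters=200):
    """Split D into a low-rank part L and a sparse error part S via an
    inexact augmented Lagrange multiplier (IALM) scheme. Sketch only;
    parameter choices follow common textbook defaults."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))         # standard RPCA weight
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)  # dual-variable init
    mu = 1.25 / norm2                          # penalty parameter
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank update: singular-value thresholding.
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft-thresholding.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Multiplier update enforces D = L + S.
        Y = Y + mu * (D - L - S)
        mu *= 1.05
    return L, S

# Rank-3 "clean" dictionary with one gross corruption.
rng = np.random.default_rng(1)
base = rng.random((20, 3)) @ rng.random((3, 20))
D = base.copy()
D[2, 5] += 10.0
L, S = rpca_ialm(D)
```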
Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin
2017-07-10
The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability based on data from published studies, and (2) screen for risk factors associated with deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data was located outside the range of the predicted value based on the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of the included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, even though it tends to be more reliable for younger patients with idiopathic genu coronal deformity.
Simplex volume analysis for finding endmembers in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.
2015-05-01
Using maximal simplex volume as an optimality criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is actually calculated. It turns out that this issue is considerably more complicated and involved than it may appear. This paper investigates it from two different aspects: geometric structure and eigen-analysis. The geometric approach derives the volume from the simplex structure itself, multiplying the base by the height. The eigen-analysis approach, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank-deficient. To deal with this problem, two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank; the drawback is that the volume is shrunk, so the volume found for a dimensionality-reduced simplex is not the true original simplex volume. The other is to use singular value decomposition (SVD) to find the singular values for calculating the simplex volume; the dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods for simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
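The Cayley-Menger route can be sketched directly: the squared volume of an n-simplex is a scaled determinant of the bordered matrix of squared pairwise distances. This is the standard formula, not the paper's specific implementation; function and variable names are illustrative.

```python
import numpy as np
from math import factorial

def cayley_menger_volume(points):
    """Volume of an n-simplex from the Cayley-Menger determinant of its
    squared pairwise vertex distances. `points` is an (n+1) x d array
    of simplex vertices; illustrative sketch."""
    p = np.asarray(points, dtype=float)
    n = p.shape[0] - 1                       # simplex dimension
    d2 = np.sum((p[:, None] - p[None, :]) ** 2, axis=-1)
    B = np.ones((n + 2, n + 2))              # bordered distance matrix
    B[0, 0] = 0.0
    B[1:, 1:] = d2
    # V^2 = (-1)^(n+1) / (2^n * (n!)^2) * det(B)
    coef = (-1) ** (n + 1) / (2 ** n * factorial(n) ** 2)
    return float(np.sqrt(coef * np.linalg.det(B)))

# Unit right triangle: base x height / 2 = 0.5, matching the geometric route.
vol = cayley_menger_volume([[0, 0], [1, 0], [0, 1]])
```

The rank-deficiency problem discussed above shows up here as `det(B)` collapsing toward zero (or going numerically negative) for degenerate or nearly degenerate vertex sets.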
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function that incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
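The Lagrange-multiplier least-squares machinery for the free linear parameters can be illustrated generically: minimizing ||Ax - b|| subject to linear equality constraints Cx = d by solving the KKT system. This is a textbook sketch of the technique, not the authors' code; the line-fit example and names are invented.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||_2 subject to C x = d via the Lagrange
    multiplier (KKT) system:
        [2 A^T A  C^T] [x     ]   [2 A^T b]
        [C        0  ] [lambda] = [d      ]
    Generic sketch of equality-constrained least squares."""
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # parameters, multipliers

# Fit y = x0 + x1 * t while forcing the fit through (0, 1) exactly,
# analogous to pinning selected values of an aerodynamic approximant.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.2, 1.9, 3.1, 4.2])
A = np.column_stack([np.ones_like(t), t])
C = np.array([[1.0, 0.0]])           # constraint: x0 = 1
x, lam = constrained_lstsq(A, y, C, np.array([1.0]))
```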
An Early Childhood Curriculum for Multiply Handicapped Children.
ERIC Educational Resources Information Center
Schattner, Regina
The guide for understanding the multidimensional educational problems of multiply handicapped children and for developing an appropriate curriculum and setting is addressed to teachers. Methods, materials, and a curriculum for working with young (ages 4-9 years) multiply handicapped children are presented. The program includes an enriched language…
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first-P and first-S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
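The final step, turning the model covariance into a travel-time uncertainty, can be sketched with toy numbers: given the model covariance matrix C and the ray-path sensitivity vector g (path length through each model cell), the travel-time variance for the path is g^T C g, and its square root is the prediction uncertainty. The 3-cell model below is invented for illustration, not SALSA3D output.

```python
import numpy as np

# Toy model covariance over 3 velocity cells (slowness units squared).
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

# Sensitivity of one ray path: path length through each cell.
g = np.array([1.0, 2.0, 0.5])

# For two paths g1, g2 the travel-time covariance is g1 @ C @ g2;
# with g1 = g2 = g it reduces to the variance of a single path.
variance = g @ C @ g
uncertainty = np.sqrt(variance)      # 1-sigma travel-time uncertainty
```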
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
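The selection pipeline described above can be sketched end to end. This is a hedged illustration under simplifying assumptions: the NAS is computed by projecting each spectrum onto the orthogonal complement of a known interferent subspace (one common NAS construction; the paper derives its projection from concentrations and spectra), and the toy spectra and function names are invented.

```python
import numpy as np

def nas_norms(spectra, interferents):
    """Project each spectrum onto the orthogonal complement of the space
    spanned by interferent spectra (the NAS), then return the Euclidean
    norm (scalar NAS value) per spectrum. Illustrative sketch."""
    Q, _ = np.linalg.qr(interferents.T)          # basis of interferent space
    P = np.eye(spectra.shape[1]) - Q @ Q.T       # projection matrix
    nas = spectra @ P.T                          # NAS vectors
    return np.linalg.norm(nas, axis=1)           # scalar NAS values

def select_representative(candidates, selected, n_pick):
    """Greedy step from the text: repeatedly add the candidate whose
    scalar NAS is farthest from every already-selected sample."""
    sel = list(selected)
    cand = list(candidates)
    for _ in range(n_pick):
        dists = [min(abs(c - s) for s in sel) for c in cand]
        sel.append(cand.pop(int(np.argmax(dists))))
    return sel

# Toy data: channel 0 carries only interferent signal.
interf = np.array([[1.0, 0.0, 0.0, 0.0]])
spectra = np.array([[1.0, 2.0, 0.0, 0.0],
                    [0.0, 0.0, 3.0, 4.0]])
norms = nas_norms(spectra, interf)
chosen = select_representative(list(norms), selected=[0.0], n_pick=1)
```

Only the samples chosen this way would then go on to reference concentration measurement, which is where the claimed time and cost savings come from.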
Kumar Kailasa, Suresh; Hasan, Nazim; Wu, Hui-Fen
2012-08-15
The development of liquid nitrogen assisted spray ionization mass spectrometry (LNASI MS) for the analysis of multiply charged proteins (insulin, ubiquitin, cytochrome c, α-lactalbumin, myoglobin and BSA), peptides (glutathione, HW6, angiotensin-II and valinomycin) and amino acid (arginine) clusters is described. The charged droplets are formed by liquid nitrogen assisted sample spray through a stainless steel nebulizer and transported into the mass analyzer for the identification of multiply charged protein ions. The effects of acids and modifier volumes on the efficient ionization of the above analytes in LNASI MS were carefully investigated. Multiply charged proteins and amino acid clusters were effectively identified by LNASI MS. The present approach can detect the multiply charged states of cytochrome c at 400 nM. A comparison of LNASI with the ESI, CSI, SSI and V-EASI methods, in terms of instrumental conditions, applied temperature and observed charge states for multiply charged proteins, shows that the LNASI method produces good-quality spectra of amino acid clusters at ambient conditions without any applied electric field or heating. We believe that the LNASI method is the simplest and lowest-cost route to multiply charged ions, providing an alternative paradigm for their production: it yields ESI-like ions with no applied electric field and can be operated at low temperature for the generation of highly charged protein/peptide ions.
A wideband analog correlator system for AMiBA
NASA Astrophysics Data System (ADS)
Li, Chao-Te; Kubo, Derek; Han, Chih-Chiang; Chen, Chung-Cheng; Chen, Ming-Tang; Lien, Chun-Hsien; Wang, Huei; Wei, Ray-Ming; Yang, Chia-Hsiang; Chiueh, Tzi-Dar; Peterson, Jeffrey; Kesteven, Michael; Wilson, Warwick
2004-10-01
A wideband correlator system with a bandwidth of 16 GHz or more is required for the Array for Microwave Background Anisotropy (AMiBA) to achieve a sensitivity of 10 μK in one hour of observation. Double-balanced diode mixers were used as multipliers in 4-lag correlator modules. Several wideband modules were developed for IF signal distribution between receivers and correlators. Correlator outputs were amplified and digitized by voltage-to-frequency converters. Data acquisition circuits were designed using field-programmable gate arrays (FPGAs). Subsequent data transfer and control software were based on the configuration for the Australia Telescope Compact Array. A transform-matrix method will be adopted during calibration to take into account the phase and amplitude variations of the analog devices across the passband.
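In software, the multiply-and-integrate operation of a lag correlator, and the transform-matrix correction mentioned at the end, look roughly like this (an illustrative sketch, not the AMiBA firmware; `T` stands for a hypothetical calibrated response matrix):

```python
import numpy as np

def lag_correlate(x, y, n_lags=4):
    """Digital analogue of a 4-lag analog correlator: multiply one signal by
    progressively delayed copies of the other and integrate each product."""
    return np.array([np.sum(x[:len(x) - l] * y[l:]) for l in range(n_lags)])

def recover_spectrum(T, r):
    """Transform-matrix calibration: if the measured lag outputs r relate to
    the true band powers s by r = T s, invert the calibrated T to recover s,
    absorbing per-device phase and amplitude variations across the passband."""
    return np.linalg.pinv(T) @ r
```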
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
NASA Astrophysics Data System (ADS)
Goossens, Bart; Luong, Hiêp; Philips, Wilfried
2017-08-01
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
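Proximal operators are the building block this paper automates. A minimal example, the ℓ1 prox and a proximal-gradient loop for a toy denoising problem, stands in for the far more general SDMM/PPXA machinery (illustrative only):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||x||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_denoise(y, lam=0.5, step=1.0, iters=100):
    """Minimize 0.5*||x - y||^2 + lam*||x||_1 by proximal gradient:
    a gradient step on the data-fit term followed by the regularizer's prox."""
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = x - y                              # gradient of 0.5*||x - y||^2
        x = prox_l1(x - step * grad, step * lam)
    return x
```

SDMM and PPXA generalize this pattern to sums of several such terms, each handled through its own prox, which is why automatically derived prox and adjoint evaluations make the solvers generic.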
NASA Astrophysics Data System (ADS)
Golcuk, Kurtulus; Mandair, Gurjit S.; Callender, Andrew F.; Finney, William F.; Sahar, Nadder; Kohn, David H.; Morris, Michael D.
2006-02-01
Background fluorescence can often complicate the use of Raman microspectroscopy in the study of musculoskeletal tissues. Such fluorescence interferences are undesirable as the Raman spectra of matrix and mineral phases can be used to differentiate between normal and pathological or microdamaged bone. Photobleaching with the excitation laser provides a non-invasive method for reducing background fluorescence, enabling 532 nm Raman hyperspectral imaging of bone tissue. The signal acquisition time for a 400-point Raman line image is reduced to 1-4 seconds using an electron-multiplying CCD (EMCCD) detector, enabling acquisition of Raman images in less than 10 minutes. Rapid photobleaching depends upon multiple scattering effects in the tissue specimen and is applicable to some, but not all, experimental situations.
Mauda, R.; Pinchas, M.
2014-01-01
Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low). PMID:24723813
NASA Astrophysics Data System (ADS)
Tavan, Paul; Schulten, Klaus
1980-03-01
A new, efficient algorithm for the evaluation of the matrix elements of the CI Hamiltonian in the basis of spin-coupled ν-fold excitations (over orthonormal orbitals) is developed for even-electron systems. For this purpose we construct an orthonormal, spin-adapted CI basis in the framework of second quantization. As a prerequisite, spin and space parts of the fermion operators have to be separated; this makes it possible to introduce the representation theory of the permutation group. The ν-fold excitation operators are Serber spin-coupled products of particle-hole excitations. This construction is also designed for CI calculations from multireference (open-shell) states. The 2N-electron Hamiltonian is expanded in terms of spin-coupled particle-hole operators which map any ν-fold excitation onto ν-, ν±1-, and ν±2-fold excitations. For the calculation of the CI matrix this leaves one with only the evaluation of overlap matrix elements between spin-coupled excitations. This leads to a set of ten general matrix element formulas which contain Serber representation matrices of the permutation group Sν×Sν as parameters. Because of the Serber structure of the CI basis these group-theoretical parameters are kept to a minimum, such that they can be stored readily in the central memory of a computer for ν ≤ 4 and even for higher excitations. As the computational effort required to obtain the CI matrix elements from the general formulas is very small, the algorithm presented appears to constitute, for even-electron systems, a promising alternative to existing CI methods for multiply excited configurations, e.g., the unitary group approach. Our method makes possible the adaptation of spatial symmetries and the selection of any subset of configurations. The algorithm has been implemented in a computer program and tested extensively for ν ≤ 4 and singlet ground and excited states.
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
Effective size of certain macroscopic quantum superpositions.
Dür, Wolfgang; Simon, Christoph; Cirac, J Ignacio
2002-11-18
Several experiments and experimental proposals for the production of macroscopic superpositions naturally lead to states of the general form |φ1⟩^⊗N + |φ2⟩^⊗N, where the number of subsystems N is very large, but the states of the individual subsystems have large overlap |⟨φ1|φ2⟩|.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Gurvinderjit; Singh, Bhajan, E-mail: bhajan2k1@yahoo.co.in; Sandhu, B. S.
2015-08-28
The present measurements are carried out to investigate the multiple scattering of 662 keV gamma photons emerging from targets of binary alloys (brass and soldering material). The scattered photons are detected by a 51 mm × 51 mm NaI(Tl) scintillation detector; its response unscrambling, which converts the observed pulse-height distribution to a true photon energy spectrum, is performed with the help of a 10 × 10 inverse response matrix. The number of multiply scattered events having the same energy as the singly scattered distribution first increases with target thickness and then saturates. Applying the response function of the scintillation detector does not change the measured saturation thickness. Monte Carlo calculations support the present experimental results.
Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A
2014-02-11
Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
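The splat-and-cross-multiply pattern can be emulated in scalar code to see the data movement (a schematic sketch of the patent's idea, not its hardware semantics):

```python
# A "register" is a list of (re, im) pairs. "Load and splat" conceptually
# replicates one complex value of the second operand across a register; the
# cross multiply-add then forms real and imaginary partial products in one
# pass, without a separate shuffle of real/imaginary lanes.
def complex_splat_madd(acc, a_reg, b_value):
    """Accumulate a_reg * b_value (elementwise complex multiply) into acc."""
    br, bi = b_value                     # the splatted complex value
    out = []
    for (ar, ai), (cr, ci) in zip(a_reg, acc):
        out.append((cr + ar * br - ai * bi,    # real part of partial product
                    ci + ar * bi + ai * br))   # imaginary part
    return out
```

Accumulating such partial products over the inner dimension yields one row-times-column entry of the complex matrix product.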
Tensor completion for estimating missing values in visual data.
Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping
2013-01-01
In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution and the latter is preferred if a high-accuracy solution is desired.
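The shrinkage step underlying trace-norm minimization, applied mode-by-mode in algorithms of this family, is singular value thresholding: the proximal operator of the matrix trace (nuclear) norm (illustrative, not the authors' implementation):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau*||X||_* at M. Shrinks each
    singular value by tau and discards those that fall below zero, yielding
    the closest matrix under a nuclear-norm penalty."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Tensor variants apply this operator to each mode-n unfolding of the tensor and reconcile the unfoldings, which is exactly where the coupled constraints the paper discusses arise.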
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculating the inverse matrix when solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than separate inversion, but also yields nearly the same results as the model-space method while consuming much less memory.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely-spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolution kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
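For a two-component record, the cross-convolution measure itself is simple to state in code (a sketch consistent with the description above; the discrete-time form and squared ℓ2 norm are assumptions):

```python
import numpy as np

def cross_convolution_misfit(u_obs, v_obs, u_ref, v_ref):
    """Cross-convolution measure for a two-component record: if observed and
    reference fields share the same (unknown) source wavelet s, then
    u_obs = s*u_ref and v_obs = s*v_ref, so u_obs*v_ref - v_obs*u_ref
    (with * denoting convolution) vanishes; its squared norm is the misfit.
    The source wavelet cancels, which is the point of the measure."""
    r = np.convolve(u_obs, v_ref) - np.convolve(v_obs, u_ref)
    return float(np.sum(r ** 2))
```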
NASA Astrophysics Data System (ADS)
Bindiya T., S.; Elias, Elizabeth
2015-01-01
In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition width are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the order of the filters becomes very high, leading to increased complexity. The frequency response masking (FRM) method is known to result in linear-phase sharp transition width filters with low complexity. The proposed design method, which is based on FRM, gives better results than those reported earlier, in terms of the number of multipliers, when sharp transition width filter banks are needed. To further reduce the complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite-precision coefficients in the signed power-of-two space. This may lead to performance degradation and calls for the use of a suitable optimisation technique. In this paper, the gravitational search algorithm is proposed for the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear-phase and multiplier-less, and which have sharp transition width.
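Why the signed power-of-two space makes a filter multiplier-less can be illustrated with a greedy expansion of one coefficient (a toy sketch; real designs use canonical signed-digit encodings and the optimisation discussed above, not this greedy rule):

```python
def to_spt(x, n_terms=3, max_shift=8):
    """Greedily express x as a sum of signed powers of two. Each retained
    term costs only a shift and an add/subtract in hardware, so no general
    multiplier is needed to apply the coefficient."""
    terms, residual = [], x
    for _ in range(n_terms):
        if residual == 0:
            break
        # nearest signed power of two (2^0 .. 2^-max_shift) to the residual
        best = min((sign * 2.0 ** -k
                    for k in range(max_shift + 1) for sign in (1, -1)),
                   key=lambda p: abs(residual - p))
        terms.append(best)
        residual -= best
    return terms
```

Truncating `n_terms` is what introduces the coefficient-quantization error that the optimisation step then has to manage across the whole bank.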
Critical Seismic Vector Random Excitations for Multiply Supported Structures
NASA Astrophysics Data System (ADS)
Sarkar, A.; Manohar, C. S.
1998-05-01
A method for determining critical power spectral density matrix models for earthquake excitations which maximize the steady-state response variance of linear multiply supported extended structures, and which also satisfy constraints on input variance, zero crossing rates, frequency content and transmission time lag, has been developed. The optimization problem is shown to be non-linear in nature, and solutions are obtained by using an iterative technique based on the linear programming method. A constraint on entropy rate, as a measure of the uncertainty that can be expected in realistic earthquake ground motions, is proposed, which makes the critical excitations more realistic. Two special cases are also considered. First, when knowledge of the autospectral densities is available, the critical response is shown to be produced by fully coherent excitations which are neither in-phase nor out-of-phase. The critical phase between the excitation components depends on structural parameters but is independent of the autospectral densities of the excitations. Second, when knowledge of the autospectral densities and phase spectrum of the excitations is available, the critical response is shown to be produced by a system-dependent coherence function representing neither fully coherent nor fully incoherent ground motions. The applications of these special cases are discussed in the context of land-based extended structures and secondary systems such as nuclear piping assemblies. Illustrative examples of critical inputs and responses of a sdof system and a long-span suspended cable, which demonstrate the various features of the approach developed, are presented.
Particulated articular cartilage: CAIS and DeNovo NT.
Farr, Jack; Cole, Brian J; Sherman, Seth; Karas, Vasili
2012-03-01
Cartilage Autograft Implantation System (CAIS; DePuy/Mitek, Raynham, MA) and DeNovo Natural Tissue (NT; ISTO, St. Louis, MO) are novel treatment options for focal articular cartilage defects in the knee. These methods involve the implantation of particulated articular cartilage from either autograft or juvenile allograft donor, respectively. In the laboratory and in animal models, both CAIS and DeNovo NT have demonstrated the ability of the transplanted cartilage cells to "escape" from the extracellular matrix, migrate, multiply, and form a new hyaline-like cartilage tissue matrix that integrates with the surrounding host tissue. In clinical practice, the technique for both CAIS and DeNovo NT is straightforward, requiring only a single surgery to effect cartilage repair. Clinical experience is limited, with short-term studies demonstrating both procedures to be safe, feasible, and effective, with improvements in subjective patient scores, and with magnetic resonance imaging evidence of good defect fill. While these treatment options appear promising, prospective randomized controlled studies are necessary to refine the indications and contraindications for both CAIS and DeNovo NT.
Ellipsometry of single-layer antireflection coatings on transparent substrates
NASA Astrophysics Data System (ADS)
Azzam, R. M. A.
2017-11-01
The complex reflection coefficients of p- and s-polarized light and the ellipsometric parameters of a transparent substrate of refractive index n2, which is coated by a transparent thin film whose refractive index n1 = √n2 satisfies the antireflection condition at normal incidence, are considered as functions of film thickness d and angle of incidence ϕ. A unique coated surface, with n1 = √n2 and film thickness d equal to half of the film-thickness period Dϕ at angle ϕ and wavelength λ, reflects light of the same wavelength without change of polarization for all incident polarization states. (The reflection Jones matrix of such a coated surface is the 2 × 2 identity matrix pre-multiplied by a scalar, hence tan Ψ = 1, Δ = 0.) To monitor the deposition of an antireflection coating, the normalized Stokes parameters of obliquely reflected light (e.g. at ϕ = 70°) are measured until predetermined target values of those parameters are detected. This provides a more accurate means of film thickness control than is possible using a micro-balance technique or an intensity reflectance method.
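The claim in parentheses, that r_p = r_s for the half-period film with n1 = √n2, can be checked numerically with standard thin-film Fresnel formulas (illustrative code; the e^(-i2β) phase convention follows common ellipsometry texts):

```python
import numpy as np

def film_reflection(n0, n1, n2, d_over_period, phi):
    """p and s amplitude reflection of an ambient(n0)/film(n1)/substrate(n2)
    stack; the film thickness is given as a fraction of the period D_phi."""
    s0, c0 = np.sin(phi), np.cos(phi)
    c1 = np.sqrt(1.0 - (s0 / n1) ** 2)            # cos(theta) in the film
    c2 = np.sqrt(1.0 - (s0 / n2) ** 2)            # cos(theta) in the substrate
    r01 = {'s': (n0 * c0 - n1 * c1) / (n0 * c0 + n1 * c1),
           'p': (n1 * c0 - n0 * c1) / (n1 * c0 + n0 * c1)}
    r12 = {'s': (n1 * c1 - n2 * c2) / (n1 * c1 + n2 * c2),
           'p': (n2 * c1 - n1 * c2) / (n2 * c1 + n1 * c2)}
    phase = np.exp(-2j * np.pi * d_over_period)   # e^{-i2beta}: 2beta = 2pi at d = D_phi
    return {pol: (r01[pol] + r12[pol] * phase) / (1 + r01[pol] * r12[pol] * phase)
            for pol in ('s', 'p')}

# Half-period film with n1 = sqrt(n2): identical p and s reflection,
# i.e. tan(Psi) = 1 and Delta = 0, as stated above.
n2 = 2.25
r_half = film_reflection(1.0, np.sqrt(n2), n2, 0.5, np.deg2rad(70.0))
```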
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover the outlier patterns from low-rank data under some mild conditions. However, the ℓ1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly related, which can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbations. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly related background scenes. Instead of the ℓ1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the ℓ1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
On-Chip Power-Combining for High-Power Schottky Diode Based Frequency Multipliers
NASA Technical Reports Server (NTRS)
Siles Perez, Jose Vicente (Inventor); Chattopadhyay, Goutam (Inventor); Lee, Choonsup (Inventor); Schlecht, Erich T. (Inventor); Jung-Kubiak, Cecile D. (Inventor); Mehdi, Imran (Inventor)
2015-01-01
A novel MMIC on-chip power-combined frequency multiplier device and a method of fabricating the same, comprising two or more multiplying structures integrated on a single chip, wherein each of the integrated multiplying structures is electrically identical and each of the multiplying structures includes one input antenna (E-probe) for receiving an input signal in the millimeter-wave, submillimeter-wave or terahertz frequency range inputted on the chip, a stripline-based input matching network electrically connecting the input antenna to two or more Schottky diodes in a balanced configuration, two or more Schottky diodes that are used as nonlinear semiconductor devices to generate harmonics out of the input signal and produce the multiplied output signal, stripline-based output matching networks for transmitting the output signal from the Schottky diodes to an output antenna, and an output antenna (E-probe) for transmitting the output signal off the chip into the output waveguide transmission line.
Miniaturized High-Speed Modulated X-Ray Source
NASA Technical Reports Server (NTRS)
Gendreau, Keith C. (Inventor); Arzoumanian, Zaven (Inventor); Kenyon, Steven J. (Inventor); Spartana, Nick Salvatore (Inventor)
2015-01-01
A miniaturized high-speed modulated X-ray source (MXS) device and a method for rapidly and arbitrarily varying with time the output X-ray photon intensities and energies. The MXS device includes an ultraviolet emitter that emits ultraviolet light, a photocathode operably coupled to the ultraviolet light-emitting diode that emits electrons, an electron multiplier operably coupled to the photocathode that multiplies incident electrons, and an anode operably coupled to the electron multiplier that is configured to produce X-rays. The method for modulating MXS includes modulating an intensity of an ultraviolet emitter to emit ultraviolet light, generating electrons in response to the ultraviolet light, multiplying the electrons to become more electrons, and producing X-rays by an anode that includes a target material configured to produce X-rays in response to impact of the more electrons.
Liu, Jing; Duan, Yongrui; Sun, Min
2017-01-01
This paper introduces a symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming with linear equality constraints, which inherits the superiorities of the classical alternating direction method of multipliers (ADMM), and which extends the feasible set of the relaxation factor α of the generalized ADMM to the infinite interval [Formula: see text]. Under the conditions that the objective function is convex and the solution set is nonempty, we establish the convergence results of the proposed method, including the global convergence, the worst-case [Formula: see text] convergence rate in both the ergodic and the non-ergodic senses, where k denotes the iteration counter. Numerical experiments to decode a sparse signal arising in compressed sensing are included to illustrate the efficiency of the new method.
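As context, the unrelaxed two-block ADMM that this method generalizes, applied to the sparse-signal (lasso-type) decoding problem used in the experiments, can be sketched as follows (illustrative; the symmetric and relaxed dual updates of the paper are omitted, and the problem sizes are toy):

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Two-block ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    K = np.linalg.inv(A.T @ A + rho * np.eye(n))      # cached x-step factor
    for _ in range(iters):
        x = K @ (Atb + rho * (z - u))                 # x-minimization step
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # z prox step
        u = u + x - z                                 # multiplier (dual) update
    return z
```

The generalized and symmetric variants modify the multiplier updates around the z-step (scaled by the relaxation factor α), which is where the enlarged feasible interval for α studied in the paper enters.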
NASA Astrophysics Data System (ADS)
Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.
2012-05-01
Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
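The library least-squares step and the truncated-SVD regularization used for comparison can be sketched as follows (illustrative; the library spectra and truncation threshold are toy assumptions, not the MCLLS code):

```python
import numpy as np

def library_multipliers_tsvd(Lib, y, rcond=1e-6):
    """Least-squares library multipliers for a measured spectrum y, where
    the columns of Lib are individual library spectra. Singular values
    below rcond * s_max are truncated, stabilizing the solution when the
    normal-equations (covariance) matrix is ill-conditioned, e.g. when two
    libraries are highly correlated or an element is present only in trace
    amounts."""
    U, s, Vt = np.linalg.svd(Lib, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
```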
NASA Astrophysics Data System (ADS)
Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.
2008-04-01
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs), hence increasing interest has grown in the scientific community to exploit this technology. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGAs computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily big matrices without changing the number of PEs in the architecture. Therefore, this architecture is only limited by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. 
Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
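The core operation the hardware streams through is the sparse matrix-vector product y = Ax over the stored nonzeros. A minimal software analogue in compressed sparse row (CSR) form (the example matrix is illustrative; the paper's striping and partitioning schemes are not modeled here):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a sparse matrix in CSR form: data[k] holds the
    nonzeros of row i for k in [indptr[i], indptr[i+1]), and
    indices[k] is the column of data[k]."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example:  [[4, 0, 1],
#                [0, 3, 0],
#                [2, 0, 5]]
data    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
y = csr_matvec(data, indices, indptr, x)   # → [7. 6. 17.]
```

Each output element only touches that row's nonzeros, which is what makes a stream-through, externally-fed pipeline viable for matrices far larger than on-chip memory.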
Monitoring hydraulic stimulation using telluric sounding
NASA Astrophysics Data System (ADS)
Rees, Nigel; Heinson, Graham; Conway, Dennis
2018-01-01
The telluric sounding (TS) method is introduced as a potential tool for monitoring hydraulic fracturing at depth. The advantage of this technique is that it requires only the measurement of electric fields, which is cheap and easy compared with magnetotelluric measurements. Additionally, the transfer function between electric fields at two locations is essentially the identity matrix for a 1D Earth, regardless of the vertical structure. Therefore, changes in the Earth resulting from the introduction of conductive bodies underneath one of these sites can be associated with deviations from the identity matrix, with static shift appearing as a galvanic multiplier at all periods. Singular value decomposition and eigenvalue analysis can reduce the complexity of the resulting telluric distortion matrix to simpler parameters that can be visualised in the form of Mohr circles. This technique would be useful in constraining the lateral extent of resistivity changes. We test the viability of the TS method for monitoring on both a synthetic dataset and a hydraulic stimulation of an enhanced geothermal system case study conducted in Paralana, South Australia. The synthetic data example shows small but consistent changes in the transfer functions associated with hydraulic stimulation, with grids of Mohr circles introduced as a useful diagnostic tool for visualising the extent of fluid movement. The Paralana electric field data were relatively noisy and affected by the dead band, making the analysis of transfer functions difficult. However, changes on the order of 5% were observed at periods of 5 s and longer. We conclude that deep monitoring using the TS method is marginal at depths on the order of 4 km, and that for meaningful interpretations, electric field data need to be of high quality with low levels of site noise.
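The deviation-from-identity diagnostic can be sketched numerically: for a 2x2 inter-site transfer function T, the singular values and the distance of T from the identity summarize the distortion (the post-stimulation matrix below is hypothetical, chosen only to illustrate the computation):

```python
import numpy as np

def distortion_parameters(T):
    """Singular values of a 2x2 inter-site transfer function and its
    deviation from the identity (Frobenius norm). The singular values
    describe the axes of the distortion ellipse that Mohr-circle plots
    visualise."""
    s = np.linalg.svd(T, compute_uv=False)
    return s, np.linalg.norm(T - np.eye(2))

T_1d  = np.eye(2)                    # undisturbed 1D Earth: identity
T_mon = np.array([[0.95, 0.03],      # hypothetical post-stimulation
                  [0.02, 0.90]])     # transfer function
s0, d0 = distortion_parameters(T_1d)
s1, d1 = distortion_parameters(T_mon)
```

A conductive body under one site pushes the singular values away from unity and the deviation away from zero, while a pure static shift would multiply T by a single period-independent scalar.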
Matrix Sturm-Liouville equation with a Bessel-type singularity on a finite interval
NASA Astrophysics Data System (ADS)
Bondarenko, Natalia
2017-03-01
The matrix Sturm-Liouville equation on a finite interval with a Bessel-type singularity at one end of the interval is studied. Special fundamental systems of solutions for this equation are constructed: analytic Bessel-type solutions with prescribed behavior at the singular point, and Birkhoff-type solutions with known asymptotics for large values of the spectral parameter. Asymptotic formulas for the Stokes multipliers connecting these two fundamental systems of solutions are derived. We also impose boundary conditions and obtain asymptotic formulas for the spectral data (the eigenvalues and the weight matrices) of the boundary value problem. Our results will be useful in the theory of direct and inverse spectral problems.
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...
1995-01-01
In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated into high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
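For reference, the recursive scheme that the tensor-product formulation linearizes computes seven half-size products per level instead of eight. A textbook sketch for 2^n × 2^n matrices (not the paper's nonrecursive Cray implementation):

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursive multiplication for 2^n x 2^n matrices:
    seven subproblem multiplications M1..M7 instead of eight."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The temporaries held live across the recursion are what drive the O(7^n) working storage of a naive synthesis; the paper's reformulation reduces this to O(4^n).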
Preconditioned alternating direction method of multipliers for inverse problems with constraints
NASA Astrophysics Data System (ADS)
Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie
2017-02-01
We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
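As a finite-dimensional illustration of the ADMM iteration itself (without the paper's preconditioning or Hilbert-space setting), the classic lasso problem min 0.5||Ax − b||² + λ||x||₁ alternates a linear solve, a soft-threshold, and a multiplier update:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Plain ADMM for min 0.5||Ax - b||^2 + lam*||z||_1  s.t. x = z.
    x-update: linear solve; z-update: soft-threshold; u: scaled multiplier."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    # Cached dense solve; the paper's preconditioning is precisely about
    # avoiding large linear systems like this one at each step.
    Q = np.linalg.inv(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        u = u + x - z                     # multiplier (dual) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true                            # noiseless synthetic data
x_hat = admm_lasso(A, b, lam=0.1)
```

The non-smooth ℓ₁ term is handled entirely by the closed-form threshold, which is the structural reason ADMM suits "possibly non-smooth convex" penalties under a linear transformation.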
Jung, Yousung; Shao, Yihan; Head-Gordon, Martin
2007-09-01
The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
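The blocking idea, deleting entire numerically zero rows and columns of a 3-index quantity for each value of the third global index before a dense matrix-matrix multiply, can be sketched generically; the contraction and thresholds below are illustrative, not the SOS-MP2 working equations:

```python
import numpy as np

def blocked_contract(B, C, tol=1e-10):
    """D = sum_k B[:, :, k] @ C[:, :, k].T with screening: for each k,
    rows entirely below tol are deleted, as are inner-index columns
    negligible in either factor, so the surviving submatrices stay
    dense enough for an efficient matrix-matrix multiply."""
    m, p = B.shape[0], C.shape[0]
    D = np.zeros((m, p))
    for k in range(B.shape[2]):
        Bk, Ck = B[:, :, k], C[:, :, k]
        j = np.flatnonzero((np.abs(Bk).max(axis=0) > tol) &
                           (np.abs(Ck).max(axis=0) > tol))
        i = np.flatnonzero(np.abs(Bk).max(axis=1) > tol)
        q = np.flatnonzero(np.abs(Ck).max(axis=1) > tol)
        if j.size and i.size and q.size:
            D[np.ix_(i, q)] += Bk[np.ix_(i, j)] @ Ck[np.ix_(q, j)].T
    return D

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 60, 8)); B[np.abs(B) < 1.0] = 0.0
C = rng.standard_normal((40, 60, 8)); C[np.abs(C) < 1.0] = 0.0
D = blocked_contract(B, C)
```

Because whole rows and columns are dropped rather than individual elements, there is almost no sparse book-keeping, and the kernel remains an ordinary dense multiply on the surviving blocks.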
STM contrast of a CO dimer on a Cu(1 1 1) surface: a wave-function analysis.
Gustafsson, Alexander; Paulsson, Magnus
2017-12-20
We present a method used to intuitively interpret the scanning tunneling microscopy (STM) contrast by investigating individual wave functions originating from the substrate and tip side. We use localized basis orbital density functional theory, and propagate the wave functions into the vacuum region at a real-space grid, including averaging over the lateral reciprocal space. Optimization by means of the method of Lagrange multipliers is implemented to perform a unitary transformation of the wave functions in the middle of the vacuum region. The method enables (i) reduction of the number of contributing tip-substrate wave function combinations used in the corresponding transmission matrix, and (ii) to bundle up wave functions with similar symmetry in the lateral plane, so that (iii) an intuitive understanding of the STM contrast can be achieved. The theory is applied to a CO dimer adsorbed on a Cu(1 1 1) surface scanned by a single-atom Cu tip, whose STM image is discussed in detail by the outlined method.
STM contrast of a CO dimer on a Cu(1 1 1) surface: a wave-function analysis
NASA Astrophysics Data System (ADS)
Gustafsson, Alexander; Paulsson, Magnus
2017-12-01
We present a method used to intuitively interpret the scanning tunneling microscopy (STM) contrast by investigating individual wave functions originating from the substrate and tip side. We use localized basis orbital density functional theory, and propagate the wave functions into the vacuum region at a real-space grid, including averaging over the lateral reciprocal space. Optimization by means of the method of Lagrange multipliers is implemented to perform a unitary transformation of the wave functions in the middle of the vacuum region. The method enables (i) reduction of the number of contributing tip-substrate wave function combinations used in the corresponding transmission matrix, and (ii) to bundle up wave functions with similar symmetry in the lateral plane, so that (iii) an intuitive understanding of the STM contrast can be achieved. The theory is applied to a CO dimer adsorbed on a Cu(1 1 1) surface scanned by a single-atom Cu tip, whose STM image is discussed in detail by the outlined method.
Zou, Weiyao; Burns, Stephen A.
2012-01-01
A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.; Alam, Mohammed S.
1998-07-01
Highly-efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use of all available adders can be made.
Cherri, A K; Alam, M S
1998-07-10
Highly-efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adders-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adders-multipliers; consequently, efficient use of all available adders can be made.
Zou, Weiyao; Burns, Stephen A
2012-03-20
A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. © 2012 Optical Society of America
Regulation of a lightweight high efficiency capacitor diode voltage multiplier dc-dc converter
NASA Technical Reports Server (NTRS)
Harrigill, W. T., Jr.; Myers, I. T.
1976-01-01
A method for the regulation of a capacitor diode voltage multiplier dc-dc converter has been developed which has only minor penalties in weight and efficiency. An auxiliary inductor is used, which only handles a fraction of the total power, to control the output voltage through a pulse width modulation method in a buck boost circuit.
NASA Astrophysics Data System (ADS)
Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena
2018-05-01
The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of broadly varied MW structurally similar to native lignin. For the first time, we report the effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L^-1 formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained M_n and M_w values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.
NASA Astrophysics Data System (ADS)
Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena
2018-03-01
The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of broadly varied MW structurally similar to native lignin. For the first time, we report the effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L^-1 formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained M_n and M_w values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.
Video Bandwidth Compression System.
1980-08-01
A scaling function, located between the inverse DPCM and the inverse transform, resides on the decoder matrix multiplier chips. [Recovered section headings: bit unpacker and inverse DPCM slave sync board; inverse DPCM loop boards; inverse transform board; composite video output board; display refresh memory (memory section, timing and control); bit unpacker and inverse DPCM; inverse transform processor.]
A multiple-orbit time-of-flight mass spectrometer based on a low energy electrostatic storage ring
NASA Astrophysics Data System (ADS)
Sullivan, M. R.; Spanjers, T. L.; Thorn, P. A.; Reddish, T. J.; Hammond, P.
2012-11-01
The results are presented for an electrostatic storage ring, consisting of two hemispherical deflector analyzers (HDAs) connected by two separate sets of cylindrical lenses, used as a time-of-flight mass spectrometer. Based on the results of charged-particle simulations and a formal matrix model, the ion storage ring is capable of operating with multiple stable orbits, for both singly and multiply charged ions simultaneously.
Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea
2012-01-01
Brain-computer interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP-BCIs can be handled independently by laymen without expert support, which is indispensable for establishing BCIs in end-users' daily life situations. Furthermore, we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrating ERP-BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character-matrix.
Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea
2012-01-01
Brain–computer interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP–BCIs can be handled independently by laymen without expert support, which is indispensable for establishing BCIs in end-users’ daily life situations. Furthermore, we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP–BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrating ERP–BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character-matrix. PMID:22833713
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication (UVC). Numerical simulation shows good agreement with the experimental results. Based on the received-response characteristics, a heuristic check-matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
Generation of physical random numbers by using homodyne detection
NASA Astrophysics Data System (ADS)
Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro
2016-10-01
Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
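Toeplitz hashing is a standard randomness-extraction step: a raw bit vector is multiplied over GF(2) by a random Toeplitz matrix, which is fully determined by a single diagonal-defining seed string. A small sketch (the block and output sizes are illustrative, not the paper's 500 Mbps parameters):

```python
import numpy as np

def toeplitz_hash(bits, seed_bits, n_out):
    """Multiply a raw bit vector by a random n_out x n_in Toeplitz
    matrix over GF(2). The matrix is constant along its diagonals, so
    seed_bits of length n_out + n_in - 1 determines it completely."""
    n_in = len(bits)
    T = np.empty((n_out, n_in), dtype=np.int64)
    for i in range(n_out):
        for j in range(n_in):
            T[i, j] = seed_bits[i - j + n_in - 1]   # depends on i - j only
    return (T @ bits.astype(np.int64)) % 2           # mod-2 matrix-vector product

rng = np.random.default_rng(1)
raw  = rng.integers(0, 2, size=256, dtype=np.uint8)         # raw sampled bits
seed = rng.integers(0, 2, size=256 + 64 - 1, dtype=np.uint8)
out  = toeplitz_hash(raw, seed, n_out=64)                   # 64 extracted bits
```

Compressing 256 raw bits to 64 output bits discards the bias and correlations of the physical source, which is why the hashed stream passes batteries that the raw stream fails.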
NASA Technical Reports Server (NTRS)
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak-shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak-shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
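The linear forward model behind this representation can be sketched as follows; the Gaussian peak shape and naive pseudo-inverse below are illustrative stand-ins (a true MaxEnt reconstruction would replace the plain least-squares step with an entropy-regularized fit):

```python
import numpy as np

t = np.arange(100)

def peak_shape_matrix(width=4.0):
    """Column j is a unit-height Gaussian peak centred at time j, so the
    observed chromatogram is y = P @ c for a concentration vector c."""
    return np.exp(-0.5 * ((t[:, None] - t[None, :]) / width) ** 2)

P = peak_shape_matrix()
c_true = np.zeros(100)
c_true[[40, 48]] = [1.0, 0.6]      # two overlapped peaks
y = P @ c_true                      # simulated noiseless chromatogram
# Naive pseudo-inverse deconvolution; P is severely ill-conditioned, so
# with detector noise this step blows up -- the entropy prior is what
# keeps the real MaxEnt inversion stable.
c_hat = np.linalg.lstsq(P, y, rcond=None)[0]
```

The smooth peak-shape matrix has rapidly decaying singular values, which reproduces the reported trade-off: the noisier the data, the less peak separation any inversion can recover.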
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-09
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
Multiplier-less high-speed squaring circuit for binary numbers
NASA Astrophysics Data System (ADS)
Sethi, Kabiraj; Panda, Rutuparna
2015-03-01
The squaring operation is important in many applications in signal processing, cryptography, etc. In general, squaring circuits reported in the literature use fast multipliers. A novel idea of a squaring circuit without using multipliers is proposed in this paper. The ancient Indian method used for squaring decimal numbers is extended here to binary numbers. The key to our success is that no multiplier is used; instead, one squaring circuit is used. The hardware architecture of the proposed squaring circuit is presented. The design is coded in VHDL and synthesised and simulated in Xilinx ISE Design Suite 10.1 (Xilinx Inc., San Jose, CA, USA). It is implemented in a Xilinx Virtex 4vls15sf363-12 device (Xilinx Inc.). The results in terms of time delay and area are compared with both the modified Booth's algorithm and a squaring circuit using Vedic multipliers. Our proposed squaring circuit seems to have better performance in terms of both speed and area.
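The underlying principle of any multiplier-less squarer is that x² can be accumulated from shifted copies of x selected by x's own bits, using only shifts and adds. A generic software sketch of that principle (not the paper's specific ancient-Indian-method architecture):

```python
def square_shift_add(x):
    """Square a non-negative integer using only shifts and adds:
    x^2 = sum of (x << i) over the set bit positions i of x, i.e.
    multiplication of x by itself without a multiplier block."""
    acc, i, v = 0, 0, x
    while v:
        if v & 1:            # bit i of x is set
            acc += x << i    # add the correspondingly shifted copy of x
        v >>= 1
        i += 1
    return acc

print(square_shift_add(13))  # → 169
```

In hardware the same idea collapses into an array of AND gates feeding an adder tree, which is where dedicated squarers save area over a general multiplier: roughly half the partial products are redundant by symmetry.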
Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.
Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng
2011-10-01
This paper proposes a novel H(∞) filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H(∞) filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H(∞) performance of the filtering error system. Second, relaxed conditions for H(∞) performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H(∞) filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.
A method for characterizing after-pulsing and dark noise of PMTs and SiPMs
NASA Astrophysics Data System (ADS)
Butcher, A.; Doria, L.; Monroe, J.; Retière, F.; Smith, B.; Walding, J.
2017-12-01
Photo-multiplier tubes (PMTs) and silicon photo-multipliers (SiPMs) are detectors sensitive to single photons that are widely used for the detection of scintillation and Cerenkov light in subatomic physics and medical imaging. This paper presents a method for characterizing two of the main noise sources that PMTs and SiPMs share: dark noise and correlated noise (after-pulsing). The proposed method allows for a model-independent measurement of the after-pulsing timing distribution and dark noise rate.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including Lagrange multiplier, and to give a simpler formulation of Adomian decomposition and modified Adomian decomposition method in terms of newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and Adomian decomposition method, we find unnecessary calculations for Lagrange multiplier and also repeated calculations involved in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
Minimizing energy dissipation of matrix multiplication kernel on Virtex-II
NASA Astrophysics Data System (ADS)
Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook
2002-07-01
In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total (system-wide) energy dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.
Lagrange constraint neural network for audio varying BSS
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume the artificial neural network (ANN) model at all, but derives it from first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) - λC(S, A(x,t)), incorporating the measurement constraint C(S, A(x,t)) = λ^T([A]S - X) + (λ_0 - 1)(Σ_i s_i - 1) using the vector Lagrange multiplier λ, and the a priori Shannon entropy f(S) = -Σ_i s_i log s_i as the contrast function of an unknown number of independent sources s_i. Szu et al. first solved, in 1997, the general blind source separation (BSS) problem for a spatially and temporally varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because the ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatially-temporally varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a posteriori maximum-entropy methodologies, defined by the ANN weight matrix [W] and sigmoid σ post-processing H(Y = σ([W]X)), by Bell-Sejnowski, Amari and Oja (BSAO), called independent component analysis (ICA). The two approaches are mirror-symmetric maximum-entropy methodologies and both work for a constant unknown mixing matrix [A]; the major difference is whether the ensemble average is taken over neighborhood pixel data X in BSAO or over the a priori source variables S in LCNN, which dictates that only the latter works for a spatially-temporally varying [A(x,t)], since such variation does not allow the neighborhood pixel average. We expect sharper de-mixing by the LCNN method, demonstrated in a controlled ground-truth experiment simulating a time-variant mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saëns' Swan and Rachmaninov's cello concerto).
Practical Active Capacitor Filter
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr. (Inventor)
2005-01-01
A method and apparatus is described that filters an electrical signal. The filtering uses a capacitor multiplier circuit where the capacitor multiplier circuit uses at least one amplifier circuit and at least one capacitor. A filtered electrical signal results from a direct connection from an output of the at least one amplifier circuit.
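As background, the familiar textbook relation for a capacitance multiplier can be sketched as follows; the gain relation below is a generic assumption for illustration, not taken from the patent itself:

```python
# Textbook capacitance-multiplier relation (the patent's precise topology
# may differ): a capacitor C placed in an amplifier feedback loop with
# gain A presents an effective capacitance of roughly C * (1 + A),
# letting a small physical capacitor act like a much larger one in a filter.
def effective_capacitance(c_farads, gain):
    return c_farads * (1 + gain)

# A 1 nF capacitor with a gain of 99 behaves like roughly 100 nF.
c_eff = effective_capacitance(1e-9, 99)
```

This is why such circuits are attractive for integrated filters, where large physical capacitors are impractical.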
The Logan School Motor Development Program for the Deaf-Blind and Sensory Impaired.
ERIC Educational Resources Information Center
Logan, Thomas E.
Presented are numerous motor development activities for sensory impaired, severely and profoundly mentally retarded, and multiply handicapped mentally retarded students of all ages. Background information is provided on program objectives and administration, the multiply handicapped child, motor development, and methods of movement training.…
Stochastic quasi-Newton molecular simulations
NASA Astrophysics Data System (ADS)
Chau, C. D.; Sevink, G. J. A.; Fraaije, J. G. E. M.
2010-08-01
We report a new and efficient factorized algorithm for the determination of the adaptive compound mobility matrix B in a stochastic quasi-Newton method (S-QN) that does not require additional potential evaluations. For one-dimensional and two-dimensional test systems, we previously showed that S-QN gives rise to efficient configurational space sampling with good thermodynamic consistency [C. D. Chau, G. J. A. Sevink, and J. G. E. M. Fraaije, J. Chem. Phys. 128, 244110 (2008), 10.1063/1.2943313]. Potential applications of S-QN are quite ambitious, and include structure optimization, analysis of correlations and automated extraction of cooperative modes. However, the potential can only be fully exploited if the computational and memory requirements of the original algorithm are significantly reduced. In this paper, we consider a factorized mobility matrix B = JJᵀ and focus on the nontrivial fundamentals of an efficient algorithm for updating the noise multiplier J. The new algorithm requires O(n²) multiplications per time step instead of the O(n³) multiplications in the original scheme due to Cholesky decomposition. In a recursive form, the update scheme circumvents matrix storage and enables limited-memory implementation, in the spirit of the well-known limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, allowing for a further reduction of the computational effort to O(n). We analyze in detail the performance of the factorized (FSU) and limited-memory (L-FSU) algorithms in terms of convergence and (multiscale) sampling, for an elementary but relevant system that involves multiple time and length scales. Finally, we use this analysis to formulate conditions for the simulation of the complex high-dimensional potential energy landscapes of interest.
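The cost argument can be made concrete with a minimal NumPy sketch (illustrative matrices only, not the paper's update scheme): once a factor J with B = JJᵀ is in hand, the correlated noise needed by the stochastic update is a single matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric positive-definite mobility matrix B.
n = 5
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)

# Factorized form B = J J^T.  Here J comes from a one-off Cholesky
# factorization (O(n^3)); the point of the paper's scheme is to update
# J directly at O(n^2) per step instead of re-factorizing B.
J = np.linalg.cholesky(B)

# Correlated noise for the stochastic update: with xi ~ N(0, I), the
# product J @ xi has covariance J J^T = B, as thermodynamic consistency
# requires -- one matrix-vector multiplication per time step.
xi = rng.standard_normal(n)
noise = J @ xi
```

The recursive limited-memory variant in the paper avoids even storing J as a full matrix, in the spirit of L-BFGS.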
NASA Astrophysics Data System (ADS)
Kappler, Karl N.; Schneider, Daniel D.; MacLean, Laura S.; Bleier, Thomas E.
2017-08-01
A method for identification of pulsations in time series of magnetic field data which are simultaneously present in multiple channels of data at one or more sensor locations is described. Candidate pulsations of interest are first identified in geomagnetic time series by inspection. Time series of these "training events" are represented in matrix form and transpose-multiplied to generate time-domain covariance matrices. The ranked eigenvectors of this matrix are stored as a feature of the pulsation. In the second stage of the algorithm, a sliding window (approximately the width of the training event) is moved across the vector-valued time series comprising the channels on which the training event was observed. At each window position, the data covariance matrix and associated eigenvectors are calculated. We compare the orientation of the dominant eigenvectors of the training data to those from the windowed data and flag windows where the dominant eigenvector directions are similar. This was successful in automatically identifying pulses which share polarization and appear to be from the same source process. We apply the method to a case study of continuously sampled (50 Hz) data from six observatories, each equipped with three-component induction coil magnetometers. We examine a 90-day interval of data associated with a cluster of four observatories located within 50 km of Napa, California, together with two remote reference stations, one 100 km to the north of the cluster and the other 350 km south. When the training data contain signals present in the remote reference observatories, we are reliably able to identify and extract global geomagnetic signals such as solar-generated noise. When the training data contain pulsations only observed in the cluster of local observatories, we identify several types of non-plane-wave signals having similar polarization.
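A minimal NumPy sketch of the two-stage idea (synthetic data; the window width and threshold are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def dominant_eigvec(window):
    """Dominant eigenvector of the time-domain covariance of a
    (samples x channels) window, formed by transpose-multiplying."""
    C = window.T @ window
    _, V = np.linalg.eigh(C)      # ascending eigenvalues
    return V[:, -1]               # eigenvector of the largest eigenvalue

def flag_similar_windows(data, template, width, threshold=0.95):
    """Slide a window across the multichannel series and flag start
    indices where the dominant covariance eigenvector aligns with the
    training template (|cosine| near 1; eigenvector sign is arbitrary)."""
    flags = []
    for start in range(data.shape[0] - width + 1):
        v = dominant_eigvec(data[start:start + width])
        if abs(v @ template) >= threshold:
            flags.append(start)
    return flags

# Synthetic check: a polarized pulse along a fixed direction, in noise.
direction = np.array([1.0, 2.0, 0.5])
direction /= np.linalg.norm(direction)
data = 0.01 * rng.standard_normal((200, 3))
pulse = np.sin(np.linspace(0, 4 * np.pi, 20))
data[60:80] += np.outer(pulse, direction)

template = dominant_eigvec(data[60:80])   # "training event"
flags = flag_similar_windows(data, template, width=20)
```

Windows containing the polarized pulse are flagged because their dominant covariance eigenvector points along the same polarization direction as the training event.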
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Chang, J. J.; Shyu, H. C.; Reed, I. S.
1986-01-01
A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-pw technology.
NASA Technical Reports Server (NTRS)
Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.
1987-01-01
A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
Kanarska, Yuliya; Walton, Otis
2015-11-30
Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle-particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.
NASA Astrophysics Data System (ADS)
Hong, Youngjoon; Nicholls, David P.
2017-09-01
The capability to rapidly and robustly simulate the scattering of linear waves by periodic, multiply layered media in two and three dimensions is crucial in many engineering applications. In this regard, we present a High-Order Perturbation of Surfaces method for linear wave scattering in a multiply layered periodic medium to find an accurate numerical solution of the governing Helmholtz equations. For this we truncate the bi-infinite computational domain to a finite one with artificial boundaries, above and below the structure, and enforce transparent boundary conditions there via Dirichlet-Neumann Operators. This is followed by a Transformed Field Expansion resulting in a Fourier collocation, Legendre-Galerkin, Taylor series method for solving the problem in a transformed set of coordinates. Assorted numerical simulations display the spectral convergence of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Schilder, J.; Ellenbroek, M.; de Boer, A.
2017-12-01
In this work, the floating frame of reference formulation is used to create a flexible multibody model of slender offshore structures such as pipelines and risers. It is shown that due to the chain-like topology of the considered structures, the equation of motion can be expressed in terms of absolute interface coordinates. In the presented form, kinematic constraint equations are satisfied explicitly and the Lagrange multipliers are eliminated from the equations. Hence, the structures can be conveniently coupled to finite element or multibody models of, for example, the seabed and vessel. The chain-like topology enables the efficient use of recursive solution procedures for both transient dynamic analysis and equilibrium analysis. For this, the transfer matrix method is used. In order to improve the convergence of the equilibrium analysis, the analytical solution of an ideal catenary is used as an initial configuration, reducing the number of required iterations.
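The ideal-catenary initial configuration mentioned above is analytic; a minimal sketch:

```python
import math

def catenary(x, a):
    """Ideal catenary y = a * cosh(x / a): the analytical shape of a
    hanging chain, used as an initial configuration so the equilibrium
    analysis starts close to the solution and converges in fewer
    iterations.  The parameter a sets the sag of the curve."""
    return a * math.cosh(x / a)

# The lowest point of the curve sits at height a (at x = 0).
lowest = catenary(0.0, 2.5)
```

Starting the iteration from this curve, rather than from a straight or arbitrary configuration, is what cuts the iteration count for pipeline- and riser-like structures.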
A static data flow simulation study at Ames Research Center
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Howard, Lauri S.
1987-01-01
Demands in computational power, particularly in the area of computational fluid dynamics (CFD), led NASA Ames Research Center to study advanced computer architectures. One architecture being studied is the static data flow architecture based on research done by Jack B. Dennis at MIT. To improve understanding of this architecture, a static data flow simulator, written in Pascal, has been implemented for use on a Cray X-MP/48. A matrix multiply and a two-dimensional fast Fourier transform (FFT), two algorithms used in CFD work at Ames, have been run on the simulator. Execution times can vary by a factor of more than 2 depending on the partitioning method used to assign instructions to processing elements. Service time for matching tokens has proved to be a major bottleneck. Loop control and array address calculation overhead can double the execution time. The best sustained MFLOPS rates were less than 50% of the maximum capability of the machine.
Acoustooptic Processing of Two Dimensional Signals Using Temporal and Spatial Integration
1989-05-12
Demetri Psaltis, John Hong, Scott Hudson, Jeff Yu, Fai Mok, Mark Neifeld, Nabeel Riza, and Dave Brady
Grant AFOSR-85-0332, Air Force Office of Scientific Research
In addition we examine the capacity when the filter is binarized. Vector-matrix multipliers are fundamental components of many signal processing systems.
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522
NASA Astrophysics Data System (ADS)
Simbanefayi, Innocent; Khalique, Chaudry Masood
2018-03-01
In this work we study the Korteweg-de Vries-Benjamin-Bona-Mahony (KdV-BBM) equation, which describes the two-way propagation of waves. Using Lie symmetry method together with Jacobi elliptic function expansion and Kudryashov methods we construct its travelling wave solutions. Also, we derive conservation laws of the KdV-BBM equation using the variational derivative approach. In this method, we begin by computing second-order multipliers for the KdV-BBM equation followed by a derivation of the respective conservation laws for each multiplier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmoker, J.W.
1984-11-01
Data indicate that porosity loss in subsurface carbonate rocks can be empirically represented by the power function θ = a(TTI)^b, where θ is regional porosity, TTI is Lopatin's time-temperature index of thermal maturity, the exponent, b, equals approximately -0.372, and the multiplier, a, is constant for a given data population but varies by an order of magnitude overall. Implications include the following. 1. The decrease of carbonate porosity by burial diagenesis is a maturation process depending exponentially on temperature and linearly on time. 2. The exponent, b, is essentially independent of the rock matrix, and may reflect rate-limiting processes of diffusive transport. 3. The multiplying coefficient, a, incorporates the net effect on porosity of all depositional and diagenetic parameters. Within constraints, carbonate-porosity prediction appears possible on a regional measurement scale as a function of thermal maturity. Estimation of carbonate porosity at the time of hydrocarbon generation, migration, or trapping also appears possible.
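The power function itself is simple to evaluate; a minimal sketch with an illustrative multiplier a (the abstract notes that a varies by an order of magnitude across data populations, so the value below is not from the paper):

```python
def porosity(tti, a=30.0, b=-0.372):
    """Regional carbonate porosity as the power function theta = a * TTI**b,
    with TTI being Lopatin's time-temperature index of thermal maturity.
    The multiplier a (here 30.0, illustrative only) is population-specific;
    the exponent b is approximately -0.372 and roughly matrix-independent."""
    return a * tti ** b

# Porosity declines as thermal maturity (TTI) increases:
low_maturity = porosity(10.0)
high_maturity = porosity(100.0)
```

Because b is negative, higher thermal maturity always yields lower predicted porosity, which is the monotone trend the empirical fit encodes.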
Argo, Paul E.; Fitzgerald, T. Joseph
1993-01-01
Fading channel effects on a transmitted communication signal are simulated with both frequency and time variations using a channel scattering function to affect the transmitted signal. A conventional channel scattering function is converted to a series of channel realizations by multiplying the square root of the channel scattering function by a complex number of which the real and imaginary parts are each independent variables. The two-dimensional inverse FFT of this complex-valued channel realization yields a matrix of channel coefficients that provide a complete frequency-time description of the channel. The transmitted radio signal is segmented to provide a series of signal segments, and each segment is subjected to an FFT to generate a series of signal coefficient matrices. The channel coefficient matrices and signal coefficient matrices are then multiplied and subjected to an inverse FFT to output a signal representing the received, affected radio signal. A variety of channel scattering functions can be used to characterize the response of a transmitter-receiver system to such atmospheric effects.
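A minimal NumPy sketch of generating one channel realization from a scattering function (the array shapes and the power profile are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Channel scattering function S: a nonnegative power profile over delay
# and Doppler (here a random toy profile standing in for a real one).
n_delay, n_doppler = 8, 16
S = rng.random((n_delay, n_doppler))

# One channel realization: multiply sqrt(S) elementwise by unit-variance
# complex Gaussians (independent real and imaginary parts), then take the
# two-dimensional inverse FFT to obtain the matrix of channel coefficients
# giving a complete frequency-time description of the fading channel.
g = (rng.standard_normal(S.shape) + 1j * rng.standard_normal(S.shape)) / np.sqrt(2)
H = np.fft.ifft2(np.sqrt(S) * g)
```

Repeating the draw of g yields an ensemble of statistically independent channel realizations, all consistent with the same scattering function.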
Hu, Yanmin; Shamaei-Tousi, Alireza; Liu, Yingjun; Coates, Anthony
2010-01-01
In a clinical infection, multiplying and non-multiplying bacteria co-exist. Antibiotics kill multiplying bacteria, but they are very inefficient at killing non-multipliers which leads to slow or partial death of the total target population of microbes in an infected tissue. This prolongs the duration of therapy, increases the emergence of resistance and so contributes to the short life span of antibiotics after they reach the market. Targeting non-multiplying bacteria from the onset of an antibiotic development program is a new concept. This paper describes the proof of principle for this concept, which has resulted in the development of the first antibiotic using this approach. The antibiotic, called HT61, is a small quinolone-derived compound with a molecular mass of about 400 Daltons, and is active against non-multiplying bacteria, including methicillin sensitive and resistant, as well as Panton-Valentine leukocidin-carrying Staphylococcus aureus. It also kills mupirocin resistant MRSA. The mechanism of action of the drug is depolarisation of the cell membrane and destruction of the cell wall. The speed of kill is within two hours. In comparison to the conventional antibiotics, HT61 kills non-multiplying cells more effectively, 6 logs versus less than one log for major marketed antibiotics. HT61 kills methicillin sensitive and resistant S. aureus in the murine skin bacterial colonization and infection models. No resistant phenotype was produced during 50 serial cultures over a one year period. The antibiotic caused no adverse effects after application to the skin of minipigs. Targeting non-multiplying bacteria using this method should be able to yield many new classes of antibiotic. These antibiotics may be able to reduce the rate of emergence of resistance, shorten the duration of therapy, and reduce relapse rates. PMID:20676403
Econo-ESA in semantic text similarity.
Rahutomo, Faisal; Aritsugi, Masayoshi
2014-01-01
Explicit semantic analysis (ESA) utilizes an immense Wikipedia index matrix in its interpreter part. This part of the analysis multiplies a large matrix by a term vector to produce a high-dimensional concept vector. A similarity measurement between two texts is then performed between two concept vectors with numerous dimensions. Both the interpretation and the similarity measurement steps are computationally expensive. This paper proposes an economical scheme for ESA, named econo-ESA. We investigate two aspects of this proposal: dimensional reduction and experiments with various data. We use eight recycled test collections in semantic text similarity. The experimental results show that both the dimensional reduction and the test collection characteristics can influence the results. They also show that an appropriate concept reduction in econo-ESA can decrease the cost with only minor differences in results from the original ESA.
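The two expensive steps (interpretation and similarity measurement) can be sketched in a few lines of NumPy with a toy index matrix standing in for the Wikipedia one:

```python
import numpy as np

def esa_similarity(term_vec_a, term_vec_b, index_matrix):
    """Map two term vectors into concept space via the (concepts x terms)
    index matrix -- the interpreter step -- then compare the resulting
    high-dimensional concept vectors by cosine similarity.  These are the
    two costly steps that econo-ESA seeks to make cheaper."""
    c_a = index_matrix @ term_vec_a
    c_b = index_matrix @ term_vec_b
    return (c_a @ c_b) / (np.linalg.norm(c_a) * np.linalg.norm(c_b))

rng = np.random.default_rng(0)
index_matrix = rng.random((50, 10))   # toy stand-in for the Wikipedia index
v = rng.random(10)
```

With a real Wikipedia-scale index the concept dimension runs to the hundreds of thousands, which is exactly why econo-ESA reduces it.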
29 CFR 4206.5 - Amount of credit in plans using the modified presumptive method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section, multiplied by the fractions described or determined under paragraph (c) of this section. When an... multiplied by the two fractions described in this paragraph in order to determine the amount of old liabilities that was previously assessed against the employer. (1) The first fraction is the fraction...
29 CFR 4206.4 - Amount of credit in plans using the presumptive method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section, multiplied by the fractions described or determined under paragraph (c) of this section. When an... paragraph (b) are multiplied by the two fractions described in this paragraph in order to determine the amount of the old liabilities that was previously assessed against the employer. (1) The first fraction...
Zhang, Dapeng; Lv, Fan; Wang, Liyan; Sun, Liangxian; Zhou, Jian; Su, Wenyi; Bi, Peng
2007-01-01
Objective To estimate the size of the population of female sex workers (FSWs) on the basis of the HIV/AIDS behavioural surveillance approach in two Chinese cities, using a multiplier method. Method Relevant questions were inserted into the questionnaires given to two behavioural surveillance groups—female attendees of sexually transmitted disease (STD) clinics and FSWs. The size of the FSW population was derived by multiplying the number of FSWs in selected STD clinics during the study period by the proportion of FSW population who reported having attended the selected STD clinics during the same period. Results The size of the FSW population in the urban area of Xingyi, China, was estimated to be about 2500 (95% CI 2000 to 3400). This accounted for 3.6% of the total urban adult female population. There were an estimated 17 500 FSWs in the urban area of Guiyang, China (95% CI 10 300 to 31 900) or about 3.4% of its total urban adult female population (rounded to the nearest 100). Conclusions The multiplier method could be a useful and cost‐effective approach to estimate the FSW population, especially suitable in countries where HIV behavioural surveillance has been established in high‐risk populations. PMID:17090568
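The core arithmetic of the multiplier method is a single division; a sketch with illustrative numbers, not the study's data:

```python
def multiplier_estimate(clinic_count, proportion_attending):
    """Estimate the total FSW population: the number of FSWs counted at
    the selected STD clinics during the study period, scaled up by the
    reciprocal of the proportion of surveyed FSWs who reported attending
    those clinics in the same period."""
    return clinic_count / proportion_attending

# Illustrative: 250 FSWs counted at clinics, 25% of surveyed FSWs
# reported attending those clinics -> estimated population of 1000.
est = multiplier_estimate(250, 0.25)
```

The confidence intervals reported in the abstract come from the sampling uncertainty in the attendance proportion, not from the clinic count.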
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
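The LR approximation step can be illustrated with a small SVD sketch (the sizes are illustrative; in the actual method the matrix holds simulated signal evolutions and the retained rank is chosen per application):

```python
import numpy as np

rng = np.random.default_rng(3)

# Matrix of signal evolutions (time points x entries).  Truncating its
# SVD to rank r means only r singular-value images need be reconstructed,
# instead of one image per time point -- far fewer Fourier transforms.
D = rng.standard_normal((1000, 200))
U, s, Vt = np.linalg.svd(D, full_matrices=False)

r = 10
D_r = (U[:, :r] * s[:r]) @ Vt[:r]   # best rank-r approximation of D
```

The same truncation improves conditioning, which the paper then exploits further with an ADMM-solved augmented Lagrangian.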
Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I
2017-01-01
This paper evaluates an efficient implementation to multiply the inverse of a numerator relationship matrix for genotyped animals by a vector. The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix including genotyped animals and their ancestors. These elements were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The multiplication was then implemented as a series of sparse matrix-vector multiplications. Diagonal elements of the inverse, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion using 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 sec, 3 min, and 5 min, respectively, for setting up. Only <1 sec was required for the multiplication in each PCG iteration for any data set. When the equations in ssGBLUP are solved with the PCG algorithm, this inverse is no longer a limiting factor in the computations.
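The memory contrast reported above comes down to never forming the dense inverse. A toy NumPy sketch of the idea (dense and small here for brevity; the paper's implementation instead applies sparse factors obtained from Henderson's rule):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Small symmetric positive-definite stand-in for a relationship matrix.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
v = rng.standard_normal(n)

# Explicit route: build the dense inverse, then multiply.  This is the
# O(n^2)-memory, O(n^3)-time path that becomes infeasible at scale.
x_dense = np.linalg.inv(A) @ v

# Decomposed route: apply the inverse to v without ever forming it.
# With sparse factors, as in the paper, this collapses to a short series
# of cheap sparse matrix-vector products per PCG iteration.
x_solve = np.linalg.solve(A, v)
```

Both routes give the same vector, but only the second scales to millions of pedigree animals.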
Rectangular rotation of spherical harmonic expansion of arbitrary high degree and order
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2017-08-01
In order to move the polar singularity of an arbitrary spherical harmonic expansion to a point on the equator, we rotate the expansion around the y-axis by 90° such that the x-axis becomes a new pole. The expansion coefficients are transformed by multiplying by a special value of the Wigner D-matrix and a normalization factor. The transformation matrix is unchanged whether the coefficients are 4π fully normalized or Schmidt quasi-normalized. The matrix is recursively computed by the so-called X-number formulation (Fukushima in J Geodesy 86: 271-285, 2012a). As an example, we obtained 2190 × 2190 coefficients of the rectangular rotated spherical harmonic expansion of EGM2008. A proper combination of the original and the rotated expansions will be useful in (i) integrating the polar orbits of artificial satellites precisely and (ii) synthesizing/analyzing the gravitational/geomagnetic potentials and their derivatives accurately in the high latitude regions including the arctic and antarctic area.
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
Shulkind, Gal; Nazarathy, Moshe
2012-12-17
We present an efficient method for system identification (nonlinear channel estimation) of third order nonlinear Volterra Series Transfer Function (VSTF) characterizing the four-wave-mixing nonlinear process over a coherent OFDM fiber link. Despite the seemingly large number of degrees of freedom in the VSTF (cubic in the number of frequency points) we identified a compressed VSTF representation which does not entail loss of information. Additional slightly lossy compression may be obtained by discarding very low power VSTF coefficients associated with regions of destructive interference in the FWM phased array effect. Based on this two-staged VSTF compressed representation, we develop a robust and efficient algorithm of nonlinear system identification (optical performance monitoring) estimating the VSTF by transmission of an extended training sequence over the OFDM link, performing just a matrix-vector multiplication at the receiver by a pseudo-inverse matrix which is pre-evaluated offline. For 512 (1024) frequency samples per channel, the VSTF measurement takes less than 1 (10) msec to complete with computational complexity of one real-valued multiply-add operation per time sample. Relative to a naïve exhaustive three-tone-test, our algorithm is far more tolerant of ASE additive noise and its acquisition time is orders of magnitude faster.
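The offline/online split described above can be sketched in NumPy (the dimensions and the linear model below are illustrative, not the paper's exact VSTF parameterization):

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear-in-the-parameters system identification: the received training
# samples satisfy y = Phi @ h + noise, where Phi is built from the known
# extended training sequence and h collects the (compressed) VSTF
# coefficients to be estimated.
n_obs, n_coef = 300, 40
Phi = rng.standard_normal((n_obs, n_coef))
h_true = rng.standard_normal(n_coef)
y = Phi @ h_true + 0.01 * rng.standard_normal(n_obs)

# The pseudo-inverse is pre-evaluated offline; at the receiver the
# estimate then costs just one matrix-vector multiplication.
Phi_pinv = np.linalg.pinv(Phi)   # offline, once
h_est = Phi_pinv @ y             # online, per measurement
```

Because the training matrix is tall (many more observations than coefficients), the least-squares estimate averages out additive noise, which is the source of the ASE tolerance claimed over a naive three-tone test.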
Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic
NASA Astrophysics Data System (ADS)
Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat
2017-03-01
The problem of investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution, and that portfolio risk is measured using Value-at-Risk (VaR). The portfolio optimization is thus carried out on the Mean-VaR model using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix between asset returns, and a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. From the return data of these five stocks, the weight composition vector and the efficient-surface graph of the portfolio are obtained; both can serve as a guide for investors in investment decisions.
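As a concrete (and much simplified) instance of the Lagrange multiplier step, the global minimum-variance special case has a closed-form weight vector; a NumPy sketch with illustrative numbers, not the paper's full Mean-VaR model:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights from the Lagrange
    multiplier method under the budget constraint 1'w = 1:
        w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    This is the simplest special case of the weighting-vector equation;
    the paper's version also involves mean returns and risk tolerance."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)   # Sigma^{-1} 1, without forming the inverse
    return x / (ones @ x)

# Illustrative 2-asset covariance matrix.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)
```

Sweeping the risk-tolerance factor through the full model traces out the efficient surface the paper plots for the five Indonesian stocks.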
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification. The LiDAR data is represented as a 4-order tensor. Sparse representation for classification(SRC) method is used for LiDAR tensor classification. It turns out SRC need only a few of training samples from each class, meanwhile can achieve good classification result. Multiple features are extracted from raw LiDAR points to generate a high-dimensional vector at each point. Then the LiDAR tensor is built by the spatial distribution and feature vectors of the point neighborhood. The entries of LiDAR tensor are accessed via four indexes. Each index is called mode: three spatial modes in direction X ,Y ,Z and one feature mode. Sparse representation for classification(SRC) method is proposed in this paper. The sparsity algorithm is to find the best represent the test sample by sparse linear combination of training samples from a dictionary. To explore the sparsity of LiDAR tensor, the tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices could be considered as the principal components in each mode. The entries of core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structure dictionary along each mode is built. The overall structure dictionary composes of class-specified sub-dictionaries. Then the sparse core tensor is calculated by tensor OMP(Orthogonal Matching Pursuit) method based on dictionaries along each mode. 
It is expected that the original tensor is well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes are nearly zero. SRC therefore uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is used and classified into 6 classes: ground, roofs, vegetation, covered ground, walls, and other points. Only 6 training samples per class are taken. For the final classification result, ground and covered ground are merged into a single class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls, and 20.39% for other objects.
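The Tucker step the abstract relies on (core tensor times a matrix along each mode) can be sketched with a plain-numpy higher-order SVD; the tensor sizes here are hypothetical stand-ins for a real LiDAR tensor, and the dictionary-learning and tensor-OMP stages of the paper are not reproduced:

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    """Truncated higher-order SVD: core tensor plus one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Mode-product: contract the factor with the tensor along this mode.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy 4-order "LiDAR tensor": three spatial modes (X, Y, Z) and one feature mode.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4, 6))
core, factors = tucker_hosvd(T, ranks=(4, 4, 4, 6))

# With full ranks the decomposition is exact: reconstruct by multiplying the
# core tensor by each factor matrix along its mode.
recon = core
for mode, U in enumerate(factors):
    recon = np.moveaxis(np.tensordot(U, np.moveaxis(recon, mode, 0), axes=1), 0, mode)
print("max reconstruction error:", np.abs(T - recon).max())
```

Truncating the ranks below the mode dimensions gives the approximate, compressed representation that the sparse coding operates on.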
Network Inference via the Time-Varying Graphical Lasso
Hallac, David; Park, Youngsuk; Boyd, Stephen; Leskovec, Jure
2018-01-01
Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability. PMID:29770256
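The static building block of TVGL — an ADMM solver for a single sparse inverse covariance (graphical lasso) — can be sketched as follows; the time-varying coupling penalties and message-passing structure of the paper are not reproduced, and the data here are a hypothetical three-variable example:

```python
import numpy as np

def soft_threshold(A, kappa):
    return np.sign(A) * np.maximum(np.abs(A) - kappa, 0.0)

def graphical_lasso_admm(S, lam, rho=1.0, n_iter=200):
    """ADMM for: minimize -logdet(Theta) + tr(S Theta) + lam*||Theta||_1."""
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Theta-update: closed form via eigendecomposition of rho*(Z-U) - S.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (w + np.sqrt(w**2 + 4 * rho)) / (2 * rho)
        Theta = Q @ np.diag(theta_eigs) @ Q.T
        # Z-update: elementwise soft-thresholding promotes sparsity.
        Z = soft_threshold(Theta + U, lam / rho)
        # Dual update.
        U += Theta - Z
    return Z

# Toy data: three variables where only (0, 1) are truly dependent.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
X[:, 1] += 0.9 * X[:, 0]
S = np.cov(X, rowvar=False)
Theta_hat = graphical_lasso_admm(S, lam=0.2)
print(np.round(Theta_hat, 3))
```

The recovered precision matrix has a strong (0, 1) entry and a (near-)zero (0, 2) entry, i.e. the estimated network keeps only the true dependency.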
Chang, Chia-Kai; Wu, Chih-Che; Wang, Yi-Sheng; Chang, Huan-Cheng
2008-05-15
Despite recent advances in phosphopeptide research, detection and characterization of multiply phosphorylated peptides have been a challenge. This work presents a new strategy that not only can effectively extract phosphorylated peptides from complex samples but also can selectively enrich multiphosphorylated peptides for direct matrix-assisted laser desorption/ionization time-of-flight mass spectrometric analysis. Polyarginine-coated diamond nanoparticles are the solid-phase extraction supports used for this purpose. The supports show an exceptionally high affinity for multiphosphorylated peptides due to multiple arginine-phosphate interactions. The efficacy of this method was demonstrated by analyzing a small volume (50 microL) of tryptic digests of proteins such as beta-casein, alpha-casein, and nonfat milk at a concentration as low as 1 × 10^-9 M. This concentration is markedly lower than can be achieved with other currently available technologies. We quantified the enhanced selectivity and detection sensitivity of the method using mixtures composed of mono- and tetraphosphorylated peptide standards. This new affinity-based protocol is expected to find useful applications in characterizing multiple phosphorylation sites on proteins of interest in complex and dilute analytes.
NASA Astrophysics Data System (ADS)
Ghatge, Mayur; Tabrizian, Roozbeh
2018-03-01
A matrix of aluminum-nitride (AlN) waveguides is acoustically engineered to realize electrically isolated phase-synchronous frequency references through nonlinear wave-mixing. AlN rectangular waveguides are cross-coupled through a periodically perforated plate that is engineered to have a wide acoustic bandgap around a desirable frequency (f1 ≈ 509 MHz). While the coupling plate isolates the matrix from resonant vibrations of individual waveguide constituents at f1, it is transparent to the third-order harmonic waves (3f1) that are generated through nonlinear wave-mixing. Therefore, large-signal excitation of the f1 mode in a constituent waveguide generates acoustic waves at 3f1 with an efficiency defined by the elastic anharmonicity of the AlN film. The phase-synchronous propagation of the third harmonic through the matrix is amplified by a high quality-factor resonance mode at f2 ≈ 1529 MHz, which is sufficiently close to 3f1 (f2 ≅ 3f1). Such an architecture enables the realization of frequency-multiplied and phase-synchronous, yet electrically and spectrally isolated, references for multi-band/carrier and spread-spectrum wireless communication systems.
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
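The determination of the free "linear" parameters — least squares under linear equality constraints via a Lagrange multiplier formulation — can be sketched on a small hypothetical problem (the data and constraint below are illustrative, not the paper's aerodynamic system):

```python
import numpy as np

# Minimize ||A x - b||^2 subject to C x = d, via the Lagrange multiplier
# (KKT) system:
#   [2 A'A  C'] [x ]   [2 A'b]
#   [C      0 ] [nu] = [d    ]
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
C = np.array([[1.0, 1.0, 0.0, 0.0]])  # hypothetical constraint: x0 + x1 = 1
d = np.array([1.0])

n, m = A.shape[1], C.shape[0]
KKT = np.block([[2 * A.T @ A, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([2 * A.T @ b, d])
x = np.linalg.solve(KKT, rhs)[:n]

print("x:", x.round(4), "constraint residual:", C @ x - d)
```

The multiplier block enforces the constraints exactly while the least-squares block fits the remaining freedom, which is the role the constraints play in the approximation methods above.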
Volkov basis for simulation of interaction of strong laser pulses and solids
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Li, Yonghui; Varga, Kálmán
2018-01-01
An efficient and accurate basis comprised of Volkov states is implemented and tested for time-dependent simulations of interactions between strong laser pulses and crystalline solids. The Volkov states are eigenstates of the free electron Hamiltonian in an electromagnetic field and analytically represent the rapidly oscillating time-dependence of the orbitals, allowing significantly faster time propagation than conventional approaches. The Volkov approach can be readily implemented in plane-wave codes by multiplying the potential energy matrix elements with a simple time-dependent phase factor.
Pattern classification using charge transfer devices
NASA Technical Reports Server (NTRS)
1980-01-01
The feasibility of using charge transfer devices in the classification of multispectral imagery was investigated by evaluating particular devices to determine their suitability in the matrix multiplication subsystem of a pattern classifier and by designing a prototype of such a system. Particular attention was given to analog-analog correlator devices, which consist of two tapped delay lines, chip multipliers, and a summed output. The design for the classifier and a printed circuit layout for the analog boards were completed, and the boards were fabricated. A test jig for the board was built and checkout was begun.
Varas, Lautaro R; Pontes, F C; Santos, A C F; Coutinho, L H; de Souza, G G B
2015-09-15
The ion-ion-coincidence mass spectrometry technique brings useful information about the fragmentation dynamics of doubly and multiply charged ionic species. We advocate the use of a matrix-parameter methodology in order to represent and interpret the entire ion-ion spectra associated with the ionic dissociation of doubly charged molecules. This method makes it possible, among other things, to infer fragmentation processes and to extract information about overlapped ion-ion coincidences; this important information is difficult to obtain with other previously described methodologies. A Wiley-McLaren time-of-flight mass spectrometer was used to discriminate the positively charged fragment ions resulting from sample ionization by a pulsed 800 eV electron beam. We exemplify the application of this methodology by analyzing the fragmentation and ionic dissociation of the dimethyl disulfide (DMDS) molecule as induced by fast electrons. The doubly charged dissociation was analyzed using the Multivariate Normal Distribution. The ion-ion spectrum of the DMDS molecule was obtained at an incident electron energy of 800 eV and was represented in matrix form using Multivariate Normal Distribution theory. The proposed methodology allows us to distinguish information among [CHnSHn]+/[CH3]+ (n = 1-3) fragment ions in the ion-ion coincidence spectra. Using the momenta-balance methodology for the inferred parameters, a secondary decay mechanism is proposed for the formation of the [CHS]+ ion. As an additional check on the methodology, previously published data on the SiF4 molecule were re-analyzed with the present methodology and the results were shown to be statistically equivalent. The use of a Multivariate Normal Distribution allows the representation of the whole ion-ion mass spectrum of doubly or multiply ionized molecules as a combination of parameters and the extraction of information from overlapped data.
We have successfully applied this methodology to the analysis of the fragmentation of the DMDS molecule. Copyright © 2015 John Wiley & Sons, Ltd.
Floating-Point Units and Algorithms for field-programmable gate arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, Keith D.; Hemmert, K. Scott
2005-11-01
The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square-root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix-vector multiply, and dot product. Together, the collection can be thought of as an implementation of parts of the BLAS library, or as something similar to the FFTW package (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication discussing the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL, and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool.
These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and construct the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.
MULTIPLY: Development of a European HSRL Airborne Facility
NASA Astrophysics Data System (ADS)
Binietoglou, Ioannis; Serikov, Ilya; Nicolae, Doina; Amiridis, Vassillis; Belegante, Livio; Boscornea, Andrea; Brugmann, Bjorn; Costa Suros, Montserrat; Hellmann, David; Kokkalis, Panagiotis; Linne, Holger; Stachlewska, Iwona; Vajaiac, Sorin-Nicolae
2016-08-01
MULTIPLY is a novel airborne high spectral resolution lidar (HSRL) currently under development by a consortium of European institutions from Romania, Germany, Greece, and Poland. Its aim is to contribute to calibration and validation activities of upcoming ESA aerosol-sensing missions such as ADM-Aeolus, EarthCARE, and Sentinel-3/-4/-5/-5P, which include products related to atmospheric aerosols. The effectiveness of these missions depends on independent airborne measurements to develop and test the retrieval methods and to validate mission products following launch. The aim of ESA's MULTIPLY project is to design, develop, and test a multi-wavelength depolarization HSRL for airborne applications. The MULTIPLY lidar will deliver aerosol extinction and backscatter coefficient profiles at three wavelengths (355 nm, 532 nm, 1064 nm), as well as profiles of aerosol intensive parameters (Ångström exponents, extinction-to-backscatter ratios, and linear particle depolarization ratios).
Evaluation of multiplier effect of housing investments in the city economy
NASA Astrophysics Data System (ADS)
Ovsiannikova, T.; Rabtsevich, O.; Yugova, I.
2017-01-01
This study evaluates the role and significance of housing investment in ensuring stable social and economic development of a city, and substantiates the multiplier impact of investment in housing construction on all sectors of the urban economy. Growth in housing investment generates a multiplier effect that triggers the development of other interrelated sectors. The paper proposes an approach, developed by the authors, to evaluating the level of city development: the gross city product is defined on the basis of an integral criterion, the gross value added of the types of economic activity in the city economy. The algorithm by which gross value added is generated in the urban economy as a result of the multiplier effect of housing investment is presented, and the effect is evaluated for the city of Tomsk (Russia). The study reveals that the multiplier effect yields four rubles of added value in the city economy for every ruble of housing investment. The methods used include those of the System of National Accounts as well as statistical and structural analysis. Priority investment in housing construction is shown to be a key factor of stable social and economic development of the city. The developed approach is intended to justify priority directions in municipal and regional investment policy, and can be applied by city and regional governing bodies and by potential investors.
Aho, Johnathon M; Dietz, Allan B; Radel, Darcie J; Butler, Greg W; Thomas, Mathew; Nelson, Timothy J; Carlsen, Brian T; Cassivi, Stephen D; Resch, Zachary T; Faubion, William A; Wigle, Dennis A
2016-10-01
Management of recurrent bronchopleural fistula (BPF) after pneumonectomy remains a challenge. Although a variety of devices and techniques have been described, definitive management usually involves closure of the fistula tract through surgical intervention. Standard surgical approaches for BPF incur significant morbidity and mortality and are not reliably or uniformly successful. We describe the first-in-human application of an autologous mesenchymal stem cell (MSC)-seeded matrix graft to repair a multiply recurrent postpneumonectomy BPF. Adipose-derived MSCs were isolated from patient abdominal adipose tissue, expanded, and seeded onto bio-absorbable mesh, which was surgically implanted at the site of the BPF. Clinical follow-up and postprocedural radiological and bronchoscopic imaging were performed to ensure BPF closure, and in vitro stemness characterization of patient-specific MSCs was performed. The patient remained clinically asymptomatic without evidence of recurrence on bronchoscopy at 3 months, computed tomographic imaging at 16 months, and clinical follow-up of 1.5 years. There was no evidence of malignant degeneration of MSC populations in situ, and the patient-derived MSCs were capable of differentiating into adipocytes, chondrocytes, and osteocytes using established protocols. Isolation and expansion of autologous MSCs derived from patients in a malnourished, deconditioned state is possible. Successful closure and safety data for this approach suggest the potential for an expanded study of the role of autologous MSCs in regenerative surgical applications for BPF. Bronchopleural fistula is a severe complication of pulmonary resection. Current management is not reliably successful. This work describes the first-in-human application of an autologous mesenchymal stem cell (MSC)-seeded matrix graft to the repair of a large, multiply recurrent postpneumonectomy BPF.
Clinical follow-up of 1.5 years without recurrence suggests initial safety and feasibility of this approach. Further assessment of MSC grafts in these difficult clinical scenarios requires expanded study. ©AlphaMed Press.
Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.
Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles
2009-01-01
Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.
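The basic closure — the stress tensor as the matrix exponential of the resolved velocity-gradient tensor multiplied by its transpose — can be sketched with scipy; the gradient values and the timescale `tau` below are hypothetical, and the normalization a real LES model would carry is omitted:

```python
import numpy as np
from scipy.linalg import expm

# Resolved velocity-gradient tensor A (trace-free for incompressible flow;
# hypothetical values) and an assumed decorrelation timescale tau.
A = np.array([[0.1, 0.5, 0.0],
              [0.0, -0.3, 0.2],
              [0.1, 0.0, 0.2]])
tau = 0.5

# Matrix-exponential closure: stress proportional to expm(tau*A) @ expm(tau*A').
E = expm(tau * A)
stress = E @ E.T

# Short-time expansion: expm(tau*A) expm(tau*A') ~ I + tau*(A + A') + O(tau^2),
# i.e. the identity plus an eddy-viscosity-like strain-rate term, which is the
# reinterpretation of traditional closures mentioned in the abstract.
approx = np.eye(3) + tau * (A + A.T)
print("closure:\n", stress.round(4))
print("max deviation from short-time expansion:", np.abs(stress - approx).max())
```

The closure is symmetric positive definite by construction, a property eddy-viscosity models do not automatically guarantee.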
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
Improved Joining of Metal Components to Composite Structures
NASA Technical Reports Server (NTRS)
Semmes, Edmund
2009-01-01
Systems requirements for complex spacecraft drive design requirements that lead to structures, components, and/or enclosures of a multi-material and multifunctional design. The varying physical properties of aluminum, tungsten, Invar, or other high-grade aerospace metals, when utilized in conjunction with lightweight composites, multiply the available system-level solutions. These multi-material designs are largely dependent upon effective joining techniques. An improved method of joining metal components to matrix/fiber composite material structures has been invented. The method is particularly applicable to equipping such thin-wall polymer-matrix composite (PMC) structures as tanks with flanges, ceramic matrix composite (CMC) liners for high-heat engine nozzles, and other metallic-to-composite attachments. The method is oriented toward new architectures and distributing mechanical loads as widely as possible in the vicinities of attachment locations to prevent excessive concentrations of stresses that could give rise to delaminations, debonds, leaks, and other failures. The method in its most basic form can be summarized as follows: A metal component is to be joined to a designated attachment area on a composite-material structure. In preparation for joining, the metal component is fabricated to include multiple studs projecting from the joining face. Also in preparation for joining, holes just wide enough to accept the studs are molded into, drilled, or otherwise formed in the corresponding locations in the designated attachment area of the uncured ("wet") composite structure. The metal component is brought together with the uncured composite structure so that the studs become firmly seated in the holes, thereby causing the composite material to become intertwined with the metal component in the joining area. Alternately, it is proposed to utilize other mechanical attachment schemes whereby the uncured composite and metallic parts are joined with "z-direction" fasteners.
The resulting "wet" assembly is then subjected to the composite-curing heat treatment, becoming a unitary structure. It should be noted that this new art will require different techniques for CMCs versus PMCs, but the final architecture and companion curing philosophy are the same. For instance, a chemical vapor infiltration (CVI) fabrication technique may require special integration of the pre-form and
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
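For reference, the complete-data version of the scalar method — a global F statistic computed from the coefficient of determination alone, with no vector or matrix manipulation — looks as follows; the multiple-imputation combining rules are the paper's contribution and are not reproduced, and the regression data below are simulated:

```python
import numpy as np

def global_f_from_r2(r2, n, k):
    """Global F-test of H0: all k slope coefficients are zero,
    computed from the coefficient of determination alone."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

# Toy regression: y depends on the first predictor only; fit with k = 2 predictors.
rng = np.random.default_rng(3)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
y = 2.0 + 1.5 * X[:, 1] + rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
F = global_f_from_r2(r2, n, k)
print(f"R^2 = {r2:.3f}, F({k}, {n - k - 1}) = {F:.2f}")
```

Local and partial tests follow the same pattern using the R² values of full and reduced models, which is what makes the approach computationally light across imputed datasets.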
From progressive to finite deformation, and back: the universal deformation matrix
NASA Astrophysics Data System (ADS)
Provost, A.; Buisson, C.; Merle, O.
2003-04-01
It is widely accepted that any finite strain recorded in the field may be interpreted in terms of the simultaneous combination of a pure shear component with one or several simple shear components. To predict strain in geological structures, approximate solutions may be obtained by multiplying successive small increments of each elementary strain component. A more rigorous method consists in achieving the simultaneous combination in the velocity gradient tensor, but solutions already proposed in the literature are valid for special cases only and cannot be used, e.g., for the general combination of a pure shear component and six elementary simple shear components. In this paper, we show that the combination of any strain components is as simple as a mouse click, both analytically and numerically. The finite deformation matrix is given by D = exp(L·Δt), where L·Δt is the time-integrated velocity gradient tensor. This method makes it possible to predict finite strain for any combination of strain components. Reciprocally, L·Δt = ln(D), which allows one to unravel the simplest deformation history that might be responsible for a given finite deformation. Given the strain ellipsoid only, it is still possible to constrain the range of compatible deformation matrices and thus the range of strain component combinations. Interestingly, certain deformation matrices, though geologically sensible, have no real logarithm and so cannot be explained by a deformation history implying strain rate components in constant proportions, which implies significant changes of the stress field during the history of deformation. The study as a whole opens the possibility for further investigations on deformation analysis in general; the method can be used whatever the configuration.
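The D = exp(L·Δt) / L·Δt = ln(D) round trip can be sketched directly with scipy's matrix functions; the velocity-gradient values below (a pure shear plus one simple shear component) are hypothetical:

```python
import numpy as np
from scipy.linalg import expm, logm

# Time-integrated velocity-gradient tensor L*dt combining a pure shear
# (trace-free diagonal) with a simple shear component (hypothetical values).
Ldt = np.array([[0.3, 0.8, 0.0],
                [0.0, -0.3, 0.0],
                [0.0, 0.0, 0.0]])

# Progressive to finite deformation: D = expm(L*dt).
D = expm(Ldt)

# And back: the simplest steady deformation history is recovered with logm.
Ldt_back = logm(D)
print("D =\n", D.round(4))
print("recovered L*dt matches:", np.allclose(Ldt_back, Ldt))
```

A deformation matrix with, e.g., negative real eigenvalues would make `logm` return a complex result — the "no real logarithm" case the abstract identifies as requiring a non-steady stress history.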
NASA Astrophysics Data System (ADS)
Keefe, Laurence
2016-11-01
Parabolized acoustic propagation in transversely inhomogeneous media is described by the operator update equation U(x, y, z + Δz) = exp[ik0Δz(−1 + √(1 + Z))] U(x, y, z) for the evolution of the envelope of a wavetrain solution to the original Helmholtz equation. Here the operator Z = ∇T² + (n² − 1) involves the transverse Laplacian and the refractive index distribution. Standard expansion techniques (on the assumption Z ≪ 1) produce PDEs that approximate, to greater or lesser extent, the full dispersion relation of the original Helmholtz equation, except that none of them describe evanescent/damped waves without special modifications to the expansion coefficients. Alternatively, a discretization of both the envelope and the operator converts the operator update equation into a matrix multiply, and existing theorems on matrix functions demonstrate that the complete (discrete) Helmholtz dispersion relation, including evanescent/damped waves, is preserved by this discretization. Propagation-constant/damping-rate contour comparisons for the operator equation and various approximations demonstrate this point, and show how poorly the lowest-order, textbook, parabolized equation describes propagation in lined ducts.
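The discretization idea can be sketched in one transverse dimension: build a matrix for Z, evaluate the exact propagator as a single matrix function, and advance the envelope with a plain matrix multiply. The grid sizes, index profile, and the 1/k0² scaling on the discrete Laplacian (needed to make it dimensionless) are assumptions of this sketch:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

# 1-D transverse grid; discretize Z as the scaled transverse Laplacian plus
# the index perturbation (all values hypothetical).
N, dx, k0, dz = 64, 0.1, 2 * np.pi, 0.05
lap = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
n_sq = 1.0 + 0.01 * np.exp(-np.linspace(-3, 3, N) ** 2)  # index profile
Z = lap / k0**2 + np.diag(n_sq - 1.0)

# Exact (discrete) one-way propagator: a single matrix function, not a
# polynomial approximation, so evanescent/damped components are retained
# (the square root turns imaginary where 1 + Z has negative eigenvalues).
P = expm(1j * k0 * dz * (sqrtm(np.eye(N) + Z) - np.eye(N)))

u0 = np.exp(-np.linspace(-3, 3, N) ** 2).astype(complex)  # initial envelope
u1 = P @ u0  # envelope one step downstream: a plain matrix multiply
print("max |u1|:", np.abs(u1).max())
```

Propagating eigencomponents keep unit magnitude while evanescent ones decay, so the step never amplifies the field.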
Calculation of biochemical net reactions and pathways by using matrix operations.
Alberty, R A
1996-01-01
Pathways for net biochemical reactions can be calculated by using a computer program that solves systems of linear equations. The coefficients in the linear equations are the stoichiometric numbers in the biochemical equations for the system. The solution of the system of linear equations is a vector of the stoichiometric numbers of the reactions in the pathway for the net reaction; this is referred to as the pathway vector. The pathway vector gives the number of times the various reactions have to occur to produce the desired net reaction. Net reactions may involve unknown numbers of ATP, ADP, and Pi molecules. The numbers of ATP, ADP, and Pi in a desired net reaction can be calculated in a two-step process. In the first step, the pathway is calculated by solving the system of linear equations for an abbreviated stoichiometric number matrix without ATP, ADP, Pi, NADred, and NADox. In the second step, the stoichiometric numbers in the desired net reaction, which includes ATP, ADP, Pi, NADred, and NADox, are obtained by multiplying the full stoichiometric number matrix by the calculated pathway vector. PMID:8804633
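The two-step procedure described above — solve a linear system for the pathway vector, then multiply the full stoichiometric matrix by it — can be sketched on a hypothetical three-reaction toy pathway (the reactions and the ATP row below are illustrative, not drawn from the paper):

```python
import numpy as np

# Abbreviated stoichiometric matrix (rows = species, columns = reactions of a
# hypothetical 3-reaction pathway A -> B -> C -> D, carriers omitted).
#             r1  r2  r3
A_abbrev = np.array([[-1,  0,  0],   # A
                     [ 1, -1,  0],   # B
                     [ 0,  1, -1],   # C
                     [ 0,  0,  1]])  # D
net = np.array([-1, 0, 0, 1])        # desired net reaction: A -> D

# Step 1: pathway vector = stoichiometric numbers of each reaction needed to
# produce the net reaction (least-squares solution of the linear system).
s, *_ = np.linalg.lstsq(A_abbrev, net, rcond=None)
print("pathway vector:", s.round(6))

# Step 2: the full stoichiometric matrix adds a carrier row (a hypothetical
# "ATP consumed by r1" row here); multiplying it by the pathway vector yields
# the net reaction including the carrier species.
A_full = np.vstack([A_abbrev, [-1, 0, 0]])
print("full net reaction:", A_full @ s)
```

The last entry of the result gives the number of carrier molecules (here one ATP consumed) implied by the pathway, which is exactly the quantity the two-step procedure is designed to recover.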
Asymmetric color image encryption based on singular value decomposition
NASA Astrophysics Data System (ADS)
Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping
2017-02-01
A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed. The original color image is encrypted into a ciphertext shown as an indexed image. The red, green and blue components of the color image are encoded into a complex function which is then separated into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while implementing phase-truncation. The diagonal entries of the three diagonal matrices of the SVD results are extracted, scrambled, and combined to construct the colormap of the ciphertext. Thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from the phase-truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm, and the security of the proposed system is analyzed.
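The roles the SVD factors play in the scheme can be sketched on a toy complex matrix (a hypothetical stand-in for the encoded color channels; the phase-truncation and scrambling steps of the actual scheme are omitted):

```python
import numpy as np

# Toy "encoded image": a complex matrix standing in for the R/G/B encoding.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

U, s, Vh = np.linalg.svd(A)

# Ciphertext-like data matrix from the product of the unitary factors; the
# singular values play the role of the colormap/private key and are withheld.
cipher = U @ Vh

# Decryption requires the withheld singular values: reinsert them.
recovered = U @ np.diag(s) @ Vh
print("recovered matches original:", np.allclose(recovered, A))
```

Because `cipher` is a product of unitary matrices it carries no amplitude information at all — every singular value of the ciphertext is 1 — which illustrates why the singular values can serve as a compact private key.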
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few possibilities. A practical suboptimal controller is proposed with the nice property of mean-square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean-square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
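The deterministic backbone of the N-horizon problem — the backward Riccati recursion for discrete LQR — can be sketched as follows; the random-sampling averaging that is the thesis's contribution is not reproduced, and the double-integrator system below is a hypothetical example with a fixed unit sampling period:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion for the N-horizon discrete LQR problem."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        # Gain and cost-to-go updates, stepping backward from the horizon.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains)), P

# Hypothetical double integrator with unit (deterministic) sampling period.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

gains, P = finite_horizon_lqr(A, B, Q, R, N=50)
K0 = gains[0]  # gain applied at the initial time
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K0)))
print("spectral radius of closed loop:", rho)
```

For a long horizon the initial gain approaches the stationary infinite-horizon gain, and the closed-loop spectral radius below one is the deterministic counterpart of the mean-square stability established in the thesis.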
Akiyama, Naotaro; Yamamoto-Fukuda, Tomomi; Takahashi, Haruo; Koji, Takehiko
2013-01-01
Middle-ear mucosa maintains middle-ear pressure. However, the majority of surgical cases exhibit inadequate middle-ear mucosal regeneration, and mucosal transplantation is necessary in such cases. The aim of the present study was to assess the feasibility of transplantation of isolated mucosal cells encapsulated within synthetic self-assembling peptide nanofiber scaffolds using PuraMatrix, which has been successfully used as scaffolding in tissue engineering, for the repair of damaged middle-ear. Middle-ear bullae with mucosa were removed from Sprague Dawley (SD) transgenic rats, transfected with enhanced green fluorescent protein (EGFP) transgene and excised into small pieces, then cultured up to the third passage. After surgical elimination of middle-ear mucosa in SD recipient rats, donor cells were encapsulated within PuraMatrix and transplanted into these immunosuppressed rats. Primary cultured cells were positive for pancytokeratin but not for vimentin, and retained the character of middle-ear epithelial cells. A high proportion of EGFP-expressing cells were found in the recipient middle-ear after transplantation with PuraMatrix, but not without PuraMatrix. These cells retained normal morphology and function, as confirmed by histological examination, immunohistochemistry, and electron microscopy, and multiplied to form new epithelial and subepithelial layers together with basement membrane. The present study demonstrated the feasibility of transplantation of cultured middle-ear mucosal epithelial cells encapsulated within PuraMatrix for regeneration of surgically eliminated mucosa of the middle-ear in SD rats. PMID:23926427
Rector, Annabel; Tachezy, Ruth; Van Ranst, Marc
2004-01-01
The discovery of novel viruses has often been accomplished by using hybridization-based methods that necessitate the availability of a previously characterized virus genome probe or knowledge of the viral nucleotide sequence to construct consensus or degenerate PCR primers. In their natural replication cycle, certain viruses employ a rolling-circle mechanism to propagate their circular genomes, and multiply primed rolling-circle amplification (RCA) with φ29 DNA polymerase has recently been applied in the amplification of circular plasmid vectors used in cloning. We employed an isothermal RCA protocol that uses random hexamer primers to amplify the complete genomes of papillomaviruses without the need for prior knowledge of their DNA sequences. We optimized this RCA technique with extracted human papillomavirus type 16 (HPV-16) DNA from W12 cells, using a real-time quantitative PCR assay to determine amplification efficiency, and obtained a 2.4 × 10⁴-fold increase in HPV-16 DNA concentration. We were able to clone the complete HPV-16 genome from this multiply primed RCA product. The optimized protocol was subsequently applied to a bovine fibropapillomatous wart tissue sample. Whereas no papillomavirus DNA could be detected by restriction enzyme digestion of the original sample, multiply primed RCA enabled us to obtain a sufficient amount of papillomavirus DNA for restriction enzyme analysis, cloning, and subsequent sequencing of a novel variant of bovine papillomavirus type 1. The multiply primed RCA method allows the discovery of previously unknown papillomaviruses, and possibly also other circular DNA viruses, without a priori sequence information. PMID:15113879
Deformation analysis of boron/aluminum specimens by moire interferometry
NASA Technical Reports Server (NTRS)
Post, Daniel; Guo, Yifan; Czarnek, Robert
1989-01-01
Whole-field surface deformations were measured for two slotted tension specimens from multi-ply laminates, one with 0 deg fiber orientation in the surface ply and the other with 45 deg orientation. Macromechanical and micromechanical details were revealed using high-sensitivity moire interferometry. Although global deformations of all plies were essentially equal, numerous random or anomalous features were observed. Local deformations of adjacent 0 deg and 45 deg plies were very different, both near the slot and remote from it, requiring large interlaminar shear strains for continuity. Shear strains were concentrated in the aluminum matrix. For 45 deg plies, a major portion of the deformation was by shear; large plastic slip of the matrix occurred at random locations in 45 deg plies, wherein groups of fibers slipped relative to other groups. Shear strains in the interior, between adjacent fibers, were larger than the measured surface strains.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the added ability to estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
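The conventional regularized inversion the abstract refers to can be sketched as a Tikhonov-regularized least-squares solve, where the fixed regularization weight plays the role of the manually set tuning parameter. All dimensions and values below are illustrative, not from ETEX:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 observations, 10 source-term segments.
M = rng.random((50, 10))                     # SRS matrix from a transport model
x_true = np.abs(rng.normal(size=10))         # unknown (non-negative) release rates
y = M @ x_true + 0.01 * rng.normal(size=50)  # observed concentrations + noise

lam = 0.1  # pre-specified regularization weight -- the "tuning parameter"
# Minimize ||y - M x||^2 + lam ||x||^2 via its normal equations.
x_hat = np.linalg.solve(M.T @ M + lam * np.eye(10), M.T @ y)
print(x_hat)
```

The Bayesian formulation in the paper effectively replaces the fixed `lam` with hyperparameters estimated from the measurements by variational Bayes.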
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
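The multiplier estimator and its precision can be sketched numerically. All inputs below are hypothetical, not the Harare estimates; the standard error uses a delta-method approximation with a design-adjusted variance for P:

```python
import math

# Hypothetical inputs for illustration only.
M = 1200      # unique objects distributed (assumed counted without error)
P = 0.30      # proportion in the RDS survey reporting receipt of the object
n = 400       # RDS survey sample size
deff = 2.0    # assumed design effect of the RDS survey

N_hat = M / P                            # multiplier-method size estimate
var_P = deff * P * (1 - P) / n           # design-adjusted variance of P
se_N = (M / P ** 2) * math.sqrt(var_P)   # delta-method standard error of N_hat
ci = (N_hat - 1.96 * se_N, N_hat + 1.96 * se_N)
print(N_hat, ci)
```

The relative error is se_N / N_hat = sqrt(deff * (1 - P) / (n * P)), which falls as P grows; this is consistent with the abstract's advice to use longer reference periods and distribute more unique objects so that P is higher.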
Yu, Xiaoyu; Reva, Oleg N
2018-01-01
Modern phylogenetic studies may benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than the gene-based alternative. However, the computational complexity of current phylogenomic procedures, the inappropriateness of standard phylogenetic tools for processing genome-wide data, and the lack of reliable substitution models suited to alignment-free phylogenomic approaches deter microbiologists from using these opportunities. For example, the super-matrix and super-tree approaches of phylogenomics use multiple integrated genomic loci or individual gene-based trees to infer an overall consensus tree. However, these approaches potentially multiply errors of gene annotation and sequence alignment, not to mention the computational complexity and laboriousness of the methods. In this article, we demonstrate that the annotation- and alignment-free comparison of genome-wide tetranucleotide frequencies, termed oligonucleotide usage patterns (OUPs), allowed a fast and reliable inference of phylogenetic trees. These were congruent with the corresponding whole genome super-matrix trees in terms of tree topology when compared with other known approaches, including 16S ribosomal RNA and GyrA protein sequence comparison, complete genome-based MAUVE, and CVTree methods. A Web-based program to perform the alignment-free OUP-based phylogenomic inferences was implemented at http://swphylo.bi.up.ac.za/. Applicability of the tool was tested on different taxa from subspecies to intergeneric levels. Distinguishing between closely related taxonomic units may be strengthened by providing the program with alignments of marker protein sequences, e.g., GyrA.
NASA Technical Reports Server (NTRS)
Klein, L. R.
1974-01-01
The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.
A Framework for Including Family Health Spillovers in Economic Evaluation
Al-Janabi, Hareth; van Exel, Job; Brouwer, Werner; Coast, Joanna
2016-01-01
Health care interventions may affect the health of patients’ family networks. It has been suggested that these “health spillovers” should be included in economic evaluation, but there is not a systematic method for doing this. In this article, we develop a framework for including health spillovers in economic evaluation. We focus on extra-welfarist economic evaluations where the objective is to maximize health benefits from a health care budget (the “health care perspective”). Our framework involves adapting the conventional cost-effectiveness decision rule to include 2 multiplier effects to internalize the spillover effects. These multiplier effects express the ratio of total health effects (for patients and their family networks) to patient health effects. One multiplier effect is specified for health benefit generated from providing a new intervention, one for health benefit displaced by funding this intervention. We show that using multiplier effects to internalize health spillovers could change the optimal funding decisions and generate additional health benefits to society. PMID:26377370
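The adjusted decision rule can be illustrated with a toy calculation in which the two multiplier effects scale both the health gained by the funded intervention and the health displaced by funding it. All numbers below are hypothetical:

```python
# Hypothetical values, chosen only to illustrate the adjusted decision rule.
k = 20000            # cost-effectiveness threshold: cost per unit of health
cost = 100000        # incremental cost of the new intervention
patient_gain = 6.0   # health gained by patients from the intervention

m_gain = 1.3         # multiplier effect: total/patient health for the new intervention
m_disp = 1.1         # multiplier effect for the health displaced by funding it

displaced = cost / k                           # patient health displaced elsewhere
net_conventional = patient_gain - displaced    # rule ignoring spillovers
net_with_spillovers = m_gain * patient_gain - m_disp * displaced

print(net_conventional, net_with_spillovers)
```

With these numbers the intervention looks better once spillovers are internalized; with a sufficiently large displacement multiplier the comparison can reverse, which is the sense in which health spillovers can change optimal funding decisions.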
Modified Interior Distance Functions (Theory and Methods)
NASA Technical Reports Server (NTRS)
Polyak, Roman A.
1995-01-01
In this paper we introduced and developed the theory of Modified Interior Distance Functions (MIDF's). The MIDF is a Classical Lagrangian (CL) for a constrained optimization problem which is equivalent to the initial one and can be obtained from the latter by a monotone transformation of both the objective function and the constraints. In contrast to the Interior Distance Functions (IDF's), which played a fundamental role in Interior Point Methods (IPM's), the MIDF's are defined on an extended feasible set and, along with the center, have two extra tools which control the computational process: the barrier parameter and the vector of Lagrange multipliers. The extra tools endow the MIDF's with very important properties of Augmented Lagrangians; one can consider the MIDF's as Interior Augmented Lagrangians. This makes MIDF's similar in spirit to Modified Barrier Functions (MBF's), although there is a fundamental difference between them both in theory and methods. Based on MIDF theory, Modified Center Methods (MCM's) have been developed and analyzed. The MCM's find an unconstrained minimizer in primal space and update the Lagrange multipliers, while both the center and the barrier parameter can be fixed or updated at each step. The MCM's convergence was investigated, and their rate of convergence was estimated. The extension of the feasible set and the special role of the Lagrange multipliers make it possible to develop MCM's which, in the case of nondegenerate constrained optimization, produce primal and dual sequences that converge to the primal-dual solution with linear rate, even when both the center and the barrier parameter are fixed. Moreover, every Lagrange multiplier update shrinks the distance to the primal-dual solution by a factor 0 < γ < 1, which can be made as small as one wants by choosing a fixed interior point as a 'center' and a fixed but large enough barrier parameter. The numerical realization of the MCM leads to the Newton MCM (NMCM).
The approximation to the primal minimizer is found by Newton's method, followed by the Lagrange multiplier update. Due to the MCM convergence when both the center and the barrier parameter are fixed, the condition number of the MIDF Hessian and the neighborhood of the primal minimizer where Newton's method is well defined remain stable. This contributes to both the complexity and the numerical stability of the NMCM.
MODA A Framework for Memory Centric Performance Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Sunil; Su, Chun-Yi; White, Amanda M.
2012-06-29
In the age of massive parallelism, the focus of performance analysis has switched from the processor and related structures to the memory and I/O resources. Adapting to this new reality, a performance analysis tool has to provide a way to analyze resource usage to pinpoint existing and potential problems in a given application. This paper provides an overview of the Memory Observant Data Analysis (MODA) tool, a memory-centric tool first implemented on the Cray XMT supercomputer. Throughout the paper, MODA's capabilities have been showcased with experiments done on matrix multiply and Graph-500 application codes.
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomials and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed to reduce such problems to systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented by other techniques, and we show the accuracy and applicability of the new numerical approach through two numerical examples.
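The Lagrange multiplier reduction to an algebraic system can be seen on the simplest possible case: an equality-constrained quadratic program, where stationarity plus feasibility form one linear KKT system. This is a generic sketch of the reduction idea, not of the authors' Chebyshev-Legendre operational matrices:

```python
import numpy as np

# Minimize 0.5 x'Qx + c'x subject to Ax = b.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Lagrangian stationarity (Qx + c + A'lam = 0) and feasibility (Ax = b)
# together form one solvable algebraic (here linear) system in (x, lam).
KKT = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:2], sol[2:]
print(x, lam)
```

Here the minimizer is x = (0, 1) with multiplier lam = 2; in the paper the same mechanism is applied after the operational matrices have discretized the fractional dynamics.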
Soltwisch, Jens; Jaskolla, Thorsten W; Hillenkamp, Franz; Karas, Michael; Dreisewerd, Klaus
2012-08-07
The laser wavelength constitutes a key parameter in ultraviolet-matrix-assisted laser desorption ionization-mass spectrometry (UV-MALDI-MS). Optimal analytical results are only achieved at laser wavelengths that correspond to a high optical absorption of the matrix. In the presented work, the wavelength dependence and the contribution of matrix proton affinity to the MALDI process were investigated. A tunable dye laser was used to examine the wavelength range between 280 and 355 nm. The peptide and matrix ion signals recorded as a function of these irradiation parameters are displayed in the form of heat maps, a data representation that furnishes multidimensional data interpretation. Matrixes with a range of proton affinities from 809 to 866 kJ/mol were investigated. Among those selected are the standard matrixes 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (HCCA) as well as five halogen-substituted cinnamic acid derivatives, including the recently introduced 4-chloro-α-cyanocinnamic acid (ClCCA) and α-cyano-2,4-difluorocinnamic acid (DiFCCA) matrixes. With the exception of DHB, the highest analyte ion signals were obtained toward the red side of the peak optical absorption in the solid state. A stronger decline of the molecular analyte ion signals generated from the matrixes was consistently observed at the low wavelength side of the peak absorption. This effect is mainly the result of increased fragmentation of both analyte and matrix ions. Optimal use of multiply halogenated matrixes requires adjustment of the excitation wavelength to values below that of the standard MALDI lasers emitting at 355 (Nd:YAG) or 337 nm (N₂ laser). The combined data provide new insights into the UV-MALDI desorption/ionization processes and indicate ways to improve the analytical sensitivity.
Wang, L; Rokhlin, S I
2002-09-01
An inversion method based on Floquet wave velocity in a periodic medium has been introduced to determine the single-ply elastic moduli of a multi-ply composite. The stability of this algorithm is demonstrated by numerical simulation. The applicability of the plane wave approximation to the velocity measurement in the double-through-transmission self-reference method has been analyzed using a time-domain beam model. The analysis shows that the finite width of the transmitter affects only the amplitudes of the signals and has almost no effect on the time delay. Using this method, the ply moduli for a multi-ply composite have been experimentally determined. While the paper focuses on elastic constant reconstruction from phase velocity measurements by the self-reference double-through-transmission method, the reconstruction methodology is also applicable to assessment of data collected by other methods.
Estimation of the size of the female sex worker population in Rwanda using three different methods
Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2014-01-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture–recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture–recapture method was 3205 (95% confidence interval: 2998–3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916–2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture–recapture, enumeration, and multiplier methods. The capture–recapture and enumeration methods provided similar estimates of female sex worker in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. PMID:25336306
MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.
Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing
2015-11-01
Learning from multiview data arises in many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, it is important but very challenging to remove redundant features in multiview feature selection. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lasso models at the view level, we focus on sample-level significance and introduce pattern-specific weights into MRM-Lasso. The weights are utilized to measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is successfully captured by learning a low-rank matrix consisting of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso have better multiview classification performance than the baselines. Moreover, pattern-specific weights are demonstrated to be significant for learning from multiview data, compared with view-specific weights.
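The optimizer class used here, the alternating direction method of multipliers (ADMM), can be sketched on plain Lasso: a ridge-like solve, a soft-thresholding step, and a multiplier (dual) update alternate until consensus. This illustrates ADMM itself, not the MRM-Lasso updates; all data are synthetic:

```python
import numpy as np

def admm_lasso(X, y, lam=0.5, rho=1.0, n_iter=200):
    """Solve 0.5||X b - y||^2 + lam ||b||_1 via ADMM (generic sketch)."""
    n, p = X.shape
    z = np.zeros(p)
    u = np.zeros(p)  # scaled dual variable (the "multipliers")
    # Factor once: the beta-step is a fixed ridge-like linear solve.
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(p))
    for _ in range(n_iter):
        rhs = X.T @ y + rho * (z - u)
        beta = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        v = beta + u
        # z-step: soft-thresholding, the proximal operator of the l1 norm.
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u += beta - z  # dual ascent on the consensus constraint beta = z
    return z

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, -2.0, 1.5]           # sparse ground truth
y = X @ beta_true + 0.01 * rng.normal(size=100)
beta_hat = admm_lasso(X, y)
print(beta_hat)
```

The sparse iterate `z` is returned because soft-thresholding sets small coefficients exactly to zero, which is what makes the method a feature selector.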
When will Low-Contrast Features be Visible in a STEM X-Ray Spectrum Image?
Parish, Chad M
2015-06-01
When will a small or low-contrast feature, such as an embedded second-phase particle, be visible in a scanning transmission electron microscopy (STEM) X-ray map? This work illustrates a computationally inexpensive method to simulate X-ray maps and spectrum images (SIs), based upon the equations of X-ray generation and detection. To particularize the general procedure, the example of a nanostructured ferritic alloy (NFA) containing nm-sized Y2Ti2O7 precipitates embedded in a ferritic stainless steel matrix is chosen. The proposed model produces physically realistic simulated SI data sets, which can either be reduced to X-ray dot maps or analyzed via multivariate statistical analysis. NFA X-ray maps acquired using three different STEM instruments match the generated simulations quite well, despite the large number of simplifying assumptions used. A figure of merit, electron dose multiplied by X-ray collection solid angle, is proposed to compare feature detectability from one data set (simulated or experimental) to another. The proposed method can scope out which experiments are feasible under specific analysis conditions on a given microscope. Future applications, such as spallation proton-neutron irradiations, core-shell nanoparticles, and dopants in polycrystalline photovoltaic solar cells, are proposed.
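The proposed figure of merit, electron dose multiplied by X-ray collection solid angle, can be computed directly to compare detectability across acquisition setups. All parameter values below are invented for illustration:

```python
# Figure of merit for comparing feature detectability between sessions:
# total electron dose times X-ray collection solid angle.
def figure_of_merit(probe_current_A, dwell_s, pixels, solid_angle_sr):
    electrons = probe_current_A * dwell_s * pixels / 1.602e-19  # total dose
    return electrons * solid_angle_sr

# Hypothetical sessions: smaller probe current but a larger detector array.
fom_a = figure_of_merit(1.0e-9, 10e-6, 256 * 256, 0.7)
fom_b = figure_of_merit(0.5e-9, 10e-6, 256 * 256, 1.2)
print(fom_b / fom_a)
```

Here the ratio is below one: halving the probe current costs more detectability than the larger solid angle recovers, the kind of trade-off the figure of merit is meant to expose before an experiment is run.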
Analysis of pulse thermography using similarities between wave and diffusion propagation
NASA Astrophysics Data System (ADS)
Gershenson, M.
2017-05-01
Pulse thermography, or thermal wave imaging, is commonly used as a nondestructive evaluation (NDE) method. While the technique has evolved over time, its theoretical interpretation lags behind, still relying on curve fitting on a log-log scale. A new approach based directly on the governing differential equation is introduced. By using relationships between wave propagation and the diffusive propagation of thermal excitation, it is shown that one can transform solutions of one type of propagation into the other. The method is based on the similarities between the Laplace transforms of the diffusion equation and the wave equation: for diffusive propagation the Laplace variable s appears to the first power, while for wave propagation similar equations occur with s². For discrete time, the transformation between the domains is performed by multiplying the temperature data vector by a matrix; the transform is local. The performance of the technique is tested on synthetic data. The application of common back-projection techniques used in the processing of wave data is also demonstrated. The combined use of the transform and back projection makes it possible to improve both the depth and lateral resolution of transient thermography.
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unsal, Diclehan; Uner, Aytug; Akyurek, Nalan
2007-01-01
Purpose: To analyze whether the expression of matrix metalloproteinases (MMPs) and their tissue inhibitors are associated with tumor response to preoperative chemoradiotherapy in rectal cancer patients. Methods and Materials: Forty-four patients who had undergone preoperative chemoradiotherapy were evaluated retrospectively. Treatment consisted of pelvic radiotherapy and two cycles of 5-fluorouracil plus leucovorin. Surgery was performed 6-8 weeks later. MMP-2, MMP-9, and tissue inhibitors of metalloproteinase-1 and -2 expression was analyzed by immunohistochemistry of the preradiation biopsy and surgical specimens. The intensity and extent of staining were evaluated separately, and a final score was calculated by multiplying the two scores. The primary endpoint was the correlation of expression with tumor response; the secondary endpoint was the effect of chemoradiotherapy on the expression. Results: Preoperative treatment resulted in downstaging in 20 patients (45%) and no clinical response in 24 (55%). The pathologic tumor response was complete in 11 patients (25%), partial in 23 (52%), and none in 10 (23%). Positive MMP-9 staining was observed in 20 tumors (45%) and was associated with the clinical nodal stage (p = 0.035) and the pathologic and clinical response (p < 0.0001). The staining status of the other markers was associated with neither stage nor response. The overall pathologic response rate was 25% in MMP-9-positive patients vs. 52% in MMP-9-negative patients (p = 0.001). None of the 11 patients with pathologic complete remission was MMP-9 positive. Conclusions: Matrix metalloproteinase-9 expression correlated with a poor tumor response to preoperative chemoradiotherapy in rectal carcinoma patients.
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2005-01-01
The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.
Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H
2014-01-01
A wide interest has been observed in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for the DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement the multiplier-less method for fixed constant multiplication of DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. Furthermore, the proposed technique improves on recent well-known methods, while maintaining accuracy, in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively.
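The CSD recoding underlying the multiplier-less method can be sketched in a few lines. CSD rewrites a constant with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, so a fixed multiplication reduces to a minimal chain of shifts and adds/subtracts. This is a sketch of the general technique, not the paper's hardware design:

```python
# Canonical Signed Digit (CSD) recoding and shift-and-add constant
# multiplication -- the basis of multiplier-less DCT coefficient scaling.

def to_csd(c):
    """Return CSD digits of a positive integer, least significant first."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c % 4)      # +1 if c = 1 (mod 4), -1 if c = 3 (mod 4)
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def csd_multiply(x, c):
    """Multiply x by the constant c using only shifts and adds/subtracts."""
    acc = 0
    for shift, d in enumerate(to_csd(c)):
        if d:
            acc += d * (x << shift)
    return acc
```

For example, 7 recodes as 8 - 1 (digits [-1, 0, 0, 1]), so multiplying by 7 costs one shift and one subtraction instead of two additions.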
A flow-cytometry-based method for detecting simultaneously five allergens in a complex food matrix.
Otto, Gaetan; Lamote, Amandine; Deckers, Elise; Dumont, Valery; Delahaut, Philippe; Scippo, Marie-Louise; Pleck, Jessica; Hillairet, Caroline; Gillard, Nathalie
2016-12-01
To avoid carry-over contamination with allergens, food manufacturers implement quality control strategies relying primarily on detection of allergenic proteins by ELISA. Although sensitive and specific, this method detects only one allergen per analysis, so effective control policies have relied on multiplying the number of tests performed in order to cover the whole range of allergens. We present in this work an immunoassay for the simultaneous detection of milk, egg, peanut, mustard, and crustaceans in cookie samples. The method is based on a combination of flow cytometry with competitive ELISA, where microbeads are used as the sorbent surface. The test was able to detect the presence of the five allergens with median inhibitory concentrations (IC50) ranging from 2.5 to 15 mg/kg, depending on the allergen to be detected. The lowest concentrations of contaminants inducing a significant difference of signal between non-contaminated controls and test samples were 2 mg/kg of peanut, 5 mg/kg of crustaceans, 5 mg/kg of milk, 5 mg/kg of mustard, and 10 mg/kg of egg. Assay sensitivity was influenced by the concentration of primary antibodies added to the sample extract for the competition and by the concentration of allergenic proteins bound to the surface of the microbeads.
A fictitious domain approach for the Stokes problem based on the extended finite element method
NASA Astrophysics Data System (ADS)
Court, Sébastien; Fournié, Michel; Lozinski, Alexei
2014-01-01
In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not conform to the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid the inf-sup condition between the spaces for the velocity and the Lagrange multiplier. A convergence analysis is given, and several numerical tests are performed to illustrate the capabilities of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parzen, George
It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 x 6 matrix, R. R will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 x 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one-period transfer matrix. This set of equations also allows the linear parameters, the β_i, α_i, i = 1, 3, for the uncoupled coordinates, to be computed from the one-period transfer matrix. An alternative procedure for computing the linear parameters β_i, α_i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given, which depends on computing the eigenvectors of the one-period transfer matrix. These results can be used in a tracking program, where the one-period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α_i and β_i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.
Rate of rotation measurement using back-EMFS associated with windings of a brushless DC motor
NASA Technical Reports Server (NTRS)
Howard, David E. (Inventor)
2000-01-01
A system and method are provided for measuring rate of rotation. A brushless DC motor is rotated and produces a back electromagnetic force (emf) on each winding thereof. Each winding's back-emf is integrated and multiplied by the back-emf associated with an adjacent winding. The multiplied outputs associated with each winding are combined to produce a directionally sensitive DC output proportional only to the rate of rotation of the motor's shaft.
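For an idealized two-winding (quadrature) motor with sinusoidal back-EMFs and ideal integrators, the integrate-and-cross-multiply combination described above yields a DC output proportional to shaft speed, with sign carrying the direction. The winding model, constants, and the two-winding simplification are assumptions for illustration, not the patent's circuit:

```python
import math

# Idealized quadrature model: e_a and e_b are back-EMFs of two windings
# 90 degrees apart; int_a, int_b are their ideal integrals (fluxes).
# Cross-multiplying each integral with the other winding's back-EMF and
# combining gives k^2 * omega * (sin^2 + cos^2) = k^2 * omega: a DC value
# proportional to rotation rate. (Hypothetical constants.)
k, omega = 0.05, 120.0   # back-EMF constant, shaft speed in rad/s

def rate_output(t):
    theta = omega * t
    e_a = k * omega * math.cos(theta)   # back-EMF, winding A
    e_b = k * omega * math.sin(theta)   # back-EMF, winding B
    int_a = k * math.sin(theta)         # ideal integral of e_a
    int_b = -k * math.cos(theta)        # ideal integral of e_b
    return int_a * e_b - int_b * e_a    # = k**2 * omega at every instant
```

The output is constant in time and flips sign if omega does, matching the "directionally sensitive DC output" claim.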
Aggregation, adsorption, and surface properties of multiply end-functionalized polystyrenes.
Ansari, Imtiyaz A; Clarke, Nigel; Hutchings, Lian R; Pillay-Narrainen, Amilcar; Terry, Ann E; Thompson, Richard L; Webster, John R P
2007-04-10
The properties of polystyrene blends containing deuteriopolystyrene, multiply end-functionalized with C8F17 fluorocarbon groups, are strikingly analogous to those of surfactants in solution. These materials, denoted FxdPSy, where x is the number of fluorocarbon groups and y is the molecular weight of the dPS chain in kg/mol, were blended with unfunctionalized polystyrene, hPS. Nuclear reaction analysis experiments show that FxdPSy polymers adsorb spontaneously to solution and blend surfaces, resulting in a reduction in surface energy inferred from contact angle analysis. Aggregation of functionalized polymers in the bulk was found to be sensitive to FxdPSy structure and closely related to surface properties. At low concentrations, the functionalized polymers are freely dispersed in the hPS matrix, and in this range, the surface excess concentration grows sharply with increasing bulk concentration. At higher concentrations, surface excess concentrations and contact angles reach a plateau, and small-angle neutron scattering data indicate small micellar aggregates of six to seven F2dPS10 polymer chains and much larger aggregates of F4dPS10. Whereas F2dPS10 aggregates are miscible with the hPS matrix, F4dPS10 forms a separate phase of multilamellar vesicles. Using neutron reflectometry (NR), we found that the extent of the adsorbed layer was approximately half the lamellar spacing of the multilamellar vesicles. NR data were fitted using an error function profile to describe the concentration profile of the adsorbed layer, and reasonable agreement was found with concentration profiles predicted by the SCFT model. The thermodynamic sticking energy of the fluorocarbon-functionalized polymer chains to the blend surface increases from 5.3 kBT for x = 2 to 6.6 kBT for x = 4 but appears to be somewhat dependent upon the blend concentration.
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between constrained node pairs is enforced using Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
Estimating the number of female sex workers in Côte d'Ivoire: results and lessons learned.
Vuylsteke, Bea; Sika, Lazare; Semdé, Gisèle; Anoma, Camille; Kacou, Elise; Laga, Marie
2017-09-01
To report on the results of three size estimations of the populations of female sex workers (FSW) in five cities in Côte d'Ivoire and on operational lessons learned, which may be relevant for key population programmes in other parts of the world. We applied three methods: mapping and census, capture-recapture and service multiplier. All were applied between 2008 and 2009 in Abidjan, San Pedro, Bouaké, Yamoussoukro and Abengourou. Abidjan was the city with the highest number of FSW by far, with estimations between 7880 (census) and 13 714 (service multiplier). The estimations in San Pedro, Bouaké and Yamoussoukro were very similar, with figures ranging from 1160 (Yamoussoukro, census) to 1916 (San Pedro, capture-recapture). Important operational lessons were learned, including strategies for mapping, the importance of involving peer sex workers for implementing the capture-recapture and the identification of the right question for the multiplier method. Successful application of three methods to estimate the population size of FSW in five cities in Côte d'Ivoire enabled us to make recommendations for size estimations of key population in low-income countries. © 2017 John Wiley & Sons Ltd.
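The three estimation methods reduce to simple formulas. The figures below are illustrative round numbers, not the study's actual counts; capture-recapture here is the basic Lincoln-Petersen form:

```python
# Two standard population-size estimators used in key-population studies.

def lincoln_petersen(marked_first, caught_second, recaptured):
    """Capture-recapture: N ~= M * C / R, where M were 'marked' in a first
    round, C were encountered in a second round, and R appeared in both."""
    return marked_first * caught_second / recaptured

def service_multiplier(service_users, proportion_reporting_service):
    """Service multiplier: N ~= (unique count from a service's records)
    divided by the survey-estimated proportion who used that service."""
    return service_users / proportion_reporting_service

est1 = lincoln_petersen(400, 500, 125)   # hypothetical counts -> 1600.0
est2 = service_multiplier(480, 0.25)     # hypothetical counts -> 1920.0
```

The census method needs no formula: it is a direct mapping-and-count of the population at identified venues.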
NASA Astrophysics Data System (ADS)
Feng, Xueshang; Li, Caixia; Xiang, Changqing; Zhang, Man; Li, HuiChao; Wei, Fengsi
2017-11-01
A second-order path-conservative scheme with a Godunov-type finite-volume method has been implemented to advance the equations of single-fluid solar wind plasma magnetohydrodynamics (MHD) in time. This code operates on the six-component composite grid system in three-dimensional spherical coordinates with hexahedral cells of quadrilateral frustum type. The generalized Osher-Solomon Riemann solver is employed based on a numerical integration of the path-dependent dissipation matrix. For simplicity, the straight line segment path is used, and the path integral is evaluated in a fully numerical way by a high-order numerical Gauss-Legendre quadrature. Besides its very close similarity to Godunov type, the resulting scheme retains the attractive features of the original solver: it is nonlinear, free of entropy-fix, differentiable, and complete, in that each characteristic field results in a different numerical viscosity, due to the full use of the MHD eigenstructure. By using a minmod limiter for spatial oscillation control, the path-conservative scheme is realized for the generalized Lagrange multiplier and the extended generalized Lagrange multiplier formulation of solar wind MHD systems. This new model that is second order in space and time is written in the FORTRAN language with Message Passing Interface parallelization and validated in modeling the time-dependent large-scale structure of the solar corona, driven continuously by Global Oscillation Network Group data. To demonstrate the suitability of our code for the simulation of solar wind, we present selected results from 2009 October 9 to 2009 December 29 that show its capability of producing a structured solar corona in agreement with solar coronal observations.
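The minmod limiter mentioned for spatial oscillation control is a small, standard function: given the two one-sided slopes of a cell, it returns the smaller-magnitude slope when they agree in sign and zero at an extremum, which is what suppresses spurious oscillations in a second-order reconstruction. A minimal sketch (the scheme itself is far more involved):

```python
# minmod slope limiter for second-order finite-volume reconstruction.

def minmod(a, b):
    """Smaller-magnitude slope if a and b share a sign, else 0."""
    if a > 0 and b > 0:
        return min(a, b)
    if a < 0 and b < 0:
        return max(a, b)
    return 0.0

# Limited slope in cell i from neighboring cell averages u[i-1], u[i], u[i+1]:
def limited_slope(u, i):
    return minmod(u[i] - u[i - 1], u[i + 1] - u[i])
```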
Data-Driven Modeling of Solar Corona by a New 3D Path-Conservative Osher-Solomon MHD Model
NASA Astrophysics Data System (ADS)
Feng, X. S.; Li, C.
2017-12-01
A second-order path-conservative scheme with a Godunov-type finite volume method (FVM) has been implemented to advance the equations of single-fluid solar wind plasma magnetohydrodynamics (MHD) in time. This code operates on the six-component composite grid system in 3D spherical coordinates with hexahedral cells of quadrilateral frustum type. The generalized Osher-Solomon Riemann solver is employed based on a numerical integration of the path-dependent dissipation matrix. For simplicity, the straight line segment path is used, and the path integral is evaluated in a fully numerical way by high-order numerical Gauss-Legendre quadrature. Besides its very close similarity to Godunov type, the resulting scheme retains the attractive features of the original solver: it is nonlinear, free of entropy-fix, differentiable, and complete, in that each characteristic field results in a different numerical viscosity, due to the full use of the MHD eigenstructure. By using a minmod limiter for spatial oscillation control, the path-conservative scheme is realized for the generalized Lagrange multiplier (GLM) and the extended generalized Lagrange multiplier (EGLM) formulation of solar wind MHD systems. This new model, second order in space and time, is written in the FORTRAN language with Message Passing Interface (MPI) parallelization, and validated in modeling the time-dependent large-scale structure of the solar corona, driven continuously by Global Oscillation Network Group (GONG) data. To demonstrate the suitability of our code for the simulation of solar wind, we present selected results from 2009 October 9 to 2009 December 29 and from the year 2008 to show its capability of producing structured solar wind in agreement with the observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity showed improved reconstruction quality over the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provided more accurate material compositions than the standard methods without the DRF. Moreover, the proposed method with linear data fidelity showed improved reconstruction quality over the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
Time reversal optical tomography locates fluorescent targets in a turbid medium
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Gayen, S. K.
2013-03-01
A fluorescence optical tomography approach that extends time reversal optical tomography (TROT) to locate fluorescent targets embedded in a turbid medium is introduced. It uses a multi-source illumination and multi-detector signal acquisition scheme, along with the TR matrix formalism, and multiple signal classification (MUSIC) to construct a pseudo-image of the targets. The samples consisted of a single or two small tubes filled with a water solution of Indocyanine Green (ICG) dye as targets embedded in a 250 mm × 250 mm × 60 mm rectangular cell filled with Intralipid-20% suspension as the scattering medium. The ICG concentration was 1 μM, and the Intralipid-20% concentration was adjusted to provide ~1-mm transport length for both the excitation wavelength of 790 nm and the fluorescence wavelength around 825 nm. The data matrix was constructed using the diffusely transmitted fluorescence signals for all scan positions, and the TR matrix was constructed by multiplying the data matrix by its transpose. A pseudo spectrum was calculated using the signal subspace of the TR matrix. Tomographic images were generated using the pseudo spectrum. The peaks in the pseudo images provided locations of the target(s) with sub-millimeter accuracy. Concurrent transmission TROT measurements corroborated the fluorescence-TROT findings. The results demonstrate that TROT is a fast approach that can be used to obtain accurate three-dimensional position information of fluorescence targets embedded deep inside a highly scattering medium, such as a contrast-enhanced tumor in a human breast.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes are multiplications in finite fields (Galois Fields), where extension fields of prime fields are used. A lot of architectures for multiplications in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
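The core operation those multiplier architectures implement is shift-and-reduce multiplication in a binary extension field GF(2^m). A bit-serial software sketch below uses the AES field GF(2^8) with reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) purely as a familiar example; the BCH code in DVB-S2 works over a different extension field, and the paper's hardware multipliers are parallel, not bit-serial:

```python
# Bit-serial multiplication in GF(2^m): addition is XOR, and each doubling
# of a is reduced modulo the field polynomial when its degree overflows.

def gf_mul(a, b, poly=0x11B, width=8):
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a >> width:           # degree >= width: reduce modulo poly
            a ^= poly
    return result
```

With the AES polynomial, this reproduces the textbook identities, e.g. {57} x {83} = {c1}.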
A Numerical Scheme for the Solution of the Space Charge Problem on a Multiply Connected Region
NASA Astrophysics Data System (ADS)
Budd, C. J.; Wheeler, A. A.
1991-11-01
In this paper we extend the work of Budd and Wheeler (Proc. R. Soc. London A, 417, 389, 1988), who described a new numerical scheme for the solution of the space charge equation on a simply connected domain, to multiply connected regions. The space charge equation, ∇ · (Δφ ∇φ) = 0, is a third-order nonlinear partial differential equation for the electric potential φ which models the electric field in the vicinity of a coronating conductor. Budd and Wheeler described a new way of analysing this equation by constructing an orthogonal coordinate system (φ, ψ) and recasting the equation in terms of x, y, and ∇φ as functions of (φ, ψ). This transformation is singular on multiply connected regions, and in this paper we show how this may be overcome to provide an efficient numerical scheme for the solution of the space charge equation. This scheme also provides a new method for the solution of Laplace's equation and the calculation of orthogonal meshes on multiply connected regions.
A correction factor for estimating statewide agricultural injuries from ambulance reports.
Scott, Erika E; Earle-Richardson, Giulia; Krupa, Nicole; Jenkins, Paul
2011-10-01
Agriculture ranks as one of the most hazardous industries in the nation. Agricultural injury surveillance is critical to identifying and reducing major injury hazards. Currently, there is no comprehensive system for identifying and characterizing fatal and serious non-fatal agricultural injuries. Researchers sought to calculate a multiplier for estimating the number of agricultural injury cases based on the number of times the farm box indicator was checked on the ambulance report. Farm injuries from 2007 that used ambulance transport were ascertained for 10 New York counties using two methods: (1) ambulance reports including hand-entered free text; and (2) community surveillance. The resulting multiplier that was developed from contrasting these two methods was then applied to the statewide Emergency Medical Services database to estimate the total number of agricultural injuries for New York state. There were 25,735 unique ambulance runs due to injuries in the 10 counties in 2007. Among these, the farm box was checked a total of 90 times. Of these 90, 63 (70%) were determined to be agricultural. Among injury runs where the farm box was not checked, an additional 59 cases were identified from the free text. Among these 122 cases (63 + 59), four were duplicates. Twenty-four additional unique cases were identified from the community surveillance, for a total of 142. This yielded a multiplier of 142/90 = 1.578 for estimating all agricultural injuries from the farm box indicator. Sensitivity and specificity of the ambulance report method were 53.4% and 99.9%, respectively. This method provides a cost-effective way to estimate the total number of agricultural injuries for the state. However, it would not eliminate the more labor intensive methods that are required to identify the actual individual case records. Incorporating an independent source of case ascertainment (community surveillance) increased the multiplier by 17%. Copyright © 2011 Elsevier Inc. All rights reserved.
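The multiplier and the reported sensitivity and specificity can be reconstructed from the counts in the abstract. The denominators below are one plausible reading of those counts (treating the 118 ambulance-record cases as the reference for the screening statistics), so take the grouping as an assumption:

```python
# Reconstructing the study's figures from its reported counts.
total_runs = 25735        # injury ambulance runs, 10 counties, 2007
box_checked = 90          # runs with the farm box checked
box_true = 63             # of those, truly agricultural
free_text_extra = 59 - 4  # free-text cases not flagged, minus duplicates
community_extra = 24      # cases added by community surveillance
total_cases = box_true + free_text_extra + community_extra   # 142

multiplier = total_cases / box_checked                       # ~1.578

# Farm-box indicator treated as a screening test against the 118
# ambulance-record cases (assumed denominator):
ascertained = box_true + free_text_extra                     # 118
sensitivity = box_true / ascertained                         # ~53.4%
false_pos = box_checked - box_true                           # 27
non_cases = total_runs - ascertained
specificity = (non_cases - false_pos) / non_cases            # ~99.9%
```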
Characterization of light scattering in nematic droplet-polymer films
NASA Astrophysics Data System (ADS)
Kinugasa, Naoki; Yano, Yuichi; Takigawa, Akio; Kawahara, Hideo
1992-06-01
The optical properties of nematic droplet-polymer films were studied in both the on and off states using Lambert-Beer's law to characterize their scattering phenomena. For the preparation of the devices, the NCAP process was employed with varying diameter, distribution, shape, and density of nematic droplets. Their cell thickness and the refractive indices governing the birefringence of the liquid crystals were also controlled. The results showed that the scattering phenomena of nematic droplet-polymer films were likely caused by two types of features. One, related to the surface area of the nematic droplets, was the difference of the refractive indices at the interface between the liquid crystals and the polymer matrix. The other, related to the liquid crystal volume inside the nematic droplets, was the birefringence of the liquid crystals. Considering such relations, the extinction coefficient of Lambert-Beer's law could be described by the sum of the interface area multiplied by the difference of the refractive indices between the two materials and the liquid crystal volume multiplied by their birefringence. Furthermore, it was found that the parallel transmittance in the off state and the haze ratio in the on state were well characterized by this extinction coefficient of Lambert-Beer's law.
Thomas, R.E.
1959-08-25
An electronic multiplier circuit is described in which an output voltage with amplitude proportional to the product or quotient of the input signals is produced in a novel manner which facilitates simplicity of circuit construction and a high degree of accuracy in accomplishing the multiplying and dividing function. The circuit broadly comprises a multiplier tube in which the plate current is proportional to the voltage applied to a first control grid multiplied by the difference between the voltage applied to a second control grid and the voltage applied to the first control grid. Means are provided to apply a first signal to be multiplied to the first control grid, together with means for applying the sum of the first and second signals to be multiplied to the second control grid, whereby the plate current of the multiplier tube is proportional to the product of the first and second signals to be multiplied.
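The algebra behind the patent's grid arrangement is compact: if the plate current goes as g1 × (g2 − g1), then driving the first grid with v1 and the second with v1 + v2 leaves exactly the product v1 × v2. A symbolic check (the unit gain is an illustrative assumption):

```python
# Plate-current law of the multiplier tube: current ~ g1 * (g2 - g1).
def plate_current(g1, g2, gain=1.0):
    return gain * g1 * (g2 - g1)

# Feeding g1 = v1 and g2 = v1 + v2 cancels the g1^2 term:
# v1 * ((v1 + v2) - v1) = v1 * v2.
v1, v2 = 3.0, 4.0
assert plate_current(v1, v1 + v2) == v1 * v2   # 12.0
```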
NASA Technical Reports Server (NTRS)
Mustard, John F.; Pieters, Carle M.
1987-01-01
Moses Rock dike is a Tertiary diatreme containing serpentinized ultramafic microbreccia (SUM). Field evidence indicates the SUM was emplaced first, followed by breccias derived from the Permian strata exposed in the walls of the diatreme, and finally by complex breccias containing basement and mantle derived rocks. SUM is found primarily dispersed throughout the matrix of the diatreme. Moses Rock dike was examined with the Airborne Imaging Spectrometer (AIS) to map the distribution and abundance of SUM in the matrix and to better understand the nature of the eruption which formed this explosive volcanic feature. The AIS data were calibrated by dividing the suite of AIS data by data from an internal standard area and then multiplying this relative reflectance data by the absolute bidirectional reflectance of a selected sample from the standard area which was measured in the lab. From the calibrated AIS data, the minerals serpentine, gypsum, and illite were identified, as well as desert varnish, SUM, and other sandstone lithologies. SUM distribution and abundance in the matrix of the diatreme were examined in detail, and two distinct styles of SUM dispersion were observed. The two styles are discussed in detail.
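The divide-then-scale calibration described above is a ratioing step applied band by band. A minimal sketch with hypothetical three-band numbers (the variable names and values are illustrative, not from the AIS data set):

```python
# Internal-standard calibration: divide each AIS spectrum by the standard
# area's spectrum, then scale by the standard's lab-measured absolute
# bidirectional reflectance, band by band.

def calibrate(ais_spectrum, standard_spectrum, lab_reflectance):
    return [a / s * r for a, s, r in
            zip(ais_spectrum, standard_spectrum, lab_reflectance)]

pixel    = [0.20, 0.30, 0.25]   # raw AIS values, 3 hypothetical bands
standard = [0.40, 0.50, 0.50]   # internal standard area, same bands
lab      = [0.60, 0.55, 0.50]   # lab reflectance of the standard sample

reflectance = calibrate(pixel, standard, lab)
```

The result is an absolute reflectance spectrum for each pixel, suitable for mineral identification by absorption features.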
NASA Astrophysics Data System (ADS)
Masuda, Kazuaki; Aiyoshi, Eitaro
We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. The auction problem, originally formulated as a combinatorial problem, determines both whether each seller sells his or her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
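As an illustration of the dual-price search, here is a minimal particle swarm optimizer applied to a toy concave "dual" function of a single transaction price. This is a sketch, not the paper's auction formulation; the dual function and all PSO constants are assumed:

```python
import random

# Toy dual function of one price p, with its maximum at p = 3
# (invented for illustration; the real dual comes from the auction).
def dual(p):
    return -(p - 3.0) ** 2 + 10.0

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO maximizing f over [lo, hi]."""
    random.seed(0)
    xs = [random.uniform(lo, hi) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                    # particle velocities
    pb = xs[:]                                        # personal bests
    gb = max(pb, key=f)                               # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = w * vs[i] + c1 * r1 * (pb[i] - xs[i]) + c2 * r2 * (gb - xs[i])
            xs[i] += vs[i]
            if f(xs[i]) > f(pb[i]):
                pb[i] = xs[i]
        gb = max(pb, key=f)
    return gb

best_price = pso(dual, 0.0, 10.0)
assert abs(best_price - 3.0) < 0.1
```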
Low-Power Architectures for Large Radio Astronomy Correlators
NASA Technical Reports Server (NTRS)
D'Addario, Larry R.
2011-01-01
The architecture of a cross-correlator for a synthesis radio telescope with N greater than 1000 antennas is studied with the objective of minimizing power consumption. It is found that the optimum architecture minimizes memory operations, and this implies preference for a matrix structure over a pipeline structure and avoiding the use of memory banks as accumulation registers when sharing multiply-accumulators among baselines. A straw-man design for N = 2000 and bandwidth of 1 GHz, based on ASICs fabricated in a 90 nm CMOS process, is presented. The cross-correlator proper (excluding per-antenna processing) is estimated to consume less than 35 kW.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes: they have better stability characteristics, and the extra work they require can be designed to perform efficiently on current and future generations of scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is Newton's method, which was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, however, the number of Newton iterations needed per step to solve the discretized system of equations can vary dramatically from a few to several hundred. Another popular approach, the GMRES (Generalized Minimum Residual) algorithm, which is related to the classical conjugate gradient method, is also investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, its suitability is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike Newton's method, which attempts to drive the error in the solution at every node to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm, the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors.
By keeping the number of vectors N reasonably small (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
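The subspace idea above can be sketched as a minimal GMRES: build an N-dimensional Krylov basis with the Arnoldi process and minimize the residual 2-norm over that subspace. This is an assumed textbook form, not the authors' flow solver:

```python
import numpy as np

# Minimal GMRES: Arnoldi builds an orthonormal Krylov basis Q and a small
# Hessenberg matrix H with A @ Q[:, :m] = Q[:, :m+1] @ H; the least-squares
# solve minimizes the residual 2-norm over the m-dimensional subspace.
def gmres(A, b, m=20):
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:                # Krylov space exhausted
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.diag([2.0, 3.0, 4.0, 5.0]) + 0.1 * np.ones((4, 4))
b = np.ones(4)
x = gmres(A, b, m=4)                           # full-dimension Krylov: exact
assert np.allclose(A @ x, b)
```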
WE-FG-207B-04: Noise Suppression for Energy-Resolved CT Via Variance Weighted Non-Local Filtration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, J; Zhu, L
Purpose: The photon starvation problem is exacerbated in energy-resolved CT, since the detected photons are shared by multiple energy channels. Using pixel similarity-based non-local filtration, we aim to produce accurate and high-resolution energy-resolved CT images with significantly reduced noise. Methods: Averaging CT images reconstructed from different energy channels reduces noise at the price of losing spectral information, while conventional denoising techniques inevitably degrade image resolution. Inspired by the fact that CT images of the same object at different energies share the same structures, we reduce the noise of energy-resolved CT by averaging only pixels of similar materials - a non-local filtration technique. For each CT image, an empirical exponential model is used to calculate the material similarity between two pixels based on their CT values, and the similarity values are organized in matrix form. A final similarity matrix is generated by averaging these similarity matrices, with weights inversely proportional to the estimated total noise variance in the sinogram of each energy channel. Noise suppression is achieved for each energy channel by multiplying the image vector by the similarity matrix. Results: Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with tube voltages ranging from 75 to 125 kVp. On a low-dose acquisition at 15 mA of the Catphan©600 phantom, our method achieves the same image spatial resolution as a high-dose scan at 80 mA with a noise standard deviation (STD) lower by a factor of >2. Compared with another non-local noise suppression algorithm (ndiNLM), the proposed algorithm obtains images with substantially improved resolution at the same level of noise reduction. Conclusion: We propose a noise-suppression method for energy-resolved CT.
Our method takes full advantage of the additional structural information provided by energy-resolved CT and preserves image values at each energy level. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R21EB019597. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
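The filtration step can be sketched in one dimension under stated assumptions (an empirical exponential similarity model and inverse-variance channel weights; this is illustrative, not the authors' implementation):

```python
import numpy as np

# 1-D toy: 6 "energy channels" of the same two-material object, each with
# a different noise variance. Similarity decays exponentially with the
# CT-value difference; channels are combined with weights ~ 1/variance;
# denoising is a single matrix-vector multiply per channel.
np.random.seed(0)
n_pix, n_chan = 64, 6
truth = np.where(np.arange(n_pix) < 32, 100.0, 200.0)   # two "materials"
noise_var = np.linspace(4.0, 16.0, n_chan)              # per-channel variance
imgs = [truth + np.random.randn(n_pix) * v ** 0.5 for v in noise_var]

def similarity(img, h=20.0):
    d = img[:, None] - img[None, :]
    S = np.exp(-(d / h) ** 2)                  # empirical exponential model
    return S / S.sum(axis=1, keepdims=True)    # rows sum to 1

w = 1.0 / noise_var
w /= w.sum()                                   # inverse-variance weights
S = sum(wi * similarity(im) for wi, im in zip(w, imgs))  # final similarity matrix
denoised = [S @ im for im in imgs]             # per-channel filtration

for im, dn in zip(imgs, denoised):
    assert np.std(dn - truth) < np.std(im - truth)       # noise reduced
```

Because the similarity across the material boundary is essentially zero, pixels are only averaged with pixels of the same material, which is what preserves the edge while suppressing noise.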
Operational stability prediction in milling based on impact tests
NASA Astrophysics Data System (ADS)
Kiss, Adam K.; Hajdu, David; Bachrathy, Daniel; Stepan, Gabor
2018-03-01
Chatter detection is usually based on the analysis of measured signals captured during cutting processes. These techniques, however, often give ambiguous results close to the stability boundaries, which is a major limitation in industrial applications. In this paper, an experimental chatter detection method is proposed based on the system's response to perturbations during the machining process; no system parameter identification is required. The proposed method identifies the dominant characteristic multiplier of the periodic dynamical system that models the milling process. By monitoring the variation of the modulus of the largest characteristic multiplier, the stability boundary can be precisely extrapolated while the manufacturing parameters are still kept in the chatter-free region. The method is derived in detail and verified experimentally in a laboratory environment.
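The notion of a dominant characteristic multiplier can be illustrated on a damped Mathieu equation, a simple stand-in for the milling model (the milling process itself involves time delays and is more complex): integrate the periodic system over one period to form the monodromy matrix, whose eigenvalues are the characteristic multipliers; the system is stable when they all lie inside the unit circle. Parameters below are illustrative:

```python
import numpy as np

# Damped Mathieu equation x'' + 2*zeta*x' + (delta + eps*cos t) x = 0,
# period T = 2*pi. Propagating the two unit initial conditions over one
# period with classical RK4 gives the monodromy matrix.
def monodromy(delta, eps, zeta=0.05, steps=2000):
    T = 2 * np.pi
    h = T / steps
    def f(t, y):                       # y = [x, x']
        return np.array([y[1], -2 * zeta * y[1] - (delta + eps * np.cos(t)) * y[0]])
    cols = []
    for col in range(2):               # unit initial conditions e1, e2
        y = np.eye(2)[:, col].copy()
        t = 0.0
        for _ in range(steps):         # classical RK4 step
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        cols.append(y)
    return np.column_stack(cols)

# A parameter point away from the Mathieu instability tongues:
mu_max = max(abs(np.linalg.eigvals(monodromy(delta=1.5, eps=0.1))))
assert mu_max < 1.0                    # dominant multiplier inside unit circle
```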
"Magic" Ionization Mass Spectrometry
NASA Astrophysics Data System (ADS)
Trimpin, Sarah
2016-01-01
The systematic study of the temperature and pressure dependence of matrix-assisted ionization (MAI) led us to the discovery of the seemingly impossible, initially explained by some reviewers as either sleight of hand or the misinterpretation by an overzealous young scientist of results reported many years before and having little utility. The "magic" that we were attempting to report was that with matrix assistance, molecules, at least as large as bovine serum albumin (66 kDa), are lifted into the gas phase as multiply charged ions simply by exposure of the matrix:analyte sample to the vacuum of a mass spectrometer. Applied heat, a laser, or voltages are not necessary to achieve charge states and ion abundances only previously observed with electrospray ionization (ESI). The fundamentals of how solid phase volatile or nonvolatile compounds are converted to gas-phase ions without added energy currently involves speculation providing a great opportunity to rethink mechanistic understanding of ionization processes used in mass spectrometry. Improved understanding of the mechanism(s) of these processes and their connection to ESI and matrix-assisted laser desorption/ionization may provide opportunities to further develop new ionization strategies for traditional and yet unforeseen applications of mass spectrometry. This Critical Insights article covers developments leading to the discovery of a seemingly magic ionization process that is simple to use, fast, sensitive, robust, and can be directly applied to surface characterization using portable or high performance mass spectrometers.
HOW TO FIND GRAVITATIONALLY LENSED TYPE Ia SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Daniel A.; Nugent, Peter E.
2017-01-01
Type Ia supernovae (SNe Ia) that are multiply imaged by gravitational lensing can extend the SN Ia Hubble diagram to very high redshifts (z ≳ 2), probe potential SN Ia evolution, and deliver high-precision constraints on H_0, w, and Ω_m via time delays. However, only one, iPTF16geu, has been found to date, and many more are needed to achieve these goals. To increase the multiply imaged SN Ia discovery rate, we present a simple algorithm for identifying gravitationally lensed SN Ia candidates in cadenced, wide-field optical imaging surveys. The technique is to look for supernovae that appear to be hosted by elliptical galaxies, but that have absolute magnitudes implied by the apparent hosts' photometric redshifts that are far brighter than the absolute magnitudes of normal SNe Ia (the brightest type of supernovae found in elliptical galaxies). Importantly, this purely photometric method does not require the ability to resolve the lensed images for discovery. Active galactic nuclei, the primary sources of contamination that affect the method, can be controlled using catalog cross-matches and color cuts. Highly magnified core-collapse SNe will also be discovered as a byproduct of the method. Using a Monte Carlo simulation, we forecast that the Large Synoptic Survey Telescope can discover up to 500 multiply imaged SNe Ia using this technique in a 10-year z-band search, more than an order of magnitude improvement over previous estimates. We also predict that the Zwicky Transient Facility should find up to 10 multiply imaged SNe Ia using this technique in a 3-year R-band search, despite the fact that this survey will not resolve a single system.
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
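The kernel under study is simple to state; a minimal CSR (compressed sparse row) SpMV in plain Python shows the memory-bound, indirectly indexed inner loop that such optimizations target:

```python
# CSR stores only the nonzeros (values), their column indices (col_idx),
# and one offset per row (row_ptr). The inner loop's indirect access to x
# is what makes SpMV memory-bound on real hardware.
def spmv_csr(values, col_idx, row_ptr, x):
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y[row] = acc
    return y

# 3x3 example matrix: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
values = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
assert spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]) == [5.0, 2.0, 8.0]
```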
Hiraguchi, Ryuji; Hazama, Hisanao; Senoo, Kenichirou; Yahata, Yukinori; Masuda, Katsuyoshi; Awazu, Kunio
2014-01-01
A continuous flow atmospheric pressure laser desorption/ionization technique using a porous stainless steel probe and a 6–7-µm-band mid-infrared tunable laser was developed. This ion source is capable of direct ionization from a continuous flow with high temporal stability. The 6–7-µm wavelength region corresponds to the characteristic absorption bands of various molecular vibration modes, including O–H, C=O, CH3 and C–N bonds. Consequently, many organic compounds and solvents, including water, have characteristic absorption peaks in this region. This ion source requires no additional matrix, and utilizes water or acetonitrile as the solvent matrix at several absorption peak wavelengths (6.05 and 7.27 µm, respectively). The distribution of multiply-charged peptide ions is extremely sensitive to the temperature of the heated capillary, which is the inlet of the mass spectrometer. This ionization technique has potential as an interface for liquid chromatography/mass spectrometry (LC/MS). PMID:24937686
Tritium β decay in chiral effective field theory
Baroni, A.; Girlanda, L.; Kievsky, A.; ...
2016-08-18
We evaluate the Fermi and Gamow-Teller (GT) matrix elements in tritium β-decay by including in the charge-changing weak current the corrections up to one loop recently derived in nuclear chiral effective field theory (χEFT). The trinucleon wave functions are obtained from hyperspherical-harmonics solutions of the Schroedinger equation with two- and three-nucleon potentials corresponding to either χEFT (the N3LO/N2LO combination) or meson-exchange phenomenology (the AV18/UIX combination). We find that contributions due to loop corrections in the axial current are, in relative terms, as large as (and in some cases dominate) those from one-pion exchange, which nominally occur at lower order in the power counting. We also provide values for the low-energy constants multiplying the contact axial current and three-nucleon potential, required to reproduce the experimental GT matrix element and trinucleon binding energies in the N3LO/N2LO and AV18/UIX calculations.
NASA Astrophysics Data System (ADS)
Taoka, Hidekazu; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru
This paper presents comparisons between common and dedicated reference signals (RSs) for channel estimation in MIMO multiplexing using codebook-based precoding for orthogonal frequency division multiplexing (OFDM) radio access in the Evolved UTRA downlink with frequency division duplexing (FDD). We identify the best RS structure for precoding-based MIMO multiplexing by comparing the structures in terms of achievable throughput, taking into account the overhead of the common and dedicated RSs and of the precoding matrix indication (PMI) signal. Based on extensive throughput simulations of 2-by-2 and 4-by-4 MIMO multiplexing with precoding, we show that channel estimation based on common RSs multiplied by the precoding matrix indicated by the PMI signal achieves higher throughput than that using dedicated RSs, irrespective of the number of spatial multiplexing streams, when the number of available precoding matrices, i.e., the codebook size, is less than approximately 16 and 32 for 2-by-2 and 4-by-4 MIMO multiplexing, respectively.
Chu, Jing; Shi, Panpan; Deng, Xiaoyuan; Jin, Ying; Liu, Hao; Chen, Maosheng; Han, Xue; Liu, Hanping
2018-03-25
Significantly effective therapies need to be developed for chronic nonhealing diabetic wounds. In this work, the topical transplantation of mesenchymal stem cells (MSCs) seeded on an acellular dermal matrix (ADM) scaffold is proposed as a novel therapeutic strategy for diabetic cutaneous wound healing. GFP-labeled MSCs were cocultured with an ADM scaffold that was decellularized from normal mouse skin. These cultures were subsequently transplanted as a whole into the full-thickness cutaneous wound site in streptozotocin-induced diabetic mice. Wounds treated with MSC-ADM demonstrated an increased percentage of wound closure. The treatment with MSC-ADM also greatly increased angiogenesis and rapidly completed the reepithelialization of newly formed skin on diabetic mice. More importantly, multiphoton microscopy was used for the intravital and dynamic monitoring of collagen type I (Col-I) fiber synthesis via second harmonic generation imaging. The synthesis of Col-I fibers during diabetic wound healing is of great significance for revealing wound repair mechanisms. In addition, the activity of GFP-labeled MSCs during wound healing was simultaneously traced via two-photon excitation fluorescence imaging. Our research offers a novel nonlinear optical imaging method for monitoring the diabetic wound healing process while the ADM and MSCs interact in situ. Schematic of dynamic imaging of ADM scaffolds seeded with mesenchymal stem cells in diabetic wound healing using multiphoton microscopy. PMT, photo-multiplier tube.
A Highly Linear and Wide Input Range Four-Quadrant CMOS Analog Multiplier Using Active Feedback
NASA Astrophysics Data System (ADS)
Huang, Zhangcai; Jiang, Minglu; Inoue, Yasuaki
Analog multipliers are one of the most important building blocks in analog signal processing circuits. High linearity and a wide input range are usually required of analog four-quadrant multipliers in most applications. Therefore, a highly linear and wide input range four-quadrant CMOS analog multiplier using active feedback is proposed in this paper. First, a novel configuration of the four-quadrant multiplier cell is presented. Its input dynamic range and linearity are improved significantly, compared with the conventional structure, by adding two resistors. Then, based on the proposed multiplier cell configuration, a four-quadrant CMOS analog multiplier with the active feedback technique is implemented with two operational amplifiers. Because of both the proposed multiplier cell and the active feedback technique, the proposed multiplier achieves a much wider input range with higher linearity than conventional structures. The proposed multiplier was fabricated in a 0.6 µm CMOS process. Experimental results show that the input range of the proposed multiplier can be up to 5.6 Vpp with 0.159% linearity error on VX and 4.8 Vpp with 0.51% linearity error on VY for ±2.5 V power supply voltages, respectively.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2018-03-01
The biologically-motivated self-learning equivalence-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for clustering and recognition of image fragments are discussed. We consider these neural structures and their spatial-invariant equivalental models (SIEMs), based on proposed equivalent two-dimensional image-similarity functions and the corresponding matrix-matrix (or tensor) procedures, which use operations of continuous logic and nonlinear processing as their basic operations. These SIEMs describe the signal processing at all training and recognition stages in a simple way and are suitable for unipolar-coded multilevel signals. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. The main model and architecture parameters and characteristics therefore depend on the types of nonlinear processing applied and on the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-study and self-identification of features and signs of fragment similarity, and the ability to cluster and recognize image fragments with high efficiency even under strong mutual correlation. The proposed clustering method, which combines learning and recognition of fragments with regard to their structural features, is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix patterns. Its model is constructed on the basis of recursive continuous-logic and nonlinear processing algorithms and on the k-average method or the winner-takes-all (WTA) method. The experimental results confirm that fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown.
An experiment with images of different dimensions (a reference array) and with fragments of different dimensions for clustering was carried out. Experiments in the Mathcad software environment showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and confirmed its promise. To understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns, it is important that the algorithm uses the principles of nonlinear processing of two-dimensional spatial image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition, and can help explain some mechanisms known as the "reinforcement-inhibition concept". We also demonstrate real model experiments which confirm that nonlinear processing by the equivalent function allows the neuron-winners to be determined and the weight matrix to be adjusted. Finally, we show how the obtained results can be used to propose a new, more efficient hardware architecture of SL_EC_RMNS based on matrix-tensor multipliers, and we estimate the parameters and performance of such architectures.
Uses and abuses of multipliers in the stand prognosis model
David A. Hamilton
1994-01-01
Users of the Stand Prognosis Model may have difficulty selecting the proper set of multipliers to simulate a desired effect or determining the appropriate value to assign to selected multipliers. A series of examples describes the impact of multipliers on simulated stand development. Guidelines for the proper use of multipliers are presented.
Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers
NASA Astrophysics Data System (ADS)
Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille
This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers, and of the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
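For reference, here is a sketch of the single-size Montgomery multiplication primitive that such hardware implements, in textbook form (the small modulus and word size are illustrative; real multipliers work on 1024-bit operands):

```python
# Montgomery multiplication: computes a*b*R^{-1} mod n for R = 2^k,
# replacing division by n with shifts and masks. Requires odd n < R.
def mont_mul(a, b, n, k):
    R = 1 << k
    n_prime = -pow(n, -1, R) % R        # n * n' ≡ -1 (mod R); Python 3.8+
    t = a * b
    m = (t * n_prime) % R               # chosen so t + m*n ≡ 0 (mod R)
    u = (t + m * n) >> k                # exact division by R
    return u - n if u >= n else u

n, k = 101, 8                            # odd modulus, R = 256 > n
R = 1 << k
a, b = 57, 91
# Operands enter the Montgomery domain multiplied by R mod n.
aR, bR = (a * R) % n, (b * R) % n
abR = mont_mul(aR, bR, n, k)             # stays in the domain: a*b*R mod n
assert abR == (a * b * R) % n
assert mont_mul(abR, 1, n, k) == (a * b) % n   # multiply by 1 converts back
```

The modulus-dependent value here is `n_prime` (and, in real use, the constant `R^2 mod n` for domain conversion); those are exactly the precomputations the paper's double-size technique avoids recomputing.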
NASA Astrophysics Data System (ADS)
Rais, Muhammad H.
2010-06-01
This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as finite impulse response (FIR) filters and the discrete cosine transform (DCT). A remarkable reduction in FPGA resources, delay, and power can be achieved by using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement compared to standard multipliers. Results show that the average connection delay and maximum pin delay anomalies observed on the Spartan-3AN device are efficiently reduced on the Virtex-4 device.
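A toy bit-level model shows the standard-vs-truncated trade-off (illustrative only, not the paper's FPGA design): a truncated n × n multiplier drops the partial products feeding the low half of the result, saving hardware at the cost of a small, bounded error.

```python
# Model of a truncated n x n array multiplier: keep only partial-product
# bits whose column weight is >= n (i.e., those that feed the upper half
# of the 2n-bit result); the low columns and their adders are removed.
def truncated_mul(a, b, n):
    acc = 0
    for i in range(n):
        for j in range(n):
            if i + j >= n:                       # discard the low columns
                acc += ((a >> i) & 1) * ((b >> j) & 1) << (i + j)
    return acc

a, b, n = 0b1101, 0b1011, 4                      # 13 * 11 = 143
exact = a * b
approx = truncated_mul(a, b, n)
assert approx <= exact                           # truncation only drops bits
assert exact - approx < (1 << n) * n             # dropped columns bound the error
```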
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^(-1) (y - Mx) + x^T B^(-1) x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices of the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
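For the simplest choice named above (Tikhonov: R = I, B = I/λ), the minimizer has a closed form, x* = (MᵀM + λI)⁻¹Mᵀy. A small sketch with synthetic data (a random toy SRS matrix, not the ETEX data) serves as a point of reference for the Bayesian treatment, which learns R and B instead of fixing them:

```python
import numpy as np

# Synthetic toy problem: 30 observations, 10 source-term components,
# a single active source and small measurement noise.
np.random.seed(1)
M = np.random.randn(30, 10)          # toy SRS matrix (illustrative only)
x_true = np.zeros(10)
x_true[3] = 5.0
y = M @ x_true + 0.01 * np.random.randn(30)

# Tikhonov solution of min_x ||y - Mx||^2 + lam * ||x||^2
lam = 0.1
x_hat = np.linalg.solve(M.T @ M + lam * np.eye(10), M.T @ y)
assert np.linalg.norm(x_hat - x_true) < 0.5
```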
A comparison of VLSI architecture of finite field multipliers using dual, normal or standard basis
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
Three different finite field multipliers are presented: (1) a dual basis multiplier due to Berlekamp; (2) a Massey-Omura normal basis multiplier; and (3) the Scott-Tavares-Peppard standard basis multiplier. These algorithms are chosen because each has its own distinct features which apply most suitably in different areas. All three are implemented on silicon chips with nitride metal oxide semiconductor technology so that the multiplier most desirable for very large scale integration implementations can readily be ascertained.
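A software analogue of a standard-basis multiplier, shown here over GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (illustrative; the hardware designs above target other field sizes and, in two cases, other bases):

```python
# Shift-and-add multiplication in GF(2^m), standard (polynomial) basis:
# addition is XOR, and overflow past bit m is reduced by XORing in the
# irreducible field polynomial.
def gf_mul(a, b, poly=0x11B, m=8):
    acc = 0
    while b:
        if b & 1:
            acc ^= a                  # add (XOR) the shifted multiplicand
        b >>= 1
        a <<= 1
        if a & (1 << m):              # reduce modulo the field polynomial
            a ^= poly
    return acc

assert gf_mul(0x53, 0xCA) == 0x01     # 0x53 and 0xCA are inverses in GF(2^8)
assert gf_mul(0x02, 0x80) == 0x1B     # x * x^7 = x^8 ≡ x^4 + x^3 + x + 1
```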
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xing; Lin, Guang; Zou, Jianfeng
To model red blood cell (RBC) deformation in flow, the recently developed LBM-DLM/FD method (Shi and Lim, 2007), derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method, is extended to employ a mesoscopic network model for simulations of red blood cell deformation. The flow is simulated by the lattice Boltzmann method with an external force, while the network model is used for modeling red blood cell deformation, and the fluid-RBC interaction is enforced by the Lagrange multiplier. Stretching numerical tests on both coarse and fine meshes are performed and compared with the corresponding experimental data to validate the parameters of the RBC network model. In addition, RBC deformation in pipe flow and in shear flow is simulated, revealing the capacity of the current method for modeling RBC deformation in various flows.
NASA Astrophysics Data System (ADS)
Danilin, A. I.; Neverov, V. V.; Danilin, S. A.; Shimanov, A. A.; Tsapkova, A. B.
2018-01-01
The article describes a noncontact operational control method based on the processing of a microwave signal reflected from the monitored teeth of the gear wheel. The paper describes the influence of wear patterns on the characteristic information parameters of the analyzed signals. The block diagram in Section 3 shows the experimental system for monitoring the operating state of the gear wheels of the steam compressor torque multiplier. The design of the primary converter is briefly described.
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units combining a modified Booth decoder with a carry-save adder/full adder are used to implement a pipeline active filter in which pixel data are processed sequentially; each pixel need only be accessed once and is multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders; the results are shifted to less significant positions, and one row of full adders adds the carry to the sum to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from the preceding multiplier units. If m × m multiplier units are pipelined, the system is capable of processing a kernel array of m × m weighting factors.
NASA Astrophysics Data System (ADS)
Subanti, S.; Hakim, A. R.; Hakim, I. M.
2018-03-01
The purpose of the current study is to analyze multipliers for the mining sector in Indonesia. The mining sector comprises coal and metal; crude oil, natural gas, and geothermal; and other mining and quarrying. The multiplier analysis is based on input-output analysis and is divided into an income multiplier and an output multiplier. The results show that (1) the Indonesian mining sector ranks 6th, contributing 6.81% of national total output; (2) based on total gross value added, the sector contributes 12.13%, ranking 4th; (3) the income multiplier is 0.7062 and the output multiplier is 1.2426.
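In standard input-output analysis, the output multiplier of a sector is the column sum of the Leontief inverse (I - A)^-1. A minimal sketch, using a hypothetical two-sector coefficient matrix rather than the study's data:

```python
def leontief_inverse(A):
    """Compute (I - A)^-1 by Gauss-Jordan elimination, where A is the
    matrix of technical coefficients (input required per unit of output)."""
    n = len(A)
    # Build the augmented matrix [I - A | I].
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)]
         + [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def output_multipliers(A):
    """Output multiplier of sector j = column sum j of the Leontief inverse."""
    L = leontief_inverse(A)
    n = len(A)
    return [sum(L[i][j] for i in range(n)) for j in range(n)]

# Hypothetical two-sector economy (not the study's table):
m = output_multipliers([[0.2, 0.3], [0.1, 0.4]])
```

Each multiplier gives the total output, direct plus indirect, generated economy-wide per unit of final demand for that sector.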
Ultrafast absorption of intense x rays by nitrogen molecules
NASA Astrophysics Data System (ADS)
Buth, Christian; Liu, Ji-Cai; Chen, Mau Hsiung; Cryan, James P.; Fang, Li; Glownia, James M.; Hoener, Matthias; Coffee, Ryan N.; Berrah, Nora
2012-06-01
We devise a theoretical description for the response of nitrogen molecules (N₂) to ultrashort and intense x rays from the free electron laser Linac Coherent Light Source (LCLS). We set out from a rate-equation description for the x-ray absorption by a nitrogen atom. The equations are formulated using all one-x-ray-photon absorption cross sections and the Auger and radiative decay widths of multiply-ionized nitrogen atoms. Cross sections are obtained with a one-electron theory, and decay widths are determined from ab initio computations using the Dirac-Hartree-Slater (DHS) method. We also calculate all binding and transition energies of nitrogen atoms in all charge states with the DHS method as the difference of two self-consistent field (SCF) calculations (ΔSCF method). To describe the interaction with N₂, a detailed investigation of intense x-ray-induced ionization and molecular fragmentation is carried out. As a figure of merit, we calculate ion yields and the average charge state measured in recent experiments at the LCLS. We use a series of phenomenological models of increasing sophistication to unravel the mechanisms of the interaction of x rays with N₂: a single atom, a symmetric-sharing model, and a fragmentation-matrix model are developed. The role of the formation and decay of single and double core holes, the metastable states of N₂²⁺, and molecular fragmentation is explained.
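A rate-equation description of sequential x-ray ionization of the kind the paper sets out from can be sketched as a chain of charge states coupled by one-photon absorption rates. The sketch below uses forward-Euler integration and omits the Auger and radiative decay channels of the full model; all numbers are illustrative:

```python
def charge_state_populations(cross_sections, flux, T, steps=20000):
    """Forward-Euler integration of a sequential-ionization rate-equation
    chain: charge state q absorbs one photon at rate cross_sections[q]*flux
    and moves to state q+1. Decay channels are omitted; units illustrative."""
    n = len(cross_sections) + 1
    P = [0.0] * n
    P[0] = 1.0                      # all population starts neutral
    dt = T / steps
    for _ in range(steps):
        rates = [cross_sections[q] * flux * P[q] for q in range(n - 1)]
        for q in range(n - 1):
            P[q] -= rates[q] * dt
            P[q + 1] += rates[q] * dt
    return P

# One absorbing state, unit cross section and flux over unit time:
P = charge_state_populations([1.0], 1.0, 1.0)
```

For a single step of the chain the survival probability should approach exp(-sigma * fluence), which serves as a check on the integration.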
When will low-contrast features be visible in a STEM X-ray spectrum image?
Parish, Chad M.
2015-04-01
When will a small or low-contrast feature, such as an embedded second-phase particle, be visible in a scanning transmission electron microscopy (STEM) X-ray map? This work illustrates a computationally inexpensive method to simulate X-ray maps and spectrum images (SIs), based upon the equations of X-ray generation and detection. To particularize the general procedure, the example of a nanostructured ferritic alloy (NFA) containing nm-sized Y₂Ti₂O₇ precipitates embedded in a ferritic stainless steel matrix is chosen. The proposed model produces physically realistic simulated SI data sets, which can either be reduced to X-ray dot maps or analyzed via multivariate statistical analysis. Comparison to NFA X-ray maps acquired using three different STEM instruments matches the generated simulations quite well, despite the large number of simplifying assumptions used. A figure of merit of electron dose multiplied by X-ray collection solid angle is proposed to compare feature detectability from one data set (simulated or experimental) to another. The proposed method can scope experiments that are feasible under specific analysis conditions on a given microscope. As a result, future applications, such as spallation proton–neutron irradiations, core-shell nanoparticles, or dopants in polycrystalline photovoltaic solar cells, are proposed.
Convolutional Dictionary Learning: Acceleration and Convergence
NASA Astrophysics Data System (ADS)
Chun, Il Yong; Fessler, Jeffrey A.
2018-04-01
Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithm handling large datasets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
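The momentum-coefficient acceleration mentioned in the abstract is in the spirit of FISTA-type proximal gradient methods. A generic sketch of that momentum idea follows; it is not the paper's BPG-M algorithm and has none of its block updates or majorization-matrix designs:

```python
import math

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*|.|."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def accel_prox_gradient(grad, prox, L, x0, iters=300):
    """Generic FISTA-style accelerated proximal gradient iteration:
    a gradient step of size 1/L on the smooth part, a prox step on the
    nonsmooth part, and a momentum extrapolation between iterates."""
    x, z, t = list(x0), list(x0), 1.0
    for _ in range(iters):
        x_new = prox([zi - gi / L for zi, gi in zip(z, grad(z))])
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        mom = (t - 1.0) / t_new
        z = [xn + mom * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Example: minimize 0.5*(x - 3)^2 + |x|; the minimizer is x = 2.
xs = accel_prox_gradient(lambda z: [z[0] - 3.0],
                         lambda v: [soft(v[0], 1.0)], 1.0, [0.0])
```

Unlike the AL/ADMM methods discussed above, iterations of this form need no penalty-parameter tuning, only a Lipschitz (or majorizer) constant L.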
Tables of compound-discount interest rate multipliers for evaluating forestry investments.
Allen L. Lundgren
1971-01-01
Tables, prepared by computer, are presented for 10 selected compound-discount interest rate multipliers commonly used in financial analyses of forestry investments. Two sets of tables are given for each of the 10 multipliers. The first set gives multipliers for each year from 1 to 40 years; the second set gives multipliers at 5-year intervals from 5 to 160 years....
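Such compound-discount multipliers are standard formulas. A sketch of three commonly used ones follows; the function names are ours, not the labels in the tables:

```python
def future_value_multiplier(i, n):
    """Value of 1 unit compounded at annual rate i for n years: (1+i)^n."""
    return (1.0 + i) ** n

def present_value_multiplier(i, n):
    """Present value of 1 unit due in n years: (1+i)^-n."""
    return (1.0 + i) ** -n

def annuity_pv_multiplier(i, n):
    """Present value of 1 unit received at the end of each year for n years."""
    return (1.0 - (1.0 + i) ** -n) / i
```

Multiplying a projected cost or revenue by the appropriate factor converts it to a common point in time, which is the core of the financial analyses the tables support.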
Pilot Study of Cartilage Repair in the Knee Joint with Multiply Incised Chondral Allograft
Vancsodi, Jozsef; Farkas, Boglarka; Fazekas, Adam; Nagy, Szilvia Anett; Bogner, Peter; Vermes, Csaba; Than, Peter
2015-01-01
Background Focal cartilage lesions in the knee joint have limited capacity to heal. Current animal experiments show that incisions of the deep zone of a cartilage allograft allow acceptable integration for the graft. Questions/Purposes We performed this clinical study to determine (1) if the multiply incised cartilage graft is surgically applicable for focal cartilage lesions, (2) whether this allograft has a potential to integrate to the repair site, and (3) if patients show clinical improvement. Patients and Methods Seven patients with 8 chondral lesions were enrolled into the study. Symptomatic lesions between 2 and 8 cm2 were accepted. Additional injuries were allowed but were addressed simultaneously. Grafts were tailored to match, and the deep zone of the cartilage was multiply incised to augment the basal integration before being secured in place. Rigorous postoperative physiotherapy followed. At 12 and 24 months the patients’ satisfaction was measured, and serial magnetic resonance imaging (MRI) was performed in 6 patients. Results Following the implantations no adverse reaction occurred. MRI evaluation postoperatively showed the graft in place in 5 out of 6 patients. In 1 patient, MRI suggested partial delamination at 1 year and graft degeneration at 2 years. Short Form–36 health survey and the Lysholm knee score demonstrated a significant improvement in the first year; however, by 2 years there was a noticeable drop in the scores. Conclusions Multiply incised pure chondral allograft used for cartilage repair appears to be a relatively safe method. Further studies are necessary to assess its potential in cartilage repair before its clinical use. PMID:26069710
HEC-4 Monthly Streamflow Simulation (User’s Manual)
1971-02-01
…adding this product to the product of the complement of this multiplier and the value of the element in the inconsistent matrix. The averaged or … the month preceding the first month specified (by input data) to be generated. …
An efficient optical architecture for sparsely connected neural networks
NASA Technical Reports Server (NTRS)
Hine, Butler P., III; Downie, John D.; Reid, Max B.
1990-01-01
An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby using the space-bandwidth product of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on updatable media such as thermoplastic materials in order to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.
Loop Mirror Laser Neural Network with a Fast Liquid-Crystal Display
NASA Astrophysics Data System (ADS)
Mos, Evert C.; Schleipen, Jean J. H. B.; de Waardt, Huug; Khoe, Djan G. D.
1999-07-01
In our laser neural network (LNN), all-optical threshold action is obtained by applying controlled optical feedback to a laser diode. Here an extended experimental LNN is presented with as many as 32 neurons and 12 inputs. In the setup we use a fast liquid-crystal display to implement an optical matrix-vector multiplier. This display, based on ferroelectric liquid-crystal material, enables us to present 125 training examples/s to the LNN. To maximize the optical feedback efficiency of the setup, a loop mirror is introduced. We use a δ-rule learning algorithm to train the network to perform a number of functions in the application area of telecommunication data switching.
Integrated optical circuits for numerical computation
NASA Technical Reports Server (NTRS)
Verber, C. M.; Kenan, R. P.
1983-01-01
The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.
Structural brain network analysis in families multiply affected with bipolar I disorder.
Forde, Natalie J; O'Donoghue, Stefani; Scanlon, Cathy; Emsell, Louise; Chaddock, Chris; Leemans, Alexander; Jeurissen, Ben; Barker, Gareth J; Cannon, Dara M; Murray, Robin M; McDonald, Colm
2015-10-30
Disrupted structural connectivity is associated with psychiatric illnesses including bipolar disorder (BP). Here we use structural brain network analysis to investigate connectivity abnormalities in multiply affected BP type I families, to assess the utility of dysconnectivity as a biomarker and its endophenotypic potential. Magnetic resonance diffusion images for 19 BP type I patients in remission, 21 of their first degree unaffected relatives, and 18 unrelated healthy controls underwent tractography. With the automated anatomical labelling atlas being used to define nodes, a connectivity matrix was generated for each subject. Network metrics were extracted with the Brain Connectivity Toolbox and then analysed for group differences, accounting for potential confounding effects of age, gender and familial association. Whole brain analysis revealed no differences between groups. Analysis of specific mainly frontal regions, previously implicated as potentially endophenotypic by functional magnetic resonance imaging analysis of the same cohort, revealed a significant effect of group in the right medial superior frontal gyrus and left middle frontal gyrus driven by reduced organisation in patients compared with controls. The organisation of whole brain networks of those affected with BP I does not differ from their unaffected relatives or healthy controls. In discrete frontal regions, however, anatomical connectivity is disrupted in patients but not in their unaffected relatives. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A general-purpose approach to computer-aided dynamic analysis of a flexible helicopter
NASA Technical Reports Server (NTRS)
Agrawal, Om P.
1988-01-01
A general-purpose mathematical formulation is described for dynamic analysis of a helicopter consisting of flexible and/or rigid bodies that undergo large translations and rotations. Rigid body and elastic sets of generalized coordinates are used. The rigid body coordinates define the location and the orientation of a body coordinate frame (global frame) with respect to an inertial frame. The elastic coordinates are introduced using a finite element approach in order to model flexible components. The compatibility conditions between two adjacent elements in a flexible body are imposed using a Boolean matrix, whereas the compatibility conditions between two adjacent bodies are imposed using the Lagrange multiplier approach. Since the form of the constraint equations depends upon the type of kinematic joint and involves only the generalized coordinates of the two participating elements, a library of constraint elements can be developed to impose the kinematic constraints in an automated fashion. For the body constraints, the Lagrange multipliers yield the reaction forces and torques of the bodies at the joints. The virtual work approach is used to derive the equations of motion, which are a system of differential and algebraic equations that are highly nonlinear. The formulation presented is general and is compared with hard-wired formulations commonly used in helicopter analysis.
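For a single holonomic constraint the Lagrange multiplier, and hence the reaction force, has a closed form obtained by differentiating the constraint twice. A minimal planar-pendulum sketch (unit mass; not the paper's finite-element formulation):

```python
def pendulum_multiplier(r, v, g=9.81):
    """Lagrange multiplier for the constraint r·r = L² (unit mass).
    Differentiating the constraint twice gives r·a + v·v = 0 with
    a = f - lam*r and gravity f = (0, -g), hence
    lam = (r·f + v·v) / (r·r). The reaction force is -lam*r."""
    rx, ry = r
    vx, vy = v
    fx, fy = 0.0, -g
    return (rx * fx + ry * fy + vx * vx + vy * vy) / (rx * rx + ry * ry)
```

As a sanity check, for a bob hanging at rest at r = (0, -L) the multiplier is g/L, so the reaction force -lam*r = (0, g) exactly cancels gravity.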
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Dipayan, E-mail: datta@uni-mainz.de; Gauss, Jürgen, E-mail: gauss@uni-mainz.de
2014-09-14
An analytic scheme is presented for the evaluation of first derivatives of the energy for a unitary group based spin-adapted coupled cluster (CC) theory, namely, the combinatoric open-shell CC (COSCC) approach within the singles and doubles approximation. The widely used Lagrange multiplier approach is employed for the derivation of an analytical expression for the first derivative of the energy, which in combination with the well-established density-matrix formulation, is used for the computation of first-order electrical properties. Derivations of the spin-adapted lambda equations for determining the Lagrange multipliers and the expressions for the spin-free effective density matrices for the COSCC approach are presented. Orbital-relaxation effects due to the electric-field perturbation are treated via the Z-vector technique. We present calculations of the dipole moments for a number of doublet radicals in their ground states using restricted open-shell Hartree-Fock (ROHF) and quasi-restricted HF (QRHF) orbitals in order to demonstrate the applicability of our analytic scheme for computing energy derivatives. We also report calculations of the chlorine electric-field gradients and nuclear quadrupole-coupling constants for the CCl, CH{sub 2}Cl, ClO{sub 2}, and SiCl radicals.
Chao, Jerry; Ward, E. Sally; Ober, Raimund J.
2012-01-01
The high quantum efficiency of the charge-coupled device (CCD) has rendered it the imaging technology of choice in diverse applications. However, under extremely low light conditions where few photons are detected from the imaged object, the CCD becomes unsuitable as its readout noise can easily overwhelm the weak signal. An intended solution to this problem is the electron-multiplying charge-coupled device (EMCCD), which stochastically amplifies the acquired signal to drown out the readout noise. Here, we develop the theory for calculating the Fisher information content of the amplified signal, which is modeled as the output of a branching process. Specifically, Fisher information expressions are obtained for a general and a geometric model of amplification, as well as for two approximations of the amplified signal. All expressions pertain to the important scenario of a Poisson-distributed initial signal, which is characteristic of physical processes such as photon detection. To facilitate the investigation of different data models, a “noise coefficient” is introduced which allows the analysis and comparison of Fisher information via a scalar quantity. We apply our results to the problem of estimating the location of a point source from its image, as observed through an optical microscope and detected by an EMCCD. PMID:23049166
Kusić, Dragana; Rösch, Petra; Popp, Jürgen
2016-03-01
Legionellae colonize biofilms, can form biofilms by themselves, and multiply intracellularly within the protozoa commonly found in water distribution systems. Approximately half of the known species are pathogenic and have been connected to severe multisystem Legionnaires' disease. The detection methods for Legionella spp. in water samples are still based on cultivation, which is time-consuming due to the slow growth of this bacterium. Here, we developed a cultivation-independent, label-free and fast detection method for legionellae in a biofilm matrix based on the Raman spectroscopic analysis of single cells isolated via immunomagnetic separation (IMS). A database comprising the Raman spectra of single bacterial cells captured and separated from the biofilms formed by each species was used to build the identification method based on a support vector machine (SVM) discriminative classifier. The complete method allows the detection of Legionella spp. in 100 min. Cross-reactivity of Legionella spp. specific immunomagnetic beads to the other studied genera was tested, where only small cell amounts of Pseudomonas aeruginosa, Klebsiella pneumoniae and Escherichia coli compared to the initial number of cells were isolated by the immunobeads. Nevertheless, the Raman spectra collected from isolated non-targeted bacteria were well-discriminated from the Raman spectra collected from isolated Legionella cells, whereby the Raman spectra of the independent dataset of Legionella strains were assigned with an accuracy of 98.6%. In addition, Raman spectroscopy was also used to differentiate between isolated Legionella species. Copyright © 2016 Elsevier GmbH. All rights reserved.
Automobile Industry Retail Price Equivalent and Indirect Cost Multipliers
This report develops a modified multiplier, referred to as an indirect cost (IC) multiplier, which specifically evaluates the components of indirect costs that are likely to be affected by vehicle modifications associated with environmental regulation. A range of IC multipliers a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, L.; Rao, N.D.
1983-04-01
This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a cartesian coordinate formulation of the economic dispatch problem and a reclassification of state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by the reduced gradient technique and the penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to less storage requirement. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is the solution of the economic dispatch problem, faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.
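The classical lossless version of this problem has a closed-form Lagrangian solution, the equal-incremental-cost rule, which formulations like the one above generalize. A sketch with hypothetical quadratic cost coefficients, ignoring losses and generator limits:

```python
def economic_dispatch(b, c, demand):
    """Equal incremental-cost dispatch for quadratic costs
    C_i(P) = a_i + b_i*P + c_i*P^2 under the lossless balance
    sum(P_i) = demand. Setting dC_i/dP_i = lambda for every unit and
    solving the balance gives the multiplier and generations in closed
    form. (Coefficients are illustrative; limits are ignored.)"""
    s = sum(1.0 / (2.0 * ci) for ci in c)
    lam = (demand + sum(bi / (2.0 * ci) for bi, ci in zip(b, c))) / s
    return [(lam - bi) / (2.0 * ci) for bi, ci in zip(b, c)], lam

P, lam = economic_dispatch([1.0, 1.0], [0.5, 0.5], 10.0)
```

Here lambda is the system incremental cost: the marginal cost of serving one more unit of demand, which every dispatched unit shares at the optimum.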
Automated Testability Decision Tool
1991-09-01
Vol. 16, 1968, pp. 538-558. Bertsekas, D. P., "Constrained Optimization and Lagrange Multiplier Methods," Academic Press, New York. McLeavey, D.W. and McLeavey, J.A., "Parallel Optimization Methods in Standby Reliability," University of Connecticut, School of Business Administration, Bureau of Business…
A hybridized formulation for the weak Galerkin mixed finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM) which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
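The Lagrange multiplier of a radial constraint can be located, as in trust-region methods, by searching for the lambda ≥ 0 at which the stationary point of the Lagrangian reaches the radius. A two-variable bisection sketch follows; TRSALG itself uses Hooke-Jeeves and handles the general TRS, which this does not:

```python
import math

def solve_shifted(Q, c, lam):
    """Solve (Q + lam*I) x = -c for a 2x2 symmetric Q by Cramer's rule."""
    a, b = Q[0][0] + lam, Q[0][1]
    d, e = Q[1][0], Q[1][1] + lam
    det = a * e - b * d
    return [(-c[0] * e + c[1] * b) / det, (d * c[0] - a * c[1]) / det]

def trust_region_step(Q, c, R):
    """Minimize 0.5*x'Qx + c'x subject to ||x|| <= R (Q positive
    definite), bisecting on the multiplier lam of the radial constraint
    until ||x(lam)|| = R. A two-variable illustration, not TRSALG."""
    x = solve_shifted(Q, c, 0.0)
    if math.hypot(*x) <= R:
        return x, 0.0            # constraint inactive, multiplier zero
    lo, hi = 0.0, 1.0
    while math.hypot(*solve_shifted(Q, c, hi)) > R:
        hi *= 2.0                # bracket the multiplier
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if math.hypot(*solve_shifted(Q, c, lam)) > R:
            lo = lam
        else:
            hi = lam
    return solve_shifted(Q, c, lam), lam
```

The monotone dependence of ||x(lam)|| on lam is what makes a one-dimensional search for the radial-constraint multiplier sufficient.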
Planar diode multiplier chains for THz spectroscopy
NASA Technical Reports Server (NTRS)
Maiwald, Frank W.; Drouin, Brian J.; Pearson, John C.; Mehdi, Imran; Lewena, Frank; Endres, Christian; Winnewisser, Gisbert
2005-01-01
High-resolution laboratory spectroscopy is utilized as a diagnostic tool to determine the noise and harmonic content of balanced [9]-[11] and unbalanced [12]-[14] multiplier designs. Balanced multiplier designs suppress unintended harmonics by more than 20 dB; much less suppression was measured on unbalanced multipliers.
NASA Astrophysics Data System (ADS)
Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca
2018-02-01
Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. impulse-based substructuring) and the frequency domain (i.e. Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with the consequent uncertainty propagation issues, either associated with experimental errors or modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on a five-DoF linear/non-linear chain-like system that includes typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.
NASA Astrophysics Data System (ADS)
Singh, Inderjeet; Singh, Bhajan; Sandhu, B. S.; Sabharwal, Arvind D.
2017-04-01
A method has been presented for calculation of the effective atomic number (Zeff) of composite materials, using back-scattering of 662 keV gamma photons obtained from a 137Cs mono-energetic radioactive source. The present technique is a non-destructive approach, and is employed to evaluate Zeff of different composite materials by interacting gamma photons with a semi-infinite material in a back-scattering geometry, using a 3″ × 3″ NaI(Tl) scintillation detector. The present work is undertaken to study the effect of target thickness on the intensity distribution of gamma photons which are multiply back-scattered from targets (pure elements) and composites (mixtures of different elements). The intensity of multiply back-scattered events increases with increasing target thickness and finally saturates. The saturation thickness for multiply back-scattered events is used to assign a number (Zeff) to multi-element materials. The response function of the 3″ × 3″ NaI(Tl) scintillation detector is applied to the observed pulse-height distribution to include the contribution of partially absorbed photons. The reduced value of the signal-to-noise ratio reflects the increase in multiply back-scattered data in the response-corrected spectrum. Data obtained from Monte Carlo simulations and the literature also support the present experimental results.
NASA Astrophysics Data System (ADS)
Adem, Abdullahi Rashid
2016-05-01
We consider a (2+1)-dimensional Korteweg-de Vries type equation which models the shallow-water waves, surface and internal waves. In the analysis, we use the Lie symmetry method and the multiple exp-function method. Furthermore, conservation laws are computed using the multiplier method.
Portfolio Analysis for Vector Calculus
ERIC Educational Resources Information Center
Kaplan, Samuel R.
2015-01-01
Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…
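The classic Lagrange-multiplier exercise referred to is the minimum-variance portfolio: minimize w'Σw subject to the weights summing to one, which gives w proportional to Σ⁻¹1. A two-asset sketch with hypothetical covariances:

```python
def min_variance_weights(sigma):
    """Two-asset minimum-variance portfolio via Lagrange multipliers:
    minimize w' Sigma w subject to w1 + w2 = 1. Stationarity,
    2*Sigma*w = lam*1, makes w proportional to Sigma^-1 * 1, then the
    constraint fixes the normalization. (Covariances are hypothetical.)"""
    a, b = sigma[0][0], sigma[0][1]
    d, e = sigma[1][0], sigma[1][1]
    det = a * e - b * d
    u = [(e - b) / det, (a - d) / det]   # Sigma^-1 applied to (1, 1)
    s = u[0] + u[1]
    return [u[0] / s, u[1] / s]

# Uncorrelated assets with volatilities 20% and 30%:
w = min_variance_weights([[0.04, 0.0], [0.0, 0.09]])
```

With uncorrelated assets the weights are inversely proportional to the variances, a result students can verify directly from the first-order conditions.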
Soref, Richard; Hendrickson, Joshua
2015-12-14
Silicon-on-insulator Mach-Zehnder interferometer structures that utilize a photonic crystal nanobeam waveguide in each of two connecting arms are proposed here as efficient 2 × 2 resonant, wavelength-selective electro-optical routing switches that are readily cascaded into on-chip N × N switching networks. A localized lateral PN junction of length ~2 μm within each of two identical nanobeams is proposed as a means of shifting the transmission resonance by 400 pm within the 1550 nm band. Using a bias swing ΔV = 2.7 V, the 474 attojoules-per-bit switching mechanism is free-carrier sweepout due to PN depletion layer widening. Simulations of the 2 × 2 outputs versus voltage are presented. Dual-nanobeam designs are given for N × N data-routing matrix switches, electrooptical logic unit cells, N × M wavelength selective switches, and vector matrix multipliers. Performance penalties are analyzed for possible fabrication induced errors such as non-ideal 3-dB couplers, differences in optical path lengths, and variations in photonic crystal cavity resonances.
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2008-10-16
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
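The SpMV kernel in the compressed sparse row (CSR) format central to such studies can be sketched in a few lines; the optimizations the work evaluates (blocking, SIMD, prefetch, thread partitioning) are of course absent here:

```python
def csr_spmv(vals, cols, rowptr, x):
    """y = A @ x with A stored in compressed sparse row (CSR) form:
    vals/cols hold the nonzeros and their column indices, row by row,
    and rowptr[r]:rowptr[r+1] delimits the entries of row r."""
    return [sum(vals[k] * x[cols[k]] for k in range(rowptr[r], rowptr[r + 1]))
            for r in range(len(rowptr) - 1)]

# The 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form:
y = csr_spmv([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0])
```

The indirect access x[cols[k]] is what makes the kernel memory-bound and motivates the architecture-specific tuning the paper studies.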
NASA Astrophysics Data System (ADS)
Ma, Mingxing; Liu, Wenjin; Zhong, Minlin; Zhang, Hongjun; Zhang, Weiming
2005-01-01
In the active research area of particle-reinforced metal-matrix composite layers produced by laser cladding, in-situ reinforcing particles obtained by adding strong carbide-forming elements to the cladding powder have attracted growing attention for their unique advantages. The research has demonstrated that adding the strong carbide-forming element Ti to an Fe-C-Si-B cladding powder, with an optimized composition, yields a crack-free cladding coating with improved strength, toughness, and wear resistance. The coating's overall performance is best when its microstructure is hypoeutectic. The research further found that adding combinations of strong carbide-forming elements such as Ti+V, Ti+Zr, or V+Zr to the cladding coating improves its overall performance. All of the resulting cladding coatings have hypoeutectic microstructures and contain a large number of particulates, with average microhardness reaching HV0.2 700-1400. The research also found that the cladding coating shows less cracking after the addition of Ti+Zr.
Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states
NASA Astrophysics Data System (ADS)
Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.
2018-04-01
We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
Neuwald, Andrew F
2009-08-01
The patterns of sequence similarity and divergence present within functionally diverse, evolutionarily related proteins contain implicit information about corresponding biochemical similarities and differences. A first step toward accessing such information is to statistically analyze these patterns, which, in turn, requires that one first identify and accurately align a very large set of protein sequences. Ideally, the set should include many distantly related, functionally divergent subgroups. Because it is extremely difficult, if not impossible, for fully automated methods to align such sequences correctly, researchers often resort to manual curation based on detailed structural and biochemical information. However, multiply-aligning vast numbers of sequences in this way is clearly impractical. This problem is addressed using Multiply-Aligned Profiles for Global Alignment of Protein Sequences (MAPGAPS). The MAPGAPS program uses a set of multiply-aligned profiles both as a query to detect and classify related sequences and as a template to multiply-align the sequences. It relies on Karlin-Altschul statistics for sensitivity and on PSI-BLAST (and other) heuristics for speed. Using as input a carefully curated multiple-profile alignment for P-loop GTPases, MAPGAPS correctly aligned weakly conserved sequence motifs within 33 distantly related GTPases of known structure. By comparison, the sequence-based and structure-based alignment methods hmmalign and PROMALS3D misaligned at least 11 and 23 of these regions, respectively. When applied to a dataset of 65 million protein sequences, MAPGAPS identified, classified and aligned (with comparable accuracy) nearly half a million putative P-loop GTPase sequences. A C++ implementation of MAPGAPS is available at http://mapgaps.igs.umaryland.edu. Supplementary data are available at Bioinformatics online.
Biggs, Holly M.; Hertz, Julian T.; Munishi, O. Michael; Galloway, Renee L.; Marks, Florian; Saganda, Wilbrod; Maro, Venance P.; Crump, John A.
2013-01-01
Background The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. Methodology/Principal Findings We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1-year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated that the overall incidence of leptospirosis ranges from 75 to 102 cases per 100,000 persons annually. Conclusions/Significance We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as those used in this study, may be a feasible way of improving the availability of incidence estimates for neglected diseases, such as leptospirosis, in resource-constrained settings. PMID:24340122
Dividing Fractions: A Pedagogical Technique
ERIC Educational Resources Information Center
Lewis, Robert
2016-01-01
When dividing one fraction by a second fraction, invert (that is, flip) the second fraction, then multiply it by the first fraction. To multiply fractions, simply multiply across the denominators, and multiply across the numerators to get the resultant fraction. So by inverting, the division of fractions is turned into an easy multiplication of…
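The invert-and-multiply rule can be sketched with Python's standard `fractions` module (the specific numbers are illustrative):

```python
from fractions import Fraction

def divide_fractions(a: Fraction, b: Fraction) -> Fraction:
    # Invert (flip) the second fraction, then multiply:
    # (p/q) / (r/s) = (p/q) * (s/r)
    inverted = Fraction(b.denominator, b.numerator)
    return a * inverted

# 3/4 divided by 2/5 becomes 3/4 times 5/2 = 15/8
result = divide_fractions(Fraction(3, 4), Fraction(2, 5))
```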
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Chengzhou; Shi, Qiurong; Fu, Shaofang
Delicately engineering well-defined noble metal aerogels with favorable structural and compositional features is of vital importance for a wide range of applications. Here, we report a one-pot, facile method for synthesizing core-shell PdPb@Pd hydrogels/aerogels with multiply-twinned grains and an ordered intermetallic phase, using sodium hypophosphite as a multifunctional reducing agent. Due to the accelerated gelation kinetics induced by the increased reaction temperature and the specific function of sodium hypophosphite, the formation of the hydrogels completes within 4 h, far faster than in previous reports. Owing to their unique porous structure and favorable geometric and electronic effects, the optimized PdPb@Pd aerogels exhibit enhanced electrochemical performance towards ethylene glycol oxidation, with a mass activity 5.8 times higher than that of Pd black. The core-shell PdPb@Pd aerogels with multiply-twinned grains and an ordered intermetallic phase also exhibited good electrocatalytic activity towards ethanol oxidation.
NASA Astrophysics Data System (ADS)
Khatri, Kshitij; Pu, Yi; Klein, Joshua A.; Wei, Juan; Costello, Catherine E.; Lin, Cheng; Zaia, Joseph
2018-04-01
Analysis of singly glycosylated peptides has evolved to a point where large-scale LC-MS analyses can be performed at almost the same scale as proteomics experiments. While collisionally activated dissociation (CAD) remains the mainstay of bottom-up analyses, it performs poorly for the middle-down analysis of multiply glycosylated peptides. With improvements in instrumentation, electron-activated dissociation (ExD) modes are becoming increasingly prevalent for proteomics experiments and for the analysis of fragile modifications such as glycosylation. While these methods have been applied for glycopeptide analysis in isolated studies, an organized effort to compare their efficiencies, particularly for analysis of multiply glycosylated peptides (termed here middle-down glycoproteomics), has not been made. We therefore compared the performance of different ExD modes for middle-down glycopeptide analyses. We identified key features among the different dissociation modes and show that increased electron energy and supplemental activation provide the most useful data for middle-down glycopeptide analysis.
Analysis and computation of a least-squares method for consistent mesh tying
Day, David; Bochev, Pavel
2007-07-10
In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197–1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342–352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Finally, theoretical error estimates are illustrated by numerical experiments.
UWB delay and multiply receiver
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-09-10
An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
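The delay-and-multiply principle described above can be illustrated with an idealized, noise-free numerical sketch (the pulse spacing, unit pulse shape, and detection threshold are assumptions for illustration, not parameters from the patent):

```python
import numpy as np

# Assumed, idealized parameters: unit pulses every 100 samples, no noise.
SPACING = 100            # inter-pulse spacing of the transmitter
N = 1000

signal = np.zeros(N)
signal[::SPACING] = 1.0  # received pulse train

# Delay path: the splitter's delayed output, shifted by exactly one spacing
delayed = np.roll(signal, SPACING)

# Multiplier output: large only where a pulse coincides with the previous pulse
product = signal * delayed

# Peak detection: locations where the multiplier fires
peaks = np.flatnonzero(product > 0.5)
```

Because the delay equals the inter-pulse spacing, the multiplier output peaks at every pulse position, while uncorrelated interference (absent here) would average toward zero.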
[Multiple upconversion emissions in oxyfluoride ceramics].
Xiao, Si-guo; Yang, Xiao-liang; Liu, Zhen-wei
2003-02-01
Oxyfluoride ceramics with the host composition of SiO2 and PbF2 have been prepared. X-ray diffraction analysis of the ceramics revealed that fluoride-type beta-PbF2 solid solution regions are precipitated in the glass matrix. Rare earth ions in the beta-PbF2 solid solution show highly efficient upconversion performance due to the very small multi-phonon relaxation rates. Eight upconversion emission bands whose central wavelengths are 846, 803, 665, 549, 523, 487, 456 and 411 nm have been observed when the sample was excited with 930 nm diode light. Four possible energy transfer processes between Er3+ and Yb3+ populate the high-energy levels of Er3+ and give rise to the abundant upconversion luminescence bands.
Bacteriorhodopsin films for optical signal processing and data storage
NASA Technical Reports Server (NTRS)
Walkup, John F. (Principal Investigator); Mehrl, David J. (Principal Investigator)
1996-01-01
This report summarizes the research results obtained on NASA Ames Grant NAG 2-878 entitled 'Investigations of Bacteriorhodopsin Films for Optical Signal Processing and Data Storage.' Specifically, we performed research at Texas Tech University on applications of Bacteriorhodopsin films to both (1) dynamic spatial filtering and (2) holographic data storage. In addition, measurements of the noise properties of an acousto-optical matrix-vector multiplier built for NASA Ames by Photonic Systems Inc. were performed at NASA Ames' Photonics Laboratory. This research resulted in two papers presented at major optical data processing conferences and a journal paper which is to appear in APPLIED OPTICS. A new proposal for additional BR research has recently been submitted to NASA Ames Research Center.
Application of preconditioned alternating direction method of multipliers in depth from focal stack
NASA Astrophysics Data System (ADS)
Javidnia, Hossein; Corcoran, Peter
2018-03-01
Postcapture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect is totally dependent on the combination of the depth layers in the stack. The accuracy of the extended depth of field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers for depth from the focal stack and synthetic defocus application. In addition to its ability to provide high structural accuracy, the optimization function of the proposed framework can, in fact, converge faster and better than state-of-the-art methods. The qualitative evaluation has been done on 21 sets of focal stacks and the optimization function has been compared against five other methods. Later, 10 light field image sets have been transformed into focal stacks for quantitative evaluation purposes. Preliminary results indicate that the proposed framework has a better performance in terms of structural accuracy and optimization in comparison to the current state-of-the-art methods.
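A minimal, generic (unpreconditioned) ADMM iteration on a toy l1-regularized denoising problem illustrates the multiplier updates at the heart of such frameworks; the problem, penalty, and all parameter values below are illustrative stand-ins, not the article's actual depth-from-focal-stack formulation:

```python
import numpy as np

def soft_threshold(v, k):
    # Proximal operator of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso_denoise(b, lam, rho=1.0, iters=500):
    """Generic ADMM for min 0.5*||x - b||^2 + lam*||z||_1 subject to x = z."""
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)           # scaled Lagrange multiplier
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # x-update (quadratic prox)
        z = soft_threshold(x + u, lam / rho)    # z-update (l1 prox)
        u = u + x - z                           # multiplier (dual) update
    return z

# The minimizer of this separable problem is the soft-thresholded input.
result = admm_lasso_denoise(np.array([3.0, -0.5, 1.0]), lam=1.0)
```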
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Investigation of valley-resolved transmission through gate defined graphene carrier guiders
NASA Astrophysics Data System (ADS)
Cao, Shi-Min; Zhou, Jiao-Jiao; Wei, Xuan; Cheng, Shu-Guang
2017-04-01
Massless charge carriers in gate potentials modulate graphene quantum well transport in much the same way that an electromagnetic wave propagates in an optical fiber. A recent experiment by Kim et al (2016 Nat. Phys. 12 1022) reports valley symmetry preserved transport in a graphene carrier guider. Based on a tight-binding model, the valley-resolved transport coefficients are calculated with the method of scattering matrix theory. For a straight potential well, the valley-resolved conductance is quantized at (2n + 1) × 2e²/h for integer n. In the absence of disorder, intervalley scattering, occurring only at the two ends of the potential well, is weak. The propagating modes inside the potential well are analyzed with the help of the band structure and wave function distribution. The conductance is better preserved for a longer carrier guider. The quantized conductance is barely affected by boundaries of different types or by slightly changing the orientation of the carrier guider. For a curved model, a state with momentum close to the neutral point is more susceptible to boundary scattering, and the quantized conductance is ruined as well.
Gamma-ray blind beta particle probe
Weisenberger, Andrew G.
2001-01-01
An intra-operative beta particle probe is provided by placing a suitable photomultiplier tube (PMT), micro channel plate (MCP) or other electron multiplier device within a vacuum housing equipped with: 1) an appropriate beta particle permeable window; and 2) electron detection circuitry. Beta particles emitted in the immediate vicinity of the probe window will be received by the electron multiplier device and amplified to produce a detectable signal. Such a device is useful as a gamma-insensitive, intra-operative, beta particle probe in surgeries where the patient has been injected with a beta-emitting radiopharmaceutical. The method of use of such a device is also described, as is a position-sensitive version of the device.
Did we misunderstand how to calculate total stroke work in mitral regurgitation by echocardiography?
Shingu, Yasushige; Matsui, Yoshiro
2012-01-01
Total stroke work (TSW) is used for the estimation of cardiac efficiency in mitral regurgitation (MR). We should be cautious about the interpretation of this parameter, especially when it is assessed by non-invasive methods such as echocardiography. For the calculation of regurgitant stroke work, regurgitant volume is usually multiplied by left atrial (LA) pressure. However, by considering the left ventricular (LV) pressure-volume loop, it would be more appropriate to multiply regurgitant volume by LV pressure, not atrial pressure. We may therefore underestimate TSW when we use LA pressure for the estimation of regurgitant stroke work.
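The magnitude of the discrepancy can be illustrated numerically (all values below are assumed, for illustration only, not patient data from the article):

```python
# Assumed illustrative values (volume in mL, pressures in mmHg).
regurgitant_volume = 40.0   # regurgitant volume per beat
la_pressure = 15.0          # mean left atrial pressure
lv_pressure = 100.0         # mean left ventricular systolic pressure

# Conventional estimate: regurgitant volume times LA pressure
work_using_la = regurgitant_volume * la_pressure   # 600 mL*mmHg

# Pressure-volume-loop view: regurgitant volume times LV pressure
work_using_lv = regurgitant_volume * lv_pressure   # 4000 mL*mmHg

# Using LA pressure understates the regurgitant stroke work considerably.
underestimate_factor = work_using_lv / work_using_la
```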
User-friendly program for multitask analysis
NASA Astrophysics Data System (ADS)
Caporali, Sergio A.; Akladios, Magdy; Becker, Paul E.
2000-10-01
Research on lifting activities has led to the design of several useful tools for evaluating tasks that involve lifting and material handling. The National Institute for Occupational Safety and Health (NIOSH) has developed a single-task lifting equation. This formula has been frequently used as a guide in the field of ergonomics and material handling. While being much more complicated, the multi-task formula provides a more realistic analysis for the evaluation of lifting and material handling jobs. A user-friendly tool has been developed to assist professionals in the field of ergonomics in analyzing multi-task material handling jobs. The program allows for up to 10 different tasks to be evaluated. The program requires a basic understanding of the NIOSH lifting guidelines and the six multipliers that are involved in the analysis of each single task. These multipliers are: Horizontal Distance Multiplier (HM), Vertical Distance Multiplier (VM), Vertical Displacement Multiplier (DM), Frequency of lifting Multiplier (FM), Coupling Multiplier (CM), and the Asymmetry Multiplier (AM). Once a given job is analyzed, a researched list of recommendations is provided to the user in an attempt to reduce the potential risk factors that are associated with each task.
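For a single task, the revised NIOSH lifting equation multiplies a load constant by these six multipliers to obtain the recommended weight limit (RWL). A minimal sketch follows; the multiplier values in the example are assumed for illustration, and the table lookups that produce FM and CM in practice are omitted:

```python
def recommended_weight_limit(hm, vm, dm, fm, cm, am, lc=23.0):
    """Revised NIOSH lifting equation: RWL = LC * HM * VM * DM * FM * CM * AM.
    LC is the load constant (23 kg in the revised equation); each multiplier
    lies in (0, 1] and discounts the limit for a non-ideal task factor."""
    return lc * hm * vm * dm * fm * cm * am

# Ideal task: every multiplier equals 1, so the limit equals the load constant.
ideal = recommended_weight_limit(1, 1, 1, 1, 1, 1)

# Example single task (multiplier values assumed for illustration):
rwl = recommended_weight_limit(hm=0.8, vm=0.95, dm=0.9, fm=0.85, cm=1.0, am=0.9)
```

A multi-task analysis, as in the program described above, evaluates such an RWL (and the associated lifting index) for each of the component tasks.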
Distributed-Lagrange-Multiplier-based computational method for particulate flow with collisions
NASA Astrophysics Data System (ADS)
Ardekani, Arezoo; Rangel, Roger
2006-11-01
A Distributed-Lagrange-Multiplier-based computational method is developed for colliding particles in a solid-fluid system. A numerical simulation is conducted in two dimensions using the finite volume method. The entire domain is treated as a fluid, but the fluid in the particle domains satisfies a rigidity constraint. We present an efficient method for predicting the collision between particles. In earlier methods, a repulsive force was applied to the particles when their distance was less than a critical value. In this method, an impulsive force is computed. During the frictionless collision process between two particles, linear momentum is conserved while the tangential forces are zero. Thus, instead of satisfying a condition of rigid body motion for each particle separately, as done when particles are not in contact, both particles are rigidified together along their line of centers. Particles separate from each other when the impulsive force is less than zero, and after this time a rigidity constraint is satisfied for each particle separately. Grid independence is verified to ensure the accuracy of the numerical simulation. A comparison between this method and previous collision strategies is presented and discussed.
NASA Astrophysics Data System (ADS)
Nuber, André; Manukyan, Edgar; Maurer, Hansruedi
2014-05-01
Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need for separating events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method includes equilibrating the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. 
We decided to normalise the columns of the Jacobian based on their absolute column sum, but defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method proposed includes adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are in the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
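The first method, equilibrating the Jacobian columns by their absolute column sums with an upper threshold on the scaling factors, can be sketched as follows (the cap value and the toy Jacobian are illustrative assumptions):

```python
import numpy as np

def scale_columns(J, max_factor=10.0):
    """Equilibrate the columns of a Jacobian by their absolute column sums,
    capping the scaling factors so that near-zero sensitivities are not
    over-boosted (which would destabilise the inversion)."""
    col_sums = np.abs(J).sum(axis=0)
    # Nominal factor makes each column sum to 1; the cap bounds the boost.
    factors = np.minimum(1.0 / np.maximum(col_sums, 1e-300), max_factor)
    return J * factors, factors

# Toy Jacobian: a high-sensitivity column (surface waves) next to a
# low-sensitivity column (deep structure).
J = np.array([[4.0, 0.02],
              [4.0, 0.02]])
J_scaled, factors = scale_columns(J, max_factor=10.0)
```

The first column is scaled down to unit column sum, while the weak column would nominally be boosted by 25 but is capped at 10.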
Three dimensional time reversal optical tomography
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Alrubaiee, M.; Xu, M.; Gayen, S. K.
2011-03-01
A time reversal optical tomography (TROT) approach is used to detect and locate absorptive targets embedded in a highly scattering turbid medium, to assess its potential in breast cancer detection. The TROT experimental arrangement uses multi-source probing, multi-detector signal acquisition, and the Multiple-Signal-Classification (MUSIC) algorithm for target location retrieval. Light transport from multiple sources through the intervening medium with embedded targets to the detectors is represented by a response matrix constructed using experimental data. A TR matrix is formed by multiplying the response matrix by its transpose. The eigenvectors with leading non-zero eigenvalues of the TR matrix correspond to embedded objects. The approach was used to: (a) obtain the location and spatial resolution of an absorptive target as a function of its axial position between the source and detector planes; and (b) study variation in spatial resolution of two targets at the same axial position but different lateral positions. The target(s) were glass sphere(s) of diameter ~9 mm filled with ink (absorber) embedded in a 60 mm-thick slab of Intralipid-20% suspension in water with an absorption coefficient μa ~ 0.003 mm-1 and a transport mean free path lt ~ 1 mm at 790 nm, which emulate the average values of those parameters for human breast tissue. The spatial resolution and accuracy of target location depended on axial position and on target contrast relative to the background. Both targets could be resolved and located even when they were only 4 mm apart. The TROT approach is fast, accurate, and has the potential to be useful in breast cancer detection and localization.
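The TR-matrix construction and eigen-decomposition step can be sketched with synthetic data; the rank-1 target model, the noise scale, and the 8 × 8 dimensions below are assumptions for illustration, not the experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources = n_detectors = 8

# Hypothetical response matrix: weak background transport plus a rank-1
# perturbation contributed by a single absorptive target.
background = 0.01 * rng.standard_normal((n_detectors, n_sources))
target = np.outer(rng.standard_normal(n_detectors),
                  rng.standard_normal(n_sources))
K = background + target

# Time-reversal matrix: the response matrix multiplied by its transpose.
T = K @ K.T

# The eigenvector with the leading non-zero eigenvalue maps to the target.
eigenvalues, eigenvectors = np.linalg.eigh(T)   # ascending order
```

With one embedded target, a single eigenvalue dominates the spectrum; in MUSIC, the complementary (noise) subspace is then scanned to localize the target.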
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix-vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface + open multiprocessing hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
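The serial kernel being optimized, SpMV in compressed sparse row (CSR) format, can be sketched as follows; this is a plain reference implementation for illustration, not the paper's distributed, symmetry-exploiting code:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector multiply y = A @ x with A stored in CSR format.
    values/col_idx hold the nonzeros row by row; row_ptr[i]:row_ptr[i+1]
    delimits the nonzeros of row i."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example matrix:  [[2, 0, 1],
#                       [0, 3, 0],
#                       [4, 0, 5]]
values = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0]))
```

The irregular, indirect access to `x` is what makes SpMV memory-bound and motivates the blocking, symmetry, and communication-hiding optimizations studied in the paper.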
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
Nohara, Ryuki; Endo, Yui; Murai, Akihiko; Takemura, Hiroshi; Kouchi, Makiko; Tada, Mitsunori
2016-08-01
Individual human models are usually created by direct 3D scanning or by deforming a template model according to measured dimensions. In this paper, we propose a method to estimate all the necessary dimensions (the full set) for human model individualization from a small number of measured dimensions (a subset) and a human dimension database. For this purpose, we solve a multiple regression equation from the dimension database, with the full-set dimensions as the objective variables and the subset dimensions as the explanatory variables. Thus, the full-set dimensions are obtained by simply multiplying the subset dimensions by the coefficient matrix of the regression equation. We verified the accuracy of our method by imputing hand, foot, and whole body dimensions from their dimension databases. Leave-one-out cross validation is employed in this evaluation. The mean absolute errors (MAE) between the measured and the estimated dimensions are computed from 4 dimensions (hand length, breadth, middle finger breadth at proximal, and middle finger depth at proximal) in the hand, 3 dimensions (foot length, breadth, and lateral malleolus height) in the foot, and 1 dimension (height) and weight in the whole body. The average MAE of non-measured dimensions was 4.58% in the hand, 4.42% in the foot, and 3.54% in the whole body, while that of measured dimensions was 0.00%.
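The imputation scheme, fitting a multiple regression on the database and then imputing by a single matrix multiply, can be sketched on synthetic data; the subject count, dimension counts, value ranges, and noise level below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "dimension database": 200 subjects, a 2-dimension measured subset,
# and a 6-dimension full set that is (noisily) linear in the subset.
n_subjects, n_subset, n_full = 200, 2, 6
true_coef = rng.uniform(0.5, 1.5, size=(n_subset + 1, n_full))
subset = rng.uniform(150.0, 200.0, size=(n_subjects, n_subset))
design = np.hstack([subset, np.ones((n_subjects, 1))])  # intercept column
full = design @ true_coef + rng.normal(0.0, 0.5, size=(n_subjects, n_full))

# Fit the regression: full-set dimensions as objectives, subset as explanatory.
coef, *_ = np.linalg.lstsq(design, full, rcond=None)

# Imputing a new subject's full set is then a single matrix multiply.
new_subject = np.array([[170.0, 185.0, 1.0]])  # two measurements + intercept
estimated_full = new_subject @ coef
```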
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Hong, Tianzhen
We present that urban-scale building energy modeling (UBEM)—using building modeling to understand how a group of buildings will perform together—is attracting increasing attention in the energy modeling field. Unlike modeling a single building, which will use detailed information, UBEM generally uses existing building stock data consisting of high-level building information. This study evaluated the impacts of three zoning methods and the use of floor multipliers on the simulated energy use of 940 office and retail buildings in three climate zones using City Building Energy Saver. The first zoning method, OneZone, creates one thermal zone per floor using the target building's footprint. The second zoning method, AutoZone, splits the building's footprint into perimeter and core zones. A novel, pixel-based automatic zoning algorithm is developed for the AutoZone method. The third zoning method, Prototype, uses the U.S. Department of Energy's reference building prototype shapes. Results show that simulated source energy use of buildings with the floor multiplier is marginally higher, by up to 2.6%, than that of models representing each floor explicitly, which take two to three times longer to run. Compared with the AutoZone method, the OneZone method results in decreased thermal loads and smaller equipment capacities: 15.2% smaller fan capacity, 11.1% smaller cooling capacity, 11.0% smaller heating capacity, 16.9% less heating load, and 7.5% less cooling load. Source energy use differences range from -7.6% to 5.1%. When comparing the Prototype method with the AutoZone method, source energy use differences range from -12.1% to 19.0%, and larger ranges of differences are found for the thermal loads and equipment capacities. This study demonstrated that zoning methods have a significant impact on the simulated energy use of UBEM.
Finally, one recommendation resulting from this study is to use the AutoZone method with the floor multiplier to obtain accurate results while balancing the simulation run time for UBEM.
Chen, Yixing; Hong, Tianzhen
2018-02-20
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
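The consensus structure that DADMM and DQM exploit can be illustrated, in a much-simplified and centralized form, with scaled ADMM updates for averaging local data. All values are illustrative; note that when the local objectives are quadratic, as here, the exact x-update coincides with the quadratic approximation DQM would make:

```python
import numpy as np

# Consensus problem: minimize sum_i (x_i - a_i)^2 / 2  subject to  x_i = z.
a = np.array([1.0, 4.0, 7.0])   # local data; the global minimizer is mean(a)
rho = 1.0                        # ADMM penalty parameter
x = np.zeros(3)                  # local primal variables
u = np.zeros(3)                  # scaled dual variables (Lagrange multipliers)
z = 0.0                          # consensus variable

for _ in range(100):
    x = (a + rho * (z - u)) / (1 + rho)   # local (exact) x-minimizations
    z = np.mean(x + u)                     # consensus update
    u = u + x - z                          # multiplier update

print(round(z, 6))  # converges to mean(a) = 4.0
```

In the decentralized setting each node holds its own `x_i` and multipliers and the averaging is replaced by neighbor-to-neighbor exchanges; DQM additionally replaces the exact x-minimization by a single quadratically approximated step.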
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Weixiong; Wang, Yaqi; DeHart, Mark D.
2016-09-01
In this report, we present a new upwinding scheme for the multiscale capability in Rattlesnake, the MOOSE-based radiation transport application. Compared with the initial implementation of the multiscale capability, which utilizes Lagrange multipliers to impose strong continuity of the angular flux on the interfaces between subdomains, this scheme does not require a particular domain partitioning. The upwinding scheme introduces discontinuity of the angular flux on the subdomain interfaces and resembles the classic upwinding technique developed for solving the first-order transport equation with the discontinuous finite element method (DFEM). Because this scheme restores the causality of radiation streaming on the interfaces, significant accuracy improvement can be observed with a moderate increase in the degrees of freedom compared with the continuous method over the entire solution domain. A hybrid SN-PN scheme is implemented and tested with this upwinding scheme. Numerical results show that the angular smoothing required by the Lagrange multiplier method is not necessary for the upwinding scheme.
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
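For the single-constraint case mentioned above, the optimality criterion and its Lagrange multiplier often have closed forms when the objective and constraint are separable. A minimal sketch with made-up data (minimize a separable "flexibility" objective at fixed total weight):

```python
import numpy as np

# Minimize f(x) = sum(c_i / x_i)  subject to  sum(w_i x_i) = W  (x_i > 0).
# Stationarity of the Lagrangian gives the optimality criterion
#   c_i / x_i^2 = lambda * w_i   =>   x_i = sqrt(c_i / (lambda * w_i)),
# and substituting into the constraint fixes lambda in closed form.
c = np.array([2.0, 8.0, 18.0])   # illustrative "member flexibility" constants
w = np.array([1.0, 1.0, 2.0])    # illustrative weight coefficients
W = 10.0                          # weight budget

sqrt_lam = np.sum(np.sqrt(c * w)) / W
x = np.sqrt(c / w) / sqrt_lam
lam = sqrt_lam ** 2

assert np.isclose(np.dot(w, x), W)       # constraint satisfied exactly
assert np.allclose(c / x**2, lam * w)    # optimality criterion satisfied
print(x)
```

With multiple constraints no closed form exists for the multipliers, which is why the multiple-constraint case in the paper requires a more complex (iterative) multiplier evaluation.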
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Holak; Lim, Youbong; Choe, Wonho, E-mail: wchoe@kaist.ac.kr
2014-10-06
Plasma plume and thruster performance characteristics associated with multiply charged ions in a cylindrical type Hall thruster (CHT) and an annular type Hall thruster are compared under identical conditions such as channel diameter, channel depth, and propellant mass flow rate. A high propellant utilization in a CHT is caused by a high ionization rate, which brings about a large fraction of multiply charged ions. The ion currents and utilizations differ substantially due to the presence of multiply charged ions. A high multiply charged ion fraction and a high ionization rate in the CHT result in a higher specific impulse, thrust, and discharge current.
Pierce, Paul E.
1986-01-01
A hardware processor is disclosed which in the described embodiment is a memory mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it, with or without addition or subtraction with a previously stored number. It can also, on a single read command, automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high level address decoder determines when the multiplier processor is being addressed, and first and second low level address decoders generate control signals. In addition, certain low order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.
VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate
NASA Astrophysics Data System (ADS)
Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab
2017-08-01
The multiplier is one of the essential components in the digital world, used in digital signal processing, microprocessors, quantum computing, and arithmetic units generally. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aimed to design a 2×2 bit fault tolerance multiplier based on reversible logic gates with low power consumption and high performance. The design has been implemented using 90nm Complementary Metal Oxide Semiconductor (CMOS) technology in Synopsys Electronic Design Automation (EDA) tools. The multiplier architecture is implemented using reversible logic gates. The fault tolerance multiplier uses the combination of three reversible logic gates, the Double Feynman gate (F2G), the New Fault Tolerant (NFT) gate, and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (approximately 0.067 mm²). The design achieved a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault tolerance multiplier achieved low power consumption and high performance, and its fault tolerance capabilities make it suitable for modern computing applications.
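The defining property of the gates used above is reversibility: each gate is a bijection on its input bits. A small sketch using two standard reversible gates from the literature, the Double Feynman gate and the Toffoli gate (the paper's full F2G/NFT/IG composition is not reproduced here):

```python
from itertools import product

def feynman2(a, b, c):
    """Double Feynman gate F2G: (A, B, C) -> (A, A xor B, A xor C)."""
    return (a, a ^ b, a ^ c)

def toffoli(a, b, c):
    """Toffoli gate: (A, B, C) -> (A, B, C xor AB); gives the AND for partial products."""
    return (a, b, c ^ (a & b))

# A gate is reversible iff it maps distinct inputs to distinct outputs.
for gate in (feynman2, toffoli):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=3)}
    assert len(outputs) == 8   # bijective on 3 bits

# A 2x2 multiplier needs partial products a_i * b_j; with a zero ancilla,
# the Toffoli gate computes one reversibly:
assert toffoli(1, 1, 0)[2] == 1
print("gates verified reversible")
```

Reversible realizations like this underlie the low-power argument: no input information is destroyed by any gate.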
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance in terms of convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the limitations of DAS, providing higher image quality, but its resolution improvement falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
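The DAS and DMAS stages can be sketched on already-delayed channel signals: DAS sums the channels, while DMAS sums signed square roots of pairwise products, which suppresses incoherent noise. A toy example with synthetic data (the EIBMV combination is not reproduced here):

```python
import numpy as np

def das(sigs):
    """Delay-and-sum: plain sum of (already delayed) channel signals."""
    return sigs.sum(axis=0)

def dmas(sigs):
    """Delay-multiply-and-sum: signed square roots of pairwise channel products."""
    n = sigs.shape[0]
    out = np.zeros(sigs.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            p = sigs[i] * sigs[j]
            out += np.sign(p) * np.sqrt(np.abs(p))   # dimensionality-preserving product
    return out

# Toy data: 4 channels carrying a coherent signal plus incoherent noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
coherent = np.sin(2 * np.pi * 5 * t)
sigs = coherent + 0.3 * rng.normal(size=(4, 200))
print(das(sigs).shape, dmas(sigs).shape)
```

Coherent components survive every pairwise product with a consistent sign, while noise products average toward zero, which is the origin of DMAS's lower sidelobes.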
Retrieving Storm Electric Fields from Aircraft Field Mill Data. Part 1; Theory
NASA Technical Reports Server (NTRS)
Koshak, W. J.
2006-01-01
It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers) and also helps improve absolute calibration. Additionally, this paper introduces an alternate way of performing the absolute calibration of an aircraft that has some benefits over conventional analyses. It is accomplished by using the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
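The "standard Lagrange multiplier optimization problem" referred to above is, in its generic form, a least-squares fit subject to equality side constraints, solvable via the KKT linear system. A minimal sketch with synthetic matrices (the actual mill-response matrices and constraints of the aircraft problem are not reproduced):

```python
import numpy as np

# Generic problem: minimize ||A x - b||^2  subject to  C x = d,
# via the Lagrangian L(x, lam) = ||A x - b||^2 + lam^T (C x - d).
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 3))          # stand-in for a mill response matrix
b = rng.normal(size=8)               # stand-in for mill measurements
C = np.array([[1.0, 1.0, 1.0]])      # one illustrative side constraint: sum(x) = 1
d = np.array([1.0])

# Stationarity + constraint give the KKT system in [x; lam]:
K = np.block([[2 * A.T @ A, C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([2 * A.T @ b, d])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:3], sol[3:]
assert np.isclose(C @ x, d).all()    # side constraint holds exactly
print(x)
```

In the retrieval described above, the side constraints encode user-supplied physics and the formulation simultaneously removes the aircraft-charge contribution from the fit.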
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part I: Theory
NASA Technical Reports Server (NTRS)
Koshak, W. J.
2005-01-01
It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It also allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers). Additionally, this paper introduces a novel way of performing the absolute calibration of an aircraft that has several benefits over conventional analyses. In the new approach, absolute calibration is completed by inspecting the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped element models do a poor job of predicting THz frequency performance. This paper describes a large signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle field Monte Carlo simulation, with ohmic and Schottky barrier boundary conditions included, that has been combined with a fixed point solution for the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.
Mori, Mari; Hamada, Atsumi; Mori, Hideki; Yamori, Yukio; Tsuda, Kinsuke
2012-01-01
This 2-week interventional study involved a randomized allocation of subjects into three groups: Group A (daily ingestion of 350 g vegetables cooked without water using multi-ply [multilayer-structured] cookware), Group B (daily ingestion of 350 g vegetables; ordinary cookware) and Group C (routine living). Before and after intervention, each subject underwent a health examination with 24-h urine sampling. Blood vitamin C significantly increased after intervention from the baseline in Group A (P < 0.01) and Group B (P < 0.05). β-Carotene levels also increased significantly after intervention in Group A (P < 0.01) and Group B (P < 0.01). Oxidized low-density lipoprotein decreased significantly after intervention in Group A (P < 0.01). In Group A, 24-h urinary potassium excretion increased significantly (P < 0.01) and the 24-h urinary sodium (Na)/K ratio improved significantly (P < 0.05) after intervention. In conclusion, a cooking method modification with multi-ply cookware improved absorption of nutrients from vegetables and enhanced effective utilization of the antioxidant potentials of vegetable nutrients. PMID:22229802
Propagation of terahertz pulses in random media.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2004-02-15
We describe measurements of single-cycle terahertz pulse propagation in a random medium. The unique capabilities of terahertz time-domain spectroscopy permit the characterization of a multiply scattered field with unprecedented spatial and temporal resolution. With these results, we can develop a framework for understanding the statistics of broadband laser speckle. Also, the ability to extract information on the phase of the field opens up new possibilities for characterizing multiply scattered waves. We illustrate this with a simple example, which involves computing a time-windowed temporal correlation between fields measured at different spatial locations. This enables the identification of individual scattering events, and could lead to a new method for imaging in random media.
NASA Astrophysics Data System (ADS)
Amengonu, Yawo H.; Kakad, Yogendra P.
2014-07-01
Quasivelocity techniques such as Maggi's and Boltzmann-Hamel's equations eliminate Lagrange multipliers from the beginning as opposed to the Euler-Lagrange method where one has to solve for the n configuration variables and the multipliers as functions of time when there are m nonholonomic constraints. Maggi's equation produces n second-order differential equations of which (n-m) are derived using (n-m) independent quasivelocities and the time derivative of the m kinematic constraints which add the remaining m second order differential equations. This technique is applied to derive the dynamics of a differential mobile robot and a controller which takes into account these dynamics is developed.
Oral fluid vs. Urine Analysis to Monitor Synthetic Cannabinoids and Classic Drugs Recent Exposure
Blandino, Vincent; Wetzel, Jillian; Kim, Jiyoung; Haxhi, Petrit; Curtis, Richard; Concheiro, Marta
2018-01-01
Background Urine is a common biological sample to monitor recent drug exposure, and oral fluid is an alternative matrix of increasing interest in clinical and forensic toxicology. Limited data are available about oral fluid vs. urine drug disposition, especially for synthetic cannabinoids. Objective To compare urine and oral fluid as biological matrices to monitor recent drug exposure among HIV-infected homeless individuals. Methods Seventy matched urine and oral fluid samples were collected from 13 participants. Cannabis, amphetamines, benzodiazepines, cocaine and opiates were analyzed in urine by the enzyme-multiplied-immunoassay-technique and in oral fluid by liquid chromatography tandem mass spectrometry (LC-MSMS). Eleven synthetic cannabinoids were analyzed in both urine and oral fluid by LC-MSMS. Results Five oral fluid samples were positive for AB-FUBINACA. In urine, 4 samples tested positive for the synthetic cannabinoids PB-22, 5-Fluoro-PB-22, AB-FUBINACA, and the metabolites UR-144 5-pentanoic acid and UR-144 4-hydroxypentyl. In only one case, oral fluid and urine results matched, both specimens being AB-FUBINACA positive. For cannabis, 40 samples tested positive in urine and 30 in oral fluid (85.7% match). For cocaine, 37 urine and 52 oral fluid samples were positive (75.7% match). Twenty-four urine samples were positive for opiates, and 25 in oral fluid (81.4% match). For benzodiazepines, 23 samples were positive in urine and 25 in oral fluid (85.7% match). Conclusion/Discussion These results offer new information about drug disposition between urine and oral fluid. Oral fluid is a good alternative matrix to urine for monitoring recent cannabis, cocaine, opiate and benzodiazepine use; however, synthetic cannabinoids showed mixed results. PMID:29173162
NULL Convention Floating Point Multiplier
Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
NULL convention floating point multiplier.
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Wagner, Pablo; Standard, Shawn C; Herzenberg, John E
The multiplier method (MM) is frequently used to predict limb-length discrepancy and timing of epiphysiodesis. The traditional MM uses complex formulae and requires a calculator. A mobile application was developed in an attempt to simplify and streamline these calculations. We compared the accuracy and speed of the traditional pencil and paper technique with those of the Multiplier App (MA). After attending a training lecture and a hands-on workshop on the MM and MA, 30 resident surgeons were asked to apply the traditional MM and the MA at different weeks of their rotations. They were randomized as to the method they applied first. Subjects performed calculations for 5 clinical exercises that involved congenital and developmental limb-length discrepancies and timing of epiphysiodesis. The amount of time required to complete the exercises and the accuracy of the answers were evaluated for each subject. The test subjects answered 60% of the questions correctly using the traditional MM and 80% of the questions correctly using the MA (P=0.001). The average amount of time to complete the 5 exercises with the MM and MA was 22 and 8 minutes, respectively (P<0.0001). Several reports state that the traditional MM is quick and easy to use. Nevertheless, even in the most experienced hands, performing the calculations in clinical practice can be time-consuming. Errors may result from choosing the wrong formulae and from performing the calculations by hand. Our data show that the MA is simpler, more accurate, and faster than the traditional MM from a practical standpoint. Level II.
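The core arithmetic of the multiplier method is a single scaling: predicted length at skeletal maturity equals current length times an age- and sex-specific multiplier M, and for congenital discrepancies the discrepancy scales by the same M. A sketch of that arithmetic — the multiplier value below is illustrative only, not taken from the published multiplier tables:

```python
# Hypothetical multiplier for a given age/sex (NOT a table value).
M = 1.4

# Predicted femur length at maturity from the current measurement:
current_femur_cm = 38.0
predicted_cm = current_femur_cm * M

# Congenital limb-length discrepancy scales with the same multiplier:
current_lld_cm = 3.0
predicted_lld_cm = current_lld_cm * M

print(round(predicted_cm, 2), round(predicted_lld_cm, 2))
```

The study's point is that while each such step is trivial, chaining the formulae (and selecting the right multiplier) by hand is where the errors and the 22-minute completion times arise.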
The general ventilation multipliers calculated by using a standard Near-Field/Far-Field model.
Koivisto, Antti J; Jensen, Alexander C Ø; Koponen, Ismo K
2018-05-01
In conceptual exposure models, the transmission of pollutants in an imperfectly mixed room is usually described with general ventilation multipliers. This is the approach used in the Advanced REACH Tool (ART) and Stoffenmanager® exposure assessment tools. The multipliers used in these tools were reported by Cherrie (1999; http://dx.doi.org/10.1080/104732299302530 ) and Cherrie et al. (2011; http://dx.doi.org/10.1093/annhyg/mer092 ), who developed them by positing input values for a standard Near-Field/Far-Field (NF/FF) model and then calculating ratios between NF and FF concentrations. This study revisited the calculations that produce the multipliers used in ART and Stoffenmanager and found that the recalculated general ventilation multipliers were up to 2.8 times (280%) higher than the values reported by Cherrie (1999), and that the recalculated NF and FF multipliers were up to 1.2 times (17%) smaller for 1-hr exposure and up to 1.7 times (41%) smaller for 8-hr exposure than the values reported by Cherrie et al. (2011). Considering that Stoffenmanager and the ART are classified as higher-tier regulatory exposure assessment tools, the errors in the general ventilation multipliers should not be ignored. We recommend revising the general ventilation multipliers. A better solution is to integrate the NF/FF model into Stoffenmanager and the ART.
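The standard NF/FF model underlying these multipliers is a two-compartment mass balance: emission G into a small near-field volume, interzonal airflow β between near and far field, and general ventilation Q removing air from the far field. A minimal sketch with illustrative parameter values (not those used by Cherrie):

```python
# Two-compartment Near-Field/Far-Field model, explicit Euler integration.
G = 10.0        # mg/min emitted into the near field (illustrative)
beta = 5.0      # m^3/min interzonal airflow (illustrative)
Q = 20.0        # m^3/min general ventilation (illustrative)
V_nf, V_ff = 1.0, 99.0   # m^3 compartment volumes (illustrative)

dt, T = 0.01, 480.0      # 8-h simulation
c_nf = c_ff = 0.0
for _ in range(int(T / dt)):
    dc_nf = (G + beta * (c_ff - c_nf)) / V_nf
    dc_ff = (beta * (c_nf - c_ff) - Q * c_ff) / V_ff
    c_nf += dt * dc_nf
    c_ff += dt * dc_ff

# Steady state: C_ff = G/Q and C_nf = G/Q + G/beta; ratios of NF and FF
# concentrations of this kind are what the multipliers are derived from.
print(round(c_nf, 3), round(c_ff, 3))
```

Recomputing the NF/FF concentration ratios from the posited inputs, as this study did, is exactly a calculation of this form carried out for each exposure scenario.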
Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D
2008-05-01
Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters, increasing the computation speed by up to a factor of four per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photo-multiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
Study on diagnosis of micro-biomechanical structure using optical coherence tomography
NASA Astrophysics Data System (ADS)
Saeki, Souichi; Hashimoto, Youhei; Saito, Takashi; Hiro, Takafumi; Matsuzaki, Masunori
2007-02-01
Acute coronary syndromes, e.g. myocardial infarctions, are caused by the rupture of unstable plaques on coronary arteries. The stability of a plaque, which depends on the biomechanical properties of its fibrous cap, must therefore be diagnosed. Recently, Optical Coherence Tomography (OCT) has been developed as a cross-sectional imaging method for microstructural biological tissue with high resolution (1~10 μm). A multi-functional OCT system, e.g. one that estimates biomechanical characteristics, is a promising prospect. It has been difficult, however, to estimate biomechanical characteristics, because OCT images consist of speckle patterns produced by light back-scattered from tissue. In this study, we present Optical Coherence Straingraphy (OCS), built on an OCT system, which can diagnose the tissue strain distribution. It is based on a Recursive Cross-correlation (RC) technique, which provides a displacement vector distribution with high resolution. Furthermore, Adjacent Cross-correlation Multiplication (ACM) is introduced as a speckle noise reduction method. Multiplying adjacent correlation maps eliminates anomalies caused by speckle noise and thus enhances the S/N in the determination of the maximum correlation coefficient. Error propagation is further prevented by the recursive algorithm (RC). In addition, spatial vector interpolation by a local least squares method is introduced to remove erroneous vectors and smooth the vector distribution. The method was applied numerically to compressed elastic heterogeneous tissue samples to verify its accuracy. Consequently, it was quantitatively confirmed that the accuracy of the displacement vectors and strain matrix components is enhanced compared with the conventional method. The proposed method was therefore validated by the identification of different elastic objects with nearly the resolution defined by the optical system.
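The ACM idea — multiply the correlation maps of adjacent windows so the shared (true) displacement peak survives while speckle-noise anomalies are suppressed — can be sketched in 1D with synthetic data (window sizes and search ranges below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(size=300)       # synthetic speckle-like reference trace
shift = 7
shifted = np.roll(signal, shift)    # "deformed" trace: uniform displacement

def corr_map(ref, cur, center, win=32, search=15):
    """Normalized cross-correlation of one window over a displacement search range."""
    r = ref[center:center + win]
    out = np.empty(2 * search + 1)
    for k, d in enumerate(range(-search, search + 1)):
        c = cur[center + d:center + d + win]
        out[k] = np.dot(r - r.mean(), c - c.mean()) / (r.std() * c.std() * win)
    return out

# Adjacent Cross-correlation Multiplication: neighbouring windows share the
# true peak, so their product suppresses uncorrelated spurious peaks.
m1 = corr_map(signal, shifted, 100)
m2 = corr_map(signal, shifted, 132)
acm = m1 * m2
disp = np.argmax(acm) - 15
print(disp)  # recovers the imposed shift
```

In the full 2D method this per-window estimate feeds the recursive (RC) refinement, with strain obtained from spatial gradients of the displacement field.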
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xing; Lin, Guang
To model the sedimentation of the red blood cell (RBC) in a square duct and a circular pipe, the recently developed technique derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method (LBM-DLM/FD) is extended to employ the mesoscopic network model for simulations of the sedimentation of the RBC in flow. The flow is simulated by the lattice Boltzmann method with a strong magnetic body force, while the network model is used for modeling RBC deformation. The fluid-RBC interactions are enforced by the Lagrange multiplier. The sedimentation of the RBC in a square duct and a circular pipe is simulated, revealing the capacity of the current method for modeling the sedimentation of RBCs in various flows. Numerical results illustrate that the terminal settling velocity increases with the increment of the exerted body force. The deformation of the RBC has a significant effect on the terminal settling velocity due to the change of the frontal area: the larger the exerted force, the smaller the frontal area and the larger the deformation of the RBC.
Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development
NASA Astrophysics Data System (ADS)
Aspinall, Michael D.; Jones, Ashley R.
2018-01-01
Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign, to correct for significant performance variances between detectors within the array. These differences exist because of the low tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain, observing the effect this has on the pulse-height spectrum of a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high-voltage control. However, development of such algorithms requires access to the hardware, multiple detectors, and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse-height data acquired from a single scintillation detector. Such a function could be used to generate extensive and varied pulse-height data for integration-testing algorithms that automatically response-match multiple detectors through pulse-height spectrum analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers, and a calibration source. Results show a good match between the real and regenerated pulse-height data. The function has also been used successfully to develop auto-calibration algorithms.
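The rescaling step can be sketched with the common power-law model of photo-multiplier gain, G(V) ∝ V**kn. This is a generic sketch under that assumed model, not the authors' function; the exponent value 7.2 below is purely illustrative (a fitted parameter in practice).

```python
import numpy as np

def gain_ratio(v_new, v_old, kn=7.2):
    """Ratio of PMT gains at two bias voltages under the assumed
    power-law model G(V) ∝ V**kn (kn is a fitted, device-specific exponent)."""
    return (v_new / v_old) ** kn

def regenerate_pulse_heights(heights, v_old, v_new, kn=7.2):
    """Rescale pulse amplitudes recorded at bias v_old as if the
    photo-multiplier had been operated at bias v_new: every pulse
    height scales with the gain ratio."""
    return np.asarray(heights, dtype=float) * gain_ratio(v_new, v_old, kn)
```

Because the gain is a steep power of the bias, a few-percent voltage change shifts the whole pulse-height spectrum substantially, which is what makes response matching by bias adjustment practical.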
Auditing the multiply-related concepts within the UMLS
Mougin, Fleur; Grabar, Natalia
2014-01-01
Objective This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. Methods We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain the reason why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. Results At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven of them mainly present multiple non-homogeneous relations between terms. Analysis at the term level also shows that only in a quarter of cases are the source vocabularies responsible for the presence of multiply-related concepts in the UMLS. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Discussion Manual analysis was useful to explain the conceptualization difference in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms. PMID:24464853
Chase, R.L.
1963-05-01
An electronic fast multiplier circuit utilizing a transistor controlled voltage divider network is presented. The multiplier includes a stepped potentiometer in which solid state or transistor switches are substituted for mechanical wipers in order to obtain electronic switching that is extremely fast as compared to the usual servo-driven mechanical wipers. While this multiplier circuit operates as an approximation and in steps to obtain a voltage that is the product of two input voltages, any desired degree of accuracy can be obtained with the proper number of increments and adjustment of parameters. (AEC)
Solvers for $\mathcal{O}(N)$ Electronic Structure in the Strong Scaling Limit
Bock, Nicolas; Challacombe, William M.; Kale, Laxmikant
2016-01-26
Here we present a hybrid OpenMP/Charm++ framework for solving the $\mathcal{O}(N)$ self-consistent-field eigenvalue problem with parallelism in the strong scaling regime, $P \gg N$, where $P$ is the number of cores and $N$ is a measure of system size, i.e., the number of matrix rows/columns, basis functions, atoms, molecules, etc. This result is achieved with a nested approach to spectral projection and the sparse approximate matrix multiply [Bock and Challacombe, SIAM J. Sci. Comput., 35 (2013), pp. C72--C98], and involves a recursive, task-parallel algorithm, often employed by generalized $N$-body solvers, for the occlusion and culling of negligible products in the case of matrices with decay. Employing classic technologies associated with generalized $N$-body solvers, including overdecomposition, recursive task parallelism, orderings that preserve locality, and persistence-based load balancing, we obtain scaling beyond hundreds of cores per molecule for small water clusters ([H$_2$O]$_N$, $N \in \{30, 90, 150\}$, $P/N \approx \{819, 273, 164\}$) and find support for an increasingly strong scalability with increasing system size $N$.
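The culling of negligible products can be illustrated with a toy recursive multiply: skip any sub-product whose norm bound falls below a tolerance. This is a dense-NumPy sketch of the SpAMM-style recursion (power-of-two sizes only), not the paper's task-parallel implementation.

```python
import numpy as np

def spamm(a, b, tau=1e-8):
    """Recursive sparse approximate matrix-matrix multiply (sketch):
    quadtree recursion that culls sub-products whose Frobenius-norm
    bound ||A_ij|| * ||B_jk|| falls below the tolerance tau."""
    n = a.shape[0]
    if np.linalg.norm(a) * np.linalg.norm(b) < tau:
        return np.zeros((n, n))          # negligible product: occluded/culled
    if n == 1:
        return a * b                      # leaf block
    h = n // 2
    c = np.zeros((n, n))
    for i in (0, 1):                      # recurse over quadrant triples
        for k in (0, 1):
            for j in (0, 1):
                c[i*h:(i+1)*h, k*h:(k+1)*h] += spamm(
                    a[i*h:(i+1)*h, j*h:(j+1)*h],
                    b[j*h:(j+1)*h, k*h:(k+1)*h], tau)
    return c
```

With tau = 0 the recursion reproduces the exact product; for matrices with decay, a small positive tau prunes most of the recursion tree, which is the source of the $\mathcal{O}(N)$ cost.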
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied by a spiral phase mask (SPM). Next, the product is spiral-phase transformed with a particular spiral phase function and, further, EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated over a distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel-propagated output to obtain three decomposed matrices: a diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and the modulated unitary matrices to obtain the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique, which is robust against noise attacks, specific attacks, and brute-force attacks. Simulation results are presented in support of the proposed idea.
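The final SVD stage can be sketched as: decompose, modulate the two unitary factors element-wise with phase masks, recompose. This is an illustrative NumPy sketch, not the authors' code; with all-ones masks the recomposition reproduces the input exactly, which serves as a sanity check.

```python
import numpy as np

def svd_encrypt(field, mask1, mask2):
    """Decompose the complex field by SVD, modulate the two unitary
    matrices element-wise with (phase-mask) arrays, and perform the
    inverse SVD to obtain the encrypted image."""
    u, s, vh = np.linalg.svd(field)
    return (u * mask1) @ np.diag(s) @ (vh * mask2)
```

Without the correct masks the recomposed field bears no resemblance to the original, which is what makes the masks (together with the EMD key Z2) part of the key space.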
Vector-matrix-quaternion, array and arithmetic packages: All HAL/S functions implemented in Ada
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Kwong, David D.
1986-01-01
The HAL/S avionics programmers have enjoyed a variety of tools built into a language tailored to their special requirements. Ada is designed for a broader group of applications. Rather than providing built-in tools, Ada provides the elements with which users can build their own. Standard avionic packages remain to be developed. These must enable programmers to code in Ada as they have coded in HAL/S. The packages under development at JPL will provide all of the vector-matrix, array, and arithmetic functions described in the HAL/S manuals. In addition, the linear algebra package will provide all of the quaternion functions used in Shuttle steering and Galileo attitude control. Furthermore, using Ada's extensibility, many quaternion functions are being implemented as infix operations; equivalent capabilities were never implemented in HAL/S because doing so would entail modifying the compiler and expanding the language. With these packages, many HAL/S expressions will compile and execute in Ada, unchanged. Others can be converted simply by replacing the implicit HAL/S multiply operator with the Ada *. Errors will be trapped and identified. Input/output will be convenient and readable.
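The infix-operator idea translates to any language with operator overloading. The sketch below is in Python rather than Ada, purely for illustration: a quaternion Hamilton product exposed as the `*` operator, the same convenience the JPL packages add to Ada.

```python
from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, q):
        # Hamilton product exposed as the infix * operator -- analogous
        # to the overloaded quaternion operators described for the Ada package.
        a = self
        return Quaternion(
            a.w*q.w - a.x*q.x - a.y*q.y - a.z*q.z,
            a.w*q.x + a.x*q.w + a.y*q.z - a.z*q.y,
            a.w*q.y - a.x*q.z + a.y*q.w + a.z*q.x,
            a.w*q.z + a.x*q.y - a.y*q.x + a.z*q.w)
```

With this in place, expressions like `attitude * correction` read exactly as the mathematics does, which is the readability argument the abstract makes for extensible operators.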
SiPM based readout system for PbWO4 crystals
NASA Astrophysics Data System (ADS)
Berra, A.; Bolognini, D.; Bonfanti, S.; Bonvicini, V.; Lietti, D.; Penzo, A.; Prest, M.; Stoppani, L.; Vallazza, E.
2013-08-01
Silicon PhotoMultipliers (SiPMs) consist of a matrix of small passively quenched silicon avalanche photodiodes operated in limited Geiger mode (GM-APDs) and read out in parallel from a common output node. Each pixel (with a typical size in the 20-100 μm range) gives the same current response when hit by a photon; the SiPM output signal is the sum of the signals of all the pixels, which depends on the light intensity. The main advantages of SiPMs with respect to photomultiplier tubes (PMTs) are essentially their small dimensions, insensitivity to magnetic fields, and low bias voltage. This contribution presents the performance of a SiPM based readout system for crystal calorimeters developed in the framework of the FACTOR/TWICE collaboration. The SiPM used for the test is a new device produced by FBK-irst which consists of a matrix of four sensors embedded in the same silicon substrate, called a QUAD. The SiPM has been coupled to a lead tungstate crystal, an early-prototype version of the crystals developed for the electromagnetic calorimeter of the CMS experiment. New tests are foreseen using a complete module consisting of nine crystals, each one read out by two QUADs.
FORCE MULTIPLIER FOR USE WITH MASTER SLAVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miles, L.E.; Parsons, T.C.; Howe, P.W.
1961-06-01
A force multiplier was designed. This piece of equipment was made to increase the gripping force presently available in the Model 8 master slave. The force multiplier described incorporates a clamp which can be quickly attached to and detached from the master slave hand. (auth)
NASA Astrophysics Data System (ADS)
Yang, G.; Stark, B. H.; Burrow, S. G.; Hollis, S. J.
2014-11-01
This paper demonstrates the use of passive voltage multipliers for rapid start-up of sub-milliwatt electromagnetic energy-harvesting systems. The work describes circuit optimization to minimize the transition from completely depleted energy storage to the first powering-up of an actively controlled switched-mode converter. The dependency of the start-up time on component parameters and topologies is derived by simulation and experimentation. The resulting optimized multiplier design reduces the start-up time from several minutes to one second. An additional improvement uses the inherent cascade structure of the voltage multiplier to power sub-systems at different voltages. This multi-rail start-up is shown to reduce the circuit losses of the active converter by 72% with respect to the optimized single-rail system. The experimental results provide insight into the multiplier's transient behaviour, including circuit interactions, in a complete harvesting system, and offer important information for optimizing voltage multipliers for rapid start-up.
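The multi-rail idea rests on a textbook property of multiplier cascades: in the ideal no-load case each stage adds about twice the input peak voltage, so the intermediate nodes naturally provide a ladder of lower-voltage rails. A toy model (an idealization only; droop and ripple under load, which dominate real start-up behaviour, are ignored):

```python
def cascade_rails(v_peak, n_stages):
    """Idealized (no-load) node voltages of an n-stage voltage-multiplier
    cascade: stage k sits at roughly 2*k*v_peak. The intermediate nodes
    are the lower-voltage rails exploited for multi-rail start-up."""
    return [2.0 * v_peak * k for k in range(1, n_stages + 1)]
```

Tapping an early node to power low-voltage control logic, while the full cascade feeds the converter, is the essence of the multi-rail start-up described above.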
Kelly, Patrick; Mapes, Brian; Hu, I-Kuan; ...
2017-04-03
This study describes a new intermediate global atmosphere model in which synoptic and planetary dynamics, including the advection of water vapor, are explicit, the time-mean flow is centered near a realistic state through the calibration of time-independent 3D forcings, and temporal anomalies of convective tendencies of heat and moisture in each column are represented as a linear matrix acting on the anomalous temperature and moisture profiles in the GCM. This matrix was devised from Kuang's [2010] linear response function (LRF) of a cooled cyclic convection-permitting model (CCPM) with a 256 km periodic domain and 1 km mesh, measured around an equilibrium state with a mean rain rate of 3.5 mm/d. The goal of this effort was to cleanly test the role of convection's free-tropospheric moisture sensitivity in tropical waves, without incurring large changes of mean climate that confuse the interpretation of experiments with entrainment rates in the convection schemes of full-physics GCMs. As the sensitivity to free-tropospheric moisture (columns 12-20 of the matrix, representing sensitivity to humidity above 900 hPa) is multiplied by a factor ranging from 0 to 2, the model's variability ranges from weak waves still slowed by convective coupling (factor 0), through moderately strong convectively coupled waves with speeds near 20 m/s (factor 1), to wave variability that grows in amplitude as the water vapor field plays an increasingly important role (factor 2). Longitudinal structure in the model's time-mean tropical flow is not fully realistic, and does change significantly with matrix edits, disappointing initial hopes that the Madden-Julian oscillation would be well simulated in the control and could be convincingly decomposed, but further work could improve this class of models.
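The matrix experiment amounts to scaling one block of columns of the linear response matrix while leaving the rest untouched. A schematic NumPy sketch (the 0-based column slice is an assumption mirroring the paper's columns 12-20; the matrix dimensions are illustrative):

```python
import numpy as np

def scale_moisture_sensitivity(lrf, factor, cols=slice(11, 20)):
    """Multiply the free-tropospheric-moisture columns of the convective
    linear response matrix by a factor (0 to 2 in the experiment),
    leaving all other sensitivities unchanged."""
    out = lrf.copy()
    out[:, cols] *= factor
    return out
```

Because the edit is confined to a column block, the mean climate implied by the rest of the matrix is preserved, which is exactly the clean-experiment property the authors sought.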
Solutions and conservation laws for a Kaup-Boussinesq system
NASA Astrophysics Data System (ADS)
Motsepa, Tanki; Abudiab, Mufid; Khalique, Chaudry Masood
2017-07-01
In this work we study a Kaup-Boussinesq system, which is used in the analysis of long waves in shallow water. Travelling wave solutions are obtained by direct integration, and conservation laws are then derived using the multiplier method.
On conservation laws for a generalized Boussinesq equation
NASA Astrophysics Data System (ADS)
Anco, S.; Rosa, M.; Gandarias, M. L.
2017-07-01
In this work, we study a Boussinesq equation with a strong damping term from the point of view of Lie theory. Using the low-order conservation laws, we apply the conservation-law multiplier method to the associated potential systems.
Using Appreciative Learning in Executive Education.
ERIC Educational Resources Information Center
Preziosi, Robert C.; Gooden, Doreen J.
2002-01-01
A leadership development program for managers used appreciative learning, based upon appreciative inquiry, an organizational development method focused on what organizations do well. Participants identified prior successful learning experiences for use in future work performance, creating a multiplier effect of positive experiences. (SK)
Discrete fourier transform (DFT) analysis for applications using iterative transform methods
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2012-01-01
According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal and generating a pre-transformation function. The data are pre-transformed by multiplying them by this function. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.
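The general shape of such iterative-transform methods is a Gerchberg-Saxton-style loop. The sketch below is a generic stand-in (not the patented algorithm): pre-multiply by the pupil window, alternate DFTs between planes, and enforce the known amplitude in each plane while keeping the computed phase.

```python
import numpy as np

def iterative_phase_retrieval(measured_amp, pupil, n_iter=50):
    """Generic iterative-transform loop for phase (aberration) estimation:
    alternate DFTs between pupil and focal planes, keeping each plane's
    computed phase but enforcing its known amplitude."""
    field = pupil.astype(complex)          # pre-multiplied input window
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = measured_amp * np.exp(1j * np.angle(focal))   # data constraint
        field = np.fft.ifft2(focal)                           # back-transform
        field = pupil * np.exp(1j * np.angle(field))          # pupil constraint
    return np.angle(field)                 # estimated aberration (phase)
```

For an unaberrated system (focal-plane amplitude consistent with a flat pupil phase) the loop is a fixed point and returns zero phase, which makes a convenient sanity check.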
Economical Implementation of a Filter Engine in an FPGA
NASA Technical Reports Server (NTRS)
Kowalski, James E.
2009-01-01
A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix-vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging.
For these and other applications, it could be advantageous to combine compact FPGA digital filter implementations with other application-specific logic implementations on single integrated-circuit chips. An FPGA could readily be tailored to implement a variety of filters because the filter coefficients would be loaded into memory at startup.
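Each filter step reduces to two matrix-vector multiply-accumulate passes, which is what the shared MAC unit serializes. A behavioural reference model in NumPy (a sketch of the mathematics, not the FPGA design; the example coefficients below are illustrative):

```python
import numpy as np

class StateSpaceFilter:
    """Digital state-space filter: x' = A x + B u, y = C x + D u.
    In the FPGA engine each of these products is a sequence of
    multiply-accumulate operations through shared hardware."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = map(np.asarray, (a, b, c, d))
        self.x = np.zeros(self.a.shape[0])

    def step(self, u):
        y = self.c @ self.x + self.d @ u   # output from current state
        self.x = self.a @ self.x + self.b @ u  # state update
        return y
```

Chaining several such filters, with each `step` output feeding the next filter's input, mirrors the "valid"-signal pipeline described above.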
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of a fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.
40 CFR 423.15 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... sources shall not exceed the quantity determined by multiplying the flow of low volume waste sources times... metal cleaning wastes shall not exceed the quantity determined by multiplying the flow of chemical metal... transport water shall not exceed the quantity determined by multiplying the flow of the bottom ash transport...
Cavallo's multiplier for in situ generation of high voltage
NASA Astrophysics Data System (ADS)
Clayton, S. M.; Ito, T. M.; Ramsey, J. C.; Wei, W.; Blatnik, M. A.; Filippone, B. W.; Seidel, G. M.
2018-05-01
A classic electrostatic induction machine, Cavallo's multiplier, is suggested for in situ production of very high voltage in cryogenic environments. The device is suitable for generating a large electrostatic field under conditions of very small load current. Operation of the Cavallo multiplier is analyzed, with quantitative description in terms of mutual capacitances between electrodes in the system. A demonstration apparatus was constructed, and measured voltages are compared to predictions based on measured capacitances in the system. The simplicity of the Cavallo multiplier makes it amenable to electrostatic analysis using finite element software, and electrode shapes can be optimized to take advantage of a high dielectric strength medium such as liquid helium. A design study is presented for a Cavallo multiplier in a large-scale, cryogenic experiment to measure the neutron electric dipole moment.
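In the idealized analysis, each induction cycle multiplies the stored voltage by a constant factor set by the electrode mutual capacitances, giving geometric growth until leakage or breakdown limits it. A toy model of that behaviour (the gain factor and limit are illustrative parameters, not values from the paper):

```python
def cavallo_voltage(v_seed, gain_per_cycle, n_cycles, v_limit):
    """Idealized Cavallo multiplier: each mechanical induction cycle
    multiplies the stored voltage by a factor > 1 determined by the
    electrode mutual capacitances, until losses cap it at v_limit."""
    v = v_seed
    for _ in range(n_cycles):
        v = min(v * gain_per_cycle, v_limit)
    return v
```

The geometric growth is what makes the device attractive for in situ high-voltage generation: a small seed voltage reaches the working voltage in a modest number of cycles.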
NASA Technical Reports Server (NTRS)
Gerhard, Craig Steven; Gurdal, Zafer; Kapania, Rakesh K.
1996-01-01
Layerwise finite element analyses of geodesically stiffened cylindrical shells are presented. The layerwise laminate theory of Reddy (LWTR) is developed and adapted to circular cylindrical shells. The Ritz variational method is used to develop an analytical approach for studying the buckling of simply supported geodesically stiffened shells with discrete stiffeners. This method utilizes a Lagrange multiplier technique to attach the stiffeners to the shell. The development of the layerwise shells couples a one-dimensional finite element through the thickness with a Navier solution that satisfies the boundary conditions. The buckling results from the Ritz discrete analytical method are compared with smeared buckling results and with NASA Testbed finite element results. The development of layerwise shell and beam finite elements is presented and these elements are used to perform the displacement field, stress, and first-ply failure analyses. The layerwise shell elements are used to model the shell skin and the layerwise beam elements are used to model the stiffeners. This arrangement allows the beam stiffeners to be assembled directly into the global stiffness matrix. A series of analytical studies are made to compare the response of geodesically stiffened shells as a function of loading, shell geometry, shell radii, shell laminate thickness, stiffener height, and geometric nonlinearity. Comparisons of the structural response of geodesically stiffened shells, axial and ring stiffened shells, and unstiffened shells are provided. In addition, interlaminar stress results near the stiffener intersection are presented. First-ply failure analyses for geodesically stiffened shells utilizing the Tsai-Wu failure criterion are presented for a few selected cases.
Robust and Efficient Spin Purification for Determinantal Configuration Interaction.
Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G
2017-09-12
The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag19) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.
Impact of Selected Parameters on the Fatigue Strength of Splices on Multiply Textile Conveyor Belts
NASA Astrophysics Data System (ADS)
Bajda, Mirosław; Błażej, Ryszard; Hardygóra, Monika
2016-10-01
Splices are the weakest points in the conveyor belt loop. The strength of these joints, and thus their design as well as the method and quality of splicing, determine the strength of the whole conveyor belt loop. A special zone in a splice exists, where the stresses in the adjacent plies or cables differ considerably from each other. This results in differences in the elongation of these elements and in additional shearing stresses in the rubber layer. The strength of the joints depends on several factors, among others on the parameters of the joined belt, on the connecting layer and the technology of joining, as well as on the materials used to make the joint. The strength of the joint constitutes a criterion for the selection of a belt suitable for the operating conditions, and therefore methods of testing such joints are of great importance. This paper presents the method of testing fatigue strength of splices made on multi-ply textile conveyor belts and the results of these studies.
A Search for Nontoroidal Topological Lensing in the Sloan Digital Sky Survey Quasar Catalog
NASA Astrophysics Data System (ADS)
Fujii, Hirokazu; Yoshii, Yuzuru
2013-08-01
Flat space models with multiply connected topology, which have compact dimensions, are tested against the distribution of high-redshift (z >= 4) quasars of the Sloan Digital Sky Survey (SDSS). When the compact dimensions are smaller in size than the observed universe, topological lensing occurs, in which multiple images of single objects (ghost images) are observed. We improve on the recently introduced method to identify ghost images by means of four-point statistics. Our method is valid for any of the 17 multiply connected flat models, including nontoroidal ones that are compactified by screw motions or glide reflections. Applying the method to the data revealed one possible case of topological lensing caused by a sixth-turn screw motion; however, the data remain consistent with the simply connected model by this test alone. Moreover, simulations suggest that we cannot exclude the other space models despite the absence of their signatures. This uncertainty mainly originates from the patchy coverage of SDSS in the south Galactic cap, and the situation will be improved by future wide-field spectroscopic surveys.
The effect of material heterogeneities in long term multiscale seismic cycle simulations
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Richards-Dinger, K. B.; Dieterich, J. H.
2016-12-01
A fundamental part of the simulation of earthquake cycles in large-scale multicycle earthquake simulators is the pre-computation of elastostatic Green's functions collected into the stiffness matrix (K). The stiffness matrices are typically based on the elastostatic solutions of Okada (1992), Gimbutas et al. (2012), or similar. While these analytic solutions are computationally very fast, they are limited to modeling a homogeneous isotropic half-space. It is thus unknown how such simulations may be affected by the material heterogeneity characterizing the earth medium. We are currently working on estimating the effects of heterogeneous material properties in the earthquake simulator RSQSim (Richards-Dinger and Dieterich, 2012). To do so, we calculate elastostatic solutions in a heterogeneous medium using the Finite Element (FE) method instead of any of the analytical solutions. The investigated region is a 400 x 400 km area centered on the Anza zone in southern California. The fault system geometry is based on that of the UCERF3 deformation models in the area of interest, which we implement in a finite element mesh using Trelis 15. The heterogeneous elastic structure is based on available tomographic data (seismic wavespeeds and density) for the region (SCEC CVM and Allam et al., 2014). For computation of the Green's functions we use the open source FE code Defmod (https://bitbucket.org/stali/defmod/wiki/Home) to calculate the elastostatic solutions due to unit slip on each patch. Earthquake slip on the fault plane is implemented through linear constraint equations (Ali et al., 2014; Kyriakopoulos et al., 2013; Aagard et al., 2015), specifically through the adjunction of Lagrange multipliers. The elementary responses are collected into the "heterogeneous" stiffness matrix Khet and used in RSQSim instead of the ones generated with Okada.
Finally, we compare the RSQSim results based on the "heterogeneous" Khet with results from Khom (a stiffness matrix generated from the same mesh as Khet but using homogeneous material properties). Estimating the effect of heterogeneous material properties on the seismic cycles simulated by RSQSim is a necessary experiment that will allow us to evaluate the impact of heterogeneities in earthquake simulators.
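Collecting the elementary responses into K makes the stress change for any slip distribution a single matrix-vector product; schematically (a minimal sketch of the bookkeeping, with toy numbers standing in for the FE unit-slip solutions):

```python
import numpy as np

def build_stiffness(unit_slip_responses):
    """Column j of K holds the (pre-computed) stress change on every fault
    patch due to unit slip on patch j -- here supplied by FE solutions in a
    heterogeneous medium rather than homogeneous half-space formulas."""
    return np.column_stack(unit_slip_responses)

def stress_change(k, slip):
    """Stress change on all patches for a given slip vector, by linearity."""
    return k @ slip
```

This linearity is why the expensive FE solves need to be done only once per patch; the simulator then reuses K for every event in the multicycle run.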
NASA Astrophysics Data System (ADS)
Wu, Pei-Chun; Hsieh, Tsung-Yuan; Tsai, Zen-Uong; Liu, Tzu-Ming
2015-03-01
Using in vivo second harmonic generation (SHG) and third harmonic generation (THG) microscopies, we tracked the course of collagen remodeling over time in the same melanoma microenvironment within an individual mouse. The corresponding structural and morphological changes were quantitatively analyzed without labeling using an orientation index (OI), the gray level co-occurrence matrix (GLCM) method, and the intensity ratio of THG to SHG (RTHG/SHG). In the early stage of melanoma development, we found that collagen fibers adjacent to a melanoma have increased OI values and SHG intensities. In the late stages, these collagen networks have more directionality and less homogeneity. The corresponding GLCM traces showed oscillation features and the sum of squared fluctuation VarGLCM increased with the tumor sizes. In addition, the THG intensities of the extracellular matrices increased, indicating an enhanced optical inhomogeneity. Multiplying OI, VarGLCM, and RTHG/SHG together, the combinational collagen remodeling (CR) index at 4 weeks post melanoma implantation showed a 400-times higher value than normal ones. These results validate that our quantitative indices of SHG and THG microscopies are sensitive enough to diagnose the collagen remodeling in vivo. We believe these indices have the potential to help the diagnosis of skin cancers in clinical practice.
Development of a scalable generic platform for adaptive optics real time control
NASA Astrophysics Data System (ADS)
Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar
2015-06-01
The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the generation of the Deformable Mirror (DM) control voltages from the Wavefront Sensor (WFS) measurements is usually through the multiplication of the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate a signal for real time wavefront correction from the signals coming from a Wavefront Sensor, wherein the system would be flexible enough to accommodate all current Wavefront Sensing techniques and also the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (like Ethernet, USB, IEEE 1394, etc.) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform and a design of a fully scalable slope computer are presented.
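The central real-time operation described above, multiplying WFS slopes by a predetermined reconstructor matrix, reduces to a single matrix-vector product; a toy-sized sketch (the dimensions and random matrices here are illustrative, real systems have thousands of slopes and actuators):

```python
import numpy as np

rng = np.random.default_rng(0)

n_slopes, n_actuators = 8, 4  # toy sizes; real AO systems are far larger
R = rng.standard_normal((n_actuators, n_slopes))  # predetermined reconstructor
slopes = rng.standard_normal(n_slopes)            # one frame of WFS slopes

# Core real-time step: DM control voltages via matrix-vector multiply.
# On the FPGA platform above, this product is spread across the many
# concurrently accessible hard multipliers.
voltages = R @ slopes
```

Because the structure of this kernel is fixed per frame, it maps naturally onto the FPGA's parallel multiply-accumulate resources.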
Kamensky, David; Hsu, Ming-Chen; Schillinger, Dominik; Evans, John A.; Aggarwal, Ankush; Bazilevs, Yuri; Sacks, Michael S.; Hughes, Thomas J. R.
2014-01-01
In this paper, we develop a geometrically flexible technique for computational fluid–structure interaction (FSI). The motivating application is the simulation of tri-leaflet bioprosthetic heart valve function over the complete cardiac cycle. Due to the complex motion of the heart valve leaflets, the fluid domain undergoes large deformations, including changes of topology. The proposed method directly analyzes a spline-based surface representation of the structure by immersing it into a non-boundary-fitted discretization of the surrounding fluid domain. This places our method within an emerging class of computational techniques that aim to capture geometry on non-boundary-fitted analysis meshes. We introduce the term “immersogeometric analysis” to identify this paradigm. The framework starts with an augmented Lagrangian formulation for FSI that enforces kinematic constraints with a combination of Lagrange multipliers and penalty forces. For immersed volumetric objects, we formally eliminate the multiplier field by substituting a fluid–structure interface traction, arriving at Nitsche’s method for enforcing Dirichlet boundary conditions on object surfaces. For immersed thin shell structures modeled geometrically as surfaces, the tractions from opposite sides cancel due to the continuity of the background fluid solution space, leaving a penalty method. Application to a bioprosthetic heart valve, where there is a large pressure jump across the leaflets, reveals shortcomings of the penalty approach. To counteract steep pressure gradients through the structure without the conditioning problems that accompany strong penalty forces, we resurrect the Lagrange multiplier field. Further, since the fluid discretization is not tailored to the structure geometry, there is a significant error in the approximation of pressure discontinuities across the shell. 
This error becomes especially troublesome in residual-based stabilized methods for incompressible flow, leading to problematic compressibility at practical levels of refinement. We modify existing stabilized methods to improve performance. To evaluate the accuracy of the proposed methods, we test them on benchmark problems and compare the results with those of established boundary-fitted techniques. Finally, we simulate the coupling of the bioprosthetic heart valve and the surrounding blood flow under physiological conditions, demonstrating the effectiveness of the proposed techniques in practical computations. PMID:25541566
Improved Algorithm For Finite-Field Normal-Basis Multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1989-01-01
Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.
Mori, Mari; Hamada, Atsumi; Mori, Hideki; Yamori, Yukio; Tsuda, Kinsuke
2012-08-01
This 2-week interventional study involved a randomized allocation of subjects into three groups: Group A (daily ingestion of 350 g vegetables cooked without water using multi-ply [multilayer-structured] cookware), Group B (daily ingestion of 350 g vegetables; ordinary cookware) and Group C (routine living). Before and after intervention, each subject underwent health examination with 24-h urine sampling. Blood vitamin C significantly increased after intervention from the baseline in Group A (P < 0.01) and Group B (P < 0.05). β-Carotene levels also increased significantly after intervention in Group A (P < 0.01) and Group B (P < 0.01). Oxidized low-density lipoprotein decreased significantly after intervention in Group A (P < 0.01). In Group A, 24-h urinary potassium excretion increased significantly (P < 0.01) and 24-h urinary sodium (Na)/K ratio improved significantly (P < 0.05) after intervention. In conclusion, a cooking method modification with multi-ply cookware improved absorption of nutrients from vegetables and enhanced effective utilization of the antioxidant potentials of vegetable nutrients.
Sociophysics of sexism: normal and anomalous petrie multipliers
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-07-01
A recent mathematical model by Karen Petrie explains how sexism towards women can arise in organizations where male and female are equally sexist. Indeed, the Petrie model predicts that such sexism will emerge whenever there is a male majority, and quantifies this majority bias by the ‘Petrie multiplier’: the square of the male/female ratio. In this paper—emulating the shift from ‘normal’ to ‘anomalous’ diffusion—we generalize the Petrie model to a stochastic Poisson model that accommodates heterogeneously sexist men and woman, and that extends the ‘normal’ quadratic Petrie multiplier to ‘anomalous’ non-quadratic multipliers. The Petrie multipliers span a full spectrum of behaviors which we classify into four universal types. A variation of the stochastic Poisson model and its Petrie multipliers is further applied to the context of cyber warfare.
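A minimal sketch of the ‘normal’ (quadratic) Petrie multiplier, assuming the simplest version of the model in which each person directs the same number of remarks at random members of the opposite group:

```python
def petrie_multiplier(n_men, n_women, remarks_per_person=1):
    """Each person directs the same number of sexist remarks at random
    members of the opposite group.  On average, remarks received per woman
    divided by remarks received per man is (n_men / n_women) ** 2 --
    the quadratic Petrie multiplier."""
    per_woman = n_men * remarks_per_person / n_women
    per_man = n_women * remarks_per_person / n_men
    return per_woman / per_man

petrie_multiplier(80, 20)  # (80/20)**2 = 16.0
```

Note the multiplier is independent of the per-person remark rate: equal individual sexism still yields a squared group-level asymmetry whenever the group sizes differ.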
Greenwood, Tiffany A.; Light, Gregory A.; Swerdlow, Neal R.; Calkins, Monica E.; Green, Michael F.; Gur, Raquel E.; Gur, Ruben C.; Lazzeroni, Laura C.; Nuechterlein, Keith H.; Olincy, Ann; Radant, Allen D.; Seidman, Larry J.; Siever, Larry J.; Silverman, Jeremy M.; Stone, William S.; Sugar, Catherine A.; Tsuang, Debby W.; Tsuang, Ming T.; Turetsky, Bruce I.; Freedman, Robert; Braff, David L.
2016-01-01
Objective The Consortium on the Genetics of Schizophrenia Family Study (COGS-1) evaluated 12 primary and other supplementary neurocognitive and neurophysiological endophenotypes in schizophrenia probands and their families. Previous analyses of prepulse inhibition (PPI) and P50 gating measures in this sample revealed heritability estimates that were lower than expected based on earlier family studies. Here we investigated whether gating measures were more heritable in multiply affected families with a positive family history vs. families with only a single affected proband (singleton). Method A total of 296 nuclear families consisting of a schizophrenia proband, at least one unaffected sibling, and both parents underwent a comprehensive endophenotype and clinical characterization. The Family Interview for Genetic Studies was administered to all participants and used to obtain convergent psychiatric symptom information for additional first-degree relatives. Among the families, 97 were multiply affected, and 96 were singletons. Results Both PPI and P50 gating displayed significantly increased heritability in the 97 multiply affected families (47% and 36%, respectively), compared with estimates derived from the entire sample of 296 families (29% and 20%, respectively). However, no evidence for heritability was observed for either measure in the 96 singleton families. Schizophrenia probands derived from the multiply affected families also displayed a significantly increased severity of clinical symptoms compared with those derived from singleton families. Conclusions PPI and P50 gating measures demonstrate substantially increased heritability in schizophrenia families with a higher genetic vulnerability for illness, which provides further support for the commonality of genes underlying both schizophrenia and gating measures. PMID:26441157
Progress on single barrier varactors for submillimeter wave power generation
NASA Technical Reports Server (NTRS)
Nilsen, Svein M.; Groenqvist, Hans; Hjelmgren, Hans; Rydberg, Anders; Kollberg, Erik L.
1992-01-01
Theoretical work on Single Barrier Varactor (SBV) diodes indicates that the efficiency of a multiplier has a maximum for a considerably smaller capacitance variation than previously thought. The theoretical calculations are performed both with a simple theoretical model and with a complete computer simulation using the method of harmonic balance. Modeling of the SBV is carried out in two steps. First, the semiconductor transport equations are solved simultaneously using a finite difference scheme in one dimension. Second, the calculated I-V and C-V characteristics are input to a multiplier simulator which calculates the optimum impedances and output powers at the frequencies of interest. Multiple barrier varactors can also be modeled in this way. Several examples of how to design the semiconductor layers to obtain certain characteristics are given. The calculated conversion efficiencies of the modeled structures, in a multiplier circuit, are also presented. Computer simulations for a case study of a 750 GHz multiplier show that InAs diodes perform favorably compared to GaAs diodes. InAs and InGaAs SBV diodes have been fabricated and their current vs. voltage characteristics are presented. In the InAs diode, the large-bandgap semiconductor AlSb was used as the barrier. The InGaAs diode was grown lattice matched to an InP substrate with InAlAs as the barrier material. The current density is greatly reduced for these two material combinations compared to that of GaAs/AlGaAs SBV diodes. GaAs based diodes can be biased to higher voltages than InAs diodes.
Krol, Marieke; Brouwer, Werner B F; Severens, Johan L; Kaper, Janneke; Evers, Silvia M A A
2012-12-01
Productivity costs related to paid work are commonly calculated in economic evaluations of health technologies by multiplying the relevant number of work days lost with a wage rate estimate. It has been argued that actual productivity costs may either be lower or higher than current estimates due to compensation mechanisms and/or multiplier effects (related to team dependency and problems with finding good substitutes in cases of absenteeism). Empirical evidence on such mechanisms and their impact on productivity costs is scarce, however. This study aims to increase knowledge on how diminished productivity is compensated within firms. Moreover, it aims to explore how compensation and multiplier effects potentially affect productivity cost estimates. Absenteeism and compensation mechanisms were measured in a randomized trial among Dutch citizens examining the cost-effectiveness of reimbursement for smoking cessation treatment. Multiplier effects were extracted from published literature. Productivity costs were calculated applying the Friction Cost Approach. Regular estimates were subsequently adjusted for (i) compensation during regular working hours, (ii) job dependent multipliers and (iii) both compensation and multiplier effects. A total of 187 respondents included in the trial were eligible for inclusion in this study, based on being in paid employment, having experienced absenteeism in the preceding six months and completing the questionnaire on absenteeism and compensation mechanisms. Over half of these respondents stated that their absenteeism was compensated during normal working hours by themselves or colleagues. Only counting productivity costs not compensated in regular working hours reduced the traditional estimate by 57%. Correcting for multiplier effects increased regular estimates by a quarter. Combining both impacts decreased traditional estimates by 29%. To conclude, large amounts of lost production are compensated in normal hours.
Productivity costs estimates are strongly influenced by adjustment for compensation mechanisms and multiplier effects. The validity of such adjustments needs further examination, however. Copyright © 2012 Elsevier Ltd. All rights reserved.
N-person differential games. Part 1: Duality-finite element methods
NASA Technical Reports Server (NTRS)
Chen, G.; Zheng, Q.
1983-01-01
The duality approach, which is motivated by computational needs and proceeds by introducing N + 1 Lagrange multipliers, is addressed. For N-person linear quadratic games, the primal min-max problem is shown to be equivalent to the dual min-max problem.
The Phase Rule in a System Subject to a Pressure Gradient
NASA Astrophysics Data System (ADS)
Podladchikov, Yuri; Connolly, James; Powell, Roger; Aardvark, Alberto
2015-04-01
It can be shown by diligent application of Lagrange's method of undetermined multipliers that the phase rule in a system subject to a pressure gradient is: � + 赑 ≥ ρ. We explore the consequence of this important relationship for natural systems.
48 CFR 215.404-71-1 - General.
Code of Federal Regulations, 2010 CFR
2010-10-01
... range provides values based on above normal or below normal conditions. In the price negotiation..., DEPARTMENT OF DEFENSE CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 215... contracting officer assigns values to each profit factor; the value multiplied by the base results in the...
Albin, Thomas J
2017-07-01
Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack the information (e.g. variances) needed to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values that, when combined, estimate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
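The z-score-multiplier idea above can be illustrated under a standard multivariate-normal model (a sketch of the general principle, not Kreifeldt and Nah's exact formulation): summing k percentile values overshoots the same percentile of the sum, because the sum's standard deviation grows more slowly than k.

```python
import math

def accommodated_proportion(z, k, r=0.0):
    """Proportion of a population whose SUM of k standardized normal
    dimensions (equal variances, common pairwise correlation r) falls below
    the sum of k individual z-quantile values.  The sum of percentile values
    sits at k*z*sigma, while the sum itself has s.d. sigma*sqrt(k + k*(k-1)*r),
    so the effective z-score is z times the multiplier k / sqrt(k + k*(k-1)*r)."""
    z_eff = k * z / math.sqrt(k + k * (k - 1) * r)
    return 0.5 * (1.0 + math.erf(z_eff / math.sqrt(2.0)))
```

For two independent 90th-percentile values (z ≈ 1.2816) the combination accommodates about 96.5% of the population, not 90%; only with perfect correlation (r = 1) does it recover exactly 90%.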
Double-Stage Delay Multiply and Sum Beamforming Algorithm Applied to Ultrasound Medical Imaging.
Mozaffarzadeh, Moein; Sadeghi, Masume; Mahloojifar, Ali; Orooji, Mahdi
2018-03-01
In ultrasound (US) imaging, delay and sum (DAS) is the most common beamformer, but it leads to low-quality images. Delay multiply and sum (DMAS) was introduced to address this problem. However, the reconstructed images using DMAS still suffer from the level of side lobes and low noise suppression. Here, a novel beamforming algorithm is introduced based on expansion of the DMAS formula. We found that there is a DAS algebra inside the expansion, and we proposed use of the DMAS instead of the DAS algebra. The introduced method, namely double-stage DMAS (DS-DMAS), is evaluated numerically and experimentally. The quantitative results indicate that DS-DMAS results in an approximately 25% lower level of side lobes compared with DMAS. Moreover, the introduced method leads to 23%, 22% and 43% improvement in signal-to-noise ratio, full width at half-maximum and contrast ratio, respectively, compared with the DMAS beamformer. Copyright © 2018. Published by Elsevier Inc.
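A minimal sketch of DAS versus DMAS on already-delayed channel samples (the sign-preserving square root is the standard dimensionality correction; DS-DMAS, per the abstract, applies the DMAS combination a second time over DAS-like inner sums):

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: plain sum of the already-delayed channel samples."""
    return np.sum(delayed)

def dmas(delayed):
    """Delay-multiply-and-sum: combinatorial pairwise products of delayed
    samples, with a sign-preserving square root so the output keeps the
    dimensionality of the input signal."""
    out = 0.0
    n = len(delayed)
    for i in range(n):
        for j in range(i + 1, n):
            p = delayed[i] * delayed[j]
            out += np.sign(p) * np.sqrt(np.abs(p))
    return out

coherent = np.ones(4)  # perfectly in-phase channels
das(coherent)          # 4.0
dmas(coherent)         # 6.0, one term per C(4,2) channel pair
```

Incoherent (random-sign) channels largely cancel in the pairwise products, which is the source of DMAS's improved side-lobe suppression over DAS.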
A 2.5-2.7 THz Room Temperature Electronic Source
NASA Technical Reports Server (NTRS)
Maestrini, Alain; Mehdi, Imran; Lin, Robert; Siles, Jose Vicente; Lee, Choonsup; Gill, John; Chattopadhyay, Goutam; Schlecht, Erich; Bertrand, Thomas; Ward, John
2011-01-01
We report on a room temperature 2.5 to 2.7 THz electronic source based on frequency multipliers. The source utilizes a cascade of three frequency multipliers with W-band power amplifiers driving the first stage multiplier. Multiple-chip multipliers are utilized for the two initial stages to improve the power handling capability and a sub-micron anode is utilized for the final stage tripler. Room temperature measurements indicate that the source can put out a peak power of about 14 microwatts with more than 4 microwatts in the 2.5 to 2.7 THz range.
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
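The benchmarked kernel is a dense matrix-vector multiply timed over many frames; a sketch of the measurement in Python (the paper's implementation is recompiled C/C++ on the Xeon Phi; the function and sizes here are illustrative):

```python
import time
import numpy as np

def benchmark_mvm(n_slopes=5000, n_actuators=5000, frames=50):
    """Time repeated reconstructor MVMs, as an AO real-time control loop
    would, and report mean execution time and jitter.  In an RTC context
    the frame-to-frame variability matters as much as the mean."""
    rng = np.random.default_rng(1)
    R = rng.standard_normal((n_actuators, n_slopes)).astype(np.float32)
    s = rng.standard_normal(n_slopes).astype(np.float32)
    times = []
    for _ in range(frames):
        t0 = time.perf_counter()
        R @ s
        times.append(time.perf_counter() - t0)
    return float(np.mean(times)), float(np.std(times))
```

The loop frequency of the AO system is bounded by the worst-case, not the mean, execution time, which is why the abstract highlights variability alongside raw speed-up.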
Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media
Christensen, S.; Cooley, R.L.
2002-01-01
Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Heber, Gerd; Biswas, Rupak
2000-01-01
The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
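A reference sparse matrix-vector multiply in CSR form, the kernel the abstract identifies as dominating each CG iteration (pure-Python loops for clarity; production codes vectorize or parallelize this, and the row ordering controls cache reuse of x):

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """Sparse matrix-vector multiply y = A @ x with A stored in CSR:
    row r holds data[indptr[r]:indptr[r+1]] at columns
    indices[indptr[r]:indptr[r+1]]."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# The SPD matrix [[4, 1], [1, 3]] in CSR form
indptr, indices, data = [0, 2, 4], [0, 1, 0, 1], [4.0, 1.0, 1.0, 3.0]
y = spmv_csr(indptr, indices, data, np.array([1.0, 2.0]))  # -> [6.0, 7.0]
```

Reordering the rows and columns (and partitioning them across processors) changes neither the result nor the flop count, only the memory-access and communication pattern, which is why ordering dominates the performance effects the abstract reports.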
An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization
2012-08-17
Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly, but not exclusively, total variation minimization problems), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].
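An augmented Lagrangian iteration on a toy smooth problem makes the multiplier-update structure concrete (a generic sketch, not the TVAL3 algorithm, which additionally alternates over a non-smooth TV term):

```python
import numpy as np

def augmented_lagrangian(a, b, mu=10.0, iters=50):
    """Classic augmented Lagrangian iteration on the toy problem
        minimize ||x||^2  subject to  a.x = b.
    Each outer step minimizes L(x, lam) = ||x||^2 - lam*(a.x - b)
    + (mu/2)*(a.x - b)^2 over x (closed form here: x = c*a), then
    updates the multiplier with the constraint residual."""
    lam, A = 0.0, float(a @ a)
    for _ in range(iters):
        c = (lam + mu * b) / (2.0 + mu * A)  # x-subproblem minimizer x = c*a
        r = A * c - b                        # constraint residual a.x - b
        lam = lam - mu * r                   # multiplier update
    return c * a, lam

x, lam = augmented_lagrangian(np.array([1.0, 1.0]), 2.0)
# converges to x = [1, 1], the minimum-norm point on x1 + x2 = 2
```

The penalty weight mu need not go to infinity: the multiplier update absorbs the constraint enforcement, which is what keeps the subproblems well conditioned.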
A hybrid-perturbation-Galerkin technique which combines multiple expansions
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method for the solution of a variety of differential equation type problems is found to give better results when multiple perturbation expansions are employed. The method assumes that there is a parameter in the problem formulation and that a perturbation method can be used to construct one or more expansions in this parameter. In step one, regular and/or singular perturbation methods are used to determine the perturbation coefficient functions. The results of step one are in the form of one or more expansions, each expressed as a sum of perturbation coefficient functions multiplied by a priori known gauge functions. In step two the classical Bubnov-Galerkin method uses the perturbation coefficient functions computed in step one to determine a set of amplitudes which replace and improve upon the gauge functions. The hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Galerkin methods as applied separately, while combining some of their better features. The proposed method is applied, with two perturbation expansions in each case, to a variety of model ordinary differential equation problems including: a family of linear two-point boundary-value problems, a nonlinear two-point boundary-value problem, a quantum mechanical eigenvalue problem and a nonlinear free oscillation problem. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.
Multiplier Architecture for Coding Circuits
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
Multipliers based on new algorithm for Galois-field (GF) arithmetic regular and expandable. Pipeline structures used for computing both multiplications and inverses. Designs suitable for implementation in very-large-scale integrated (VLSI) circuits. This general type of inverter and multiplier architecture especially useful in performing finite-field arithmetic of Reed-Solomon error-correcting codes and of some cryptographic algorithms.
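For context, the field arithmetic these architectures implement is simple to state in software; below is a polynomial-basis GF(2^8) multiply (the Massey-Omura hardware above works in a normal basis, where squaring becomes a cyclic shift, but the underlying field is the same):

```python
def gf256_mul(a, b, poly=0x11B):
    """Bitwise multiply in GF(2^8), reducing by the given irreducible
    polynomial.  0x11B is the AES choice; Reed-Solomon codecs commonly
    use 0x11D.  Shift-and-xor: add a into the result for each set bit
    of b, doubling a (mod poly) at every step."""
    r = 0
    while b:
        if b & 1:
            r ^= a        # "addition" in GF(2^m) is xor
        b >>= 1
        a <<= 1
        if a & 0x100:     # degree-8 overflow: reduce by the field polynomial
            a ^= poly
    return r

gf256_mul(0x57, 0x83)  # the FIPS-197 worked example: {57} x {83} = {C1}
```

Hardware multipliers unroll this loop into combinational logic; the normal-basis formulation trades the reduction step for cheap squarings, which is what the Massey-Omura design exploits.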
NASA Astrophysics Data System (ADS)
Chen, Shaobo; Chen, Pingxiuqi; Shao, Qiliang; Basha Shaik, Nazeem; Xie, Jiafeng
2017-05-01
Elliptic curve cryptography (ECC) provides much stronger security per bit than traditional cryptosystems, and hence plays an ideal role in secure communication in the smart grid. On the other hand, secure implementation of finite field multiplication over GF(2^m) is considered the bottleneck of ECC. In this paper, we present a novel obfuscation strategy for secure implementation of a systolic field multiplier for ECC in the smart grid. First, for the first time, we propose a novel obfuscation technique to derive an obfuscated systolic finite field multiplier for ECC implementation. Then, we employ a DNA cryptography coding strategy to obfuscate the field multiplier further. Finally, we obtain the area-time-power complexity of the proposed field multiplier to confirm the efficiency of the proposed design. The proposed design is highly obfuscated with low overhead, suitable for secure cryptosystems in the smart grid.
10 CFR 436.36 - Conditions of payment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... baseline under the energy savings performance contract (adjusted if appropriate under § 436.37), multiplied... 10 Energy 3 2010-01-01 2010-01-01 false Conditions of payment. 436.36 Section 436.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methods and...
NASA Astrophysics Data System (ADS)
Hu, Mengsu; Wang, Yuan; Rutqvist, Jonny
2015-06-01
One major challenge in modeling groundwater flow within heterogeneous geological media is that of modeling arbitrarily oriented or intersected boundaries and inner material interfaces. The Numerical Manifold Method (NMM) has recently emerged as a promising method for such modeling, in its ability to handle boundaries, its flexibility in constructing physical cover functions (continuous or with gradient jump), its meshing efficiency with a fixed mathematical mesh (covers), its convenience for enhancing approximation precision, and its integration precision, achieved by simplex integration. In this paper, we report on developing and comparing two new approaches for boundary constraints using the NMM, namely a continuous approach with jump functions and a discontinuous approach with Lagrange multipliers. In the discontinuous Lagrange multiplier method (LMM), the material interfaces are regarded as discontinuities which divide mathematical covers into different physical covers. We define and derive stringent forms of Lagrange multipliers to link the divided physical covers, thus satisfying the continuity requirement of the refraction law. In the continuous Jump Function Method (JFM), the material interfaces are regarded as inner interfaces contained within physical covers. We briefly define jump terms to represent the discontinuity of the head gradient across an interface to satisfy the refraction law. We then make a theoretical comparison between the two approaches in terms of global degrees of freedom, treatment of multiple material interfaces, treatment of small area, treatment of moving interfaces, the feasibility of coupling with mechanical analysis and applicability to other numerical methods. The newly derived boundary-constraint approaches are coded into a NMM model for groundwater flow analysis, and tested for precision and efficiency on different simulation examples. 
We first test the LMM for a Dirichlet boundary and then test both LMM and JFM for an idealized heterogeneous model, comparing the numerical results with analytical solutions. Then we test both approaches for a heterogeneous model and compare the results of hydraulic head and specific discharge. We show that both approaches are suitable for modeling material boundaries, considering high accuracy for the boundary constraints, the capability to deal with arbitrarily oriented or complexly intersected boundaries, and their efficiency using a fixed mathematical mesh.
NASA Astrophysics Data System (ADS)
Seferlis, Andreas K.; Neophytides, Stylianos G.
2014-08-01
Solar photoelectrochemical water splitting on TiO2 for H2 production has been investigated for many years and is still considered very promising. Despite its many advantages, titania's UV-only absorption limits its practical terrestrial applications. In space, though, without ozone's natural UV filter, this handicap is lifted, rendering TiO2 an attractive candidate photoelectrocatalyst for space applications. Reductive doping of TiO2 has been investigated over the years for its impressive results but, until now, without practical application due to the impermanent nature of the doping. In this work we present a method that not only multiplies TiO2's water-splitting efficiency but is also facile, stable and easily applied under working conditions.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
Serial multiplier arrays for parallel computation
NASA Technical Reports Server (NTRS)
Winters, Kel
1990-01-01
Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase uncomplemented clock signal is introduced for this application.
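The shift-add principle behind a serial-parallel multiplier element can be sketched in a few lines (a behavioural model only; the paper's CMOS arrays pipeline this across systolic cells):

```python
def serial_parallel_multiply(a, b, bits=8):
    """Shift-add model of a serial-parallel multiplier: operand `a`
    is held in parallel, while the bits of `b` arrive serially
    (LSB first), one per clock cycle."""
    acc = 0
    for cycle in range(bits):
        b_bit = (b >> cycle) & 1        # serial input bit this cycle
        if b_bit:
            acc += a << cycle           # add the shifted partial product
        # (in hardware the accumulator shifts instead of shifting a)
    return acc

print(serial_parallel_multiply(13, 11))  # → 143
```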
Keynesian multiplier versus velocity of money
NASA Astrophysics Data System (ADS)
Wang, Yougui; Xu, Yan; Liu, Li
2010-08-01
In this paper we present the relation between the Keynesian multiplier and the velocity of money circulation in a money exchange model. For this purpose we modify the original exchange model by constructing an interrelation between income and expenditure. Random exchange yields an agent's income, which, along with the amount of money he possesses, determines his expenditure. In this interactive process, both the circulation of money and the Keynesian multiplier effect can be formulated. The equilibrium values of the Keynesian multiplier are demonstrated to be closely related to the velocity of money. Thus the impacts of macroeconomic policies on aggregate income can be understood by concentrating solely on the variations of money circulation.
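As a hedged illustration of the kind of model discussed (a generic random-exchange sketch with an assumed spending propensity `c`, not the authors' exact formulation), the velocity of money can be measured directly from simulated transaction volume:

```python
import random

random.seed(1)
N = 100                      # number of agents (assumed)
M = 1000.0                   # total, conserved money stock
money = [M / N] * N          # uniform initial holdings
c = 0.8                      # propensity to spend out of holdings (assumed)

periods = 200
volume = 0.0
for _ in range(periods):
    for i in range(N):
        spend = c * money[i]             # expenditure set by holdings
        j = random.randrange(N)          # random trading partner
        money[i] -= spend
        money[j] += spend
        volume += spend

velocity = volume / (M * periods)        # turnovers of the stock per period
multiplier = 1.0 / (1.0 - c)             # textbook Keynesian multiplier
print(velocity, multiplier)
```

The textbook multiplier 1/(1 − c) is computed alongside the measured velocity; the paper's point is that such equilibrium multiplier values can be tied to the velocity of money circulation.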
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix-exponential description of radiative transfer. The eigendecomposition method, which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting, is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix-exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
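The three routes to the matrix exponential mentioned above (eigendecomposition, Taylor series, Padé approximation) can be compared on a toy layer matrix; this is an illustrative sketch only, with `A` and `tau` chosen arbitrarily rather than taken from a radiative-transfer discretization:

```python
import numpy as np

# A small "layer matrix" A; in discrete-ordinate radiative transfer
# the layer solution involves exp(-A * tau) for optical thickness tau.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
tau = 0.1
I = np.eye(6)

# 1) Eigendecomposition, the basis of the discrete-ordinate solution:
lam, V = np.linalg.eig(A)
E_eig = (V @ np.diag(np.exp(-lam * tau)) @ np.linalg.inv(V)).real

# 2) Truncated Taylor series, adequate for optically thin layers:
E_taylor, term = I.copy(), I.copy()
for k in range(1, 20):
    term = term @ (-A * tau) / k
    E_taylor = E_taylor + term

# 3) [1/1] Pade approximant (Cayley form), another thin-layer method:
E_pade = np.linalg.solve(I + A * tau / 2, I - A * tau / 2)

print(np.max(np.abs(E_eig - E_taylor)))     # agree to round-off
```

For small `tau` all three agree; the low-order Padé form is only accurate in the thin-layer regime, which is exactly how the paper positions it.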
Code of Federal Regulations, 2010 CFR
2010-07-01
... times the maximum per diem rate (i.e., lodging plus meals and incidental expenses) prescribed in chapter... immediate family, multiply the same number of days by .25 times the same per diem rate. Your payment will be...
Code of Federal Regulations, 2011 CFR
2011-07-01
... times the maximum per diem rate (i.e., lodging plus meals and incidental expenses) prescribed in chapter... immediate family, multiply the same number of days by .25 times the same per diem rate. Your payment will be...
Symplectic Quantization of a Reducible Theory
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. This is a reducible theory in the sense that not all of its constraints are independent. A ghost-of-ghost procedure like that of the BFV method has to be used, but in terms of Lagrange multipliers.
Investigation and Development of Advanced Surface Microanalysis Techniques and Methods
1983-04-01
discriminates against isobars since each of the isobaric species will have a different atomic number or Z and, therefore, will be stripped of its...allow discrimination between two elements at the same mass but which have different atomic numbers. Multiply-charged ions are not produced during the
Classroom Norms and Individual Smoking Behavior in Middle School
ERIC Educational Resources Information Center
Yarnell, Lisa M.; Brown, H. Shelton, III; Pasch, Keryn E.; Perry, Cheryl L.; Komro, Kelli A.
2012-01-01
Objectives: To investigate whether smoking prevalence in grade-level networks influences individual smoking, suggesting that peers are important social multipliers in teen smoking. Methods: We measured gender-specific, grade-level recent and life-time smoking among urban middle-school students who participated in Project Northland Chicago in a…
Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling
ERIC Educational Resources Information Center
Lee, Taehun; Cai, Li
2012-01-01
Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…
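For context, the combining step that such model fitting builds on is the standard set of multiple-imputation pooling rules (Rubin's rules); a minimal generic sketch, not the authors' proposed fit statistic:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine m imputation-specific estimates and their sampling
    variances with Rubin's rules: pooled point estimate and total
    variance (within + inflated between-imputation variance)."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    qbar = estimates.mean()                  # pooled point estimate
    w = variances.mean()                     # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    t = w + (1 + 1 / m) * b                  # total variance
    return qbar, t

qbar, t = pool_rubin([1.1, 0.9, 1.0], [0.04, 0.05, 0.045])
print(qbar, t)
```

The between-imputation term `b` is exactly what complicates model fit testing: it grows with the randomness the imputations inject.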
Integers Made Easy: Just Walk It Off
ERIC Educational Resources Information Center
Nurnberger-Haag, Julie
2007-01-01
This article describes a multisensory method for teaching students how to multiply and divide as well as add and subtract integers. The author uses sidewalk chalk and the underlying concept of integers to physically and mentally engage students in understanding the concepts of integers, making connections, and developing computational fluency.…
Sampson, Jason S.; Murray, Kermit K.; Muddiman, David C.
2013-01-01
We report the implementation of an infrared laser onto our previously reported matrix-assisted laser desorption electrospray ionization (MALDESI) source with ESI post-ionization yielding multiply charged peptides and proteins. Infrared (IR)-MALDESI is demonstrated for atmospheric pressure desorption and ionization of biological molecules ranging in molecular weight from 1.2 to 17 kDa. High resolving power, high mass accuracy single-acquisition Fourier transform ion cyclotron resonance (FT-ICR) mass spectra were generated from liquid- and solid-state peptide and protein samples by desorption with an infrared laser (2.94 µm) followed by ESI post-ionization. Intact and top-down analysis of equine myoglobin (17 kDa) desorbed from the solid state with ESI post-ionization demonstrates the sequencing capabilities using IR-MALDESI coupled to FT-ICR mass spectrometry. Carbohydrates and lipids were detected through direct analysis of milk and egg yolk using both UV- and IR-MALDESI with minimal sample preparation. Three of the four classes of biological macromolecules (proteins, carbohydrates, and lipids) have been ionized and detected using MALDESI with minimal sample preparation. Sequencing of O-linked glycans, cleaved from mucin using reductive β-elimination chemistry, is also demonstrated. PMID:19185512
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors are different for different algorithms. Summaries on how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also compare theoretically the effectiveness for classification with the discriminant functions from F-S, HTC and F-K with LDF and GMF, and between the linear-mapping-based algorithms and the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, each image consisting of 64 x 64 pixels.
Aryal, Uma K.; Olson, Douglas J.H.; Ross, Andrew R.S.
2008-01-01
Although widely used in proteomics research for the selective enrichment of phosphopeptides from protein digests, immobilized metal-ion affinity chromatography (IMAC) often suffers from low specificity and differential recovery of peptides carrying different numbers of phosphate groups. By systematically evaluating and optimizing different loading, washing, and elution conditions, we have developed an efficient and highly selective procedure for the enrichment of phosphopeptides using a commercially available gallium(III)-IMAC column (PhosphoProfile, Sigma). Phosphopeptide enrichment using the reagents supplied with the column is incomplete and biased toward the recovery and/or detection of smaller, singly phosphorylated peptides. In contrast, elution with base (0.4 M ammonium hydroxide) gives efficient and balanced recovery of both singly and multiply phosphorylated peptides, while loading peptides in a strongly acidic solution (1% trifluoroacetic acid) further increases selectivity toward phosphopeptides, with minimal carryover of nonphosphorylated peptides. 2,5-Dihydroxybenzoic acid, a matrix commonly used when analyzing phosphopeptides by matrix-assisted laser desorption/ionization mass spectrometry, was also evaluated as an additive in loading and eluting solvents. Elution with 50% acetonitrile containing 20 mg/mL dihydroxybenzoic acid and 1% phosphoric acid gave results similar to those obtained using ammonium hydroxide as the eluent, although the latter showed the highest specificity for phosphorylated peptides. PMID:19183793
Effective switching frequency multiplier inverter
Su, Gui-Jia [Oak Ridge, TN; Peng, Fang Z [Okemos, MI
2007-08-07
A switching frequency multiplier inverter for low-inductance machines in which switches are connected in parallel and each switch is independently controlled according to a pulse width modulation scheme. The effective switching frequency is multiplied by the number of switches connected in parallel, while each individual switch operates within its switching-frequency limit. This technique can also be used for other power converters such as DC/DC and AC/DC converters.
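The frequency-multiplying effect of interleaving can be sketched numerically (illustrative figures only; the duty cycle and frequencies below are assumptions, not values from the patent):

```python
import numpy as np

f_sw, N, duty = 10_000, 4, 0.3   # per-switch frequency, switch count, duty (assumed)
fs = 4_000_000                    # sampling rate for this sketch
n = 4000
t = np.arange(n) / fs             # ten carrier periods

def pwm(t, f, duty, phase):
    """Unit-amplitude PWM waveform with its carrier shifted by `phase`."""
    return (((t * f + phase) % 1.0) < duty).astype(float)

# Interleave: each of the N parallel switches runs at f_sw, with its
# carrier shifted by 1/N of a period relative to its neighbour.
total = sum(pwm(t, f_sw, duty, k / N) for k in range(N)) / N

# The ripple of the summed waveform sits at N * f_sw, not f_sw:
spec = np.abs(np.fft.rfft(total - total.mean()))
freqs = np.fft.rfftfreq(n, 1 / fs)
f_ripple = freqs[np.argmax(spec)]
print(f_ripple)                   # ~40 kHz = N * f_sw
```

Phase-shifting the carriers by 1/N of a period cancels the first N−1 ripple harmonics, so the combined output ripples at N times the per-switch rate while each switch still toggles at f_sw.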
7 CFR 1000.50 - Class prices, component prices, and advanced pricing factors.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., rounded to the nearest cent, shall be the protein price per pound times 3.1 plus the other solids price...) Multiply the protein price computed in paragraph (q)(1)(i) of this section by 3.1; (iii) Multiply the other... multiply the result by 1.383; (3) Add to the amount computed pursuant to paragraph (n)(2) of this section...
7 CFR 1000.50 - Class prices, component prices, and advanced pricing factors.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., rounded to the nearest cent, shall be the protein price per pound times 3.1 plus the other solids price...) Multiply the protein price computed in paragraph (q)(1)(i) of this section by 3.1; (iii) Multiply the other... multiply the result by 1.383; (3) Add to the amount computed pursuant to paragraph (n)(2) of this section...
NASA Astrophysics Data System (ADS)
Stoykov, S.; Atanassov, E.; Margenov, S.
2016-10-01
Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one provided by Intel MKL, and it is shown that by considering the properties of the sparse matrix better algorithms can be developed.
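The baseline sparse-by-dense product that such an algorithm improves on can be sketched in CSR form (a plain row-wise reference implementation, not the paper's structure-aware, vectorized Xeon Phi kernel):

```python
import numpy as np

def csr_times_dense(data, indices, indptr, B):
    """Multiply a CSR sparse matrix (data, indices, indptr) by a dense
    matrix B, row by row: C[i] = sum over stored A[i,j] of A[i,j]*B[j]."""
    n_rows = len(indptr) - 1
    C = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            C[i] += data[k] * B[indices[k]]   # axpy on a dense row of B
    return C

# 3x3 sparse A = [[2,0,1],[0,3,0],[0,0,4]] stored in CSR form
data = np.array([2., 1., 3., 4.])
indices = np.array([0, 2, 1, 2])
indptr = np.array([0, 2, 3, 4])
B = np.arange(6.).reshape(3, 2)
print(csr_times_dense(data, indices, indptr, B))
```

Only the stored nonzeros are touched; the FEM-specific structure the paper exploits (fixed block pattern per node) would let the inner loop be unrolled and vectorized.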
Results of the NFIRAOS RTC trade study
NASA Astrophysics Data System (ADS)
Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent L.; Gilles, Luc; Herriot, Glen; Kerley, Daniel A.; Ljusic, Zoran; McVeigh, Eric A.; Prior, Robert; Smith, Malcolm; Wang, Lianqi
2014-07-01
With two large deformable mirrors with a total of more than 7000 actuators that need to be driven from the measurements of six 60x60 LGS WFSs (1.23 Mpixels in total) at 800 Hz with a latency of less than one frame, NFIRAOS presents an interesting real-time computing challenge. This paper reports on a recent trade study to evaluate which current technology could meet this challenge, with the plan to select a baseline architecture by the beginning of NFIRAOS construction in 2014. We have evaluated a number of architectures, ranging from very specialized layouts with custom boards to more generic architectures made from commercial off-the-shelf units (CPUs with or without accelerator boards). For each architecture, we have found the most suitable algorithm, mapped it onto the hardware and evaluated the performance through benchmarking whenever possible. We have evaluated a large number of criteria, including cost, power consumption, reliability and flexibility, and proceeded with scoring each architecture based on these criteria. We have found that, with today's technology, the NFIRAOS requirements are well within reach of off-the-shelf commercial hardware running a parallel implementation of the straightforward matrix-vector multiply (MVM) algorithm for wave-front reconstruction. Even accelerators such as GPUs and Xeon Phis are no longer necessary. Indeed, we have found that the entire NFIRAOS RTC can be handled by seven 2U high-end PC-servers using 10GbE connectivity. Accelerators are only required for the off-line process of updating the control matrix every ~10 s, as observing conditions change.
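A back-of-envelope check shows why a plain MVM fits on CPU servers. The figures below are assumptions read off the abstract (six 60x60 WFSs with two slope measurements per subaperture, 7000 actuators, 800 Hz), not the project's exact dimensioning:

```python
# Rough FLOP budget for an MVM wavefront reconstruction (assumed figures)
n_slopes = 6 * 60 * 60 * 2      # six 60x60 WFSs, x/y slope per subaperture
n_act = 7000                    # DM actuators (paper: "more than 7000")
rate = 800                      # reconstructions per second

flops_per_frame = 2 * n_act * n_slopes    # one multiply + add per matrix entry
tflops = flops_per_frame * rate / 1e12
print(round(tflops, 3))         # ~0.48 TFLOP/s, feasible on a few CPU servers
```

About half a TFLOP/s sustained, which a handful of modern dual-socket servers can deliver, is consistent with the paper's finding that accelerators are unnecessary for the real-time path.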
Caine, Jonathan S.; Bruhn, R.L.; Forster, C.B.
2010-01-01
Outcrop mapping and fault-rock characterization of the Stillwater normal fault zone in Dixie Valley, Nevada, are used to document and interpret ancient hydrothermal fluid flow and its possible relationship to seismic deformation. The fault zone is composed of distinct structural and hydrogeological components. Previous work on the fault rocks is extended to the map scale where a distinctive fault core shows a spectrum of different fault-related breccias. These include predominantly clast-supported breccias with angular clasts that are cut by zones containing breccias with rounded clasts that are also clast supported. These are further cut by breccias that are predominantly matrix supported with angular and rounded clasts. The fault-core breccias are surrounded by a heterogeneously fractured damage zone. Breccias are bounded between major, silicified slip surfaces, forming large pod-like structures, systematically oriented with long axes parallel to slip. Matrix-supported breccias have multiply brecciated, angular and rounded clasts revealing episodic deformation and fluid flow. These breccias have a quartz-rich matrix with microcrystalline anhedral, equant, and pervasively conformable mosaic texture. The breccia pods are interpreted to have formed by decompression boiling and rapid precipitation of hydrothermal fluids whose flow was induced by coseismic, hybrid dilatant-shear deformation and hydraulic connection to a geothermal reservoir. The addition of hydrothermal silica cement localized in the core at the map scale causes fault-zone widening, local sealing, and mechanical heterogeneities that impact the evolution of the fault zone throughout the seismic cycle. © 2010.
Method of forming a ceramic matrix composite and a ceramic matrix component
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Diego, Peter; Zhang, James
A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity and filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure like the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with those found in the literature using the Dirac method.
David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog
2012-01-01
Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...
Fast kinematic ray tracing of first- and later-arriving global seismic phases
NASA Astrophysics Data System (ADS)
Bijwaard, Harmen; Spakman, Wim
1999-11-01
We have developed a ray tracing algorithm that traces first- and later-arriving global seismic phases precisely (traveltime errors of the order of 0.1 s), and with great computational efficiency (15 rays s⁻¹). To achieve this, we have extended and adapted two existing ray tracing techniques: a graph method and a perturbation method. The two resulting algorithms are able to trace (critically) refracted, (multiply) reflected, some diffracted (Pdiff), and (multiply) converted seismic phases in a 3-D spherical geometry, thus including the largest part of seismic phases that are commonly observed on seismograms. We have tested and compared the two methods in 2-D and 3-D Cartesian and spherical models, for which both algorithms have yielded precise paths and traveltimes. These tests indicate that only the perturbation method is computationally efficient enough to perform 3-D ray tracing on global data sets of several million phases. To demonstrate its potential for non-linear tomography, we have applied the ray perturbation algorithm to a data set of 7.6 million P and pP phases used by Bijwaard et al. (1998) for linearized tomography. This showed that the expected heterogeneity within the Earth's mantle leads to significant non-linear effects on traveltimes for 10 per cent of the applied phases.
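The graph-method ingredient can be sketched as a shortest-path first-arrival computation on a grid (a minimal Dijkstra toy with assumed unit node spacing and an 8-neighbour stencil, not the authors' production tracer):

```python
import heapq

def first_arrivals(slowness, src):
    """Dijkstra shortest-path traveltimes on a 2-D grid.
    slowness[i][j] is in time per unit length; edge traveltime is the
    edge length times the average slowness of its two endpoints."""
    ny, nx = len(slowness), len(slowness[0])
    INF = float("inf")
    t = [[INF] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        ti, (i, j) = heapq.heappop(pq)
        if ti > t[i][j]:
            continue                      # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                    d = (di * di + dj * dj) ** 0.5
                    s = 0.5 * (slowness[i][j] + slowness[ni][nj])
                    if ti + d * s < t[ni][nj]:
                        t[ni][nj] = ti + d * s
                        heapq.heappush(pq, (t[ni][nj], (ni, nj)))
    return t

grid = [[1.0] * 5 for _ in range(5)]      # uniform slowness
times = first_arrivals(grid, (0, 0))
print(times[0][4], times[4][4])           # 4.0 and 4*sqrt(2)
```

Graph methods like this are robust (they always find the global first arrival) but, as the abstract notes, they are far too slow at global scale, which is why the perturbation method carries the million-phase workload.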
3D Bragg coherent diffractive imaging of five-fold multiply twinned gold nanoparticle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jong Woo; Ulvestad, Andrew; Manna, Sohini; ...
2017-08-11
The formation mechanism of five-fold multiply twinned nanoparticles has been a long-standing topic because of their geometrical incompatibility. Various models have been proposed to explain how the internal structure of the multiply twinned nanoparticles accommodates the constraints of the solid-angle deficiency. Here, we investigate the internal structure, strain field, and strain energy density of 600 nm sized five-fold multiply twinned gold nanoparticles quantitatively using Bragg coherent diffractive imaging, which is suitable for the study of buried defects and three-dimensional strain distributions with great precision. Our study reveals that the strain energy density in five-fold multiply twinned gold nanoparticles is an order of magnitude higher than that of single nanocrystals such as an octahedron and a triangular plate synthesized under the same conditions. This result indicates that the strain developed while accommodating an angular misfit, although partially released through the introduction of structural defects, is still large throughout the crystal.
Temperature Effects in Varactors and Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Mehdi, Imran
2001-01-01
Varactor diode multipliers are a critical part of many THz measurement systems. The power and efficiencies of these devices limit the available power for THz sources. Varactor operation is determined by the physics of the varactor device and a careful doping profile design is needed to optimize the performance. Higher doped devices are limited by junction breakdown and lower doped structures are limited by current saturation. Higher doped structures typically have higher efficiencies and lower doped structures typically have higher powers at the same operating frequency and impedance level. However, the device material properties are also a function of the operating temperature. Recent experimental evidence has shown that the power output of a multiplier can be improved by cooling the device. We have used a particle Monte Carlo simulation to investigate the temperature dependent velocity vs. electric field in GaAs. This information was then included in a nonlinear device circuit simulator to predict multiplier performance for various temperatures and device designs. This paper will describe the results of this analysis of temperature dependent multiplier operation.
Schultz, Thomas J.; Kotidis, Petros A.; Woodroffe, Jaime A.; Rostler, Peter S.
1995-01-01
A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading.
Method and apparatus for measuring surface movement of an object using a polarizing interferometer
Schultz, Thomas J.; Kotidis, Petros A.; Woodroffe, Jaime A.; Rostler, Peter S.
1995-01-01
A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading.
Method and apparatus for measuring surface movement of an object using a polarizing interferometer
Schultz, T.J.; Kotidis, P.A.; Woodroffe, J.A.; Rostler, P.S.
1995-05-09
A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading. 38 figs.
Schultz, T.J.; Kotidis, P.A.; Woodroffe, J.A.; Rostler, P.S.
1995-04-25
A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading. 38 figs.
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2012 CFR
2012-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2013 CFR
2013-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2014 CFR
2014-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2011 CFR
2011-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
Comby, G.
1996-10-01
The Ceramic Electron Multiplier (CEM) is a compact, robust, linear and fast multi-channel electron multiplier. The Multi-Layer Ceramic Technique (MLCT) makes it possible to build metallic dynodes inside a compact ceramic block. Activation of the metallic dynodes enhances their secondary electron emission (SEE). The CEM can be used in multi-channel photomultipliers, multi-channel light intensifiers, ion detection, spectroscopy, analysis of time-of-flight events, particle detection and Cherenkov imaging detectors. (auth)
FamNet: A Framework to Identify Multiplied Modules Driving Pathway Expansion in Plants
Tohge, Takayuki; Klie, Sebastian; Fernie, Alisdair R.
2016-01-01
Gene duplications generate new genes that can acquire similar but often diversified functions. Recent studies of gene coexpression networks have indicated that, not only genes, but also pathways can be multiplied and diversified to perform related functions in different parts of an organism. Identification of such diversified pathways, or modules, is needed to expand our knowledge of biological processes in plants and to understand how biological functions evolve. However, systematic explorations of modules remain scarce, and no user-friendly platform to identify them exists. We have established a statistical framework to identify modules and show that approximately one-third of the genes of a plant’s genome participate in hundreds of multiplied modules. Using this framework as a basis, we implemented a platform that can explore and visualize multiplied modules in coexpression networks of eight plant species. To validate the usefulness of the platform, we identified and functionally characterized pollen- and root-specific cell wall modules that multiplied to confer tip growth in pollen tubes and root hairs, respectively. Furthermore, we identified multiplied modules involved in secondary metabolite synthesis and corroborated them by metabolite profiling of tobacco (Nicotiana tabacum) tissues. The interactive platform, referred to as FamNet, is available at http://www.gene2function.de/famnet.html. PMID:26754669
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Factors affecting volume calculation with single photon emission tomography (SPECT) method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T.H.; Lee, K.H.; Chen, D.C.P.
1985-05-01
Several factors may influence the calculation of absolute volumes (VL) from SPECT images, and their effects must be established to optimize the technique. The authors investigated the effects of the following on VL calculation: percentage of background (BG) subtraction, reconstruction filters, sample activity, angular sampling, and edge detection methods. Transaxial images of a liver-trunk phantom filled with Tc-99m from 1 to 3 µCi/cc were obtained in a 64x64 matrix with a Siemens Rota Camera and MDS computer. Different reconstruction filters were used, including Hanning 20, 32, 64 and Butterworth 20, 32. Angular sampling was performed in 3- and 6-degree increments. ROIs were drawn manually and with an automatic edge detection program around the image after BG subtraction. VLs were calculated by multiplying the number of pixels within the ROI by the slice thickness and the x- and y-calibrations of each pixel. One or 2 pixels per slice thickness were applied in the calculation. An inverse correlation was found between the calculated VL and the % of BG subtraction (r=0.99 for 1, 2, 3 µCi/cc activity). Based on the authors' linear regression analysis, the correct liver VL was measured with about 53% BG subtraction. The reconstruction filters, slice thickness and angular sampling had only minor effects on the calculated phantom volumes. Detection of the ROI automatically by the computer was not as accurate as the manual method. The authors conclude that the % of BG subtraction appears to be the most important factor affecting the VL calculation. With good quality control and appropriate reconstruction factors, correct VL calculations can be achieved with SPECT.
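The volume calculation described above (ROI pixel count multiplied by slice thickness and the x- and y-calibration of each pixel) can be sketched as follows; the function name and all numbers are hypothetical illustrations, not data from the study:

```python
import numpy as np

def spect_volume(roi_mask, slice_thickness_cm, pixel_dx_cm, pixel_dy_cm):
    # Count ROI pixels over all slices, then multiply by slice thickness
    # and the x- and y-calibration of each pixel.
    n_pixels = int(np.sum(roi_mask))
    return n_pixels * slice_thickness_cm * pixel_dx_cm * pixel_dy_cm

# Hypothetical example: 3 transaxial slices of a 64x64 matrix with a
# 25 x 20 pixel ROI per slice, 1 cm slices, 0.5 cm pixel calibration.
mask = np.zeros((3, 64, 64), dtype=bool)
mask[:, 20:45, 20:40] = True          # 500 ROI pixels per slice
volume_cc = spect_volume(mask, 1.0, 0.5, 0.5)
print(volume_cc)                      # 1500 pixels * 1 cm * 0.25 cm^2 = 375.0
```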
Chandra, A; Rana, J; Li, Y
2001-08-01
A method has been established and validated for identification and quantification of individual, as well as total, anthocyanins by HPLC and LC/ES-MS in botanical raw materials used in the herbal supplement industry. The anthocyanins were separated and identified on the basis of their respective M(+) (cation) using LC/ES-MS. Separated anthocyanins were individually calculated against one commercially available anthocyanin external standard (cyanidin-3-glucoside chloride) and expressed as its equivalents. Amounts of each anthocyanin calculated as external standard equivalents were then multiplied by a molecular-weight correction factor to afford their specific quantities. Experimental procedures and the use of molecular-weight correction factors are substantiated and validated using Balaton tart cherry and elderberry as templates. Cyanidin-3-glucoside chloride has been widely used in the botanical industry to calculate total anthocyanins. In our studies on tart cherry and elderberry, its use as external standard followed by use of molecular-weight correction factors should provide relatively accurate results for total anthocyanins, because of the presence of cyanidin as their major anthocyanidin backbone. The method proposed here is simple and has a direct sample preparation procedure without any solid-phase extraction. It enables selection and use of commercially available anthocyanins as external standards for quantification of specific anthocyanins in the sample matrix irrespective of their commercial availability as analytical standards. It can be used as a template and applied for similar quantification in several anthocyanin-containing raw materials for routine quality control procedures, thus providing consistency in analytical testing of botanical raw materials used for manufacturing efficacious and true-to-the-label nutritional supplements.
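The correction step described above (external-standard equivalent multiplied by a molecular-weight correction factor) can be sketched as follows. The ratio-of-molecular-weights form of the factor and the example molecular weight of ~631 are illustrative assumptions, not values taken from the paper:

```python
# Molecular weight of the external standard, cyanidin-3-glucoside chloride.
MW_CY3G_CL = 484.84

def corrected_amount(equivalent_mg, analyte_mw, standard_mw=MW_CY3G_CL):
    # Scale the external-standard-equivalent amount by a molecular-weight
    # correction factor, here assumed to be the simple ratio of the
    # analyte's molecular weight to the standard's.
    return equivalent_mg * (analyte_mw / standard_mw)

# Hypothetical example: 10 mg expressed as cyanidin-3-glucoside chloride
# equivalents, corrected for an anthocyanin of molecular weight ~631.
print(corrected_amount(10.0, 630.98))
```

When the analyte is the standard itself, the factor is 1 and the amount is unchanged.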
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasnoshchekov, Sergey V.; Stepanov, Nikolay F.
2013-11-14
In the theory of anharmonic vibrations of a polyatomic molecule, mixing the zero-order vibrational states due to cubic, quartic and higher-order terms in the potential energy expansion leads to the appearance of more-or-less isolated blocks of states (also called polyads), connected through multiple resonances. Such polyads of states can be characterized by a common secondary integer quantum number. This polyad quantum number is defined as a linear combination of the zero-order vibrational quantum numbers, attributed to normal modes, multiplied by non-negative integer polyad coefficients, which are subject to definition for any particular molecule. According to Kellman's method [J. Chem. Phys. 93, 6630 (1990)], the corresponding formalism can be conveniently described using vector algebra. In the present work, a systematic consideration of polyad quantum numbers is given in the framework of the canonical Van Vleck perturbation theory (CVPT) and its numerical-analytic operator implementation for reducing the Hamiltonian to the quasi-diagonal form, earlier developed by the authors. It is shown that CVPT provides a convenient method for the systematic identification of essential resonances and the definition of a polyad quantum number. The method presented is generally suitable for molecules of significant size and complexity, as illustrated by several examples of molecules up to six atoms. The polyad quantum number technique is very useful for assembling comprehensive basis sets for the matrix representation of the Hamiltonian after removal of all non-resonance terms by CVPT. In addition, the classification of anharmonic energy levels according to their polyad quantum numbers provides an additional means for the interpretation of observed vibrational spectra.
Faust, M A; Robison, O W; Tess, M W
1992-07-01
A stochastic life-cycle swine production model was used to study the effect of female replacement rates in the dam-daughter pathway for a tiered breeding structure on genetic change and returns to the breeder. Genetic, environmental, and economic parameters were used to simulate characteristics of individual pigs in a system producing F1 female replacements. Evaluated were maximum culling ages for nucleus and multiplier tier sows. System combinations included one- and five-parity alternatives for both levels and 10-parity options for the multiplier tier. Yearly changes and average phenotypic levels were computed for performance and economic measures. Generally, at the nucleus level, responses to 10 yr of selection for sow and pig performance in five-parity herds were 70 to 85% of response in one-parity herds. Similarly, the highest selection responses in multiplier herds were from systems with one-parity nucleus tiers. Responses in these were typically greater than 115% of the response for systems with the smallest yearly change, namely, the five-parity nucleus and five- and 10-parity multiplier levels. In contrast, the most profitable multiplier tiers (10-parity) had the lowest replacement costs. Within a multiplier culling strategy, rapid genetic change was desirable. Differences between systems that culled after five or 10 parities were smaller than differences between five- and one-parity multiplier options. To recover production costs, systems with the lowest returns required 140% of market hog value for gilts available to commercial tiers, whereas more economically efficient systems required no premium.
Quad-channel beam switching WR3-band transmitter MMIC
NASA Astrophysics Data System (ADS)
Müller, Daniel; Eren, Gülesin; Wagner, Sandrine; Tessmann, Axel; Leuther, Arnulf; Zwick, Thomas; Kallfass, Ingmar
2017-05-01
Millimeter wave radar systems offer several advantages, such as the combination of high resolution and penetration of adverse atmospheres like smoke, dust or rain. This paper presents a monolithic millimeter wave integrated circuit (MMIC) transmitter which offers four-channel beam steering capabilities and can be used as a radar or communication system transmitter. At the local oscillator input, in order to simplify packaging, a frequency tripler is used to multiply the 76.6 - 83.3 GHz input signal to the intended 230 - 250 GHz output frequency range. A resistive mixer is used for the conversion of the intermediate frequency signal into the RF domain. The actual beam steering network is realized using an active single pole quadruple throw (SP4T) switch, which is connected to an integrated Butler matrix. The MMIC was fabricated in a 35 nm InGaAs mHEMT process and has a size of 4.0 mm × 1.5 mm.
1987-08-01
[Garbled FORTRAN listing. The recoverable comments indicate that the program calls subroutine ELSV to invert a matrix and writes the determinant DE, followed by a multiplication step; an elimination loop then accumulates DE row by row; a second routine, declared with COMPLEX*16 and REAL*8 variables, "gives the potential and the" (remainder lost).]
An application of the CORELAP algorithm to improve the space utilization of the classroom
NASA Astrophysics Data System (ADS)
Sembiring, A. C.; Budiman, I.; Mardhatillah, A.; Tarigan, U. P.; Jawira, A.
2018-04-01
The high demand for rooms due to the increasing number of students requires additional rooms, while the limited number of rooms, the price of land, and the cost of building infrastructure require effective and efficient use of space. The facility layout is redesigned using the Computerized Relationship Layout Planning (CORELAP) algorithm based on total closeness rating (TCR), calculating the squared distance between departments from the coordinates of each department's central point. The distance obtained is multiplied by the material flow from the from-to chart matrix. The analysis compares the total distance between the initial layout and the proposed layout and then examines the activities performed in each room. The CORELAP algorithm processing yields an increase in room-usage efficiency of 14.98% over the previous arrangement.
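The distance-times-flow score described above can be sketched as follows. The `layout_cost` function, the centroids, and the from-to chart are hypothetical, and reading "square distance" as squared Euclidean distance between centre points is an assumption:

```python
def layout_cost(centroids, flow):
    # Sum over department pairs: squared Euclidean distance between
    # department centre points, weighted by the from-to material flow.
    total = 0.0
    for i, (xi, yi) in enumerate(centroids):
        for j, (xj, yj) in enumerate(centroids):
            d2 = (xi - xj) ** 2 + (yi - yj) ** 2
            total += d2 * flow[i][j]
    return total

# Hypothetical rooms and from-to chart (not data from the study).
centroids = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
flow = [[0, 2, 1],
        [0, 0, 4],
        [0, 0, 0]]
print(layout_cost(centroids, flow))   # 9*2 + 16*1 + 25*4 = 134.0
```

Comparing this score for the initial and proposed layouts gives the improvement figure the abstract reports.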
NASA Astrophysics Data System (ADS)
Yuan, Yanchao; Sun, Yanxiao; Yan, Shijing; Zhao, Jianqing; Liu, Shumei; Zhang, Mingqiu; Zheng, Xiaoxing; Jia, Lei
2017-03-01
Nondestructive retrieval of expensive carbon fibres (CFs) from CF-reinforced thermosetting advanced composites widely applied in high-tech fields has remained inaccessible, as the harsh conditions required to recycle high-performance resin matrices unavoidably damage the structure and properties of CFs. Degradable thermosetting resins with stable covalent structures offer a potential solution to this conflict. Here we design a new synthesis scheme and prepare a recyclable CF-reinforced poly(hexahydrotriazine) resin matrix advanced composite. The multiple recycling experiments and characterization data establish that this composite demonstrates performance comparable to those of its commercial counterparts, and, more importantly, it realizes multiple intact recoveries of CFs and near-total recycling of the principal raw materials through gentle depolymerization in a certain dilute acid solution. To the best of our knowledge, this study demonstrates for the first time a feasible and environment-friendly preparation-recycle-regeneration strategy for multiple CF recycling from CF-reinforced advanced composites.
Bioactive and Biodegradable Nanocomposites and Hybrid Biomaterials for Bone Regeneration
Allo, Bedilu A.; Costa, Daniel O.; Dixon, S. Jeffrey; Mequanint, Kibret; Rizkalla, Amin S.
2012-01-01
Strategies for bone tissue engineering and regeneration rely on bioactive scaffolds to mimic the natural extracellular matrix and act as templates onto which cells attach, multiply, migrate and function. Of particular interest are nanocomposites and organic-inorganic (O/I) hybrid biomaterials based on selective combinations of biodegradable polymers and bioactive inorganic materials. In this paper, we review the current state of bioactive and biodegradable nanocomposite and O/I hybrid biomaterials and their applications in bone regeneration. We focus specifically on nanocomposites based on nano-sized hydroxyapatite (HA) and bioactive glass (BG) fillers in combination with biodegradable polyesters and their hybrid counterparts. Topics include 3D scaffold design, materials that are widely used in bone regeneration, and recent trends in next generation biomaterials. We conclude with a perspective on the future application of nanocomposites and O/I hybrid biomaterials for regeneration of bone. PMID:24955542
Computer simulations and real-time control of ELT AO systems using graphical processing units
NASA Astrophysics Data System (ADS)
Wang, Lianqi; Ellerbroek, Brent
2012-07-01
The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPU) using NVIDIA CUDA technology, we achieved a 10-fold speedup for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation for the TMT narrow-field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
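Applying a precomputed reconstructor in matrix-vector-multiply (MVM) form can be sketched as follows; the matrix sizes and the serial loop standing in for the multi-GPU row partition are illustrative assumptions, not the MAOS implementation:

```python
import numpy as np

def apply_mvm(reconstructor_blocks, gradients):
    # Apply the precomputed reconstructor in matrix-vector-multiply form.
    # In the real system each row block would live on its own GPU; here
    # the blocks are applied serially as a stand-in for that partition.
    return np.concatenate([block @ gradients for block in reconstructor_blocks])

rng = np.random.default_rng(0)
R = rng.standard_normal((16, 8))       # hypothetical reconstructor matrix
g = rng.standard_normal(8)             # wavefront-gradient measurement vector
blocks = np.array_split(R, 4, axis=0)  # 4-way row partition
commands = apply_mvm(blocks, g)        # identical to R @ g
```

Because the row blocks are independent, each block's product can be computed concurrently, which is what makes the MVM form attractive for low-latency application.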
A Practical, Hardware Friendly MMSE Detector for MIMO-OFDM-Based Systems
NASA Astrophysics Data System (ADS)
Kim, Hun Seok; Zhu, Weijun; Bhatia, Jatin; Mohammed, Karim; Shah, Anish; Daneshrad, Babak
2008-12-01
Design and implementation of a highly optimized MIMO (multiple-input multiple-output) detector requires co-optimization of the algorithm with the underlying hardware architecture. Special attention must be paid to application requirements such as throughput, latency, and resource constraints. In this work, we focus on a highly optimized, matrix-inversion-free MMSE (minimum mean square error) MIMO detector implementation. The work has resulted in a real-time field-programmable gate array (FPGA)-based implementation on a Xilinx Virtex-2 6000 using only 9003 logic slices, 66 multipliers, and 24 Block RAMs (less than 33% of the overall resources of this part). The design delivers over 420 Mbps sustained throughput with a small 2.77-microsecond latency. The designed linear MMSE MIMO detector is capable of complying with the proposed IEEE 802.11n standard.
Multicore Challenges and Benefits for High Performance Scientific Computing
Nielsen, Ida M. B.; Janssen, Curtis L.
2008-01-01
Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction-level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed-memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
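A minimal shared-memory sketch of a blocked matrix multiply in the spirit of the hybrid model discussed above (threads only; a real hybrid code would additionally distribute blocks across nodes with message passing):

```python
import threading
import numpy as np

def threaded_matmul(A, B, n_threads=4):
    # Row-blocked matrix multiply with one thread per block: a shared-memory
    # stand-in for the hybrid message-passing/multi-threading model, in which
    # each message-passing rank would own a block and thread within it.
    C = np.empty((A.shape[0], B.shape[1]))

    def worker(rows):
        C[rows] = A[rows] @ B      # each thread writes a disjoint row block

    row_blocks = np.array_split(np.arange(A.shape[0]), n_threads)
    threads = [threading.Thread(target=worker, args=(r,)) for r in row_blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return C
```

Because the row blocks are disjoint, no locking is needed on the output matrix.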
DOC II 32-bit digital optical computer: optoelectronic hardware and software
NASA Astrophysics Data System (ADS)
Stone, Richard V.; Zeise, Frederick F.; Guilfoyle, Peter S.
1991-12-01
This paper describes current electronic hardware subsystems and software code which support OptiComp's 32-bit general purpose digital optical computer (DOC II). The reader is referred to earlier papers presented in this section for a thorough discussion of theory and application regarding DOC II. The primary optoelectronic subsystems include the drive electronics for the multichannel acousto-optic modulators, the avalanche photodiode amplifier, as well as threshold circuitry, and the memory subsystems. This device utilizes a single optical Boolean vector matrix multiplier and its VME based host controller interface in performing various higher level primitives. OptiComp Corporation wishes to acknowledge the financial support of the Office of Naval Research, the National Aeronautics and Space Administration, the Rome Air Development Center, and the Strategic Defense Initiative Office for the funding of this program under contracts N00014-87-C-0077, N00014-89-C-0266 and N00014-89-C- 0225.
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
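The masked-reconstruction objective at the heart of this approach, minimizing error on observed entries only, can be sketched in its linear special case; DMF itself replaces the product of factors with a multilayer network. The code below is an illustrative sketch, not the authors' implementation, and all names and numbers are assumptions:

```python
import numpy as np

def masked_mf(X, mask, rank, lr=0.01, iters=2000, seed=0):
    # Gradient descent on the factorization U @ V, with the reconstruction
    # error evaluated on observed entries only (mask == 1). This is the
    # linear special case of the idea; DMF replaces U @ V with a
    # multilayer neural network mapping latent variables to outputs.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((rank, n))
    for _ in range(iters):
        E = mask * (U @ V - X)                       # observed-entry error
        U, V = U - lr * E @ V.T, V - lr * U.T @ E    # simultaneous updates
    return U @ V

# Hypothetical rank-1 matrix with two entries hidden, then recovered.
X = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0])
mask = np.ones_like(X)
mask[0, 3] = 0.0
mask[2, 1] = 0.0
X_hat = masked_mf(X, mask, rank=1)
```

Once the factors fit the observed entries, the missing ones are read off the completed product, mirroring how DMF recovers them by propagating latent variables through the network.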
Zhu, Chengzhou; Shi, Qiurong; Fu, Shaofang; ...
2018-04-04
Delicately engineering well-defined noble metal aerogels with favorable structural and compositional features is of vital importance for wide applications. Here, we report a one-pot and facile method for synthesizing core–shell PdPb@Pd hydrogels/aerogels with multiply-twinned grains and an ordered intermetallic phase, using sodium hypophosphite as a multifunctional reducing agent. Due to the accelerated gelation kinetics induced by increased reaction temperature and the specific function of sodium hypophosphite, the formation of hydrogels can be completed within 4 h. As a result, owing to their unique porous structure and favorable geometric and electronic effects, the optimized PdPb@Pd aerogels exhibit enhanced electrochemical performance towards ethylene glycol oxidation, with a mass activity 5.8 times higher than that of Pd black.
Quantum Associative Neural Network with Nonlinear Search Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Rigui; Wang, Huian; Wu, Qian; Shi, Yang
2012-03-01
Based on an analysis of the properties of quantum linear superposition, and to overcome the complexity of the existing quantum associative memory proposed by Ventura, a new storage method for multiple patterns is proposed in this paper by constructing the quantum array with binary decision diagrams. The adoption of the nonlinear search algorithm also increases the pattern-recall speed of this multi-pattern model to O(log₂ 2^(n−t)) = O(n−t) time complexity, where n is the number of quantum bits and t is the quantum information of the t quantum bits. Results of case analysis show that the associative neural network model proposed in this paper, based on quantum learning, is much better and more optimized than other researchers' counterparts, both in avoiding additional qubits or extraordinary initial operators and in storing patterns and improving recall speed.
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Readout electronics for the GEM detector
NASA Astrophysics Data System (ADS)
Kasprowicz, G.; Czarski, T.; Chernyshova, M.; Czyrkowski, H.; Dabrowski, R.; Dominik, W.; Jakubowska, K.; Karpinski, L.; Kierzkowski, K.; Kudla, I. M.; Pozniak, K.; Rzadkiewicz, J.; Salapa, Z.; Scholz, M.; Zabolotny, W.
2011-10-01
A novel approach to the Gas Electron Multiplier (GEM) detector readout is presented. Unlike commonly used methods based on discriminators [2], [3] and analogue FIFOs [1], the method developed uses simultaneously sampling high-speed ADCs and advanced FPGA-based processing logic to estimate the energy of every single photon. The method is applied to every GEM strip signal. It is especially useful in the case of crystal-based spectrometers for soft X-rays, where higher-order reflections need to be identified and rejected [5].
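As a rough illustration of the per-strip processing described (baseline removal followed by a per-pulse amplitude estimate), here is a minimal offline model. It is a sketch under assumptions: the function name, the fixed threshold, and the peak-amplitude energy estimate are illustrative choices, not the FPGA implementation.

```python
import numpy as np

def photon_energies(samples, threshold, baseline_len=16):
    """Hypothetical offline model of per-strip pulse processing: estimate the
    baseline from the first `baseline_len` samples, then report the peak
    amplitude of each above-threshold pulse as that photon's energy."""
    baseline = samples[:baseline_len].mean()
    s = samples - baseline
    above = s > threshold
    energies = []
    i, n = 0, len(s)
    while i < n:
        if above[i]:
            j = i
            while j < n and above[j]:  # walk to the end of this pulse
                j += 1
            energies.append(s[i:j].max())  # peak amplitude as energy estimate
            i = j
        else:
            i += 1
    return energies
```

A hardware version would work on a continuous sample stream and typically use a matched or trapezoidal filter rather than a raw peak, but the structure (baseline subtraction, thresholding, one energy per pulse) is the same.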
Stark problem in terms of the Stokes multipliers for the triconfluent Heun equation
NASA Astrophysics Data System (ADS)
Osherov, V. I.; Ushakov, V. G.
2013-11-01
The solution of the Stark problem is obtained in terms of the Stokes multipliers for the triconfluent Heun equation (the quartic oscillator equation). The Stokes multipliers are found in analytical form at positive energies. For negative energies, the Stokes parameters are calculated within a consistent asymptotic approach. The scattering phase and the positions and widths of the Stark resonances are determined as solutions of an implicit equation.
Non-cross talk multi-channel photomultiplier using guided electron multipliers
Gomez, J.; Majewski, S.; Weisenberger, A.G.
1995-09-26
An improved multi-channel electron multiplier is provided that exhibits zero cross-talk and high-rate operation. Resistive-material input and output masks are employed to control the divergence of electrons. Electron multiplication takes place in closed channels. Several embodiments are provided for these channels, including a continuous resistive emissive multiplier and a discrete resistive multiplier with discrete dynode chains interspaced with resistive layer masks. Both basic embodiments provide high-gain multiplication of electrons without accumulating surface charges while confining electrons to their proper channels to eliminate cross-talk. The invention can be applied, for example, to improve the performance of ion mass spectrometers, positron emission tomography devices, DNA sequencing and other beta-radiography applications, and many applications in particle physics. 28 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Holak; Lim, Youbong; Choe, Wonho, E-mail: wchoe@kaist.ac.kr
2015-04-13
Multiply charged ions and plume characteristics in Hall thruster plasmas are investigated with regard to magnetic field configuration. Differences in the plume shape and in the fraction of ions with different charge states are demonstrated for the counter-current and co-current magnetic field configurations, respectively. Significantly larger numbers of multiply charged and higher-charge-state ions, including Xe⁴⁺, are observed in the co-current configuration than in the counter-current configuration. The large fraction of multiply charged ions and the high ion currents in this experiment may be related to the strong electron confinement due to the strong magnetic-mirror effect in the co-current magnetic field configuration.
Based on in vitro studies, bacteria in the genus Legionella are believed to multiply within protozoa such as amoebae in aquatic environments. Current methods used for detection of Legionella species, however, are not designed to show this relationship. Thus the natural intimate a...
Manufacturing Methods and Techniques for Miniature High Voltage Hybrid Multiplier Modules
1977-05-06
technologies and materials, and to demonstrate the production-line capability to fabricate at a rate of 125 acceptable units per 40-hour week.
Tic Tac Toe Math. Instructional Guide.
ERIC Educational Resources Information Center
Cooper, Richard
This instructional guide and set of three companion workbooks are intended for use in an arithmetic course based on the Tic Tac Toe method of addition and multiplication, which is an alternative means of learning to add and multiply that was developed for students whose learning disabilities (including difficulty in distinguishing left from right…
Fostering Remainder Understanding in Fraction Division
ERIC Educational Resources Information Center
Zembat, Ismail O.
2017-01-01
Most students can follow this simple procedure for division of fractions: "Ours is not to reason why; just invert and multiply." But how many really understand what division of fractions means, especially the meaning of the remainder in fraction division? The purpose of this article is to provide an instructional method as a…
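The remainder idea this abstract points at can be made concrete with a small worked example (the serving-size numbers are illustrative, not from the article): how many 3/4-cup servings fit in 2 cups, and what does the leftover amount mean?

```python
from fractions import Fraction

dividend = Fraction(2)     # 2 cups of flour
divisor = Fraction(3, 4)   # one serving is 3/4 cup

quotient = dividend / divisor  # 8/3, i.e. 2 2/3 servings
whole_servings = quotient.numerator // quotient.denominator  # 2 full servings
leftover = dividend - whole_servings * divisor  # 1/2 cup of flour remains

# The subtlety of the remainder: the leftover is 1/2 in CUPS but 2/3 in
# SERVINGS, and the fractional part of the quotient refers to servings.
fraction_of_serving = leftover / divisor  # 2/3 of a serving
```

The common confusion is reading "2 2/3" as "2 servings with 2/3 cup left over"; the distinction between units of the dividend and units of the divisor is exactly what the remainder question probes.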
Antecedents and Concomitants of Parenting Stress in Adolescent Mothers in Foster Care
ERIC Educational Resources Information Center
Budd, Karen S.; Holdsworth, Michelle J. A.; HoganBruen, Kathy D.
2006-01-01
Objective: This study's aim was to examine variables associated with different short-term trajectories in multiply disadvantaged adolescent mothers by investigating antecedents and concomitants of parenting stress. Method: We followed 49 adolescent mothers (ages 14-18 at study outset) who were wards in Illinois foster care using a longitudinal…
29 CFR 4022.82 - Method of recoupment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... determine the fractional multiplier by dividing the amount of the net overpayment by the present value of... the present value of the benefit to which a participant or beneficiary is entitled under title IV of... by the present value of the benefit payable with respect to the participant under title IV of ERISA...
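The arithmetic visible in the excerpted regulation text can be illustrated with a minimal sketch. The helper name and the numbers are hypothetical, and the excerpt elides the regulation's full conditions, which govern the actual computation.

```python
def fractional_multiplier(net_overpayment, present_value_of_benefit):
    """Per the excerpt above (illustrative only): the fractional multiplier
    is the net overpayment divided by the present value of the benefit
    payable with respect to the participant."""
    return net_overpayment / present_value_of_benefit

# E.g., a $1,200 net overpayment against a $60,000 present value
# gives a multiplier of 0.02.
m = fractional_multiplier(1200.0, 60000.0)
```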
12 CFR 324.210 - Standardized measurement method for specific risk.
Code of Federal Regulations, 2014 CFR
2014-01-01
... purchased credit protection is capped at the current fair value of the transaction plus the absolute value... hedge has a specific risk add-on of zero if: (i) The debt or securitization position is fully hedged by... debt or securitization positions, an FDIC-supervised institution must multiply the absolute value of...
Using Constructed Knowledge to Multiply Fractions
ERIC Educational Resources Information Center
Witherspoon, Taajah Felder
2014-01-01
Over the course of her teaching career, this author learned to create environments in which both the teacher and learners embrace understanding. She introduced new concepts with a general question or word problem and encouraged students to find solutions with a strategy of their choice. By using this instructional method, she allowed her students…