Three filters for visualization of phase objects with large variations of phase gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz
2009-02-20
We propose three amplitude filters for the visualization of phase objects. They interact with the spectra of pure-phase objects in the frequency plane and are based on the tangent and error functions as well as on an antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial-frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, and it involves the use of an objective function to determine the model cost (model-data errors). The sum of squared errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the 'square error' calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies (a hydrological model calibration and a biogeochemical model calibration) to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviations (SSRD), and sum of absolute relative deviations (SARD). Mathematical evaluation of model performance demonstrates that the 'absolute error' measures (SAR and SARD) are superior to the 'square error' measures (SSR and SSRD) in calculating the objective function for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR may be overused in real applications and that SAR may be a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling for supporting resources management).
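For concreteness, the four candidate objectives compared in this study can be written down in a few lines. This is an illustrative sketch (function and variable names are mine, not from the paper's code), assuming no observed value is zero so the relative deviations are defined:

```python
import numpy as np

def objective_functions(obs, sim):
    """Compute the four candidate calibration objectives discussed above.

    SSR  - sum of squared errors
    SAR  - sum of absolute errors
    SSRD - sum of squared relative deviations
    SARD - sum of absolute relative deviations
    """
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    err = sim - obs
    rel = err / obs  # assumes obs contains no zeros
    return {
        "SSR": np.sum(err ** 2),
        "SAR": np.sum(np.abs(err)),
        "SSRD": np.sum(rel ** 2),
        "SARD": np.sum(np.abs(rel)),
    }
```

Ranking calibrations by SAR rather than SSR simply swaps the sum of squares for the sum of absolute values, which damps the influence of extreme residuals.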
Accumulated energy norm for full waveform inversion of marine data
NASA Astrophysics Data System (ADS)
Shin, Changsoo; Ha, Wansoo
2017-12-01
Macro-velocity models are important for imaging the subsurface structure. However, the conventional objective functions of full waveform inversion in the time and frequency domains have a limited ability to recover the macro-velocity model because of the absence of low-frequency information. In this study, we propose new objective functions that can recover the macro-velocity model by minimizing the difference between the zero-frequency components of the squares of the observed and modeled seismic traces. Instead of the seismic trace itself, we use the square of the trace, which contains low-frequency information. We apply several time windows to the trace and obtain zero-frequency information of the squared trace for each time window. The shape of the new objective functions shows that they are suitable for local optimization methods. Since we use the acoustic wave equation in this study, this method can be used for deep-sea marine data, in which elastic effects can be ignored. We show that the zero-frequency components of the square of the seismic traces can be used to recover macro-velocities from synthetic and field data.
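The key quantity here, the zero-frequency (DC) component of the squared trace per time window, can be sketched as follows. This is a simplified stand-in assuming non-overlapping rectangular windows; names are illustrative, not from the paper:

```python
import numpy as np

def windowed_zero_frequency(trace, window_len):
    """DC (k = 0 Fourier) component of the squared trace in each
    non-overlapping time window; for k = 0 this is simply the window sum."""
    sq = np.asarray(trace, dtype=float) ** 2
    n = len(sq) // window_len
    windows = sq[: n * window_len].reshape(n, window_len)
    return windows.sum(axis=1)
```

An objective of the kind described above would then compare these window sums between observed and modeled traces, e.g. as a sum of squared differences.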
Least squares reverse time migration of controlled order multiples
NASA Astrophysics Data System (ADS)
Liu, Y.
2016-12-01
Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders, minimizing the difference between Born-modeling-predicted multiples and specific-order multiples from observational data in order to attenuate the crosstalk. This method is denoted as the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrate that the LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).
Ho, Kevin I-J; Leung, Chi-Sing; Sum, John
2010-06-01
In the last two decades, many online fault/noise injection algorithms have been developed to attain a fault-tolerant neural network. However, little theoretical work related to their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that each of these six online algorithms converges almost surely. Moreover, the true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. As with injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
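The equivalence between input-noise injection and Tikhonov regularization can be checked numerically for a linear model (a linear stand-in for the paper's RBF network; the names and Monte-Carlo setup are mine): under additive Gaussian input noise of variance sigma^2, the expected training MSE is the clean MSE plus sigma^2 * ||w||^2, i.e. a Tikhonov-style penalty on the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_input_mse(w, X, y, sigma, n_draws=2000):
    """Monte-Carlo estimate of the MSE of a linear model y ~ X @ w when
    additive Gaussian noise is injected into the inputs, as during
    noise-injection training. For a linear model this converges to
    mean((X @ w - y)**2) + sigma**2 * ||w||**2."""
    total = 0.0
    for _ in range(n_draws):
        Xn = X + rng.normal(0.0, sigma, X.shape)  # perturbed inputs
        total += np.mean((Xn @ w - y) ** 2)
    return total / n_draws
```

With zero inputs and targets the clean MSE vanishes, and the estimate converges to sigma^2 * ||w||^2 alone, exposing the implicit regularizer.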
Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function, which is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function-fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least-squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
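For a linear model, a covariance-aware ("sandwich") error estimate for a weighted least-squares fit can be sketched as below, in the spirit of the WLS-ICE idea: the fit itself uses simple weights, but the parameter covariance is propagated through the full data covariance matrix. This is a generic linear sketch under my own naming, not the paper's implementation (which handles arbitrary fit functions):

```python
import numpy as np

def wls_sandwich(X, y, w, C):
    """Weighted least-squares fit with correlation-aware error estimate.

    X : design matrix, y : data, w : diagonal fit weights,
    C : full covariance matrix of y (includes temporal correlations).
    Returns the WLS estimate and its sandwich covariance
    (X'WX)^-1 X'W C WX (X'WX)^-1."""
    W = np.diag(w)
    A = np.linalg.inv(X.T @ W @ X)
    beta = A @ X.T @ W @ y                      # ordinary WLS estimate
    cov_beta = A @ X.T @ W @ C @ W @ X @ A      # errors include correlations
    return beta, cov_beta
```

When C is diagonal and equal to the inverse weights, this reduces to the textbook WLS covariance; a non-diagonal C changes only the error bars, not the point estimate.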
Design of vibration isolation systems using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
NASA Astrophysics Data System (ADS)
Croke, B. F.
2008-12-01
The role of performance indicators is to give an accurate indication of the fit between a model and the system being modelled. As all measurements have an associated uncertainty (determining the significance that should be given to the measurement), performance indicators should take into account uncertainties in the observed quantities being modelled as well as in the model predictions (due to uncertainties in inputs, model parameters and model structure). In the presence of significant uncertainty in the observed and modelled output of a system, failure to adequately account for variations in the uncertainties means that the objective function only gives a measure of how well the model fits the observations, not how well the model fits the system being modelled. Since in most cases the interest lies in fitting the system response, it is vital that the objective function(s) be designed to account for these uncertainties. Most objective functions (e.g. those based on the sum of squared residuals) assume homoscedastic uncertainties. If the model contribution to the variations in residuals can be ignored, then transformations (e.g. Box-Cox) can be used to remove (or at least significantly reduce) heteroscedasticity. An alternative which is more generally applicable is to explicitly represent the uncertainties in the observed and modelled values in the objective function. Previous work on this topic addressed the modifications to standard objective functions (Nash-Sutcliffe efficiency, RMSE, chi-squared, coefficient of determination) using the optimal weighted averaging approach. This paper extends that previous work by addressing the issue of serial correlation. A form for an objective function that includes serial correlation will be presented, and the impact on model fit discussed.
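One simple way to fold heteroscedastic observation and model uncertainties into a standard objective, in the spirit of the weighted-averaging approach mentioned above, is to down-weight each residual by the combined variance. A hypothetical Nash-Sutcliffe-style sketch (my own formulation for illustration; the paper's version also treats serial correlation):

```python
import numpy as np

def weighted_nse(obs, sim, var_obs, var_sim):
    """Nash-Sutcliffe efficiency with each residual down-weighted by the
    combined observation and model variance, so poorly known points
    contribute less to the objective."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    w = 1.0 / (np.asarray(var_obs, dtype=float) + np.asarray(var_sim, dtype=float))
    num = np.sum(w * (obs - sim) ** 2)
    den = np.sum(w * (obs - np.average(obs, weights=w)) ** 2)
    return 1.0 - num / den
```

With homoscedastic uncertainties the weights cancel and this reduces to the ordinary Nash-Sutcliffe efficiency.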
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher-order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights into designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent relative error in estimating the trace and a lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
Fink, G R; Marshall, J C; Weiss, P H; Shah, N J; Toni, I; Halligan, P W; Zilles, K
2000-01-01
Line bisection is widely used as a clinical test of spatial cognition in patients with left visuospatial neglect after right hemisphere lesion. Surprisingly, many neglect patients who show severe impairment on marking the center of horizontal lines can accurately mark the center of squares. That these patients with left neglect are also typically poor at judging whether lines are correctly prebisected implies that the deficit can be perceptual rather than motoric. These findings suggest a differential neural basis for one- and two-dimensional visual position discrimination that we investigated with functional neuroimaging (fMRI). Normal subjects judged whether, in premarked lines or squares, the mark was placed centrally. Line center judgements differentially activated right parietal cortex, while square center judgements differentially activated the lingual gyrus bilaterally. These distinct neural bases for one- and two-dimensional visuospatial judgements help explain the observed clinical dissociations by showing that as a stimulus becomes a better, more 'object-like' gestalt, the ventral visuoperceptive route assumes more responsibility for assessing position within the object.
Universal Approximation by Using the Correntropy Objective Function.
Nayyeri, Mojtaba; Sadoghi Yazdi, Hadi; Maskooki, Alaleh; Rouhani, Modjtaba
2017-10-16
Several objective functions have been proposed in the literature to adjust the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of the network based on the existing objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one in a compact input sample space. Thus, the convergence is guaranteed. The performance of our method was compared with that of eight different objective functions, as well as with an existing one-hidden-layer feedforward network, on several real regression data sets with and without impulsive noise. The experimental results indicate the benefits of using a correntropy measure in reducing the root mean square error and increasing the robustness to noise.
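The empirical correntropy with a sigmoid kernel reduces to a mean of tanh terms over target-prediction pairs; a minimal sketch with hypothetical kernel parameters a and c (the brief's exact settings may differ):

```python
import numpy as np

def correntropy_sigmoid(y_true, y_pred, a=1.0, c=1.0):
    """Empirical correntropy between targets and predictions with the
    sigmoid kernel tanh(a*t*y + c); used as a maximization objective,
    it is less sensitive to impulsive outliers than the MSE."""
    t = np.asarray(y_true, dtype=float)
    y = np.asarray(y_pred, dtype=float)
    return np.mean(np.tanh(a * t * y + c))
```

Because tanh saturates, a single grossly wrong prediction changes the objective by a bounded amount, which is the robustness property the abstract refers to.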
Skobska, O E; Kadzhaya, N V; Andreyev, O A; Potapov, E V
2015-04-01
Thirty-two injured persons, aged 34.1 ± 1.3 years on average, were examined for brain concussion (BC). The SCAT-3 protocol (Standardized Concussion Assessment Tool, 3rd ed.), the DHI (Dizziness Handicap Inventory) questionnaire, and computer stabilography (KS) were applied for the diagnosis of vestibular disorders. It was established that in the acute period of BC a dissociation occurs between the regression of objective neurological symptoms and the persistence of the stabilographic indices, which confirms a latent disorder of the balance function. Changes in the basic statokinesiography indices, including an increased oscillation amplitude of the general centre of pressure in the sagittal plane and a sway area of (235.3 ± 13.7) mm2 in a modified Romberg functional test with eyes closed, can be applied as objective criteria for BC diagnosis.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Penalized weighted least-squares approach for low-dose x-ray computed tomography
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on the above observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses the Gauss-Seidel iterative calculation to minimize the PWLS objective function in the image domain. We also compared the KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show a comparable performance of these three PWLS methods in suppressing the noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS noise reduction may have an advantage in computation for low-dose CT imaging, especially for dynamic high-resolution studies.
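A PWLS objective of the kind minimized here combines a data-fidelity term weighted by the inverse noise variance with a roughness penalty; a minimal 1-D sketch using a simple first-difference penalty as a stand-in for the actual penalty term (names are illustrative):

```python
import numpy as np

def pwls_objective(s, y, var, beta):
    """Penalized weighted least-squares cost for sinogram smoothing:
    (y - s)' diag(1/var) (y - s) + beta * sum of squared first differences.
    s : candidate smoothed sinogram, y : noisy data, var : noise variances."""
    r = y - s
    fidelity = np.sum(r ** 2 / var)          # variance-weighted data fit
    roughness = np.sum(np.diff(s) ** 2)      # simple smoothness penalty
    return fidelity + beta * roughness
```

The optimal sinogram is the minimizer of this cost; the abstract's methods differ mainly in the domain (KL component, image, or sinogram space) and the iterative scheme used for the minimization.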
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
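In the Euclidean case, Tikhonov's regularized least squares minimizes ||Ax - b||^2 + alpha*||x||^2 and has the closed-form solution (A'A + alpha*I)^(-1) A'b; a minimal sketch of that baseline (the polyhedral-norm generalizations discussed above lead instead to mathematical programming problems and have no such closed form):

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Closed-form Tikhonov-regularized least squares in the Euclidean norm:
    argmin_x ||A x - b||_2^2 + alpha * ||x||_2^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

As alpha goes to zero this recovers the ordinary least-squares solution; larger alpha shrinks x toward zero, stabilizing ill-conditioned systems.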
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1982-01-01
The sample LANDSAT-4 TM tape (7 bands) of the NE Arkansas/Tennessee area was received and displayed. Snow reflectance in all 6 TM reflective bands, i.e., 1, 2, 3, 4, 5, and 7, was simulated using Wiscombe and Warren's (1980) delta-Eddington model. Snow reflectance in bands 4, 5, and 7 appears sensitive to grain size. One of the objectives is to interpret the surface optical grain size of snow, for spectral extension of albedo. Although TM data of the study area have not yet been received, the simulation results are encouraging. It also appears that the TM filters resemble a "square wave" closely enough to permit assuming a square wave in calculations. Integrated band reflectance over the actual response functions was simulated using sensor data supplied by Santa Barbara Research Center. Differences between integrating over the actual response functions and over the equivalent square wave were negligible.
Wu, Yabei; Lu, Huanzhang; Zhao, Fei; Zhang, Zhiyong
2016-01-01
Shape serves as an important additional feature for space target classification, complementary to the features already in use. Since different shapes lead to different projection functions, the projection property can be regarded as one kind of shape feature. In this work, the problem of estimating the projection function from the infrared signature of the object is addressed. We show that the projection function of any rotationally symmetric object can be approximately represented as a linear combination of some base functions. Based on this fact, the signal model of the emissivity-area product sequence is constructed, which is a particular mathematical function of the linear coefficients and micro-motion parameters. Then, a least-squares estimator is proposed to estimate the projection function and micro-motion parameters jointly. Experiments validate the effectiveness of the proposed method. PMID:27763500
Noise reduction for low-dose helical CT by 3D penalized weighted least-squares sinogram smoothing
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Helical computed tomography (HCT) has several advantages over conventional step-and-shoot CT for imaging a relatively large object, especially for dynamic studies. However, HCT may increase X-ray exposure to the patient significantly. This work aims to reduce the radiation by lowering the X-ray tube current (mA) and filtering the low-mA (or dose) sinogram noise. Based on the noise properties of the HCT sinogram, a three-dimensional (3D) penalized weighted least-squares (PWLS) objective function was constructed and an optimal sinogram was estimated by minimizing the objective function. To account for the difference in signal correlation among the different directions of the HCT sinogram, an anisotropic Markov random field (MRF) Gibbs function was designed as the penalty. The minimization of the objective function was performed by an iterative Gauss-Seidel updating strategy. The effectiveness of the 3D-PWLS sinogram smoothing for low-dose HCT was demonstrated by a 3D Shepp-Logan head phantom study. Comparison studies with our previously developed KL-domain PWLS sinogram smoothing algorithm indicate that the KL+2D-PWLS algorithm shows better performance on the in-plane noise-resolution trade-off, while the 3D-PWLS shows better performance on the z-axis noise-resolution trade-off. Receiver operating characteristic (ROC) studies using a channelized Hotelling observer (CHO) show that the 3D-PWLS and KL+2D-PWLS algorithms have similar detectability in a low-contrast environment.
1980-1981 Comparative Costs and Staffing Report for Physical Plants of Colleges and Universities.
ERIC Educational Resources Information Center
Association of Physical Plant Administrators of Universities and Colleges, Washington, DC.
Comparative costs of plant maintenance and operations functions, including staffing costs, for higher education institutions are presented for 1980-1981. The objective of the survey data is to promote comparisons of unit costs per gross square foot of the functions classified as maintenance and operations of plant, the number of full-time…
Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow
Katsuyama, Narumi; Usui, Nobuo; Taira, Masato
2016-01-01
A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity. PMID:27597999
Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers
NASA Astrophysics Data System (ADS)
Samiei-Esfahany, Sami; Hanssen, Ramon F.
2012-01-01
The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS, we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.
NASA Astrophysics Data System (ADS)
Gsponer, Andre
2009-01-01
The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the self-energy of a point electric charge is worked out in detail: the Coulomb potential and field are defined as Colombeau generalized functions, and integrals of nonlinear expressions corresponding to products of distributions (such as the square of the Coulomb field and the square of the delta function) are calculated. Finally, the methods introduced in Gsponer (2007 Eur. J. Phys. 28 267, 2007 Eur. J. Phys. 28 1021 and 2007 Eur. J. Phys. 28 1241), to deal with point-like singularities in classical electrodynamics are confirmed.
Physical Function Does Not Predict Care Assessment Need Score in Older Veterans.
Serra, Monica C; Addison, Odessa; Giffuni, Jamie; Paden, Lydia; Morey, Miriam C; Katzel, Leslie
2017-01-01
The Veterans Health Administration's Care Assessment Need (CAN) score is a statistical model aimed to predict high-risk patients. We were interested in determining whether a relationship existed between physical function and CAN scores. Seventy-four older (71 ± 1 years) male Veterans underwent assessment of the CAN score and subjective (Short Form-36 [SF-36]) and objective (self-selected walking speed, four square step test, short physical performance battery) assessments of physical function. Approximately 25% of participants self-reported limitations performing lower intensity activities, while 70% to 90% reported limitations with more strenuous activities. When compared with cut points indicative of functional limitations, 35% to 65% of participants had limitations for each of the objective measures. No subjective or objective measure of physical function predicted the CAN score. These data indicate that the addition of a physical function assessment may complement the CAN score in the identification of high-risk patients.
Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing
2009-06-01
In this study, weighted non-linear least-squares analysis and accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to calculate the differences in the storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. The kinetic parameters for the substrate consumption and the storage product formation are estimated to be the maximum heterotrophic growth rate of 0.121/h, the yield coefficient of 0.44 mg CODX/mg CODS (COD, chemical oxygen demand) and the substrate half saturation constant of 16.9 mg/L, respectively, by minimizing the objective function using a real-coding-based accelerating genetic algorithm. Also, the fraction of substrate electrons diverted to the storage product formation is estimated to be 0.43 mg CODSTO/mg CODS. The validity of our approach is confirmed by the results of independent tests and the kinetic parameter values reported in literature, suggesting that this approach could be useful to evaluate the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach could estimate the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
Sum, John Pui-Fai; Leung, Chi-Sing; Ho, Kevin I-J
2012-02-01
Improving the fault tolerance of a neural network has been studied for more than two decades, and various training algorithms have been proposed in sequence. The on-line node fault injection-based algorithm is one of these algorithms, in which hidden nodes randomly output zeros during training. While the idea is simple, theoretical analyses of this algorithm are far from complete. This paper presents its objective function and the convergence proof. We consider three cases for multilayer perceptrons (MLPs). They are: (1) MLPs with a single linear output node; (2) MLPs with multiple linear output nodes; and (3) MLPs with a single sigmoid output node. For the convergence proof, we show that the algorithm converges with probability one. For the objective function, we show that the corresponding objective functions of cases (1) and (2) are of the same form. They both consist of a mean square error term, a regularizer term, and a weight decay term. For case (3), the objective function is slightly different from that of cases (1) and (2). With the objective functions derived, we can compare the similarities and differences among the various algorithms and cases.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function that incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
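Determining free linear parameters by least squares under linear equality constraints, via Lagrange multipliers, reduces to the standard KKT linear system; the small matrices below are arbitrary examples, not the aeroelastic approximation problem.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d by solving the KKT system
    [2 A^T A, C^T; C, 0] [x; lambda] = [2 A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # fitted linear parameters (multipliers discarded)

# Toy fit of a line through three points, constrained so the coefficients sum to 2
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([0.0, 1.0, 3.0])
x = constrained_lstsq(A, b, C=np.array([[1.0, 1.0]]), d=np.array([2.0]))
```

The multipliers measure how much the constraints degrade the unconstrained fit; here only the parameters are returned.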
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of the various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics taken as the basis functions in the HELS formulation, yet the analytic solutions to the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be checked rigorously. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts on reconstruction accuracy of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of the source surface, are examined.
Confirmatory factor analysis of the female sexual function index.
Opperman, Emily A; Benson, Lindsay E; Milhausen, Robin R
2013-01-01
The Female Sexual Functioning Index (Rosen et al., 2000) was designed to assess the key dimensions of female sexual functioning using six domains: desire, arousal, lubrication, orgasm, satisfaction, and pain. A full-scale score was proposed to represent women's overall sexual function. The fifth revision to the Diagnostic and Statistical Manual (DSM) is currently underway and includes a proposal to combine desire and arousal problems. The objective of this article was to evaluate and compare four models of the Female Sexual Functioning Index: (a) a single-factor model, (b) a six-factor model, (c) a second-order factor model, and (d) a five-factor model combining the desire and arousal subscales. Cross-sectional and observational data from 85 women were used to conduct a confirmatory factor analysis on the Female Sexual Functioning Index. Local and global goodness-of-fit measures, the chi-square test of differences, squared multiple correlations, and regression weights were used. The single-factor model fit was not acceptable. The original six-factor model was confirmed, and good model fit was found for the second-order and five-factor models. Delta chi-square tests of differences supported best fit for the six-factor model, validating usage of the six domains. However, when revisions are made to the DSM-5, the Female Sexual Functioning Index can adapt to reflect these changes and remain a valid assessment tool for women's sexual functioning, as the five-factor structure was also supported.
Luminance gradient at object borders communicates object location to the human oculomotor system.
Kilpeläinen, Markku; Georgeson, Mark A
2018-01-25
The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, at both the neural and the perceptual level, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradient at the square's edges.
Translation-aware semantic segmentation via conditional least-square generative adversarial networks
NASA Astrophysics Data System (ADS)
Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min
2017-10-01
Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps to plausible source images with a limited number of training images. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing gradient problem. We propose the use of conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We train the CLS-GAN network for semantic segmentation to discriminate dense prediction information either from training images or from generative networks. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates vanishing gradients. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps in the learning process. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high-quality images from label maps.
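The least-squares adversarial objective underlying a CLS-GAN-style network can be sketched generically; the target codings a = 0 and b = c = 1 are the common choice in least-squares GANs and are assumptions here, not necessarily the paper's exact formulation.

```python
import numpy as np

def lsgan_discriminator_loss(d_real, d_fake, a=0.0, b=1.0):
    """Quadratic penalty pushing discriminator scores on real samples
    toward b and on generated samples toward a."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_generator_loss(d_fake, c=1.0):
    """Quadratic penalty pushing scores on generated samples toward c."""
    return 0.5 * np.mean((d_fake - c) ** 2)
```

Because the penalty is quadratic rather than a saturating cross-entropy, fake samples that are already classified correctly but lie far from the decision boundary still produce gradients, which is the mechanism behind the vanishing-gradient mitigation claimed above.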
Resistive and Hall weighting functions in three dimensions
NASA Astrophysics Data System (ADS)
Koon, D. W.; Knickerbocker, C. J.
1998-10-01
The authors extend their study of the effect of macroscopic impurities on resistive and Hall measurements to include objects of finite thickness. The effect of such impurities is calculated for a series of rectangular parallelepipeds with two current and two voltage contacts on the corners of one square face. The weighting functions display singularities near these contacts, but these are shown to vanish in the two-dimensional limit, in agreement with previous results. Finally, it is shown that while Hall measurements principally sample the plane of the electrodes, resistivity measurements sample more of the interior of an object of finite thickness.
AMLSA Algorithm for Hybrid Precoding in Millimeter Wave MIMO Systems
NASA Astrophysics Data System (ADS)
Liu, Fulai; Sun, Zhenxing; Du, Ruiyan; Bai, Xiaoyu
2017-10-01
In this paper, an effective algorithm is proposed for hybrid precoding in mmWave MIMO systems, referred to as the alternating minimization algorithm with a least-squares amendment (AMLSA algorithm). Specifically, for the fully-connected structure, the presented algorithm minimizes the classical objective function to obtain the hybrid precoding matrix. It introduces an orthogonality constraint on the digital precoding matrix, which is subsequently amended by least squares after the alternating minimization iterations have converged. Simulation results confirm that the achievable spectral efficiency of the proposed algorithm is somewhat better than that of the existing algorithm without the least-squares amendment. Furthermore, the number of iterations is reduced slightly by improving the initialization procedure.
Estimation of the ARNO model baseflow parameters using daily streamflow data
NASA Astrophysics Data System (ADS)
Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu
1999-09-01
An approach is described for estimating the baseflow parameters of the ARNO model using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively partitions the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three optimization methods are evaluated for estimating the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with a Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; and (4) ordinary least squares on prewhitened, Box-Cox-transformed residuals. The effects of changing the random seed for both the SA and SCE methods are also explored, as are the effects of the parameter bounds. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than either the SA or the Simplex scheme. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix non-diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
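Objective function (2), ordinary least squares after a Box-Cox transformation, might look like the following sketch; the λ value is an illustrative choice.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam < 1 compresses high flows so peak errors
    dominate the objective less."""
    return np.log(y) if lam == 0.0 else (y ** lam - 1.0) / lam

def ols_boxcox(observed, simulated, lam=0.3):
    """Ordinary least squares on Box-Cox-transformed flows."""
    return np.sum((boxcox(observed, lam) - boxcox(simulated, lam)) ** 2)

q_obs = np.array([0.5, 1.0, 2.0, 10.0, 50.0])  # synthetic flow series
```

Objectives (3) and (4) would apply the same sum of squares to residuals that have first been prewhitened to remove autocorrelation.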
NASA Astrophysics Data System (ADS)
Franzetti, Paolo; Scodeggio, Marco
2012-10-01
GOSSIP fits the electromagnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and possibly a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes and stellar mass, together with their probability distribution functions.
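The chi-square minimization at the core of such SED fitting amounts to scanning a grid of synthetic models; a minimal sketch with made-up fluxes and errors (not GOSSIP's actual interface).

```python
import numpy as np

def chi_square(obs_flux, obs_err, model_flux):
    """Chi-square of one synthetic SED against the observed photometry."""
    return np.sum(((obs_flux - model_flux) / obs_err) ** 2)

def best_fit(obs_flux, obs_err, model_grid):
    """Index of the synthetic model minimizing chi-square, plus all values."""
    chis = np.array([chi_square(obs_flux, obs_err, m) for m in model_grid])
    return int(np.argmin(chis)), chis

obs = np.array([1.2, 3.4, 2.2])   # synthetic fluxes in three bands
err = np.array([0.1, 0.2, 0.1])
grid = [np.array([1.0, 3.0, 2.0]), np.array([1.2, 3.4, 2.2])]
idx, chis = best_fit(obs, err, grid)
```

Converting the chi-square values to likelihoods, exp(-chi²/2), over the whole grid is what yields the probability distribution functions of the derived physical parameters.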
NASA Astrophysics Data System (ADS)
Castiglioni, S.; Toth, E.
2009-04-01
In the calibration procedure of continuously simulating models, the hydrologist has to choose which part of the observed hydrograph is most important to fit, either implicitly, through the visual agreement in manual calibration, or explicitly, through the choice of the objective function(s). By changing the objective function it is in fact possible to emphasise different kinds of errors, giving them more weight in the calibration phase. The objective functions used for calibrating hydrological models are generally of the quadratic type (mean squared error, correlation coefficient, coefficient of determination, etc.) and are therefore oversensitive to high and extreme error values, which typically correspond to high and extreme streamflow values. This is appropriate when, as in the majority of streamflow forecasting applications, the focus is on the ability to reproduce potentially dangerous flood events; on the contrary, if the aim of the modelling is the reproduction of low and average flows, as is the case in water resource management problems, this may result in a deterioration of the forecasting performance. This contribution presents the results of a series of automatic calibration experiments of a continuously simulating rainfall-runoff model applied over several real-world case studies, where the objective function is chosen so as to highlight the fit of average and low flows. A simple conceptual model of the lumped type is used, with a relatively low number of parameters to be calibrated. The experiments are carried out for a set of case-study watersheds in Central Italy, covering an extremely wide range of geo-morphologic conditions and for which at least five years of contemporary daily series of streamflow, precipitation and evapotranspiration estimates are available.
Different objective functions will be tested in calibration and the results will be compared, over validation data, against those obtained with traditional squared functions. A companion work presents the results, over the same case-study watersheds and observation periods, of a system-theoretic model, again calibrated for reproducing average and low streamflows.
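One simple way to shift emphasis from high to low flows, in the spirit of the experiments above, is to weight the squared errors inversely by the observed flow; this particular weighting is an illustrative assumption, not necessarily an objective used in the study.

```python
import numpy as np

def low_flow_weighted_mse(observed, simulated, eps=1e-6):
    """Mean squared error weighted by 1/observed flow, so errors on low
    flows contribute more to the objective than flood-peak errors."""
    w = 1.0 / (observed + eps)
    return np.mean(w * (observed - simulated) ** 2)
```

With this weighting, a 0.5 m³/s error on a 1 m³/s baseflow costs far more than the same absolute error on a 100 m³/s flood peak, inverting the bias of plain quadratic objectives.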
The role of edges in the selection of a jump target in Mantis religiosa.
Hyden, Karin; Kral, Karl
2005-09-30
Before jumping to a landing object, praying mantids determine the distance, using information obtained from retinal image motion resulting from horizontal peering movements. The present study investigates the peering-jump behaviour of Mantis religiosa larvae with regard to jump targets differing in shape and size. The experimental animals were presented with square, triangular and round target objects with visual extensions of 20 degrees and 40 degrees. The cardboard objects, presented against a uniform white background, were solid black or shaded with a gradation from white to black. It was found that larger objects were preferred to smaller ones as jump targets, and that the square and triangle were preferred to the round disk. When two objects were presented, no preference was exhibited between square and triangular objects. However, when three objects were presented, the square was preferred. For targets with a visual angle of 40 degrees, the amplitude and velocity of the horizontal peering movements were greater for the round disk than for the square or triangle. This amplification of the peering movements suggests that weaker motion signals are generated in the case of curved edges. This may help to account for the preference for the square and triangle as jump targets.
Squared eigenvalue condition numbers and eigenvector correlations from the single ring theorem
NASA Astrophysics Data System (ADS)
Belinschi, Serban; Nowak, Maciej A.; Speicher, Roland; Tarnowski, Wojciech
2017-03-01
We extend the so-called ‘single ring theorem’ (Feinberg and Zee 1997 Nucl. Phys. B 504 579), also known as the Haagerup-Larsen theorem (Haagerup and Larsen 2000 J. Funct. Anal. 176 331). We do this by showing that in the limit when the size of the matrix goes to infinity a particular correlator between left and right eigenvectors of the relevant non-hermitian matrix X, being the spectral density weighted by the squared eigenvalue condition number, is given by a simple formula involving only the radial spectral cumulative distribution function of X. We show that this object allows the calculation of the conditional expectation of the squared eigenvalue condition number. We give examples and provide a cross-check of the analytic prediction by the large scale numerics.
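The squared eigenvalue condition numbers weighting the spectral density can be computed directly from the left and right eigenvectors of a matrix; a minimal numerical sketch under the usual normalization l_i^H r_i = 1 (the example matrices are arbitrary, not drawn from the single ring ensemble).

```python
import numpy as np

def squared_condition_numbers(X):
    """Squared eigenvalue condition numbers c_i^2 = |l_i|^2 |r_i|^2 of a
    diagonalizable matrix X. With right eigenvectors as columns of R, the
    rows of inv(R) are left eigenvectors normalized so l_i^H r_i = 1."""
    w, R = np.linalg.eig(X)
    L = np.linalg.inv(R)
    c2 = np.sum(np.abs(L) ** 2, axis=1) * np.sum(np.abs(R) ** 2, axis=0)
    return w, c2

# For a normal (here symmetric) matrix every condition number equals 1
w, c2 = squared_condition_numbers(np.array([[2.0, 1.0], [1.0, 2.0]]))
```

For non-normal matrices the left and right eigenvectors decorrelate and c² exceeds 1, which is exactly the correlator the theorem above controls in the large-size limit.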
Measured soil water evaporation as a function of the square root of time and reference ET
USDA-ARS?s Scientific Manuscript database
Sunflower (Helianthus annuus L.) is a drought-adapted crop with a short growing season that reduces irrigation requirements and makes it ideal for regions with limited irrigation water supplies. Our objectives were a) to evaluate the yield potential of sunflower under deficit irrigation and b) det...
NASA Astrophysics Data System (ADS)
Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki
2015-06-01
Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function is sensitive to noise. There have been numerous studies on enhancing the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI, giving reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution.
From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
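The differing sensitivity of the three objective functions to outliers can be illustrated on a single residual vector; the Student's t expression below is the negative log-likelihood up to additive constants, with illustrative ν and scale.

```python
import numpy as np

def l2_objective(r):
    return 0.5 * np.sum(r ** 2)

def l1_objective(r):
    return np.sum(np.abs(r))

def student_t_objective(r, nu=2.0, scale=1.0):
    """Negative log-likelihood (up to constants) of residuals under a
    Student's t distribution: grows only logarithmically for large
    residuals, so outliers are heavily down-weighted."""
    return 0.5 * (nu + 1.0) * np.sum(np.log1p((r / scale) ** 2 / nu))

r = np.array([0.1, -0.2, 0.1, 100.0])  # small residuals plus one outlier
```

On this vector the l2 objective is dominated entirely by the outlier, the l1 objective scales linearly with it, and the Student's t objective barely notices it; note the t objective is also smooth at zero, avoiding the l1 singularity mentioned above.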
NASA Astrophysics Data System (ADS)
Parhusip, H. A.; Trihandaru, S.; Susanto, B.; Prasetyo, S. Y. J.; Agus, Y. H.; Simanjuntak, B. H.
2017-03-01
Several algorithms and objective functions have been studied to obtain optimal paddy crops in Central Java, based on data from Surakarta and Boyolali. The algorithms are a linear solver, least squares, and Ant Colony Optimization (ACO), used to develop optimization procedures for paddy crops modelled with a modified GSTAR (Generalized Space-Time Autoregressive) model and with nonlinear (quadratic and power) models. The data cover paddy crops in Surakarta for 1992-2012, for which three planting periods are known, and the optimal amount of paddy crops in Boyolali for 2008-2013. These analyses may guide the local agricultural government in decisions on rice sustainability in its region. The best planting period in Surakarta is found to be September-December, based on the 1992-2012 data, taking the planting area, the cropping area, and the paddy crops as the most important factors. The paddy crops in this best period (about 60.4 thousand tons per year) can therefore be taken as the optimal result for 1992-2012, where the objective function used is quadratic. According to the research, the optimal paddy crop in Boyolali is about 280 thousand tons per year, where the studied factors are the amount of rainfall, the harvested area and the paddy crops in 2008-2013; in this case, linear and power functions are studied as the objective functions. Among all studied algorithms, the linear solver is still recommended as an optimization tool for a local agricultural government to predict future paddy crops.
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modelling, to approximate the parameters that can mimic the overall system behaviour. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that comprises synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. Minimizing the objective function with different candidate optimization methods reveals failure in some gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Still others yield parameter solutions that lie outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for Brute Force methods. The deterministic Sequential Least Squares Programming and the stochastic Bayesian inference methods yield the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
Highly uniform parallel microfabrication using a large numerical aperture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zi-Yu; Su, Ya-Hui, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn; Zhang, Chen-Chu
In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that, based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ∼75% to >97%, owing to the critical consideration of the aperture function and apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of squares and triangles, and seven microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables laser parallel processing technology to realize uniform microstructures and functional devices in a microfabrication system with a large numerical aperture objective.
Structural and configurational properties of nanoconfined monolayer ice from first principles
Corsetti, Fabiano; Matthews, Paul; Artacho, Emilio
2016-01-01
Understanding the structural tendencies of nanoconfined water is of great interest for nanoscience and biology, where nano/micro-sized objects may be separated by very few layers of water. Here we investigate the properties of ice confined to a quasi-2D monolayer by a featureless, chemically neutral potential, in order to characterize its intrinsic behaviour. We use density-functional theory simulations with a non-local van der Waals density functional. An ab initio random structure search reveals all the energetically competitive monolayer configurations to belong to only two of the previously-identified families, characterized by a square or honeycomb hydrogen-bonding network, respectively. We discuss the modified ice rules needed for each network, and propose a simple point dipole 2D lattice model that successfully explains the energetics of the square configurations. All identified stable phases for both networks are found to be non-polar (but with a topologically non-trivial texture for the square) and, hence, non-ferroelectric, in contrast to previous predictions from a five-site empirical force-field model. Our results are in good agreement with very recently reported experimental observations. PMID:26728125
Effects of atmospheric turbulence on the imaging performance of optical system
NASA Astrophysics Data System (ADS)
Al-Hamadani, Ali H.; Zainulabdeen, Faten Sh.; Karam, Ghada Sabah; Nasir, Eman Yousif; Al-Saedi, Abaas
2018-05-01
Turbulent effects are very complicated and still not entirely understood. Light waves from an astronomical object are distorted as they pass through the atmosphere. The refractive index fluctuations in the turbulent atmosphere induce an optical path difference (OPD) between different parts of the wavefront; distorted wavefronts produce low-quality images and degrade the image beyond the diffraction limit. In this paper, the image degradation due to 2-D Gaussian atmospheric turbulence is considered in terms of the point spread function (PSF) and the Strehl ratio as image quality criteria for imaging systems with different apertures, using the pupil function technique. A general expression for the degraded PSF is derived for circular and square apertures (the latter with half diagonals of √(π/2) and 1), for both diffraction-limited and defocused optical systems. Based on the derived formula, the effect of Gaussian atmospheric turbulence on circular and square pupils is studied in detail. Numerical results show that optical systems with a square aperture perform more efficiently at high levels of atmospheric turbulence than those with the other apertures.
Simple shear of deformable square objects
NASA Astrophysics Data System (ADS)
Treagus, Susan H.; Lan, Labao
2003-12-01
Finite element models of square objects in a contrasting matrix in simple shear show that the objects deform to a variety of shapes. For a range of viscosity contrasts, we catalogue the changing shapes and orientations of objects in progressive simple shear. At moderate simple shear (γ = 1.5), the shapes are virtually indistinguishable from those in equivalent pure shear models with the same bulk strain (RS = 4), examined in a previous study. In theory, differences would be expected, especially for very stiff objects or at very large strain. In all our simple shear models, relatively competent square objects become asymmetric barrel shapes with concave shortened edges, similar to some types of boudin. Incompetent objects develop shapes surprisingly similar to mica fish described in mylonites.
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2011-07-01
A method for searching for NEO impact orbits, based on minimizing the product of two target functions, is presented. These functions are the square of the asteroid-Earth distance at the moment of close approach and the sum of squares of the angular residuals. In addition, the method includes minimization of the square of the asteroid-Earth distance as a function of time alone, with the initial motion parameters fixed. The two minimizations are carried out in turn. The method was tested on the problem of searching for an Apophis impact orbit; the results demonstrate the effectiveness of the presented method in finding impact orbits for the Apophis Earth encounters in 2036 and 2037.
Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction
2009-08-20
[Figure and equation residue from extraction. Recoverable content: tangential stress optimization converges to a uniform value of 1.797 as a function of eccentric anomaly E; the finite-difference error is governed by truncation error at large step sizes and round-off error at small ones; E denotes Young's modulus; equations (3.31) and (3.32) may be directly integrated to yield the stress and displacement solutions.]
NASA Astrophysics Data System (ADS)
Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin
2017-09-01
This paper presents black-box modelling of a palm oil biodiesel engine (POB) using a multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the optimization: minimizing the number of terms in a model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied to validate the candidate models obtained from the MOODE algorithm, leading to the selection of an optimal model.
The holographic dual of the Penrose transform
NASA Astrophysics Data System (ADS)
Neiman, Yasha
2018-01-01
We consider the holographic duality between type-A higher-spin gravity in AdS4 and the free U( N) vector model. In the bulk, linearized solutions can be translated into twistor functions via the Penrose transform. We propose a holographic dual to this transform, which translates between twistor functions and CFT sources and operators. We present a twistorial expression for the partition function, which makes global higher-spin symmetry manifest, and appears to automatically include all necessary contact terms. In this picture, twistor space provides a fully nonlocal, gauge-invariant description underlying both bulk and boundary spacetime pictures. While the bulk theory is handled at the linear level, our formula for the partition function includes the effects of bulk interactions. Thus, the CFT is used to solve the bulk, with twistors as a language common to both. A key ingredient in our result is the study of ordinary spacetime symmetries within the fundamental representation of higher-spin algebra. The object that makes these "square root" spacetime symmetries manifest becomes the kernel of our boundary/twistor transform, while the original Penrose transform is identified as a "square root" of CPT.
Doppler-shift estimation of flat underwater channel using data-aided least-square approach
NASA Astrophysics Data System (ADS)
Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing
2015-06-01
In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium- to high-SNR cases.
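As a rough illustration of the alternating structure described in this abstract (an outer search over the Doppler coefficient and an inner closed-form least-squares gain), the following sketch uses a simple frequency-offset channel model and synthetic data; the function names, the grid, and the signal parameters are our own choices, not the paper's algorithm.

```python
import numpy as np

# Estimate a Doppler-induced frequency offset fd and a complex channel
# gain g from known training symbols s: the outer loop searches a grid
# of fd candidates, the inner step solves the least-squares gain in
# closed form, and the squared-error objective selects the best pair.
def estimate_doppler(r, s, fd_grid, T=1.0):
    n = np.arange(len(s))
    best = (None, None, np.inf)
    for fd in fd_grid:
        s_d = s * np.exp(2j * np.pi * fd * n * T)   # composed sequence
        g = np.vdot(s_d, r) / np.vdot(s_d, s_d)     # inner LS gain
        err = np.sum(np.abs(r - g * s_d) ** 2)      # objective function
        if err < best[2]:
            best = (fd, g, err)
    return best

# Synthetic flat channel: true fd = 0.01, gain = 0.8 * exp(0.3j)
rng = np.random.default_rng(0)
s = rng.choice([1, -1], 200).astype(complex)
n = np.arange(200)
r = 0.8 * np.exp(0.3j) * s * np.exp(2j * np.pi * 0.01 * n)
fd_hat, g_hat, _ = estimate_doppler(r, s, np.linspace(-0.05, 0.05, 1001))
```

On noise-free data the grid point closest to the true offset wins; in practice the grid spacing and an added noise term would govern how close the estimate gets to the CRLB.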
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function yielded better performance than using the correlation coefficient or percent bias. Calibration performance across calibration periods of one to seven years was hard to generalize because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
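For reference, three of the objective functions compared in this abstract have simple closed forms; the sketch below (our illustration, with made-up observed/simulated series) computes Nash-Sutcliffe efficiency, normalized RMSE, and percent bias.

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
# than predicting the observed mean.
def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# RMSE normalized by the observed range, so 0 is a perfect fit.
def nrmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2)) / (obs.max() - obs.min())

# Percent bias: positive means the simulation overestimates on average.
def pbias(obs, sim):
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = np.array([2.0, 3.0, 5.0, 4.0, 6.0])   # stand-in observed flows
sim = np.array([2.2, 2.9, 4.8, 4.3, 5.9])   # stand-in simulated flows
```

Note that NSE and RMSE-type measures emphasize large errors (and hence high flows), which is one reason the choice among them matters for calibration.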
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
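A minimal sketch of the kind of objective described here (our toy version, not the SPIRAL-TAP code): a penalized negative Poisson log-likelihood with an l1 penalty and a nonnegativity constraint, minimized by projected gradient descent on synthetic data. The step size, penalty weight, and data are illustrative assumptions.

```python
import numpy as np

# Negative Poisson log-likelihood (up to constants) plus an l1 penalty;
# A is the sensing matrix, y the counts, tau the penalty weight.
def poisson_objective(f, A, y, tau, eps=1e-12):
    Af = A @ f
    return np.sum(Af - y * np.log(Af + eps)) + tau * np.sum(np.abs(f))

# Projected (sub)gradient descent: gradient of the likelihood term is
# A^T (1 - y / Af); the max(.., 0) step enforces nonnegative intensities.
def spiral_like(A, y, tau=0.01, step=0.01, iters=400):
    f = np.ones(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (1.0 - y / (A @ f + 1e-12)) + tau * np.sign(f)
        f = np.maximum(f - step * grad, 0.0)
    return f

# Synthetic sparse nonnegative truth and noise-free "counts"
rng = np.random.default_rng(1)
A = rng.uniform(0.5, 1.5, (20, 5))
y = A @ np.array([2.0, 0.0, 1.0, 0.0, 0.0])
f_hat = spiral_like(A, y)
```

The paper's actual algorithm replaces this plain gradient step with separable quadratic approximations and richer penalties (total variation, multiscale partitions); the sketch only conveys the shape of the optimization problem.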
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as the least-squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
The emotional effects of violations of causality, or How to make a square amusing
Bressanelli, Daniela; Parovel, Giulia
2012-01-01
In Michotte's launching paradigm a square moves up to and makes contact with another square, which then moves off more slowly. In the triggering effect, the second square moves much faster than the first, eliciting an amusing impression. We generated 13 experimental displays in which there was always incongruity between cause and effect. We hypothesized that the comic impression would be stronger when objects are perceived as living agents and weaker when objects are perceived as mechanically non-animated. General findings support our hypothesis. PMID:23145274
The Hard X-ray 20-40 keV AGN Luminosity Function
NASA Technical Reports Server (NTRS)
Beckmann, V.; Soldi, S.; Shrader, C. R.; Gehrels, N.; Produit, N.
2006-01-01
We have compiled a complete, significance-limited extragalactic sample based on approximately 25,000 deg^2 to a limiting flux of 3 x 10^-11 erg cm^-2 s^-1 (approximately 7,000 deg^2 to a flux limit of 10^-11 erg cm^-2 s^-1) in the 20-40 keV band with INTEGRAL. We have constructed a detailed exposure map to compensate for effects of non-uniform exposure. The flux-number relation is best described by a power law with a slope of alpha = 1.66 +/- 0.11. The integration of the cumulative flux per unit area leads to f(20-40 keV) = 2.6 x 10^-10 erg cm^-2 s^-1 sr^-1, which is about 1% of the known 20-40 keV X-ray background. We present the first luminosity function of AGN in the 20-40 keV energy range, based on 68 extragalactic objects detected by the imager IBIS/ISGRI on board INTEGRAL. The luminosity function shows a smoothly connected two power-law form, with an index of gamma_1 = 0.9 below, and gamma_2 = 2.2 above, the turnover luminosity L* = 4.6 x 10^43 erg s^-1. The emissivity of all INTEGRAL AGNs per unit volume is W(20-40 keV)(> 10^41 erg s^-1) = 2.8 x 10^38 erg s^-1 h_70^3 Mpc^-3. These results are consistent with those derived in the 2-20 keV energy band and do not show a significant contribution by Compton-thick objects. Because the sample used in this study is truly local (mean z = 0.022), only limited conclusions can be drawn for the evolution of AGNs in this energy band. But the objects explaining the peak in the cosmic X-ray background are likely to be either low-luminosity AGN (L_x < 10^41 erg s^-1) or of other type, such as intermediate-mass black holes, clusters, and star-forming regions.
Multi-objective aerodynamic shape optimization of small livestock trailers
NASA Astrophysics Data System (ADS)
Gilkeson, C. A.; Toropov, V. V.; Thompson, H. M.; Wilson, M. C. T.; Foxley, N. A.; Gaskell, P. H.
2013-11-01
This article presents a formal optimization study of the design of small livestock trailers, within which the majority of animals are transported to market in the UK. The benefits of employing a headboard fairing to reduce aerodynamic drag without compromising the ventilation of the animals' microclimate are investigated using a multi-stage process involving computational fluid dynamics (CFD), optimal Latin hypercube (OLH) design of experiments (DoE) and moving least squares (MLS) metamodels. Fairings are parameterized in terms of three design variables and CFD solutions are obtained at 50 permutations of design variables. Both global and local search methods are employed to locate the global minimum from metamodels of the objective functions and a Pareto front is generated. The importance of carefully selecting an objective function is demonstrated and optimal fairing designs, offering drag reductions in excess of 5% without compromising animal ventilation, are presented.
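The moving least squares metamodel at the heart of this workflow admits a compact sketch. Below is our one-variable illustration (the paper uses three design variables and CFD samples): a linear polynomial is refit by weighted least squares around each query point, with Gaussian weights that decay with distance so that nearby samples dominate the local fit. The bandwidth h and the test function are illustrative assumptions.

```python
import numpy as np

# Moving least squares: at each query point, fit a local linear model
# by weighted least squares, weighting samples by distance to the query.
def mls_predict(x_query, x_data, y_data, h=0.1):
    w = np.exp(-((x_data - x_query) / h) ** 2)        # Gaussian weights
    X = np.column_stack([np.ones_like(x_data), x_data])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_data)
    return beta[0] + beta[1] * x_query

# Stand-in for expensive CFD evaluations on a design-of-experiments grid
x = np.linspace(0.0, 1.0, 21)
y = np.sin(2 * np.pi * x)
```

Because the fit is local, the metamodel tracks curvature that a single global polynomial would smear out, which is what makes it suitable for searching an objective surface built from scattered DoE samples.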
Temperature-dependent and optimized thermal emission by spheres
NASA Astrophysics Data System (ADS)
Nguyen, K. L.; Merchiers, O.; Chapuis, P.-O.
2018-03-01
We investigate the temperature and size dependencies of thermal emission by homogeneous spheres as a function of their dielectric properties. Different power laws obtained in this work show that the emitted power can depart strongly from the usual fourth power of temperature given by Planck's law and from the square or the cube of the radius. We also show how to optimize the thermal emission by selecting permittivities leading to resonances, which allow for the so-called super-Planckian regime. These results will be useful as spheres, i.e. the simplest finite objects, are often considered as building blocks of more complex objects.
Control system estimation and design for aerospace vehicles
NASA Technical Reports Server (NTRS)
Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.
1972-01-01
The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.
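The CWLS estimate this report starts from has the standard closed form p_hat = (A^T W A)^{-1} A^T W y for a linear model y = A p + noise with weight matrix W. A minimal numeric sketch, with an illustrative design matrix and weights of our own choosing:

```python
import numpy as np

# Conventional weighted least-squares estimate for y = A p + noise.
def cwls(A, y, W):
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
p_true = np.array([0.5, 2.0])
y = A @ p_true                      # noise-free observations
W = np.diag([1.0, 1.0, 2.0, 2.0])  # illustrative weights
p_hat = cwls(A, y, W)
```

On noise-free data the estimate is exact; the bias the report discusses arises in structural parameter estimation, where the regressors themselves depend on noisy measurements, so changing the estimator or the objective function (the two remedies suggested) becomes necessary.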
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles
NASA Technical Reports Server (NTRS)
Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.
1991-01-01
A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of the changes in the nozzle wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and a Mach 6 and a Mach 15 contoured nozzle.
An adaptive learning control system for large flexible structures
NASA Technical Reports Server (NTRS)
Thau, F. E.
1985-01-01
The objective of the research has been to study the design of adaptive/learning control systems for the control of large flexible structures. In the first activity an adaptive/learning control methodology for flexible space structures was investigated. The approach was based on using a modal model of the flexible structure dynamics and an output-error identification scheme to identify modal parameters. In the second activity, a least-squares identification scheme was proposed for estimating both modal parameters and modal-to-actuator and modal-to-sensor shape functions. The technique was applied to experimental data obtained from the NASA Langley beam experiment. In the third activity, a separable nonlinear least-squares approach was developed for estimating the number of excited modes, shape functions, modal parameters, and modal amplitude and velocity time functions for a flexible structure. In the final research activity, a dual-adaptive control strategy was developed for regulating the modal dynamics and identifying modal parameters of a flexible structure. A min-max approach was used for finding an input to provide modal parameter identification while not exceeding reasonable bounds on modal displacement.
A Study of the Thermal Environment Developed by a Traveling Slipper at High Velocity
2013-03-01
Power Partition Function: The next partition function takes the same formulation as the powered function but now the exponent is squared. The...function and note the squared term in the exponent. ...Thus far the three partition functions each give a predicted...hypothesized that the function would fall somewhere between the first exponential decay function and the power function. However, by squaring the exponent
Robust Joint Graph Sparse Coding for Unsupervised Spectral Feature Selection.
Zhu, Xiaofeng; Li, Xuelong; Zhang, Shichao; Ju, Chunhua; Wu, Xindong
2017-06-01
In this paper, we propose a new unsupervised spectral feature selection model by embedding a graph regularizer into the framework of joint sparse regression for preserving the local structures of data. To do this, we first extract the bases of the training data by previous dictionary learning methods and then map the original data into the basis space to generate their new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate its objective function by simultaneously taking subspace learning and joint sparse regression into account, then design a new optimization solution to solve the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) by replacing the least-squares loss function with a robust loss function, achieving the same goals while also avoiding the impact of outliers. Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed the state-of-the-art algorithms in terms of k-nearest neighbor classification performance.
Wang, B.; Zhu, X.; Gao, C.; Bai, Y.; Dong, J. W.; Wang, L. J.
2015-01-01
The Square Kilometre Array (SKA) project is an international effort to build the world’s largest radio telescope, with a one-square-kilometre collecting area. In addition to its ambitious scientific objectives, such as probing cosmic dawn and the cradle of life, the SKA demands several revolutionary technological breakthroughs, such as ultra-high precision synchronisation of the frequency references for thousands of antennas. In this report, with the purpose of application to the SKA, we demonstrate a frequency reference dissemination and synchronisation scheme in which the phase-noise compensation function is applied at the client site. Hence, one central hub can be linked to a large number of client sites, thus forming a star-shaped topology. As a performance test, a 100-MHz reference frequency signal from a hydrogen maser (H-maser) clock is disseminated and recovered at two remote sites. The phase-noise characteristics of the recovered reference frequency signal coincide with those of the H-maser source and satisfy the SKA requirements. PMID:26349544
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Patra, Rusha; Dutta, Pranab K.
2015-07-01
Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, a mean square error of 0.638 x 10^-3, and an object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.
Reconstruction of internal density distributions in porous bodies from laser ultrasonic data
NASA Technical Reports Server (NTRS)
Lu, Yichi; Goldman, Jeffrey A.; Wadley, Haydn N. G.
1992-01-01
It is presently shown that, for density-reconstruction problems in which information about the inhomogeneity is known a priori, the nonlinear least-squares algorithm yields satisfactory results on the basis of limited projection data. The back-projection algorithm, which obviates assumptions about the objective function to be reconstructed, does not recover the boundary of the inhomogeneity when the number of projections is limited and ray-bending is ignored.
Arrays of flow channels with heat transfer embedded in conducting walls
Bejan, A.; Almerbati, A.; Lorente, S.; ...
2016-04-20
Here we illustrate the free search for the optimal geometry of flow channel cross-sections that meet two objectives simultaneously: reduced resistances to heat transfer and fluid flow. The element cross section and the wall material are fixed, while the shape of the fluid flow opening, or the wetted perimeter, is free to vary. Two element cross sections are considered, square and equilateral triangular. We find that the two objectives are best met when the solid wall thickness is uniform, i.e., when the wetted perimeters are square and triangular, respectively. In addition, we consider arrays of square elements and triangular elements, on the basis of equal mass flow rate per unit of array cross-sectional area. The conclusion is that the array of triangular elements meets the two objectives better than the array of square elements.
O'Farrell, Erin; Smith, Andra; Collins, Barbara
2017-10-01
Studies to date have found little correlation between subjective and objective measures of cognitive function in cancer patients, making it difficult to interpret the significance of their cognitive complaints. The purpose of this study was to determine if a stronger correlation would be obtained using measures of cognitive change rather than static scores. Sixty women with early-stage breast cancer underwent repeated cognitive assessment over the course of chemotherapy with a neuropsychological test battery (objective measure) and with the FACT-Cog (subjective measure). Their results were compared to 60 healthy women matched on age and education and assessed at similar intervals. We used multilevel modeling, with FACT-Cog as the dependent measure and ordinary least squares slopes of a neuropsychological summary score as the independent variable, to evaluate the co-variation between the subjective and objective measures over time. Measures of both objective and subjective cognitive function declined over the course of chemotherapy in the breast cancer patients but there was no significant relationship between them, even when using change measures. Change in objective cognitive function was not related to change in anxiety or fatigue scores, but the decline in perceived cognitive function was associated with greater anxiety and fatigue. The discrepancy in objective and subjective measures of cognition in breast cancer patients cannot be accounted for in terms of a failure to use change measures. Although the results are negative, we contend that this is the more appropriate methodology for analyzing cancer-related changes in cognition. Copyright © 2016 John Wiley & Sons, Ltd.
Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
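The majorization idea can be made concrete with a toy instance of our own (not Kiers' general formulation): take rank-1 matrix approximation, for which truncated SVD is the "existing OLS algorithm", and handle elementwise weights in [0, 1] by repeatedly filling the low-weight entries of the data with the current model values, then refitting by OLS. Each refit cannot increase the WLS loss.

```python
import numpy as np

# OLS step: best rank-1 approximation of Z via SVD.
def rank1_ols(Z):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])

# Majorization loop: fill data with current model where weights are low,
# then refit the same model by ordinary least squares.
def rank1_wls(X, W, iters=100):
    M = rank1_ols(X)
    for _ in range(iters):
        M = rank1_ols(W * X + (1.0 - W) * M)
    return M

def wls_loss(X, M, W):
    return np.sum(W * (X - M) ** 2)

# Rank-1 data with one corrupted entry, whose weight is set to zero
X_clean = np.outer(np.array([1.0, 2.0, 3.0]), np.array([1.0, 0.5, 2.0]))
X = X_clean.copy()
X[0, 0] = 10.0
W = np.ones((3, 3))
W[0, 0] = 0.0
M = rank1_wls(X, W)
```

With the corrupted entry down-weighted to zero, the iteration recovers the value implied by the rank-1 structure of the remaining entries, exactly the behavior the iterated-OLS approach is designed to deliver without a dedicated WLS algorithm.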
Students' Spatial Structuring of 2D Arrays of Squares.
ERIC Educational Resources Information Center
Battista, Michael T.; Clements, Douglas H.; Arnoff, Judy; Battista, Kathryn; Van Auken Borrow, Caroline
1998-01-01
Defines spatial structuring as the mental operation of constructing an organization or form for an object/set of objects. Examines in detail students' structuring and enumeration of two-dimensional rectangular arrays of squares. Concludes that many students do not see row-by-column structure. Describes various levels of sophistication in students'…
Liu, Yan; Ma, Jianhua; Zhang, Hao; Wang, Jing; Liang, Zhengrong
2014-01-01
Background: The negative effects of X-ray exposure, such as the induction of genetic and cancerous diseases, have attracted increasing attention. Objective: This paper aims to investigate a penalized re-weighted least-square (PRWLS) strategy for low-mAs X-ray computed tomography image reconstruction by incorporating an adaptive weighted total variation (AwTV) penalty term and a noise variance model of the projection data. Methods: An AwTV penalty is introduced into the objective function by considering both the piecewise-constant property and the local nearby intensity similarity of the desired image. Furthermore, the weight of the data fidelity term in the objective function is determined by our recent study on modeling the variance of projection data in the presence of electronic background noise. Results: The presented AwTV-PRWLS algorithm achieved the highest full-width-at-half-maximum (FWHM) measurement for data conditions of (1) full-view 10mA acquisition and (2) sparse-view 80mA acquisition. In comparison between the AwTV/TV-PRWLS strategies and the previously reported AwTV/TV-projection onto convex sets (AwTV/TV-POCS) approaches, the former gains in terms of FWHM for data condition (1), but not for data condition (2). Conclusions: In the case of full-view 10mA projection data, the presented AwTV-PRWLS shows potential improvement. However, in the case of sparse-view 80mA projection data, the AwTV/TV-POCS shows an advantage over the PRWLS strategies. PMID:25080113
Lump solutions to nonlinear partial differential equations via Hirota bilinear forms
NASA Astrophysics Data System (ADS)
Ma, Wen-Xiu; Zhou, Yuan
2018-02-01
Lump solutions are analytical rational function solutions localized in all directions in space. We analyze a class of lump solutions, generated from quadratic functions, to nonlinear partial differential equations. The basis of success is the Hirota bilinear formulation and the primary object is the class of positive multivariate quadratic functions. A complete determination of quadratic functions positive in space and time is given, and positive quadratic functions are characterized as sums of squares of linear functions. Necessary and sufficient conditions for positive quadratic functions to solve Hirota bilinear equations are presented, and such polynomial solutions yield lump solutions to nonlinear partial differential equations under the dependent variable transformations u = 2(ln f)_x and u = 2(ln f)_{xx}, where x is one spatial variable. Applications are made for a few generalized KP and BKP equations.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify their statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
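The search procedure described here can be sketched generically: simulated annealing over two parameters, accepting worse candidates with a probability that shrinks as the temperature cools. The objective below is a stand-in quadratic surface, not the paper's regression model of the MMSE-TRA-NR scores, and all constants are our own illustrative choices.

```python
import math
import random

# Simulated annealing: random Gaussian proposals, always accept
# improvements, accept worsenings with probability exp(-delta/temp),
# and linearly cool the temperature to zero over the run.
def anneal(objective, x0, steps=5000, temp0=1.0, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-9
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = list(x), fx
    return best, fbest

# Stand-in objective with its optimum at (0.98, 0.8)
obj = lambda p: (p[0] - 0.98) ** 2 + (p[1] - 0.8) ** 2
params, score = anneal(obj, [0.5, 0.5])
```

The occasional acceptance of worse candidates is what lets the method escape the many local optima that a greedy search over the two recursion parameters would get stuck in.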
SU-F-18C-14: Hessian-Based Norm Penalty for Weighted Least-Square CBCT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, T; Sun, N; Tan, S
Purpose: To develop a Hessian-based norm penalty for cone-beam CT (CBCT) reconstruction that has a similar ability in suppressing noise as the total variation (TV) penalty while avoiding the staircase effect and better preserving low-contrast objects. Methods: We extended the TV penalty to a Hessian-based norm penalty based on the Frobenius norm of the Hessian matrix of an image for CBCT reconstruction. The objective function was constructed using the penalized weighted least-squares (PWLS) principle. An effective algorithm was developed to minimize the objective function using a majorization-minimization (MM) approach. We evaluated and compared the proposed penalty with the TV penalty on a CatPhan 600 phantom and an anthropomorphic head phantom, each acquired at a low-dose protocol (10mA/10ms) and a high-dose protocol (80mA/12ms). For both penalties, the contrast-to-noise ratio (CNR) in four low-contrast regions-of-interest (ROIs) and the full-width-at-half-maximum (FWHM) of two point-like objects in reconstructed images were calculated and compared. Results: In the experiment on the CatPhan 600 phantom, the Hessian-based norm penalty has slightly higher CNRs and approximately equivalent FWHM values compared with the TV penalty. In the experiment on the anthropomorphic head phantom at the low-dose protocol, the TV penalty result has several artificial piecewise-constant areas known as the staircase effect, while with the Hessian-based norm penalty the image appears smoother and more similar to the FDK result using the high-dose protocol. Conclusion: The proposed Hessian-based norm penalty has a similar performance in suppressing noise to the TV penalty, but has a potential advantage in suppressing the staircase effect and preserving low-contrast objects. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086.
Milne, S C
1996-12-24
In this paper, we give two infinite families of explicit exact formulas that generalize Jacobi's (1829) 4 and 8 squares identities to 4n^2 or 4n(n + 1) squares, respectively, without using cusp forms. Our 24 squares identity leads to a different formula for Ramanujan's tau function tau(n), when n is odd. These results arise in the setting of Jacobi elliptic functions, Jacobi continued fractions, Hankel or Turánian determinants, Fourier series, Lambert series, inclusion/exclusion, the Laplace expansion formula for determinants, and Schur functions. We have also obtained many additional infinite families of identities in this same setting that are analogous to the eta-function identities in appendix I of Macdonald's work [Macdonald, I. G. (1972) Invent. Math. 15, 91-143]. A special case of our methods yields a proof of the two conjectured [Kac, V. G. and Wakimoto, M. (1994) in Progress in Mathematics, eds. Brylinski, J.-L., Brylinski, R., Guillemin, V. & Kac, V. (Birkhäuser Boston, Boston, MA), Vol. 123, pp. 415-456] identities involving representing a positive integer by sums of 4n^2 or 4n(n + 1) triangular numbers, respectively. Our 16 and 24 squares identities were originally obtained via multiple basic hypergeometric series, Gustafson's C_l nonterminating 6-phi-5 summation theorem, and Andrews' basic hypergeometric series proof of Jacobi's 4 and 8 squares identities. We have (elsewhere) applied symmetry and Schur function techniques to this original approach to prove the existence of similar infinite families of sums of squares identities for n^2 or n(n + 1) squares, respectively. Our sums of more than 8 squares identities are not the same as the formulas of Mathews (1895), Glaisher (1907), Ramanujan (1916), Mordell (1917, 1919), Hardy (1918, 1920), Kac and Wakimoto, and many others.
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
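The visual model described in this abstract can be checked numerically: the variance is the area of the "average square" built from the squared deviations, and the standard deviation is that square's side length. A minimal stdlib Python sketch (the data values are an arbitrary illustration):

```python
import math

def variance(xs):
    """Population variance: the mean area of the squared-deviation squares."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def std_dev(xs):
    """Standard deviation: the side length of the average square."""
    return math.sqrt(variance(xs))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # mean is 5.0
print(variance(data))  # 4.0
print(std_dev(data))   # 2.0
```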
NASA Astrophysics Data System (ADS)
Wu, Zhejun; Kudenov, Michael W.
2017-05-01
This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that a non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. To test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
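The reparameterization idea can be sketched with stdlib Python: a spectrum sampled at many wavelengths is represented by a few spline coefficients, and those coefficients are recovered by least squares (normal equations). Degree-1 B-splines (hat functions) on uniform knots stand in for the paper's unspecified spline order and knot placement, and the smoothness regularizer is omitted; all numbers are synthetic.

```python
def hat(t, j, k):
    # Degree-1 B-spline (hat function) on uniform knots x_j = j/(k-1) in [0, 1]
    return max(0.0, 1.0 - abs(t - j / (k - 1)) * (k - 1))

def solve(M, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def fit_spline(ts, ys, k):
    # Least-squares fit of k hat-function coefficients: solve (A^T A) c = A^T y
    A = [[hat(t, j, k) for j in range(k)] for t in ts]
    AtA = [[sum(row[a] * row[b] for row in A) for b in range(k)] for a in range(k)]
    Aty = [sum(A[i][a] * ys[i] for i in range(len(ts))) for a in range(k)]
    return solve(AtA, Aty)

# 50 spectral samples parameterized by only 6 coefficients
k = 6
c_true = [0.2, 0.9, 0.5, 0.7, 0.3, 0.6]
ts = [i / 49 for i in range(50)]
ys = [sum(c_true[j] * hat(t, j, k) for j in range(k)) for t in ts]
c_fit = fit_spline(ts, ys, k)
print([round(c, 6) for c in c_fit])  # recovers c_true
```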
Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters
2015-07-01
was too low to measure was on the order of 5 hours. Because the research team did not have access to the site between 1700 and 0500 hours the... Basic for Applications (VBA). The objective function was the root mean square (RMS) errors between modeled and measured heating load and the modeled... References: Phase Change Energy Solutions. (2013). BioPCM web page, http://phasechange.com/index.php/en/about/our-material. Accessed 16 September
Artificial grasping system for the paralyzed hand.
Ferrari de Castro, M C; Cliquet, A
2000-03-01
Neuromuscular electrical stimulation has been used in upper limb rehabilitation towards restoring motor hand function. In this work, an 8-channel, microcomputer-controlled stimulator with a monophasic square voltage output was used. Muscle activation sequences were defined to perform palmar and lateral prehension and power grip (index finger extension type). The sequences used allowed subjects to demonstrate their ability to hold and release objects encountered in daily living, permitting activities such as drinking, eating, writing, and typing.
McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.
2016-01-01
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
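The two strategies this study dissociates can be illustrated on a skewed discrete weight distribution: the maximum a posteriori strategy picks the most probable weight (the mode), while minimizing expected squared prediction error leads to the distribution's mean. The weights and probabilities below are illustrative, not taken from the study.

```python
# Skewed weight distribution: (weight in grams, probability)
dist = [(200, 0.6), (400, 0.3), (800, 0.1)]

# MAP strategy: predict the most probable weight (the mode)
map_guess = max(dist, key=lambda wp: wp[1])[0]

# Minimal-squared-error strategy: E[(w - g)^2] is minimized at g = E[w],
# the mean: 0.6*200 + 0.3*400 + 0.1*800 = 320
mse_guess = sum(w * p for w, p in dist)

print(map_guess)  # 200
print(mse_guess)
```

For a symmetric distribution the two guesses coincide; the skew is what lets the lifting data distinguish them.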
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacings of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function, accounting for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that of the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at the Nyquist frequencies alone resulted in 54-micron square pixels. For the photon-counting CdTe detector, after resampling to 53-micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions was 0.501 and 0.507. Conclusion: A hexagonal pixel array photon-counting CdTe detector, after resampling to square pixels, provides high-resolution imaging suitable for fluoroscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levi, Michele; Steinhoff, Jan, E-mail: michele.levi@upmc.fr, E-mail: jan.steinhoff@aei.mpg.de
2016-01-01
The next-to-next-to-leading order spin-squared interaction potential for generic compact binaries is derived for the first time via the effective field theory for gravitating spinning objects in the post-Newtonian scheme. The spin-squared sector is an intricate one, as it requires the consideration of the point particle action beyond minimal coupling, and mainly involves the spin-squared worldline couplings, which are quite complex compared to the worldline couplings from the minimal coupling part of the action. This sector also involves the linear-in-spin couplings, as we go up in the nonlinearity of the interaction and in the loop order. Hence, there is an excessive increase in the number of Feynman diagrams, of which more are higher loop ones. We provide all the Feynman diagrams and their values. The beneficial "nonrelativistic gravitational" fields are employed in the computation. This spin-squared correction, which enters at the fourth post-Newtonian order for rapidly rotating compact objects, completes the conservative sector up to the fourth post-Newtonian accuracy. The robustness of the effective field theory for gravitating spinning objects is shown here once again, as demonstrated in a recent series of papers by the authors, which obtained all spin-dependent sectors required up to the fourth post-Newtonian accuracy. The effective field theory of spinning objects makes it possible to directly obtain the equations of motion and the Hamiltonians, and these will be derived for the potential obtained here in a forthcoming paper.
Multidimensional model of apathy in older adults using partial least squares--path modeling.
Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine
2016-06-01
Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contributions of these factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we found similar latent variables except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed.
Mayo, Ann M.; Wallhagen, Margaret; Cooper, Bruce A.; Mehta, Kala; Ross, Leslie; Miller, Bruce
2012-01-01
Objective To determine the relationship between functional status (independent activities of daily living) and judgment/problem solving, and the extent to which select demographic characteristics such as dementia subtype and cognitive measures may moderate that relationship in older adults with dementia. Methods The National Alzheimer's Coordinating Center Universal Data Set was accessed for a study sample of 3,855 individuals diagnosed with dementia. Primary variables included functional status, judgment/problem solving, and cognition. Results Functional status was related to judgment/problem solving (r = 0.66; p < .0005). Functional status and cognition jointly predicted 56% of the variance in judgment/problem solving (R² = .56, p < .0005). As cognition decreased, functional status became a stronger predictor of poorer judgment/problem solving. Conclusions Among individuals with a diagnosis of dementia, declining functional status as well as declining cognition should raise concerns about judgment/problem solving. PMID:22786576
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in essentially the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty, and we compare the analysis results obtained using our procedure with those from conventional methods.
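A minimal sketch of the idea under simplifying assumptions (not the paper's actual procedure): a straight-line fit with a common additive Type B offset θ treated as a Gaussian nuisance parameter. The extended least-squares function adds a penalty term (θ/σ_B)² coming from the nuisance PDF, and θ is then profiled out; for fixed slope and intercept the optimal θ has a closed form here. All data values are synthetic.

```python
def profiled_chi2(a, b, xs, ys, sigma, sigma_b):
    # Extended least squares chi2(a, b, theta) + (theta / sigma_b)^2,
    # with the common-offset nuisance theta profiled out analytically.
    r = [y - (a + b * x) for x, y in zip(xs, ys)]
    theta = sum(r) / (len(r) + sigma ** 2 / sigma_b ** 2)  # argmin over theta
    return (sum(((ri - theta) / sigma) ** 2 for ri in r)
            + (theta / sigma_b) ** 2)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 3.0, 5.1, 6.9, 9.0]   # roughly y = 1 + 2x

# Coarse grid search over (a, b); a real analysis would use a proper optimizer
best = min(((a0 / 100, b0 / 100)
            for a0 in range(0, 201) for b0 in range(150, 251)),
           key=lambda ab: profiled_chi2(ab[0], ab[1], xs, ys, 0.1, 0.5))
print(best)  # (1.08, 1.97): the ordinary LS solution for this symmetric prior
```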
Degradation trend estimation of slewing bearing based on LSSVM model
NASA Astrophysics Data System (ADS)
Lu, Chao; Chen, Jie; Hong, Rongjing; Feng, Yang; Li, Yuanyuan
2016-08-01
A novel prediction method based on least squares support vector machine (LSSVM) is proposed to estimate a slewing bearing's degradation trend from small sample data. The method takes the vibration signal, which contains rich state information, as the object of study. Principal component analysis (PCA) was applied to fuse multiple feature vectors that reflect the health state of the slewing bearing, such as root mean square, kurtosis, wavelet energy entropy, and intrinsic mode function (IMF) energy. The degradation indicator fused by PCA reflects the degradation more comprehensively and effectively. The degradation trend of the slewing bearing was then predicted using the LSSVM model optimized by particle swarm optimization (PSO). The proposed method was demonstrated to be more accurate and effective through a whole-life experiment on a slewing bearing; it can therefore be applied in engineering practice.
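The PCA fusion step amounts to projecting the multi-feature history onto its first principal component, which then serves as the fused degradation indicator. A stdlib sketch via power iteration; the feature values are synthetic stand-ins for the paper's vibration-derived features, and the LSSVM/PSO prediction stage is not reproduced here.

```python
def first_pc(rows, iters=200):
    # First principal component of mean-centered data via power iteration
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
         for a in range(d)]                      # covariance matrix (d x d)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
    return v, scores

# Synthetic feature history: columns stand in for RMS, kurtosis,
# wavelet energy entropy, and IMF energy, all drifting as damage grows
rows = [[0.1 * t + 0.01 * ((-1) ** t), 3.0 + 0.05 * t,
         0.8 - 0.02 * t, 1.0 + 0.08 * t] for t in range(20)]
v, scores = first_pc(rows)
# Orient the sign so the fused indicator increases with time
sign = 1.0 if scores[-1] > scores[0] else -1.0
trend = [sign * s for s in scores]
print(all(b > a for a, b in zip(trend, trend[1:])))
```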
The DES Science Verification Weak Lensing Shear Catalogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvis, M.
We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation, and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies, respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.
Accuracy enhancement of point triangulation probes for linear displacement measurement
NASA Astrophysics Data System (ADS)
Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun
2000-03-01
Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, several factors must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB composed of an incoherent source (LED), a line scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, which is a modified correlation function, can calculate the displacement between the probe and the object surface effectively even in the presence of inclinations, power fluctuations, and noise.
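The average square difference function at the heart of the EASDF can be sketched as follows: slide one line-scan signal against a reference and take the mean squared difference; the shift minimizing it is the displacement estimate. This is a simplified stand-in for the paper's expanded version, whose exact form is not given in the abstract, and the spot profiles are synthetic.

```python
import math

def asdf(a, b, tau):
    # Average square difference between a advanced by tau and b
    n = len(a) - tau
    return sum((a[i + tau] - b[i]) ** 2 for i in range(n)) / n

def estimate_shift(a, b, max_tau):
    # Displacement estimate: the nonnegative shift minimizing the ASDF
    return min(range(max_tau + 1), key=lambda t: asdf(a, b, t))

# Synthetic line-scan spot profile and a copy displaced by 7 pixels
profile = [math.exp(-((i - 30) ** 2) / 40.0) for i in range(100)]
shifted = [math.exp(-((i - 23) ** 2) / 40.0) for i in range(100)]
print(estimate_shift(profile, shifted, 20))  # 7
```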
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng
2015-02-01
This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver that smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than the previous one, which depends on special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
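The IRLS idea shows up in its simplest form on a one-dimensional problem: minimizing the nonsmooth objective Σ|x_i − m| by repeatedly solving weighted least-squares problems with weights 1/(|x_i − m| + ε). The iterates converge to the median, whereas plain least squares gives the mean. This is a toy illustration of the smoothing-and-reweighting scheme, not the paper's Schatten-p/l2,q algorithm.

```python
def irls_l1(xs, iters=100, eps=1e-8):
    # Iteratively reweighted least squares for min_m sum_i |x_i - m|
    m = sum(xs) / len(xs)            # start from the least-squares solution
    for _ in range(iters):
        w = [1.0 / (abs(x - m) + eps) for x in xs]          # smoothed weights
        m = sum(wi * x for wi, x in zip(w, xs)) / sum(w)    # weighted LS update
    return m

data = [1.0, 2.0, 3.0, 4.0, 100.0]    # one gross outlier
print(sum(data) / len(data))   # 22.0: least squares (the mean) is dragged away
print(round(irls_l1(data), 4))  # 3.0: IRLS reaches the robust l1 solution
```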
Wheaton, Felicia V; Crimmins, Eileen M
2016-07-01
The objectives were to determine whether women always fare more poorly in terms of physical function and disability across countries that vary widely in terms of their level of development, epidemiologic context and level of gender equality. Sex differences in self-reported and objective measures of disability and physical function were compared among older adults aged 55-85 in the United States of America, Taiwan, Korea, Mexico, China, Indonesia and among the Tsimane of Bolivia using population-based studies collected between 2001 and 2011. Data were analysed using logistic and ordinary least-squares regression. Confidence intervals were examined to see whether the effect of being female differed significantly between countries. In all countries, women had consistently worse physical functioning (both self-reported and objectively measured). Women also tended to report more difficulty with activities of daily living (ADL), although differences were not always significant. In general, sex differences across measures were less pronounced in China. In Korea, women had significantly lower grip strength, but sex differences in ADL difficulty were non-significant or even reversed. Education and marital status helped explain sex differences. Overall, there was striking similarity in the magnitude and direction of sex differences across countries despite considerable differences in context, although modest variations in the effect of sex were observed.
Objective scatterometer wind ambiguity removal using smoothness and dynamical constraints
NASA Technical Reports Server (NTRS)
Hoffman, R. N.
1984-01-01
In the present investigation, a variational analysis method (VAM) is used to remove the ambiguity of the Seasat-A Satellite Scatterometer (SASS) winds. At each SASS data point, two, three, or four wind vectors (termed ambiguities) are retrieved. The VAM is basically a least squares method for fitting data, and the problem may be nonlinear. The best fit to the data and constraints is obtained by minimizing the objective function. The VAM was tested and tuned on data for 1200 GMT September 10, 1978. Attention is given to a case study involving an intense cyclone centered south of Japan at 138 deg E.
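The core of the ambiguity-removal problem can be illustrated on a toy one-dimensional chain of data points, each carrying two candidate (here scalar) wind values: the objective function sums a squared misfit to a background field plus a squared-difference smoothness term, and the selection minimizing it is chosen. The VAM itself solves a continuous variational problem; this brute-force discrete version, with invented numbers, only illustrates the least-squares objective.

```python
from itertools import product

def objective(selection, candidates, background, w_smooth=1.0):
    # Least-squares misfit to the background plus neighbor smoothness
    picked = [candidates[i][s] for i, s in enumerate(selection)]
    misfit = sum((u - b) ** 2 for u, b in zip(picked, background))
    smooth = sum((picked[i + 1] - picked[i]) ** 2 for i in range(len(picked) - 1))
    return misfit + w_smooth * smooth

# Each point retrieved two ambiguous wind values of opposite sign
candidates = [(5.0, -5.2), (4.8, -4.9), (5.1, -5.0), (4.9, -5.1)]
background = [4.0, 4.0, 4.0, 4.0]    # hypothetical forecast field

best = min(product(range(2), repeat=len(candidates)),
           key=lambda sel: objective(sel, candidates, background))
print(best)  # (0, 0, 0, 0): the smooth, background-consistent branch
```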
Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach
NASA Astrophysics Data System (ADS)
Wang, Li; Lu, Zhong-Rong
2017-05-01
This paper aims to identify the parameters of the Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure: identifying the model parameters is treated as an optimization problem with a nonlinear least-squares objective function. The enhanced response sensitivity approach, which has been shown to be convergent and well suited to this kind of problem, is then adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
Moore, H.J.; Boyce, J.M.; Hahn, D.A.
1980-01-01
Apparently, there are two types of size-frequency distributions of small lunar craters (≈1-100 m across): (1) crater production distributions, for which the cumulative frequency of craters is an inverse function of diameter to a power near 2.8, and (2) steady-state distributions, for which the cumulative frequency of craters is inversely proportional to the square of their diameters. According to theory, cumulative frequencies of craters in each morphologic category within the steady state should also be an inverse function of the square of their diameters. Some data on frequency distributions of craters by morphologic type are approximately consistent with theory, whereas other data are inconsistent with it. A flux of crater-producing objects can be inferred from the size-frequency distributions of small craters on the flanks and ejecta of craters of known age. Crater frequency distributions and data on the craters Tycho, North Ray, Cone, and South Ray, when compared with the flux of objects measured by the Apollo Passive Seismometer, suggest that the flux of objects has been relatively constant over the last 100 m.y. (within 1/3 to 3 times the flux estimated for Tycho). Steady-state frequency distributions for craters in several morphologic categories formed the basis for estimating the relative ages of craters and surfaces in a system used during the Apollo landing site mapping program of the U.S. Geological Survey. The relative ages in this system are converted to model absolute ages that have a rather broad range of values, between about 1/3 and 3 times the assigned model absolute age. © 1980 D. Reidel Publishing Co.
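The two distribution types differ only in the exponent b of the cumulative frequency law N(>D) ∝ D^(−b): b near 2.8 for production surfaces, b = 2 for the steady state. Given crater counts, the exponent is recovered by least squares on log N versus log D; the counts below are synthetic, generated from an exact b = 2 law.

```python
import math

def fit_power_law(diams, cum_counts):
    # Least-squares slope of log N(>D) against log D
    xs = [math.log(d) for d in diams]
    ys = [math.log(n) for n in cum_counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Steady-state synthetic counts: N(>D) = 1e6 * D^-2
diams = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0]
counts = [1e6 * d ** -2 for d in diams]
print(round(fit_power_law(diams, counts), 6))  # -2.0
```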
2011-10-30
techniques can produce nanostructured programmable objects. The length scale of the driving physics limits the size scale of objects in DNA origami... been working on developing a more compact design for 3D origami, with layers of helices packed on a square lattice, that can be folded successfully... version of the CADnano DNA origami CAD software to support square lattice designs. Achieving a simple and standardized way to create designs with the
Measuring the Hall weighting function for square and cloverleaf geometries
NASA Astrophysics Data System (ADS)
Scherschligt, Julia K.; Koon, Daniel W.
2000-02-01
We have directly measured the Hall weighting function—the sensitivity of a four-wire Hall measurement to the position of macroscopic inhomogeneities in Hall angle—for both a square-shaped and a cloverleaf specimen. Comparison with the measured resistivity weighting function for a square geometry [D. W. Koon and W. K. Chan, Rev. Sci. Instrum. 69, 12 (1998)] proves that the two measurements sample the same specimen differently. For Hall measurements on both a square and a cloverleaf, the function is nonnegative, with its maximum in the center and its minimum of zero at the edges of the square. Converting a square into a cloverleaf is shown to dramatically focus the measurement process onto a much smaller portion of the specimen. While our results agree qualitatively with theory, details are washed out owing to the finite size of the magnetic probe used.
Squared eigenfunctions for the Sasa-Satsuma equation
NASA Astrophysics Data System (ADS)
Yang, Jianke; Kaup, D. J.
2009-02-01
Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the linearized equation of an integrable equation. They are needed for various studies related to integrable equations, such as the development of its soliton perturbation theory. In this article, squared eigenfunctions are derived for the Sasa-Satsuma equation whose spectral operator is a 3×3 system, while its linearized operator is a 2×2 system. It is shown that these squared eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function. The procedure of this derivation consists of two steps: First is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method. The second one is to calculate the variations of the scattering data via the variations of the potentials through elementary calculations. While this procedure has been used before on other integrable equations, it is shown here, for the first time, that for a general integrable equation, the functions appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying, respectively, the linearized equation and the adjoint linearized equation of the integrable system. This proof clarifies this procedure and provides a unified explanation for previous results of squared eigenfunctions on individual integrable equations. This procedure uses primarily the spectral operator of the Lax pair. Thus two equations in the same integrable hierarchy will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix, the squared eigenfunctions are presented for the Manakov equations whose spectral operator is closely related to that of the Sasa-Satsuma equation.
A Comparison of Lord's Chi Square and Raju's Area Measures in Detection of DIF.
ERIC Educational Resources Information Center
Cohen, Allan S.; Kim, Seock-Ho
1993-01-01
The effectiveness of two statistical tests of the area between item response functions (exact signed area and exact unsigned area) estimated in different samples, a measure of differential item functioning (DIF), was compared with Lord's chi square. Lord's chi square was found to be the most effective in detecting DIF. (SLD)
Confidence Region of Least Squares Solution for Single-Arc Observations
NASA Astrophysics Data System (ADS)
Principe, G.; Armellin, R.; Lewis, H.
2016-09-01
The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method by taking advantage of the high order Taylor expansions enabled by differential algebra. In particular, the high order expansion of the residuals with respect to the state is used to implement an arbitrary order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
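The classical differential correction that this work generalizes is a first-order Gauss-Newton iteration on the least-squares residuals: linearize about the current state, solve the normal equations for a correction, update, repeat. A toy sketch with a numerical Jacobian; the scalar model a·exp(b·t) is an arbitrary stand-in for a real orbit propagator, and the observations are noise-free synthetic values.

```python
import math

def model(p, t):
    a, b = p
    return a * math.exp(b * t)

def gauss_newton(p, ts, obs, iters=20, h=1e-6):
    # Classical differential correction for a 2-parameter state
    for _ in range(iters):
        r = [model(p, t) - y for t, y in zip(ts, obs)]
        # Forward-difference Jacobian, len(ts) x 2
        J = [[(model([p[0] + (k == 0) * h, p[1] + (k == 1) * h], t)
               - model(p, t)) / h for k in range(2)] for t in ts]
        # Normal equations (J^T J) dp = -J^T r, solved by Cramer's rule
        a11 = sum(row[0] ** 2 for row in J)
        a12 = sum(row[0] * row[1] for row in J)
        a22 = sum(row[1] ** 2 for row in J)
        b1 = -sum(row[0] * ri for row, ri in zip(J, r))
        b2 = -sum(row[1] * ri for row, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        p = [p[0] + (a22 * b1 - a12 * b2) / det,
             p[1] + (a11 * b2 - a12 * b1) / det]
    return p

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
obs = [model([2.0, 0.7], t) for t in ts]   # noise-free synthetic observations
fit = gauss_newton([1.5, 0.5], ts, obs)
print([round(x, 6) for x in fit])  # [2.0, 0.7]
```

The differential-algebra approach in the abstract replaces this first-order linearization with arbitrary-order Taylor expansions of the residuals.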
Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan
2014-09-01
Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that of a conventional CT scan, which has raised major concerns for patients. In this study, to reduce radiation dose by lowering the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method achieves promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.
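The PWLS objective has the generic form Φ(x) = (y − x)ᵀW(y − x) + β·R(x): a weighted data-fidelity term plus a regularizer. A one-dimensional denoising sketch, where a simple quadratic smoothness penalty stands in for the paper's prior-image induced nonlocal term (which requires a registered prior image), gradient descent stands in for the successive over-relaxation solver, and all numbers are synthetic:

```python
def pwls_denoise(y, weights, beta=2.0, iters=500, step=0.05):
    # Minimize sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_{i+1} - x_i)^2
    # by plain gradient descent
    x = y[:]
    for _ in range(iters):
        g = [2.0 * w * (xi - yi) for w, xi, yi in zip(weights, x, y)]
        for i in range(len(x) - 1):
            d = 2.0 * beta * (x[i + 1] - x[i])   # smoothness-term gradient
            g[i] -= d
            g[i + 1] += d
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Noisy step signal; weights model data-dependent variance (higher = trusted)
y = [0.1, -0.05, 0.08, 1.1, 0.92, 1.05]
w = [1.0] * len(y)
x = pwls_denoise(y, w)
print([round(v, 3) for v in x])
```

The result trades a small misfit to y for a strictly smaller smoothness penalty, which is exactly the noise-versus-detail balance the β weight controls.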
Brown, Angus M
2006-04-01
The objective of the present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is straightforward to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
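The same sum-of-squares fit can be reproduced outside Excel. A stdlib sketch that fits a single Gaussian by searching (mean, width) on a coarse grid, with the amplitude solved in closed form by least squares at each grid point; SOLVER's generalized reduced gradient method instead searches all parameters simultaneously, and the data here are synthetic rather than optic-nerve recordings.

```python
import math

def gauss(t, mu, sigma):
    return math.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))

def fit_gaussian(ts, ys):
    # For fixed (mu, sigma) the best amplitude is the LS projection,
    # so only (mu, sigma) need to be searched
    best = None
    for mu10 in range(0, 101):           # mu in 0..10, step 0.1
        for s10 in range(5, 51):         # sigma in 0.5..5.0, step 0.1
            mu, s = mu10 / 10.0, s10 / 10.0
            g = [gauss(t, mu, s) for t in ts]
            amp = (sum(gi * yi for gi, yi in zip(g, ys))
                   / sum(gi * gi for gi in g))
            sse = sum((yi - amp * gi) ** 2 for gi, yi in zip(g, ys))
            if best is None or sse < best[0]:
                best = (sse, amp, mu, s)
    return best[1:]

ts = [i * 0.25 for i in range(41)]               # samples on 0..10
ys = [3.0 * gauss(t, 4.0, 1.2) for t in ts]      # noise-free synthetic peak
amp, mu, sigma = fit_gaussian(ts, ys)
print(round(amp, 3), round(mu, 3), round(sigma, 3))  # 3.0 4.0 1.2
```

Fitting a sum of several Gaussians follows the same pattern with one (amplitude, mean, width) triple per component, which is where a proper optimizer becomes worthwhile.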
Validation of the Malay Version of the Inventory of Functional Status after Childbirth Questionnaire
Noor, Norhayati Mohd; Aziz, Aniza Abd.; Mostapa, Mohd Rosmizaki; Awang, Zainudin
2015-01-01
Objective. This study was designed to examine the psychometric properties of Malay version of the Inventory of Functional Status after Childbirth (IFSAC). Design. A cross-sectional study. Materials and Methods. A total of 108 postpartum mothers attending Obstetrics and Gynaecology Clinic, in a tertiary teaching hospital in Malaysia, were involved. Construct validity and internal consistency were performed after the translation, content validity, and face validity process. The data were analyzed using Analysis of Moment Structure version 18 and Statistical Packages for the Social Sciences version 20. Results. The final model consists of four constructs, namely, infant care, personal care, household activities, and social and community activities, with 18 items demonstrating acceptable factor loadings, domain to domain correlation, and best fit (Chi-squared/degree of freedom = 1.678; Tucker-Lewis index = 0.923; comparative fit index = 0.936; and root mean square error of approximation = 0.080). Composite reliability and average variance extracted of the domains ranged from 0.659 to 0.921 and from 0.499 to 0.628, respectively. Conclusion. The study suggested that the four-factor model with 18 items of the Malay version of IFSAC was acceptable to be used to measure functional status after childbirth because it is valid, reliable, and simple. PMID:25667932
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
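A minimal illustration of least-squares approximation in a Chebyshev basis, using NumPy's `chebfit`/`chebval`; the flight-control setting is not reproduced, and the target function here is illustrative. The orthogonal basis keeps the least-squares system well conditioned, which is one reason parameter estimates converge better than with raw monomials.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Least-squares fit of exp(x) on [-1, 1] in the Chebyshev basis.
x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x)
coef = C.chebfit(x, y, deg=8)      # LS solve in the orthogonal basis
max_err = np.max(np.abs(C.chebval(x, coef) - y))
```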
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
…use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS,H) = 10.3040 × 10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10^6. Thus the model…
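The quoted cost-functional values can be plugged into an RSS-based model comparison statistic. A hedged sketch, assuming the statistic takes the common form U = n(J_H − J)/J for n observations (n is not given in the excerpt, so a per-observation value is computed):

```python
def model_comparison_stat(j_null, j_alt, n):
    """RSS/WLS-based model comparison statistic U = n * (J_H - J) / J.
    Under the null hypothesis U is asymptotically chi-square distributed;
    the exact form and degrees of freedom follow the report's framework."""
    return n * (j_null - j_alt) / j_alt

# per-observation value from the quoted cost functionals (n assumed = 1)
u_per_obs = model_comparison_stat(10.3040e6, 8.8394e6, 1)
```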
Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2009-01-01
Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics that are used to characterize complex cells/objects derived from color image analysis. Some of the features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); centerX (x-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); centerY (y-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that takes into account whether neighboring pixels are diagonal, which is a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1: if equal to 1, the particle bounding box is square, and as the elongation decreases from 1, the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle perimeter); Roundness (4π × area / perimeter², a measure of object roundness, or compactness, given as a value between 0 and 1: the greater the ratio, the rounder the object); Thin in center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); and Theta (orientation of the major axis).
Smoothness and color metrics: for each color component (red, green, blue), the minimum, maximum, average, and standard deviation within the particle are tracked. These metrics can be used for autonomous analysis of color images from a microscope, video camera, or digital still image. The algorithm can also automatically identify tumor morphology in stained images and has been used to detect stained-cell phenomena (see figure).
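Two of the listed metrics can be sketched on a binary blob. The code below is an illustrative re-implementation, not the CCMIS algorithm: it counts exposed pixel edges as a simple perimeter estimate (the circumference metric above weights diagonal steps differently) and computes roundness = 4π·area/perimeter².

```python
import math

def blob_metrics(pixels):
    """Area, 4-connected perimeter estimate, and roundness for a blob
    given as a set of (x, y) pixel coordinates."""
    area = len(pixels)
    perimeter = 0
    for (x, y) in pixels:
        # count each pixel edge that faces the background
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in pixels:
                perimeter += 1
    roundness = 4.0 * math.pi * area / perimeter ** 2
    return area, perimeter, roundness

# a filled 10x10 square: area 100, perimeter 40, roundness = pi/4
square = {(x, y) for x in range(10) for y in range(10)}
a, p, r = blob_metrics(square)
```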
Exploration of Objective Functions for Optimal Placement of Weather Stations
NASA Astrophysics Data System (ADS)
Snyder, A.; Dietterich, T.; Selker, J. S.
2016-12-01
Many regions of Earth lack ground-based sensing of weather variables. For example, most countries in Sub-Saharan Africa do not have reliable weather station networks. This absence of sensor data has many consequences ranging from public safety (poor prediction and detection of severe weather events), to agriculture (lack of crop insurance), to science (reduced quality of world-wide weather forecasts, climate change measurement, etc.). The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to locate each weather station. We can formulate this as the following optimization problem: Determine a set of N sites that jointly optimize the value of an objective function. The purpose of this poster is to propose and assess several objective functions. In addition to standard objectives (e.g., minimizing the summed squared error of interpolated values over the entire region), we consider objectives that minimize the maximum error over the region and objectives that optimize the detection of extreme events. An additional issue is that each station measures more than 10 variables—how should we balance the accuracy of our interpolated maps for each variable? Weather sensors inevitably drift out of calibration or fail altogether. How can we incorporate robustness to failed sensors into our network design? Another important requirement is that the network should make it possible to detect failed sensors by comparing their readings with those of other stations. How can this requirement be met? Finally, we provide an initial assessment of the computational cost of optimizing these various objective functions. 
We invite everyone to join the discussion at our poster by proposing additional objectives, identifying additional issues to consider, and expanding our bibliography of relevant papers. A prize (derived from grapes grown in Oregon) will be awarded for the most insightful contribution to the discussion!
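Two of the candidate objectives can be compared in a toy placement problem. A hedged sketch, assuming nearest-station interpolation on a 1-D domain (the real problem is multivariate and spatial, and the candidate sites here are illustrative):

```python
import math

def interp_error(stations, sites, truth):
    """Nearest-station interpolation error at every site (1-D domain)."""
    errs = []
    for s in sites:
        nearest = min(stations, key=lambda t: abs(t - s))
        errs.append(abs(truth(s) - truth(nearest)))
    return errs

def greedy_place(candidates, sites, truth, n, objective):
    """Greedily add the candidate station that most reduces the chosen
    objective (e.g. summed squared error, or the maximum error)."""
    chosen = []
    for _ in range(n):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: objective(interp_error(chosen + [c], sites, truth)))
        chosen.append(best)
    return chosen

sse = lambda errs: sum(e * e for e in errs)   # summed squared error
maxerr = lambda errs: max(errs)               # worst-case error
```

Swapping `sse` for `maxerr` changes which sites the greedy procedure prefers, which is exactly the kind of objective-function trade-off the poster explores.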
Perez-Guaita, David; Kuligowski, Julia; Quintás, Guillermo; Garrigues, Salvador; Guardia, Miguel de la
2013-03-30
Locally weighted partial least squares regression (LW-PLSR) has been applied to the determination of four clinical parameters in human serum samples (total protein, triglyceride, glucose and urea contents) by Fourier transform infrared (FTIR) spectroscopy. Classical LW-PLSR models were constructed using different spectral regions. For the selection of the parameters of LW-PLSR modeling, a multi-parametric study was carried out employing the minimum root-mean-square error of cross-validation (RMSCV) as the objective function. In order to overcome the effect of strong matrix interferences on the predictive accuracy of LW-PLSR models, this work focuses on sample selection. Accordingly, a novel strategy for the development of local models is proposed, based on the use of: (i) principal component analysis (PCA) performed on an analyte-specific spectral region for identifying the most similar sample spectra and (ii) partial least squares regression (PLSR) constructed using the whole spectrum. Results found using this strategy were compared to those provided by PLSR using the same spectral intervals as for LW-PLSR. Prediction errors found by both classical and modified LW-PLSR improved on those obtained by PLSR. Hence, both proposed approaches are useful for the determination of analytes present in a complex matrix, as in the case of human serum samples. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul
2016-10-01
Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful to the optics and photonics research community. To the best of our knowledge, these two bio-inspired algorithms are used here for the first time in this particular field. The algorithms are used for modeling graphene oxide, and their performances are compared. Two objective functions are used for different boundary values. The root mean square (RMS) deviation is determined and compared.
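A minimal PSO in the spirit described; the graphene-oxide model and the paper's objective functions are not reproduced, and the cost below is an illustrative RMS deviation from a known optimum.

```python
import random

def pso_minimize(cost, bounds, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm tracks a global best used to steer velocities."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# illustrative cost: RMS deviation from a known optimum at (2, -1)
cost = lambda p: (((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2) / 2.0) ** 0.5
best, best_cost = pso_minimize(cost, [(-5.0, 5.0), (-5.0, 5.0)])
```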
NASA Astrophysics Data System (ADS)
Dong, Shidu; Yang, Xiaofan; He, Bo; Liu, Guojin
2006-11-01
Radiance coming from the interior of an uncooled infrared camera has a significant effect on the measured value of the temperature of the object. This paper presents a three-phase compensation scheme for coping with this effect. The first phase acquires the calibration data and forms the calibration function by least-squares fitting. Likewise, the second phase obtains the compensation data and builds the compensation function by fitting. With the aid of these functions, the third phase determines the temperature of the object of interest at any given ambient temperature. Acquiring the compensation data of a camera is known to be very time-consuming. To obtain the compensation data at a reasonable time cost, we propose a transplantable scheme. The idea of this scheme is to calculate the ratio between the central pixel's responsivity of the child camera to the radiance from the interior and that of the mother camera, and then to determine the compensation data of the child camera using this ratio and the compensation data of the mother camera. Experimental results show that both the child camera and the mother camera can measure the temperature of the object with an error of no more than 2°C.
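The fitting phases and the transplant ratio can be sketched as follows. The linear calibration function and the sample values are illustrative assumptions; a real calibration may require a higher-order fit.

```python
def linear_fit(xs, ys):
    """Closed-form least-squares line fit y = a*x + b, standing in for
    the calibration/compensation function fitting described above."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def transplant_compensation(mother_comp, ratio):
    """Scale the mother camera's compensation data by the ratio of the
    child/mother central-pixel responsivities to interior radiance."""
    return [ratio * v for v in mother_comp]
```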
Edge Extraction by an Exponential Function Considering X-ray Transmission Characteristics
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Youp Synn, Sang; Cho, Sung Man; Jong Joo, Won
2011-04-01
3-D radiographic methodology has been in the spotlight for quality inspection of mass-produced products and for in-service inspection of aging products. To locate a target object in 3-D space, its characteristic contours, such as edge length, edge angle, and vertices, are very important. Even for a product with simple geometry, it is very difficult to obtain clear shape contours from a single radiographic image: the image contains scattering noise at the edges and ambiguity arising from X-ray absorption within the body. This article suggests a concise method to extract whole edges from a single X-ray image. At the edge points of the object, the intensity of the X-ray decays exponentially as the X-ray penetrates the object. Exploiting this decay property, edges are extracted by least-squares fitting guided by the coefficient of determination.
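The exponential decay model and the coefficient-of-determination check can be sketched as a log-linear least-squares fit; the attenuation coefficient and thicknesses below are illustrative, not values from the article.

```python
import math

def fit_attenuation(thicknesses, intensities):
    """Fit I = I0 * exp(-mu * t) by linearizing: ln I = ln I0 - mu * t,
    then ordinary least squares on (t, ln I). Returns (I0, mu, r2),
    where r2 is the coefficient of determination of the linear fit."""
    ts, ls = thicknesses, [math.log(i) for i in intensities]
    n = len(ts)
    st, sl = sum(ts), sum(ls)
    stt = sum(t * t for t in ts)
    stl = sum(t * l for t, l in zip(ts, ls))
    slope = (n * stl - st * sl) / (n * stt - st * st)
    intercept = (sl - slope * st) / n
    mean_l = sl / n
    ss_res = sum((l - (slope * t + intercept)) ** 2 for t, l in zip(ts, ls))
    ss_tot = sum((l - mean_l) ** 2 for l in ls)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return math.exp(intercept), -slope, r2
```

A low r2 at a candidate edge point would indicate the intensity profile does not follow the expected exponential decay there.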
A CCD search for geosynchronous debris
NASA Technical Reports Server (NTRS)
Gehrels, Tom; Vilas, Faith
1986-01-01
Using the Spacewatch Camera, a search was conducted for objects in geosynchronous earth orbit. The system is equipped with a CCD camera cooled with dry ice; the image scale is 1.344 arcsec/pixel. The telescope drive was turned off so that during integrations the stars were trailed while geostationary objects appeared as round images. The technique should detect geostationary objects to a limiting apparent visual magnitude of 19. A sky area of 8.8 square degrees was searched for geostationary objects, while the area covered for geosynchronous debris passing through was 16.4 square degrees. Ten objects were found, of which seven are probably geostationary satellites having apparent visual magnitudes brighter than 13.1. Three objects having magnitudes equal to or fainter than 13.7 showed motion in the north-south direction. The absence of fainter stationary objects suggests that a gap in debris size exists between satellites and particles having diameters in the millimeter range.
O'Neil, Edward B; Watson, Hilary C; Dhillon, Sonya; Lobaugh, Nancy J; Lee, Andy C H
2015-09-01
Recent work has demonstrated that the perirhinal cortex (PRC) supports conjunctive object representations that aid object recognition memory following visual object interference. It is unclear, however, how these representations interact with other brain regions implicated in mnemonic retrieval and how congruent and incongruent interference influences the processing of targets and foils during object recognition. To address this, multivariate partial least squares was applied to fMRI data acquired during an interference match-to-sample task, in which participants made object or scene recognition judgments after object or scene interference. This revealed a pattern of activity sensitive to object recognition following congruent (i.e., object) interference that included PRC, prefrontal, and parietal regions. Moreover, functional connectivity analysis revealed a common pattern of PRC connectivity across interference and recognition conditions. Examination of eye movements during the same task in a separate study revealed that participants gazed more at targets than foils during correct object recognition decisions, regardless of interference congruency. By contrast, participants viewed foils more than targets for incorrect object memory judgments, but only after congruent interference. Our findings suggest that congruent interference makes object foils appear familiar and that a network of regions, including PRC, is recruited to overcome the effects of interference.
Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)
1995-01-01
In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.
Brown, A M
2001-06-01
The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2005-01-01
Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
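A minimal Gaussian RBF response surface, illustrating why RBF interpolation reproduces the samples exactly, unlike a global least-squares fit; the kernel width `eps` and sample points are illustrative assumptions.

```python
import numpy as np

def rbf_surface(centers, values, eps=1.0):
    """Gaussian RBF response surface: solve Phi w = f so that the
    surface interpolates the samples exactly."""
    d = centers[:, None, :] - centers[None, :, :]
    Phi = np.exp(-eps * np.sum(d * d, axis=-1))   # kernel matrix
    w = np.linalg.solve(Phi, values)
    def surface(x):
        dd = np.sum((centers - x) ** 2, axis=-1)
        return np.exp(-eps * dd) @ w
    return surface
```

Because the Gaussian kernel matrix is positive definite for distinct points, the solve always succeeds, though conditioning degrades as `eps` shrinks or points cluster.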
Alber, S A; Schaffner, D W
1992-01-01
A comparison was made between mathematical variations of the square root and Schoolfield models for predicting growth rate as a function of temperature. The statistical consequences of square root and natural logarithm transformations of growth rate used in several variations of the Schoolfield and square root models were examined. Growth rate variances of Yersinia enterocolitica in brain heart infusion broth increased as a function of temperature. The ability of the two data transformations to correct for the heterogeneity of variance was evaluated. A natural logarithm transformation of growth rate was more effective than a square root transformation at correcting for the heterogeneity of variance. The square root model was more accurate than the Schoolfield model when both models used natural logarithm transformation. PMID:1444367
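The square-root model can be fitted by a linear least-squares step after the square-root transformation of growth rate. A sketch with illustrative synthetic data, where T0 denotes the notional minimum growth temperature:

```python
import math

def fit_sqrt_model(temps, rates):
    """Fit the square-root model sqrt(r) = b * (T - T0) by linear least
    squares on (T, sqrt(r)); T0 follows from the fitted intercept."""
    ys = [math.sqrt(r) for r in rates]
    n = len(temps)
    st, sy = sum(temps), sum(ys)
    stt = sum(t * t for t in temps)
    sty = sum(t * y for t, y in zip(temps, ys))
    b = (n * sty - st * sy) / (n * stt - st * st)
    t0 = (b * st - sy) / (b * n)   # intercept = -b * T0
    return b, t0
```

The transformation question the abstract studies is which transform of the measured rates (square root or natural log) best stabilizes the variance before such a fit.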
BIOMECHANICS. Why the seahorse tail is square.
Porter, Michael M; Adriaens, Dominique; Hatton, Ross L; Meyers, Marc A; McKittrick, Joanna
2015-07-03
Whereas the predominant shapes of most animal tails are cylindrical, seahorse tails are square prisms. Seahorses use their tails as flexible grasping appendages, in spite of a rigid bony armor that fully encases their bodies. We explore the mechanics of two three-dimensional-printed models that mimic either the natural (square prism) or hypothetical (cylindrical) architecture of a seahorse tail to uncover whether or not the square geometry provides any functional advantages. Our results show that the square prism is more resilient when crushed and provides a mechanism for preserving articulatory organization upon extensive bending and twisting, as compared with its cylindrical counterpart. Thus, the square architecture is better than the circular one in the context of two integrated functions: grasping ability and crushing resistance. Copyright © 2015, American Association for the Advancement of Science.
A spectral mimetic least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bochev, Pavel; Gerritsma, Marc
2014-09-01
We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Mur, Marieke
2016-01-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. 
Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation. PMID:26493748
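The non-negative least-squares fit can be sketched with a simple projected-gradient solver; the authors' actual fitting pipeline and data are not reproduced, and `A`/`b` below are illustrative stand-ins for model predictors and measured dissimilarities.

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Non-negative least squares by projected gradient descent:
    minimize ||A w - b||^2 subject to w >= 0, with step 1/L where L is
    the Lipschitz constant of the gradient (largest eigenvalue of 2 A^T A)."""
    AtA, Atb = A.T @ A, A.T @ b
    L = 2.0 * np.linalg.eigvalsh(AtA)[-1]
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * (AtA @ w - Atb)
        w = np.maximum(w - grad / L, 0.0)   # project onto w >= 0
    return w
```

The non-negativity constraint matches the modeling assumption that each category or feature dimension can only add to, never subtract from, predicted dissimilarity.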
Ocean data assimilation using optimal interpolation with a quasi-geostrophic model
NASA Technical Reports Server (NTRS)
Rienecker, Michele M.; Miller, Robert N.
1991-01-01
A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
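The OI update can be sketched in one function, assuming a Gaussian background-error covariance with a tunable decorrelation scale; the QG model, survey data, and 2-D geometry are not reproduced (a 1-D grid stands in).

```python
import numpy as np

def oi_analysis(xb, grid, obs_idx, y, length, sigma_b=1.0, sigma_o=0.1):
    """Optimal-interpolation update x_a = x_b + K (y - H x_b), with an
    assumed Gaussian background-error covariance of decorrelation scale
    `length` and uncorrelated observation errors."""
    d2 = (grid[:, None] - grid[None, :]) ** 2
    B = sigma_b ** 2 * np.exp(-d2 / (2.0 * length ** 2))   # background cov
    H = np.zeros((len(obs_idx), len(grid)))                # obs operator
    H[np.arange(len(obs_idx)), obs_idx] = 1.0
    R = sigma_o ** 2 * np.eye(len(obs_idx))                # obs-error cov
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)           # gain
    return xb + K @ (y - H @ xb)
```

The sensitivity the abstract describes corresponds to varying `length`: it controls how far each observation's influence spreads, and hence how smooth the analysis is.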
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean square deviations of the results for (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement from the true values were 7.7% and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
NASA Astrophysics Data System (ADS)
Wilczek, Iwona; Tenczyński, Mariusz
2017-10-01
In 2015 the authorities of the city of Opole decided to sell a part of Kopernik Square, one of the main city squares, to a private investor. The objective of this project was the extension of the existing shopping mall and the construction of an underground car park within the scope of a public-private partnership. In order to find the best solution to design the remaining part of the square, a competition for its development was announced in cooperation with the Opole branch of the Association of Polish Architects. The article presents a description of the studies and analyses of the aforementioned space conducted by the db2 architekci architectural studio for the purpose of preparing a competition entry. The square development concept was based on an analysis of the urban context of the Opole city centre. The character of the public spaces within a twenty-minute walk from Kopernik Square was analysed. In the course of the works, a decision was made to develop the public space in a manner different from that originally intended by the Investor. A graphic visualization of the maximum scope of the shopping mall extension was presented in accordance with the urban layout of this part of the city, allowing the preservation of the historical view corridors. The article presents a competition entry prepared by us along with a justification of decisions concerning the design. One of the fundamental design assumptions was the connection of all frontages with the square and the creation of a recreational part abounding in green areas. The concept provided for the division of the area into three parts of various characters. The central part of the square is a green area of a recreational character - a space so far absent in the city centre. Catering and food services, shops, parking spaces for bicycles as well as services related to the parking area are located at the southern frontage of the square under one roofing. 
The area directly adjoining the shopping mall is an open multifunctional and partly roofed square - a place where cyclical events are held in the city. The project allows for a harmonious combination of various functions performed by Kopernik Square. The adopted traffic solutions, in particular the entrance to and exit from the underground car park have a positive influence on road traffic in this part of the city. Due to maintaining the historical urban layout and view corridors, the new building development does not overwhelm the square space but constitutes its harmonious closure.
Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2017-10-01
The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. 
Good agreement of the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres was found, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The equivalent results (RMSEs) using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective to generate realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
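The deconvolve-then-reconvolve step at the heart of the method can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it works in 2-D only, uses an isotropic Gaussian as a stand-in for the measured PSFs (the real method also uses the slice sensitivity profile in the through-plane direction), and adds a small Wiener-style regularization term `eps` to stabilize the Fourier-domain division.

```python
import numpy as np

def gaussian_psf(n, sigma):
    # Stand-in for a measured scanner PSF: isotropic Gaussian, unit sum.
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def transfer_nodule(image_a, psf_a, psf_c, eps=1e-6):
    """Deconvolve a nodule image by scanner A's PSF (regularized,
    Wiener-like) to approximate the nodule-like object function, then
    convolve with scanner C's PSF to synthesize a virtual nodule."""
    Fi = np.fft.fft2(image_a)
    Fa = np.fft.fft2(np.fft.ifftshift(psf_a))
    Fc = np.fft.fft2(np.fft.ifftshift(psf_c))
    F_obj = Fi * np.conj(Fa) / (np.abs(Fa)**2 + eps)  # object function
    return np.fft.ifft2(F_obj * Fc).real
```

With `psf_a` narrower than `psf_c`, the result reproduces the object as it would appear at scanner C's coarser spatial resolution, which is the sense in which the virtual nodule carries the target scanner's characteristics.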
Fath, Aaron J; Lind, Mats; Bingham, Geoffrey P
2018-04-17
The role of the monocular-flow-based optical variable τ in the perception of the time to contact of approaching objects has been well-studied. There are additional contributions from binocular sources of information, such as changes in disparity over time (CDOT), but these are less understood. We conducted an experiment to determine whether an object's velocity affects which source is most effective for perceiving time to contact. We presented participants with stimuli that simulated two approaching squares. During approach the squares disappeared, and participants indicated which square would have contacted them first. Approach was specified by (a) only disparity-based information, (b) only monocular flow, or (c) all sources of information in normal viewing conditions. As expected, participants were more accurate at judging fast objects when only monocular flow was available than when only CDOT was. In contrast, participants were more accurate judging slow objects with only CDOT than with only monocular flow. For both ranges of velocity, the condition with both information sources yielded performance equivalent to the better of the single-source conditions. These results show that different sources of motion information are used to perceive time to contact and play different roles in allowing for stable perception across a variety of conditions.
The psychophysical law of speed estimation in Michotte's causal events.
Parovel, Giulia; Casco, Clara
2006-11-01
Observers saw an event in which a computer-animated square moved up to and made contact with another, which after a short delay moved off, its motion appearing to be caused by a launch by the first square. Observers judged whether the second (launched) square was faster in this causal event than when presented after a long delay (non-causal event). The speed of the second object in causal events was overestimated for a wide range of speeds of the first object (launcher), but accurately assessed in non-causal events. Experiments 2 and 3 showed that overestimation also occurred in other causal displays in which the trajectories were overlapping, successive, spatially separated or inverted, but did not occur with consecutive speeds that did not produce causal percepts. We also found that if the first object in a causal event was faster, then Weber's law holds and overestimation of the launched object's speed was proportional to the speed of the launcher. In contrast, if the second object was faster, overestimation was constant, i.e. independent of the launcher. We propose that the particular speed integration in causal displays results in overestimation and that the way overestimation depends on V1 phenomenally affects the attribution of the source of V2's motion: either to V1 (in launching) or to V2 (in triggering).
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to be independent of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
Cross-correlation least-squares reverse time migration in the pseudo-time domain
NASA Astrophysics Data System (ADS)
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method with higher image resolution and amplitude is becoming increasingly popular. However, the LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of the conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relieves the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Besides, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jing; Guan, Huaiqun; Solberg, Timothy
2011-07-15
Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
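The role the penalty parameter plays can be illustrated with a toy 1-D version of a PWLS objective. This is a hedged sketch, not the authors' implementation: `weights` stands in for the statistical weighting of each detector bin, and a simple first-difference roughness penalty replaces whatever neighborhood penalty the actual algorithm uses.

```python
import numpy as np

def pwls_smooth(y, weights, beta):
    """Toy penalized weighted least squares for a 1-D projection profile:
    minimize (y - x)^T W (y - x) + beta * ||D x||^2, with W = diag(weights)
    and D the first-difference operator; beta is the penalty parameter.
    The quadratic objective has a closed-form linear solve."""
    n = len(y)
    W = np.diag(weights)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference matrix
    return np.linalg.solve(W + beta * D.T @ D, W @ y)
```

With `beta = 0` the data pass through unchanged; increasing `beta` trades fidelity to the noisy measurements for smoothness, which is exactly the balance the paper's inverse calculation is tuning.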
The Sensitivity to Trans-Neptunian Dwarf Planets of the Siding Spring Survey
NASA Astrophysics Data System (ADS)
Bannister, Michele; Brown, M. E.; Schmidt, B. P.; Francis, P.; McNaught, R.; Garrad, G.; Larson, S.; Beshore, E.
2012-10-01
The last decade has seen considerable effort in assessing the populations of icy worlds in the outer Solar System, with major surveys in the Northern and, more recently, the Southern Hemisphere skies. Our archival search of more than ten thousand square degrees of sky south of the ecliptic, observed over five years, is a bright-object survey, sensitive to dwarf-planet-sized trans-Neptunian objects. Our innovative survey analyses observations of the Siding Spring Survey, an ongoing survey for near-Earth asteroids at the 0.5 m Uppsala telescope at Siding Spring Observatory. This survey observed each of 2300 4.55-square-degree fields on between 30 and 90 nights from early 2004 to late 2009, creating a dataset with dense temporal coverage, which we reprocessed for TNOs with a dedicated pipeline. We assess our survey's sensitivity to trans-Neptunian objects by simulating the observation of the synthetic outer Solar System populations of Grav et al. (2011): Centaurs, Kuiper belt and scattered disk. As our fields span approximately -15 to -70 degrees declination, avoiding the galactic plane by 10 degrees on either side, we are particularly sensitive to dwarf planets in high-inclination orbits. Partly due to this coverage far from the ecliptic, all known dwarf planets, including Pluto, fall outside our survey coverage in its temporal span. We apply the widest plausible range of absolute magnitudes to each observable synthetic object, measuring each subsequent apparent magnitude against the magnitude depth of the survey observations. We evaluate our survey's null detection of new dwarf planets in light of our detection efficiencies as a function of trans-Neptunian orbital parameter space. MTB appreciates the funding support of the Joan Duffield Postgraduate Scholarship, an Australian Postgraduate Award, and the Astronomical Society of Australia.
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an n×n array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
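As an illustration of the underlying design, a Latin square can be generated cyclically: each of the n levels (here, notional Z heights) appears exactly once in every row and every column, which is what lets analysis-of-variance techniques separate row, column, and level effects. The construction below is a generic sketch, not taken from the patent.

```python
import numpy as np

def latin_square(n):
    """Cyclic n x n Latin square: entry (i, j) = (i + j) mod n, so each
    level occurs exactly once per row and once per column."""
    return np.add.outer(np.arange(n), np.arange(n)) % n
```

For example, `latin_square(3)` assigns three Z levels to a 3×3 grid so that no level repeats within any row or column of the gage array.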
Latin square three dimensional gage master
Jones, Lynn L.
1982-01-01
A gage master for coordinate measuring machines has an n×n array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Off-Resonance Acoustic Levitation Without Rotation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Allen, J. L.
1984-01-01
Orthogonal acoustic-levitation modes excited at slightly different frequencies to control rotation. Rotation of object in square cross-section acoustic-levitation chamber stopped by detuning two orthogonal (x and y) excitation drivers in plane of square cross section. Detuning done using fundamental degenerate modes or odd harmonic modes.
Analytical YORP torques model with an improved temperature distribution function
NASA Astrophysics Data System (ADS)
Breiter, S.; Vokrouhlický, D.; Nesvorný, D.
2010-01-01
Previous models of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect relied either on the zero-thermal-conductivity assumption or on solutions of the heat conduction equations assuming an infinite body size. We present the first YORP solution accounting for a finite size and the non-radial direction of the surface normal vectors in the temperature distribution. The new thermal model implies that the YORP effect on rotation rate depends on the asteroid's conductivity. It is shown that the effect on small objects does not scale as the inverse square of diameter, but rather as the inverse first power.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
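One common way to realize the "adjusted sample size" strategy rests on the identity chi-square = (N - 1) * F_min for maximum-likelihood fit functions, so the sample-size-free minimum fit function F_min can be rescaled to a notional smaller N. The helper below is a plausible sketch of that idea, not the study's actual code.

```python
def adjust_chi_square(chi2, n_original, n_target):
    """Rescale a test-of-fit chi-square to a smaller notional sample
    size, using chi2 = (N - 1) * F_min so F_min is reused unchanged."""
    f_min = chi2 / (n_original - 1)
    return (n_target - 1) * f_min
```

For instance, a chi-square of 2100 obtained at N = 21,000 rescales to roughly a quarter of its value at N = 5,000; the study's point is that this rescaling tracks a genuine random subsample well at that order but degrades for much smaller N.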
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1982-01-01
Snow reflectance in all 6 TM reflective bands, i.e., 1, 2, 3, 4, 5, and 7, was simulated using a delta-Eddington model. Snow reflectance in bands 4, 5, and 7 appears sensitive to grain size. It appears that the TM filters resemble a "square-wave" closely enough that a square wave can be assumed in calculations. Integrated band reflectance over the actual response functions was calculated using sensor data supplied by Santa Barbara Research Center. Differences between integrating over the actual response functions and the equivalent square wave were negligible. Tables are given which show (1) sensor saturation radiance as a percentage of the solar constant, integrated through the band response function; (2) comparisons of integrations through the sensor response function with integrations over the equivalent square wave; and (3) calculations of integrated reflectance for snow over all reflective TM bands, and for water and ice clouds with thickness of 1 mm water equivalent over TM bands 5 and 7. These calculations look encouraging for snow/cloud discrimination with TM bands 5 and 7.
Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis.
Sapkota, Raju P; Pardhan, Shahina; van der Linde, Ian
2016-08-01
Visual short-term memory (VSTM) is a limited-capacity system that holds a small number of objects online simultaneously, implying that competition for limited storage resources occurs (Phillips, 1974). How the spatial and temporal proximity of stimuli affects this competition is unclear. In this 2-experiment study, we examined the effect of the spatial and temporal separation of real-world memory targets and erroneously selected nontarget items examined during location-recognition and object-recall tasks. In Experiment 1 (the location-recognition task), our test display comprised either the picture or name of 1 previously examined memory stimulus (rendered above as the stimulus-display area), together with numbered square boxes at each of the memory-stimulus locations used in that trial. Participants were asked to report the number inside the square box corresponding to the location at which the cued object was originally presented. In Experiment 2 (the object-recall task), the test display comprised a single empty square box presented at 1 memory-stimulus location. Participants were asked to report the name of the object presented at that location. In both experiments, nontarget objects that were spatially and temporally proximal to the memory target were confused more often than nontarget objects that were spatially and temporally distant (i.e., a spatiotemporal proximity effect); this effect generalized across memory tasks, and the object feature (picture or name) that cued the test-display memory target. Our findings are discussed in terms of spatial and temporal confusion "fields" in VSTM, wherein objects occupy diffuse loci in a spatiotemporal coordinate system, wherein neighboring locations are more susceptible to confusion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
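The core of such an algorithm, expanding chi-square to quadratic order and solving the resulting simultaneous linear equations, can be sketched in a few lines. The Python below is an illustrative Gauss-Newton analogue, not NLINEAR's Fortran; the exponential model and its Jacobian are hypothetical examples.

```python
import numpy as np

def gauss_newton_fit(f, jac, p0, x, y, w, iters=25):
    """Minimize chi^2 = sum_i w_i * (y_i - f(x_i, p))^2 by repeatedly
    solving the normal equations of the linearized model, i.e. the
    simultaneous linear equations from the quadratic chi^2 expansion."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(x, p)                  # residuals at current estimate
        J = jac(x, p)                    # n x m Jacobian d f / d p
        JW = J.T * w                     # apply statistical weights
        p = p + np.linalg.solve(JW @ J, JW @ r)
    return p

# Hypothetical model y = a * exp(b * x) and its analytic Jacobian.
model = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
```

As the abstract notes, convergence depends on meaningful initial estimates: starting far from the true parameters can make the linearized steps diverge.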
Polymer tensiometer with ceramic cones: a case study for a Brazilian soil.
NASA Astrophysics Data System (ADS)
Durigon, A.; de Jong van Lier, Q.; van der Ploeg, M. J.; Gooren, H. P. A.; Metselaar, K.; de Rooij, G. H.
2009-04-01
Laboratory outflow experiments, in combination with inverse modeling techniques, allow simultaneous determination of the retention and hydraulic conductivity functions. A numerical model solves the pressure-head-based form of Richards' equation for unsaturated flow in a rigid porous medium. Applying adequate boundary conditions, the cumulative outflow is calculated at prescribed times as a function of the set of optimized parameters. These parameters are evaluated by nonlinear least-squares fitting of predicted to observed cumulative outflow with time. An objective function quantifies the difference between calculated and observed cumulative outflow and between predicted and measured soil water retention data. Using outflow data only in the objective function, the multistep outflow method results in unique estimates of the retention and hydraulic conductivity functions. To obtain more reliable estimates of the hydraulic conductivity as a function of the water content using the inverse method, the outflow data must be supplemented with soil retention data. To do so, tensiometers filled with a polymer solution instead of water were used. The measurement range of these tensiometers is larger than that of conventional tensiometers, being able to measure the entire pressure head range over which crops take up water, down to values on the order of -1.6 MPa. The objective of this study was to physically characterize a Brazilian red-yellow oxisol using measurements in outflow experiments by polymer tensiometers and processing these data with the inverse modeling technique for use in the analysis of a field experiment and in modeling. The soil was collected at an experimental site located in Piracicaba, Brazil, 22°42′ S, 47°38′ W, 550 m above sea level.
Accelerating the two-point and three-point galaxy correlation functions using Fourier transforms
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2016-01-01
Though Fourier transforms (FTs) are a common technique for finding correlation functions, they are not typically used in computations of the anisotropy of the two-point correlation function (2PCF) about the line of sight in wide-angle surveys because the line-of-sight direction is not constant on the Cartesian grid. Here we show how FTs can be used to compute the multipole moments of the anisotropic 2PCF. We also show how FTs can be used to accelerate the 3PCF algorithm of Slepian & Eisenstein. In both cases, these FT methods allow one to avoid the computational cost of pair counting, which scales as the square of the number density of objects in the survey. With the upcoming large data sets of Dark Energy Spectroscopic Instrument, Euclid, and Large Synoptic Survey Telescope, FT techniques will therefore offer an important complement to simple pair or triplet counts.
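The basic trick, replacing pair counts with transforms, is easy to demonstrate for the isotropic correlation of a density field already binned onto a periodic grid (the anisotropic multipole and 3PCF machinery of the paper is considerably more involved). A minimal sketch:

```python
import numpy as np

def correlation_via_fft(delta):
    """Two-point autocorrelation of a periodic, gridded overdensity
    field: xi = IFFT(|FFT(delta)|^2) / N. Replaces O(N^2) pair counting
    with O(N log N) transforms."""
    dk = np.fft.fftn(delta)
    return np.fft.ifftn(np.abs(dk) ** 2).real / delta.size
```

The output is the circular autocorrelation as a function of grid offset; binning those offsets by separation (and, for the anisotropic case, by angle to the line of sight) recovers the usual correlation-function estimates.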
NASA Astrophysics Data System (ADS)
Koon, D. W.; Knickerbocker, C. J.
1996-12-01
The effect of macroscopic inhomogeneities on resistivity and Hall angle measurements is studied by calculating weighting functions (the relative effect of perturbations in a local transport property on the measured global average for the object) for cross, cloverleaf, and bar-shaped geometries. The ``sweet spot,'' the region in the center of the object that the measurement effectively samples, is smaller for crosses and cloverleafs than for the circles and squares already studied, and smaller for the cloverleaf than for the corresponding cross. Resistivity measurements for crosses and cloverleafs suffer from singularities and negative weighting, which can be eliminated by averaging two independent resistance measurements, as done in the van der Pauw technique. Resistivity and Hall measurements made on sufficiently narrow bars are shown to effectively sample only the region directly between the voltage probes.
Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves
NASA Astrophysics Data System (ADS)
Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.
2011-03-01
The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. By constructing the objective functional as the total square of the difference between predictions from the BJKCA interaction model and experimental data obtained with transmitted USW, it is shown that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. To solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least-squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing the asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels, using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed and measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimens. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.
Planetary cores, their energy flux relationship, and its implications
NASA Astrophysics Data System (ADS)
Johnson, Fred M.
2018-02-01
Integrated surface heat flux data from each planet in our solar system plus over 50 stars, including our Sun, were plotted against each object's known mass to generate a continuous exponential curve with an R-squared value of 0.99. The unexpected yet undeniable implication of this study is that all planets and celestial objects have a similar mode of energy production. It is widely accepted that proton-proton reactions require hydrogen gas at temperatures of about 15 million degrees, neither of which can plausibly exist inside a terrestrial planet. Hence, this paper proposes a nuclear fission mechanism for all luminous celestial objects, and uses this mechanism to further suggest a developmental narrative for all celestial bodies, including our Sun. This narrative was deduced from a second exponential curve drawn adjacent to the first and passing through the Earth's solid core (as a known prototype). This trend line was used to predict the core masses for each planet as a function of its luminosity.
Space Object Maneuver Detection Algorithms Using TLE Data
NASA Astrophysics Data System (ADS)
Pittelkau, M.
2016-09-01
An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data are available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|² of change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order-statistics filters, which fall within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|² data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceed a constant times the estimated variance.
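The third (median filter) algorithm's logic can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the window length and the threshold constant `k` are assumed values.

```python
import numpy as np

def detect_maneuvers(dv2, window=11, k=10.0):
    """Order-statistics maneuver detector: compare each |dV|^2 sample to
    k times a sliding-median scale estimate, which outliers (maneuvers)
    barely perturb. Returns indices of flagged epochs."""
    half = window // 2
    med = np.array([np.median(dv2[max(0, i - half):i + half + 1])
                    for i in range(len(dv2))])
    return np.flatnonzero(dv2 > k * med)
```

Because a single large maneuver cannot move the median of an 11-sample window, the noise-level estimate stays stable through the event, which is exactly the robustness property the abstract highlights.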
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
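The on-orbit identification step, tuning transversal filter weights to a transfer function with LMS, can be sketched for a discrete-time system-identification toy problem. This is a generic LMS sketch under an assumed step size and tap count, not the flight algorithm:

```python
import numpy as np

def lms_identify(x, d, n_taps=8, mu=0.05):
    """LMS adaptive transversal filter: adapt weights w so that the
    filtered reference x tracks the desired signal d, yielding an
    estimate of the unknown system's impulse response."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ x_vec                    # a-priori error
        w = w + 2.0 * mu * e * x_vec            # steepest-descent update
    return w
```

In the Filtered-X variant used for the actuation path, the reference signal is first passed through the identified servo-mechanism model before entering the same weight update, which is what lets the actuation cancel the periodic disturbance.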
A Simple Introduction to Moving Least Squares and Local Regression Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao Veerabhadra
In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, going on to Moving Least Squares estimation (MLS) and, finally, Local Regression Estimation (LRE).
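A minimal 1-D version of the MLS step described in the note might look as follows; the Gaussian weight, bandwidth `h`, and polynomial degree are illustrative choices, not prescriptions from the note.

```python
import numpy as np

def mls_estimate(x_eval, x_data, y_data, h=0.5, degree=1):
    """Moving least squares in 1-D: at each evaluation point, fit a
    low-degree polynomial by weighted least squares with a weight
    centered there, and take the fitted value at that point (the
    constant coefficient of a basis shifted to the evaluation point)."""
    out = np.empty(len(x_eval))
    for i, xe in enumerate(x_eval):
        w = np.exp(-((x_data - xe) / h) ** 2)     # Gaussian weights
        sw = np.sqrt(w)                           # scale rows by sqrt(w)
        V = np.vander(x_data - xe, degree + 1, increasing=True)
        c, *_ = np.linalg.lstsq(V * sw[:, None], y_data * sw, rcond=None)
        out[i] = c[0]
    return out
```

Unlike a single global fit, the weights move with the evaluation point, so the estimate adapts locally; this is the step between global least squares and full local regression in the note's progression.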
NASA Astrophysics Data System (ADS)
Liu, L. H.; Tan, J. Y.
2007-02-01
A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points, which are used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of the residuals at all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving radiative heat transfer in absorbing, emitting and scattering media.
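The residual-minimization idea, equation residuals at many points all fed into one least-squares solve, can be shown on a radiative-transfer-flavored toy: 1-D attenuation dI/ds = -κI along a ray, with a polynomial trial function standing in for the paper's moving least-squares basis. Everything here is an illustrative simplification:

```python
import numpy as np

def ls_collocation_attenuation(kappa=1.0, n_basis=8, n_points=40):
    """Least-squares collocation sketch for dI/ds = -kappa * I, I(0) = 1,
    on [0, 1]: polynomial trial function I(s) = sum_k c_k s^k, equation
    residuals evaluated at all points plus a heavily weighted boundary
    residual, minimized together in the least-squares sense."""
    s = np.linspace(0.0, 1.0, n_points)
    A = np.zeros((n_points, n_basis))
    for k in range(n_basis):
        dterm = k * s ** (k - 1) if k > 0 else np.zeros_like(s)
        A[:, k] = dterm + kappa * s ** k      # residual of (d/ds + kappa)
    b = np.zeros(n_points)
    bc_row = np.zeros(n_basis)
    bc_row[0] = 1.0                           # I(0) = c_0
    A = np.vstack([A, 100.0 * bc_row])        # boundary condition row
    b = np.append(b, 100.0)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, sum(c[k] * s ** k for k in range(n_basis))
```

The overdetermined system (more residual rows than coefficients) is the 1-D analogue of summing residuals over collocation and auxiliary points; the solution closely tracks the analytic attenuation exp(-κs).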
The Construction of a Square through Multiple Approaches to Foster Learners' Mathematical Thinking
ERIC Educational Resources Information Center
Reyes-Rodriguez, Aaron; Santos-Trigo, Manuel; Barrera-Mora, Fernando
2017-01-01
The task of constructing a square is used to argue that looking for and pursuing several solution routes is a powerful principle to identify and analyse properties of mathematical objects, to understand problem statements and to engage in mathematical thinking activities. Developing mathematical understanding requires that students delve into…
An adjoint method for gradient-based optimization of stellarator coil shapes
NASA Astrophysics Data System (ADS)
Paul, E. J.; Landreman, M.; Bader, A.; Dorland, W.
2018-07-01
We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL (Landreman 2017 Nucl. Fusion 57 046003) approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-squared current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computation of the local sensitivity of figures of merit to normal displacements of the winding surface is presented, with potential applications for understanding engineering tolerances.
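The adjoint trick itself — computing an objective gradient with one extra linear solve instead of noisy finite differences — can be shown on a toy linear system. The matrices and objective below are invented for the demo; REGCOIL's actual state equations are far larger:

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned system
A1 = rng.standard_normal((5, 5))                     # dA/dp
b = rng.standard_normal(5)

def solve_state(p):
    """State equation: (A0 + p*A1) x = b."""
    return np.linalg.solve(A0 + p * A1, b)

p = 0.3
x = solve_state(p)

# Objective f(x) = 0.5 ||x||^2, so df/dx = x.
# Adjoint equation: A^T lam = df/dx; then df/dp = -lam^T (dA/dp) x.
lam = np.linalg.solve((A0 + p * A1).T, x)
grad_adjoint = -lam @ (A1 @ x)

# Central finite difference for comparison (needs two extra state solves
# per parameter, and is the step the adjoint method avoids).
eps = 1e-6
xp, xm = solve_state(p + eps), solve_state(p - eps)
grad_fd = (0.5 * xp @ xp - 0.5 * xm @ xm) / (2.0 * eps)
```

With many design parameters, the adjoint gradient still costs only one extra solve, which is why it scales where finite differencing does not.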
Generalized adjustment by least squares (GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and assumed the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived incorporating SNR uncertainty. The soft masking method, in particular, which weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of yielding lower residual noise and lower speech distortion. PMID:21886543
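The two gain functions contrasted in the abstract — the binary IdBM-style mask and the Wiener gain — reduce to one-liners on the linear-SNR axis (a schematic sketch of the gains only, not the derived estimators):

```python
import numpy as np

def idbm_gain(snr_linear):
    """Binary mask: 1 where the local SNR exceeds 0 dB (linear ratio > 1)."""
    return (np.asarray(snr_linear) > 1.0).astype(float)

def wiener_gain(snr_linear):
    """Wiener gain snr/(1 + snr), the soft-mask limit described above."""
    snr = np.asarray(snr_linear, dtype=float)
    return snr / (1.0 + snr)

snr = np.array([0.1, 1.0, 10.0])   # -10 dB, 0 dB, +10 dB
hard = idbm_gain(snr)
soft = wiener_gain(snr)
```

The soft gain passes a fraction of each bin rather than making a hard keep/discard decision, which is what yields the lower speech distortion reported.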
Róg, T; Murzyn, K; Hinsen, K; Kneller, G R
2003-04-15
We present a new implementation of the program nMoldyn, which has been developed for the computation and decomposition of neutron scattering intensities from Molecular Dynamics trajectories (Comp. Phys. Commun 1995, 91, 191-214). The new implementation extends the functionality of the original version, provides a much more convenient user interface (both graphical/interactive and batch), and can be used as a tool set for implementing new analysis modules. This was made possible by the use of a high-level language, Python, and of modern object-oriented programming techniques. The quantities that can be calculated by nMoldyn are the mean-square displacement, the velocity autocorrelation function as well as its Fourier transform (the density of states) and its memory function, the angular velocity autocorrelation function and its Fourier transform, the reorientational correlation function, and several functions specific to neutron scattering: the coherent and incoherent intermediate scattering functions with their Fourier transforms, the memory function of the coherent scattering function, and the elastic incoherent structure factor. The possibility of computing memory functions is a new and powerful feature that allows simulation results to be related to theoretical studies. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 657-667, 2003
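Of the quantities listed, the mean-square displacement is the simplest to sketch. The version below averages over time origins in the conventional way (nMoldyn's own implementation is more efficient; the ballistic test trajectory is invented):

```python
import numpy as np

def mean_square_displacement(traj):
    """MSD(lag) = <|r(t0 + lag) - r(t0)|^2>, averaged over time origins t0.

    traj: array of shape (n_frames, n_dims).
    """
    n = len(traj)
    msd = np.zeros(n)
    for lag in range(1, n):
        disp = traj[lag:] - traj[:-lag]          # all displacements at this lag
        msd[lag] = np.mean(np.sum(disp**2, axis=1))
    return msd

# 1-D ballistic motion x(t) = v*t gives MSD(lag) = (v*lag)^2 exactly.
t = np.arange(100.0)
traj = (2.0 * t)[:, None]
msd = mean_square_displacement(traj)
```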
NASA Technical Reports Server (NTRS)
Cai, Zhiqiang; Manteuffel, Thomas A.; McCormick, Stephen F.
1996-01-01
In this paper, we study the least-squares method for the generalized Stokes equations (including linear elasticity) based on the velocity-vorticity-pressure formulation in d = 2 or 3 dimensions. The least-squares functional is defined in terms of the sum of the L(exp 2)- and H(exp -1)-norms of the residual equations, which is weighted appropriately by the Reynolds number. Our approach for establishing ellipticity of the functional does not use ADN theory, but is founded more on basic principles. We also analyze the case where the H(exp -1)-norm in the functional is replaced by a discrete functional to make the computation feasible. We show that the resulting algebraic equations can be uniformly preconditioned by well-known techniques.
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
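The least-squares motion-estimation step can be illustrated in a reduced setting. The sketch below recovers a small planar rotation plus translation from point correspondences by linear least squares — a 3-parameter toy version of the 6-DOF problem, with invented data:

```python
import numpy as np

def estimate_planar_motion(p, q):
    """Least-squares small-angle rigid motion from correspondences p -> q.

    Linearized model: qx = px - theta*py + tx, qy = py + theta*px + ty.
    Returns (theta, tx, ty).
    """
    n = len(p)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0] = -p[:, 1]; A[0::2, 1] = 1.0   # x-equations
    A[1::2, 0] = p[:, 0];  A[1::2, 2] = 1.0   # y-equations
    b[0::2] = q[:, 0] - p[:, 0]
    b[1::2] = q[:, 1] - p[:, 1]
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

rng = np.random.default_rng(2)
p = rng.standard_normal((50, 2))
theta, tx, ty = 0.01, 0.5, -0.2               # ground-truth motion
q = np.column_stack([p[:, 0] - theta * p[:, 1] + tx,
                     p[:, 1] + theta * p[:, 0] + ty])
est = estimate_planar_motion(p, q)
```

In the real system, pixels belonging to independently moving objects would appear as outliers to this fit, which is why the method is stated to work when such pixels are a minority of the scene.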
NASA Astrophysics Data System (ADS)
Enescu (Balaş), M. L.; Alexandru, C.
2016-08-01
The paper deals with the optimal design of the control system for a 6-DOF robot used in thin-layer deposition. The optimization is based on a parametric technique: the design objective is modelled as a numerical function, and the optimal values of the design variables are then established so as to minimize the objective function. The robotic system is a mechatronic product, which integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, the 3D model then being transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed in the concurrent engineering concept, through integration with the MBS mechanical model, by using the DFC (Design for Control) software solution EASY5. The necessary angular motions in the six joints of the robot, in order to obtain the imposed trajectory of the end-effector, have been established by performing the inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize the root mean square of this time-varying error during simulation, which is a measure of its magnitude.
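The root mean square used here as the design objective is simply the square root of the mean squared error signal; a trivial generic sketch:

```python
import numpy as np

def rms(error_samples):
    """Root mean square: the magnitude measure minimized for each joint."""
    e = np.asarray(error_samples, dtype=float)
    return float(np.sqrt(np.mean(e**2)))

rms([3.0, -4.0])   # sqrt((9 + 16) / 2) = sqrt(12.5)
```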
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique and measures brain activities by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them to model the shape and other two to scale and baseline respectively). The HRF model is supposed to be a linear combination of HRF, baseline, and physiological noises (amplitudes and frequencies of physiological noises are supposed to be unknown). An objective function is developed as a square of the residuals with constraints on 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRF for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis (i.e., t-value > t_critical and p-value < 0.05). PMID:26136668
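A hedged sketch of the model class: the code below builds a difference-of-two-Gammas HRF and the squared-residual objective. The shape parameters are the commonly used canonical values, not the paper's twelve fitted parameters, and the physiological-noise terms are omitted:

```python
import math
import numpy as np

def gamma_pdf(x, a):
    """Gamma(a, 1) probability density (scale fixed at 1 for brevity)."""
    x = np.asarray(x, dtype=float)
    return x**(a - 1) * np.exp(-x) / math.gamma(a)

def two_gamma_hrf(t, a_peak=6.0, a_under=16.0, ratio=1.0 / 6.0):
    """Difference-of-Gammas HRF shape: a peak Gamma minus a scaled
    undershoot Gamma. Parameter values are illustrative canonical ones."""
    return gamma_pdf(t, a_peak) - ratio * gamma_pdf(t, a_under)

def objective(params, t, y):
    """Square of the residuals between the scaled model and the data."""
    scale, baseline = params
    r = y - (scale * two_gamma_hrf(t) + baseline)
    return float(np.sum(r**2))

t = np.arange(0.0, 30.0, 0.5)              # seconds
y = 2.0 * two_gamma_hrf(t) + 0.1           # synthetic noise-free "measurement"
```

Minimizing this objective over the free parameters (here just scale and baseline; twelve in the paper) recovers the generating values, since the synthetic data contain no noise.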
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
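The QKLMS idea — grow a new radial-basis center only when the input is farther than a quantization radius from every existing center, otherwise update the nearest center's coefficient — can be sketched as follows. The step size, radius, kernel width, and target function are invented for the demo, and the dictionary-size bound asserted below follows from the chosen radius, not from the paper:

```python
import numpy as np

def gauss_kernel(x, centers, sigma):
    """Gaussian kernel between one input and an array of centers."""
    return np.exp(-np.sum((x - centers)**2, axis=-1) / (2.0 * sigma**2))

class QKLMS:
    """Quantized kernel LMS sketch: 'redundant' inputs update the nearest
    center's coefficient instead of growing the dictionary."""
    def __init__(self, eta=0.3, epsilon=0.2, sigma=0.3):
        self.eta, self.epsilon, self.sigma = eta, epsilon, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        k = gauss_kernel(x, np.asarray(self.centers), self.sigma)
        return float(np.dot(self.alphas, k))

    def update(self, x, d):
        e = d - self.predict(x)                  # a-priori error
        if self.centers:
            dists = np.linalg.norm(np.asarray(self.centers) - x, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= self.epsilon:         # quantize: reuse center j
                self.alphas[j] += self.eta * e
                return e
        self.centers.append(np.array(x, dtype=float))
        self.alphas.append(self.eta * e)
        return e

# Static function estimation: learn f(x) = sin(3x) from streaming samples.
rng = np.random.default_rng(3)
model = QKLMS()
for _ in range(4000):
    x = rng.uniform(-1.0, 1.0, size=1)
    model.update(x, float(np.sin(3.0 * x[0])))
```

Because new centers must be at least epsilon apart, the dictionary size stays bounded by a packing of the input region regardless of how many samples stream in.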
A survey of various enhancement techniques for square rings antennas
NASA Astrophysics Data System (ADS)
Mumin, Abdul Rashid O.; Alias, Rozlan; Abdullah, Jiwa; Abdulhasan, Raed Abdulkareem; Ali, Jawad; Dahlan, Samsul Haimi; Awaleh, Abdisamad A.
2017-09-01
The square ring shape has become a popular configuration in antenna design, and researchers have developed it in a variety of forms. It offers high efficiency and a simple calculation method, and performance enhancement is the main reason to adopt this geometry; multi-objective antenna designs are also considered. In this paper, different studies of the square ring shape are discussed. The shape has been developed through five different techniques: gain enhancement, dual-band operation, reconfigurable antennas, CSRR, and circular polarization. Moreover, validation across these configurations is demonstrated for square ring shapes. In particular, the square ring slot improved gain by 4.3 dB, provided dual-band resonance at 1.4 and 2.6 GHz with circular polarization at 1.54 GHz, and enabled multi-mode operation, while the square ring strip achieved excellent band rejection at 5.5 GHz in a UWB antenna. The square ring slot length, which is referenced to the free-space wavelength, is the most influential factor in antenna performance. Finally, comparisons between these techniques are presented.
King, P M
1997-01-01
The purpose of this study was to determine if a correlation exists between touch-pressure threshold testing and sensory discrimination function, specifically tactile gnosis for texture and object recognition. Twenty-nine patients diagnosed with carpal tunnel syndrome (CTS), as confirmed by electromyography or nerve conduction velocity tests, were administered three sensibility tests: the Semmes-Weinstein monofilament test, a texture discrimination test, and an object identification test. Norms were established for texture and object recognition tests using 100 subjects (50 females and 50 males) with normal touch-pressure thresholds as assessed by the Semmes-Weinstein monofilament test. The CTS patients were grouped into three categories of sensibility as determined by their performance on the Semmes-Weinstein monofilament test: normal, diminished light touch, and diminished protective sensation. Through an independent t test statistical procedure, the mean response times of each of the three categories for identification of textures and objects were compared with the normed response times. Accurate responses were given for identification of all textures and objects. No significant difference (p < .05) was noted in mean response times of the CTS patients with normal touch-pressure thresholds. A significant difference (p < .05) in response times by those CTS patients with diminished light touch was detected in identification of four out of six objects. Subjects with diminished protective sensation had significantly longer response times (p < .05) for identification of the textures of cork, coarse and fine sandpaper, and rubber. Significantly longer response times were recorded by the same subjects for identification of such objects as a screw and a button, and for the shapes of a square, triangle, and oval.
The Quételet index revisited in children and adults.
Chiquete, Erwin; Ruiz-Sandoval, José L; Ochoa-Guzmán, Ana; Sánchez-Orozco, Laura V; Lara-Zaragoza, Erika B; Basaldúa, Nancy; Ruiz-Madrigal, Bertha; Martínez-López, Erika; Román, Sonia; Godínez-Gutiérrez, Sergio A; Panduro, Arturo
2014-02-01
The body mass index (BMI) is based on the original concept that body weight increases as a function of height squared. As an indicator of obesity the modern BMI assumption postulates that adiposity also increases as a function of height in states of positive energy balance. To evaluate the BMI concept across different adiposity magnitudes, in both children and adults. We studied 975 individuals who underwent anthropometric evaluation: 474 children and 501 adults. Tetrapolar bioimpedance analysis was used to assess body fat and lean mass. BMI significantly correlated with percentage of body fat (%BF; children: r=0.893; adults: r=0.878) and with total fat mass (children: r=0.967; adults: r=0.953). In children, body weight, fat mass, %BF and waist circumference progressively increased as a function of height squared. In adults body weight increased as a function of height squared, but %BF actually decreased with increasing height both in men (r=-0.406; p<0.001) and women (r=-0.413; p<0.001). Most of the BMI variance in adults was explained by a positive correlation of total lean mass with height squared (r(2)=0.709), and by a negative correlation of BMI with total fat mass (r=-0.193). Body weight increases as a function of height squared. However, adiposity progressively increases as a function of height only in children. BMI is not an ideal indicator of obesity in adults since it is significantly influenced by the lean mass, even in obese individuals. Copyright © 2013 SEEN. Published by Elsevier Espana. All rights reserved.
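The underlying index is a one-line formula — weight divided by height squared (the example values below are invented):

```python
def bmi(weight_kg, height_m):
    """Quetelet index: body weight as a function of height squared."""
    return weight_kg / height_m**2

bmi(70.0, 1.75)   # about 22.86, within the conventional normal-weight range
```

The study's point is that this scaling tracks adiposity in children but conflates lean and fat mass in adults, not that the formula itself changes.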
Failure mechanisms of uni-ply composite plates with a circular hole under static compressive loading
NASA Technical Reports Server (NTRS)
Khamseh, A. R.; Waas, A. M.
1992-01-01
The objective of the study was to identify and study the failure mechanisms associated with compressive-loaded uniply graphite/epoxy square plates with a central circular hole. It is found that the type of compressive failure depends on the hole size. For large holes with the diameter/width ratio exceeding 0.062, fiber buckling/kinking initiated at the hole is found to be the dominant failure mechanism. In plates with smaller hole sizes, failure initiates away from the hole edge or complete global failure occurs. Critical buckle wavelengths at failure are presented as a function of the normalized hole diameter.
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1) squared sampling points on a sphere is given, for which the (N + 1) squared spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
Interpolation on the manifold of K component GMMs.
Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas
2015-12-01
Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
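A single-component illustration of the "closed under interpolation" requirement: the Wasserstein-2 geodesic between two 1-D Gaussians is again a Gaussian, with mean and standard deviation interpolating linearly. This is a standard fact used here only to motivate the requirement; the paper's K-component algorithms are considerably more involved:

```python
def w2_interp_gaussian(mu0, s0, mu1, s1, t):
    """Point at parameter t on the Wasserstein-2 geodesic between
    N(mu0, s0^2) and N(mu1, s1^2): the result is itself Gaussian."""
    return (1 - t) * mu0 + t * mu1, (1 - t) * s0 + t * s1

mid_mu, mid_s = w2_interp_gaussian(0.0, 1.0, 4.0, 3.0, 0.5)   # (2.0, 2.0)
```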
Nystrom, Elizabeth A.; Burns, Douglas A.
2011-01-01
TOPMODEL uses a topographic wetness index computed from surface-elevation data to simulate streamflow and subsurface-saturation state, represented by the saturation deficit. Depth to water table was computed from simulated saturation-deficit values using computed soil properties. In the Fishing Brook Watershed, TOPMODEL was calibrated to the natural logarithm of streamflow at the study area outlet and depth to water table at Sixmile Wetland using a combined multiple-objective function. Runoff and depth to water table responded differently to some of the model parameters, and the combined multiple-objective function balanced the goodness-of-fit of the model realizations with respect to these parameters. Results show that TOPMODEL reasonably simulated runoff and depth to water table during the study period. The simulated runoff had a Nash-Sutcliffe efficiency of 0.738, but the model underpredicted total runoff by 14 percent. Depth to water table computed from simulated saturation-deficit values matched observed water-table depth moderately well; the root mean squared error of absolute depth to water table was 91 millimeters (mm), compared to the mean observed depth to water table of 205 mm. The correlation coefficient for temporal depth-to-water-table fluctuations was 0.624. The variability of the TOPMODEL simulations was assessed using prediction intervals grouped using the combined multiple-objective function. The calibrated TOPMODEL results for the entire study area were applied to several subwatersheds within the study area using computed hydrogeomorphic properties of the subwatersheds.
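The two fit statistics reported — Nash-Sutcliffe efficiency and root mean squared error — are short formulas; the implementations below are generic, not the study's calibration code:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def rmse(obs, sim):
    """Root mean squared error, as reported for the water-table depths."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim)**2)))

obs = [1.0, 2.0, 3.0, 4.0]   # toy observations
sim = [1.0, 2.0, 3.0, 5.0]   # toy simulation with one miss
```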
Meenakshi, S; Gujjari, Anil Kumar; Thippeswamy, H N; Raghunath, N
2014-12-01
Stereognosis has been defined as the appreciation of the form of objects by palpation. Whilst this definition holds good for the manual exploration of objects, it is possible for the shape of objects to be explored intra orally referred to as oral stereognosis. To better understand patients' relative satisfaction with complete dentures, differences in oral stereognostic perception, based on the identification of 6 edible objects was analyzed in a group of 30 edentulous individuals at 3 stages, namely, just before (pre-treatment), 30 min after (30 min post-treatment) and 1 month after (1 month post-treatment) the insertion of new dentures. The time required to identify each object was recorded and the correctness of identification of each object was scored using oral stereognostic score. Descriptive statistics, Wilcoxon signed rank test, Spearman's rank correlation test, Pearson Chi square test was used to statistically analyze the data obtained. OSA scores was significantly increased 1 month post-treatment compared to 30 min post-treatment (p < 0.05). It was found that Oral stereognostic test is reliable for measuring patients' oral stereognostic perception and may be used as one of the clinical aids in appreciating the functional limitations imposed by the prostheses.
VizieR Online Data Catalog: M33 SNR candidates properties (Lee+, 2014)
NASA Astrophysics Data System (ADS)
Lee, J. H.; Lee, M. G.
2017-04-01
We utilized the Hα and [S II] images in the LGGS to find new M33 remnants. The LGGS covered three 36' square fields of M33. We subtracted continuum sources from the narrowband images using R-band images. We smoothed the images with better seeing to match the point-spread function in the images with worse seeing, using the IRAF task psfmatch. We then scaled and subtracted the resulting continuum images from narrowband images. We selected M33 remnants considering three criteria: emission-line ratio ([S II]/Hα), the morphological structure, and the absence of blue stars inside the sources. Details are described in L14 (Lee et al. 2014ApJ...786..130L). We detected objects with [S II]/Hα>0.4 in emission-line ratio maps, and selected objects with round or shell structures in each narrowband image. As a result, we chose 435 sources. (2 data files).
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, the gravity database contains many large data gaps. These gaps are filled initially using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature parameter of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
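A sketch of the covariance model in question: the classical Hirvonen form with an extra power p standing in for the curvature-related parameter. The exact generalized form and parameter values used in the paper may differ, and the numbers below are illustrative:

```python
def hirvonen_cov(s, c0=100.0, d=50.0, p=1.0):
    """Generalized Hirvonen-type model C(s) = C0 / (1 + (s/d)^2)^p.

    c0: variance at zero distance; d: correlation-length parameter;
    p: power controlling the curvature at the origin (illustrative).
    """
    return c0 / (1.0 + (s / d)**2)**p

hirvonen_cov(0.0)           # 100.0: the variance at zero distance
hirvonen_cov(50.0)          # 50.0: the classical p = 1 model halves at s = d
hirvonen_cov(50.0, p=2.0)   # 25.0: larger p gives a faster decay
```

How sharply C(s) falls off near the origin is exactly what governs how much influence distant observations get when predicting into a data gap, which is why the parameter matters most in the gap regions.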
Automatic evaluation of interferograms
NASA Technical Reports Server (NTRS)
Becker, F.
1982-01-01
A system for the evaluation of interference patterns was developed. For digitizing and processing of the interferograms from classical and holographic interferometers, a picture analysis system based upon a computer with a television digitizer was installed. Depending on the quality of the interferograms, four different picture enhancement operations may be used: signal averaging, spatial smoothing, subtraction of the overlayed intensity function, and the removal of distortion patterns using a spatial filtering technique in the frequency spectrum of the interferograms. The extraction of fringe loci from the digitized interferograms is performed by a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections, which appeared if there was insufficient contrast in the holograms. The reconstruction of the object function from the fringe field uses a least-squares approximation with a spline fit. Applications are given.
Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A
2013-01-01
A procedure to improve the convergence rate for affine registration methods of medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate in the affine registration algorithm of brain images as we show in this work using SPECT and PET brain images.
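The preprocessing step and the objective can both be sketched compactly: CDF-based histogram matching of a source image to a template, followed by the sum-of-squared-differences cost. These are generic implementations on synthetic images, not the authors' SPECT/PET pipeline:

```python
import numpy as np

def match_histogram(source, template):
    """Map source intensities so their CDF matches the template's CDF."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    mapped = np.interp(s_cdf, t_cdf, t_vals)          # invert the template CDF
    return mapped[np.searchsorted(s_vals, source.ravel())].reshape(source.shape)

def ssd(a, b):
    """Sum-of-squared-differences registration objective."""
    return float(np.sum((a - b)**2))

rng = np.random.default_rng(4)
src = rng.uniform(0.0, 1.0, (32, 32))      # source intensities in [0, 1]
tpl = rng.uniform(10.0, 20.0, (32, 32))    # template on a different scale
matched = match_histogram(src, tpl)
```

Bringing the intensity distributions into agreement first removes the large baseline offset from the SSD cost, which is the mechanism behind the reported faster convergence of the affine registration.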
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
ERIC Educational Resources Information Center
Helmreich, James E.; Krog, K. Peter
2018-01-01
We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
Dietrich, B.; Herrmann, W. M.
1989-01-01
1 In a controlled, randomized, double-blind study the influence of cilazapril and metoprolol on learning and memory functions and on sleep behaviour was investigated in healthy young volunteers under steady-state conditions. Twenty-three subjects were given either 2.5 mg cilazapril, 200 mg metoprolol, or placebo for 14 days in a latin square design separated by washout periods of 7 days. 2 To test memory functions, different modalities (verbal, visual, numerical-associative and two-dimensional spatial memory) were tested for recent anterograde recall; both short-term (less than 10 s) and middle-term (up to 15 min) recall were assessed. The test had a content similar to that used in daily life situations. The sleep behaviour was tested both by objective (all-night sleep EEG) and subjective measures. 3 Neither antihypertensive drug had an observable influence on memory performance at the dosages used under steady-state conditions. However, sleep was disturbed during metoprolol treatment, while cilazapril could not be differentiated from placebo. The effects of metoprolol on sleep behaviour were observed in both the objective and subjective measures. There was more frequent awakening during the night with the subjective complaint of difficulties in sleeping through. 4 From this study it is concluded that cilazapril has no major effect on memory functions and sleep behaviour. This is only true for the dosages given and under steady-state conditions. PMID:2527538
A novel design for passive micromixers based on the topology optimization method.
Chen, Xueye; Li, Tiechuan
2016-08-01
In this paper, a series of novel passive micromixers, called topological micromixers with reversed flow (TMRFX), are proposed. Reversed flow in the microchannels can enhance chaotic advection and produce better mixing performance, so the maximum of the reversed flow is chosen as the objective function of the topology optimization problem. Because the square-wave unit is easier to fabricate and mixes better than many other serpentine micromixers, the square-wave structure serves as the initial geometry. Simulation analysis shows that the TMRFX series, namely TMRF, TMRF0.75, TMRF0.5, and TMRF0.25, mix better than the square-wave micromixer at various Reynolds numbers (Re), although the pressure drops of TMRFX are much higher. Extensive numerical simulations confirm that TMRF and TMRF0.75 have remarkable mixing advantages over the other micromixers at various Re. The mixing performance of TMRF0.75 is similar to that of TMRF; moreover, TMRF has a larger pressure drop than TMRF0.75, which means that TMRF consumes more energy. For a wide range of Re (Re ≤ 0.1 and Re ≥ 10), TMRF0.75 delivers a great performance, with mixing efficiency greater than 95%. Even in the range of 0.1-10 for Re, the mixing efficiency of TMRF0.75 is higher than 85%.
The stress intensity factor for the double cantilever beam
NASA Technical Reports Server (NTRS)
Fichter, W. B.
1983-01-01
Fourier transforms and the Wiener-Hopf technique are used in conjunction with plane elastostatics to examine the singular crack tip stress field in the double cantilever beam (DCB) specimen. In place of the Dirac delta function, a family of functions which duplicates the important features of the concentrated forces without introducing unmanageable mathematical complexities is used as a loading function. With terms of order h²/a² retained in the series expansion, the dimensionless stress intensity factor is found to be K·h^(1/2)/P = 12^(1/2) (a/h + 0.6728 + 0.0377 h²/a²), in which P is the magnitude of the concentrated forces per unit thickness, a is the distance from the crack tip to the points of load application, and h is the height of each cantilever beam. The result is similar to that obtained by Gross and Srawley by fitting a line to discrete results from their boundary collocation analysis.
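As a check on the expression, the series result can be evaluated directly (a small sketch using only the formula quoted above; the function name is illustrative):

```python
import math

def dcb_stress_intensity(P, a, h):
    """K from the series result K*sqrt(h)/P = sqrt(12)*(a/h + 0.6728 + 0.0377*h**2/a**2).

    P: concentrated force per unit thickness, a: crack-tip-to-load distance,
    h: height of each cantilever beam.
    """
    return (P / math.sqrt(h)) * math.sqrt(12.0) * (a / h + 0.6728 + 0.0377 * (h / a) ** 2)
```

For slender arms (h much smaller than a) the correction terms are small and K approaches the elementary beam-theory value sqrt(12)·P·a/h^(3/2).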
Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun
2014-06-27
This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
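The iterative convex combination at the heart of the proposed DLSE method resembles distributed consensus averaging. A minimal sketch of one such round (the adjacency structure and equal-weight update rule here are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def consensus_round(estimates, adjacency):
    """One round of convex combination of neighbours' local estimates.

    estimates: (n_nodes, 2) locally estimated source positions.
    adjacency: (n_nodes, n_nodes) 0/1 matrix including self-loops.
    Each node replaces its estimate with the average over its
    neighbourhood -- a convex combination with equal weights.
    """
    W = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-stochastic
    return W @ estimates

# On a complete graph, one round already yields the network-wide average.
est = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
adj = np.ones((3, 3))
merged = consensus_round(est, adj)
```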
Life-Span Development of Visual Working Memory: When Is Feature Binding Difficult?
ERIC Educational Resources Information Center
Cowan, Nelson; Naveh-Benjamin, Moshe; Kilb, Angela; Saults, J. Scott
2006-01-01
We asked whether the ability to keep in working memory the binding between a visual object and its spatial location changes with development across the life span more than memory for item information. Paired arrays of colored squares were identical or differed in the color of one square, and in the latter case, the changed color was unique on…
Least-squares luma-chroma demultiplexing algorithm for Bayer demosaicking.
Leung, Brian; Jeon, Gwanggil; Dubois, Eric
2011-07-01
This paper addresses the problem of interpolating missing color components at the output of a Bayer color filter array (CFA), a process known as demosaicking. A luma-chroma demultiplexing algorithm is presented in detail, using a least-squares design methodology for the required bandpass filters. A systematic study of objective demosaicking performance and system complexity is carried out, and several system configurations are recommended. The method is compared with other benchmark algorithms in terms of CPSNR and S-CIELAB ∆E∗ objective quality measures and demosaicking speed. It was found to provide excellent performance and the best quality-speed tradeoff among the methods studied.
Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J
2014-01-01
Fitting parameter sets of non-linear equations in cardiac single cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the single-cell case, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one-fifth of the number of generations compared to using only AP data. Importantly, the variability in fit parameters was also greatly reduced, with many parameters showing an order of magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue-level behavior, and should be incorporated.
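A two-term objective of the kind described might be sketched as follows (the weighting, function name, and array shapes are illustrative assumptions, not the paper's exact fitness function):

```python
import numpy as np

def combined_objective(ap_model, ap_target, rm_model, rm_target, w_rm=1.0):
    """Cost for the genetic algorithm: AP-morphology error plus an Rm
    mismatch term evaluated at a few selected points in the cycle."""
    mse_ap = np.mean((ap_model - ap_target) ** 2)   # fit the AP trace
    mse_rm = np.mean((rm_model - rm_target) ** 2)   # fit Rm at a few voltages
    return mse_ap + w_rm * mse_rm
```

A perfect match on both terms gives zero cost; an Rm mismatch alone still raises the cost, which is what constrains the otherwise underdetermined conductance fit.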
Orthogonality catastrophe and fractional exclusion statistics
NASA Astrophysics Data System (ADS)
Ares, Filiberto; Gupta, Kumar S.; de Queiroz, Amilcar R.
2018-02-01
We show that the N -particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N -body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in the Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.
Minimally invasive reconstruction of acute type IV and Type V acromioclavicular separations.
Katsenis, Dimitris L; Stamoulis, Dimitris; Begkas, Dimitris; Tsamados, Stamatis
2015-04-01
The goal of this study was to evaluate the midterm radiologic, clinical, and functional results of early reconstruction of severe acromioclavicular joint dislocation using the flipptack fixation button technique. Between December 2006 and December 2009, one hundred thirty-five consecutive patients with acromioclavicular joint separations were admitted to the authors' institution. Fifty patients were included in the study. According to the Rockwood classification, 29 (58%) dislocations were type IV and 21 (42%) were type V. Surgery was performed at an average of 4.2 days (range, 0-12 days) after dislocation. All dislocations were treated with the flipptack fixation button technique. All patients were evaluated at a final postoperative follow-up of 42 months (range, 36-49 months). The clinical outcome was assessed using the Constant score. Functional limitation was assessed using the bother index of the short Musculoskeletal Function Assessment. Radiographs taken immediately postoperatively and at the final follow-up assessed acromioclavicular joint reduction, coracoclavicular distance, and joint arthrosis. At the final follow-up, mean Constant score was 93.04 (range, 84-100). The average (±SD) short Musculoskeletal Function Assessment bother index was 20.88±8.95 (range, 2.0-49). No statistically significant difference was found between the acromioclavicular joint dislocation type and the clinical result (P=.227; chi-square=6.910, Kruskal-Wallis test). The regression of the coracoclavicular distance at final follow-up was not statistically significant (P=.276; chi-square=6.319, Kruskal-Wallis test). The flipptack fixation button technique is an effective alternative for the treatment of severe acromioclavicular joint dislocation. Because all objectives of the treatment were achieved, the results do not deteriorate over time. Copyright 2015, SLACK Incorporated.
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
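The idea of a least squares penalty for a soft nonnegativity constraint can be sketched as follows (a simplified illustration with hypothetical names; in the published method such penalties sit inside the alternating least squares iterations):

```python
import numpy as np

def soft_nonneg_penalty(M, lam=10.0):
    # Least squares penalty: only negative entries contribute, in
    # proportion to their squared violation -- a "soft" constraint.
    return lam * np.sum(np.minimum(M, 0.0) ** 2)

def penalized_residual(D, C, S, lam=10.0):
    # Data-fit term for the bilinear model D ~ C @ S.T, plus soft
    # nonnegativity penalties on concentrations C and spectra S.
    fit = np.sum((D - C @ S.T) ** 2)
    return fit + soft_nonneg_penalty(C, lam) + soft_nonneg_penalty(S, lam)
```

Because small violations incur only small costs, noise in the resolved profiles is not forced to zero as a hard constraint would do.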
Holsen, Laura M.; Jackson, Benita
2017-01-01
Objective The role of leptin in mesolimbic signaling of non-food-related reward has been well established at the pre-clinical level, yet studies in humans are lacking. The present investigation explored the association between hedonic capacity and leptin dynamics, and whether this association differed by BMI class. Methods In this cross-sectional study of 75 women (42 with lean BMIs, 33 with obese BMIs), we measured serum leptin before/after meal consumption. Reward capacity was assessed using the Snaith-Hamilton Pleasure Scale (SHAPS). Multiple regression tested whether reward capacity was associated with leptin AUC, with an interaction term to test differences between lean (LN) and obese (OB) groups. Results The interaction of SHAPS by BMI group was robust (β=−.40, p=.005); among women with obesity, greater SHAPS score was associated with lower leptin AUC (β=−.35, p=.002, adjusted R-squared=.66). Among the lean group, the association was not statistically significant (β=−.16, p=.252, adjusted R-squared=.22). These findings held above and beyond BMI and age. Conclusions In this sample, a robust, negative association between reward capacity and circulating leptin was stronger in women with obesity compared to lean counterparts. These findings suggest that despite likely leptin resistance, inhibitory leptin functioning related to non-food reward may be spared in women with obesity. PMID:28722317
The Impact of Financial Incentives on Physician Productivity in Medical Groups
Conrad, Douglas A; Sales, Anne; Liang, Su-Ying; Chaudhuri, Anoshua; Maynard, Charles; Pieper, Lisa; Weinstein, Laurel; Gans, David; Piland, Neill
2002-01-01
Objective To estimate the effect of financial incentives in medical groups—both at the level of individual physician and collectively—on individual physician productivity. Data Sources/Study Setting Secondary data from 1997 on individual physician and group characteristics from two surveys: Medical Group Management Association (MGMA) Physician Compensation and Production Survey and the Cost Survey; Area Resource File data on market characteristics, and various sources of state regulatory data. Study Design Cross-sectional estimation of individual physician production function models, using ordinary least squares and two-stage least squares regression. Data Collection Data from respondents completing all items required for the two stages of production function estimation on both MGMA surveys (with RBRVS units as production measure: 102 groups, 2,237 physicians; and with charges as the production measure: 383 groups, 6,129 physicians). The 102 groups with complete data represent 1.8 percent of the 5,725 MGMA member groups. Principal Findings Individual production-based physician compensation leads to increased productivity, as expected (elasticity=.07, p<.05). The productivity effects of compensation methods based on equal shares of group net income and incentive bonuses are significantly positive (p<.05) and smaller in magnitude. The group-level financial incentive does not appear to be significantly related to physician productivity. Conclusions Individual physician incentives based on own production do increase physician productivity. PMID:12236389
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1977-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
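The regression-equation view admits a compact sketch: the collocation estimate is the conditional mean of the signal given the data (generic covariances and names for illustration; real gravity-field applications use physically derived covariance models):

```python
import numpy as np

def collocation_estimate(C_sy, C_yy_signal, y, noise_var=0.0):
    """Least squares collocation as a Gaussian conditional mean.

    C_sy        : cross-covariance between the signal (gravity anomalies)
                  and the observed functionals
    C_yy_signal : covariance of the noiseless data functionals
    y           : observed (zero-mean) geodetic data
    noise_var   : observation noise variance added on the diagonal
    """
    C_yy = C_yy_signal + noise_var * np.eye(len(y))
    return C_sy @ np.linalg.solve(C_yy, y)
```

With zero noise and identity covariances the data are reproduced exactly; adding noise shrinks the estimate toward the zero prior mean, mirroring the "properly weighted zero a priori estimates" of the conventional solution.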
A Case for Inhibition: Visual Attention Suppresses the Processing of Irrelevant Objects
ERIC Educational Resources Information Center
Wuhr, Peter; Frings, Christian
2008-01-01
The present study investigated the ability to inhibit the processing of an irrelevant visual object while processing a relevant one. Participants were presented with 2 overlapping shapes (e.g., circle and square) in different colors. The task was to name the color of the relevant object designated by shape. Congruent or incongruent color words…
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1976-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is formulated as an optimization problem, which can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo
2009-10-01
Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration in the object wavefront is also introduced, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using fewer points than the original hologram matrix, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double exposure method.
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
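A minimal univariate sketch of this sampling-and-weighting scheme (assuming the Legendre family on [-1, 1], arcsine-distributed samples drawn by inverse transform, and row weights taken from the reciprocal of the normalized inverse Christoffel function; the function name is illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def christoffel_weighted_lsq(f, deg, n_samples, rng):
    # Draw samples from the equilibrium (arcsine/Chebyshev) measure on [-1, 1]
    x = np.cos(np.pi * rng.random(n_samples))
    # Legendre basis, orthonormalized w.r.t. the uniform probability measure
    V = legvander(x, deg) * np.sqrt(2 * np.arange(deg + 1) + 1)
    # Inverse Christoffel function: sum of squared orthonormal polynomials
    k_inv = np.sum(V ** 2, axis=1)
    # Row weights: square root of the normalized Christoffel function
    w = np.sqrt((deg + 1) / k_inv)
    coef, *_ = np.linalg.lstsq(w[:, None] * V, w * f(x), rcond=None)
    return coef

rng = np.random.default_rng(0)
coef = christoffel_weighted_lsq(lambda t: t ** 2, deg=3, n_samples=200, rng=rng)
```

Since t² lies in the span of the degree-3 basis, the weighted fit reproduces it exactly up to rounding, whatever the weights; the benefit of the Christoffel weighting shows up in the stability of the fit for functions outside the span.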
NASA Astrophysics Data System (ADS)
Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin
2014-06-01
This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict rainfall-induced landslide susceptibility in Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to perform the best classification fit for each conditioning factor and then combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assess the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the range of clustered landslide locations. Clustered locations were used as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model's reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping, and provided a valuable scientific basis for spatial decision making in planning and urban management studies.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
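The transformation that simultaneously diagonalizes two class covariance matrices can be sketched via Cholesky whitening followed by an eigendecomposition (a standard construction under the assumption that one matrix is positive definite; the symbols here are generic, not the paper's notation):

```python
import numpy as np

def simultaneous_diagonalizer(S1, S2):
    """Return T such that T.T @ S1 @ T = I and T.T @ S2 @ T is diagonal.

    S1 must be symmetric positive definite; S2 symmetric.
    """
    L = np.linalg.cholesky(S1)          # S1 = L @ L.T
    Linv = np.linalg.inv(L)
    M = Linv @ S2 @ Linv.T              # whitened S2, still symmetric
    _, U = np.linalg.eigh(M)
    return Linv.T @ U

# Example with two random SPD "class covariance" matrices
rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
S1 = A @ A.T + 3 * np.eye(3)
S2 = B @ B.T + 3 * np.eye(3)
T = simultaneous_diagonalizer(S1, S2)
```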
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. The errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
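The affine-combination weights can be obtained exactly as described, by least squares with the affine (sum-to-one) constraint appended as an extra equation (a small sketch; the variable names are illustrative):

```python
import numpy as np

def affine_weights(p, neighbors):
    """Weights w with sum(w) = 1 such that p is reconstructed as w @ neighbors.

    The affine constraint is enforced by appending a row of ones to
    the least squares system.
    p: (d,) template point; neighbors: (k, d) neighboring points.
    """
    A = np.vstack([neighbors.T, np.ones(len(neighbors))])  # (d+1, k)
    b = np.append(p, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

p = np.array([0.25, 0.25])
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = affine_weights(p, nbrs)
```

Because the weights are affine rather than merely linear, the reconstruction is invariant to any affine transformation applied to the point and its neighbors, which is what makes the constraint locally affine-invariant.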
Short-range inverse-square law experiment in space
NASA Technical Reports Server (NTRS)
Strayer, D.; Paik, H. J.; Moody, M. V.
2002-01-01
The objective of ISLES (Inverse-Square Law Experiment in Space) is to perform a null test of Newton's law on the ISS with a resolution of one part in 10^5 at ranges from 100 μm to 1 mm. ISLES will be sensitive enough to detect axions with the strongest allowed coupling and to test the string-theory prediction with R ≈ 5 μm.
NASA Technical Reports Server (NTRS)
Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan
2013-01-01
The Constrained Linear Least Squares model is generally more accurate than the "human-in-the-loop" approach. However, a "human-in-the-loop" can remove materials that make no sense. The speed of the model in determining a "first cut" at the material ID makes it a viable option for spectral unmixing of debris objects.
Anisotropic mean-square displacements in two-dimensional colloidal crystals of tilted dipoles
NASA Astrophysics Data System (ADS)
Froltsov, V. A.; Likos, C. N.; Löwen, H.; Eisenmann, C.; Gasser, U.; Keim, P.; Maret, G.
2005-03-01
Superparamagnetic colloidal particles confined to a flat horizontal air-water interface in an external magnetic field, which is tilted relative to the interface, form anisotropic two-dimensional crystals resulting from their mutual dipole-dipole interactions. Using real-space experiments and harmonic lattice theory we explore the mean-square displacements of the particles in the directions parallel and perpendicular to the in-plane component of the external magnetic field as a function of the tilt angle. We find that the anisotropy of the mean-square displacement behaves nonmonotonically as a function of the tilt angle and does not correlate with the structural anisotropy of the crystal.
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.
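A simplified univariate analogue of such a penalized least squares criterion can be sketched as a basis-expansion fit with a roughness penalty on the coefficients (a generic illustration, not the GSCA estimation algorithm itself):

```python
import numpy as np

def penalized_lsq(B, y, lam):
    """Minimize ||y - B c||^2 + lam * ||D2 c||^2, where D2 is the
    second-difference operator penalizing rough coefficient curves."""
    n = B.shape[1]
    D2 = np.diff(np.eye(n), n=2, axis=0)  # second-difference matrix
    return np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)

# With lam = 0 the criterion reduces to ordinary least squares.
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 5))
c_true = np.arange(5.0)
c_hat = penalized_lsq(B, B @ c_true, lam=0.0)
```

Increasing lam trades data fit for smoothness of the estimated coefficient function, which is the role the penalty plays in the alternating estimation algorithm.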
[An object-oriented remote sensing image segmentation approach based on edge detection].
Tan, Yu-Min; Huai, Jian-Zhu; Tang, Zhong-Shi
2010-06-01
Satellite sensor technology enables better discrimination of various landscape objects. Image segmentation approaches to extracting conceptual objects and patterns have hence been explored, and a wide variety of such algorithms abound. To this end, in order to effectively utilize edge and topological information in high resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The Susan edge filter is first applied to the panchromatic band of Quickbird imagery with spatial resolution of 0.61 m to obtain the edge map. Using the resulting edge map, a two-phase region-based segmentation method operates on the fusion image from panchromatic and multispectral Quickbird images to produce the final partition. In the first phase, a quadtree grid consisting of squares with sides parallel to the image left and top borders agglomerates the square subsets recursively where the uniformity measure is satisfied, deriving image object primitives. Before the merging of the second phase, the contextual and spatial information (e.g., neighbor relationships, boundary coding) of the resulting squares is retrieved efficiently by means of the quadtree structure. Then a region merging operation is performed on those primitives, during which the criterion for region merging integrates the edge map and region-based features. This approach has been tested on QuickBird images of a site in the Sanxia area, and the result is compared with those of ENVI Zoom and Definiens. In addition, a quantitative evaluation of the quality of the segmentation results is presented. Experimental results demonstrate stable convergence and efficiency.
NASA Astrophysics Data System (ADS)
Allen, Brian; Travesset, Alex
2004-03-01
Dislocations and disclinations play a fundamental role in the properties of two-dimensional crystals. In this talk, it will be shown that a general computational framework can be developed by combining previous work of Seung and Nelson* and modern advances in object-oriented design. This allows separating the problem into independent classes such as: geometry (sphere, plane, torus, ...), lattice (triangular, square, etc.), type of defect (dislocations, disclinations, etc.), boundary conditions, type of order (crystalline, hexatic) or energy functional. As applications, the ground states of crystals in several geometries will be discussed. Experimental examples with colloidal particles will be shown. *S. Seung and D. Nelson, Phys. Rev. A 38, 1005 (1988)
Effects of prosodically modulated sub-phonetic variation on lexical competition.
Salverda, Anne Pier; Dahan, Delphine; Tanenhaus, Michael K; Crosswhite, Katherine; Masharov, Mikhail; McDonough, Joyce
2007-11-01
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
Triangulation of multistation camera data to locate a curved line in space
NASA Technical Reports Server (NTRS)
Fricke, C. L.
1974-01-01
A method is described for finding the location of a curved line in space from local azimuth as a function of elevation data obtained at several observation sites. A least-squares criterion is used to insure the best fit to the data. The method is applicable to the triangulation of an object having no identifiable structural features, provided its width is very small compared with its length so as to approximate a line in space. The method was implemented with a digital computer program and was successfully applied to data obtained from photographs of a barium ion cloud which traced out the earth's magnetic field line at very high altitudes.
Jani, Ylber; Kamberi, Ahmet; Xhunga, Sotir; Pocesta, Bekim; Ferati, Fatmir; Lala, Dali; Zeqiri, Agim; Rexhepi, Atila
2015-01-01
Objective: To assess the influence of type 2 DM and gender on QT dispersion and Tpeak-Tend dispersion of ventricular repolarization in patients with sub-clinical left ventricular diastolic dysfunction of the heart. Background: QT dispersion reflects spatial inhomogeneity in ventricular repolarization, whereas Tpeak-Tend dispersion reflects transmural inhomogeneity in ventricular repolarization; both are increased in an early stage of cardiomyopathy and in patients with left ventricular diastolic dysfunction. Left ventricular diastolic dysfunction, a basic characteristic of diabetic heart disease (diabetic cardiomyopathy) that develops earlier than systolic dysfunction, suggests that diastolic markers might be sensitive for early cardiac injury. It has also been demonstrated that gender has a complex influence on indices of myocardial repolarization abnormalities such as QT interval and QT dispersion. Material and methods: We performed an observational study including 300 diabetic patients with similar epidemiological-demographic characteristics recruited at our institution from May 2009 to July 2014, divided into two groups. Demographic, laboratory and echocardiographic data were obtained; twelve-lead resting electrocardiography was performed, and QT, QTc and Tpeak-Tend intervals and dispersions were determined manually and compared between groups. For statistical analysis, the t-test, chi-square test, and logistic regression were used according to the type of variables. A p value <0.05 was considered statistically significant for a confidence interval of 95%. Results: QTc max. interval, QTc dispersion and Tpeak-Tend dispersion were significantly higher in the diabetic group with subclinical LV (left ventricular) diastolic dysfunction than in the diabetic group with normal left ventricular diastolic function (445.24±14.7 ms vs. 433.55±14.4 ms, P<0.000; 44.98±18.78 ms vs. 32.05±17.9 ms, P<0.000; 32.60±1.6 ms vs. 17.46±2.0 ms, P<0.02). A prolonged QTc max. interval was found in 33% of patients in the diabetic group with subclinical left ventricular diastolic dysfunction vs. 13.3% of patients in the diabetic group with normal left ventricular diastolic function (Chi-square: 16.77, P<0.0001). A prolonged QTc dispersion was found in 40.6% of patients in the diabetic group with subclinical left ventricular diastolic dysfunction vs. 20% of patients in the diabetic group with normal left ventricular diastolic function (Chi-square: 14.11, P<0.0002). A prolonged dispersion of the Tpeak-Tend interval was found in 24% of patients in the diabetic group with subclinical left ventricular diastolic dysfunction vs. 13.3% of patients in the diabetic group with normal left ventricular diastolic function (Chi-square: 12.00, P<0.005). Females in the diabetic group with subclinical left ventricular diastolic dysfunction, in comparison with males in the same group, had a significantly prolonged: mean QTc max. interval (23.3% vs. 10%, Chi-square: 12.0, P<0.005), mean QTc dispersion (27.3% vs. 13.3%, Chi-square: 10.24, P<0.001), mean Tpeak-Tend interval (10% vs. 3.3%, Chi-square: 5.77, P<0.01), and mean Tpeak-Tend dispersion (16.6% vs. 6.6%, Chi-square: 8.39, P<0.003). Conclusion: The present study has shown that the influences of type 2 diabetes and gender in diabetics with sub-clinical left-ventricular diastolic dysfunction are reflected in a set of electrophysiological parameters indicating a more prolonged and more heterogeneous repolarization than in diabetic patients with normal diastolic function. In addition, it demonstrates that diabetic females with sub-clinical LV dysfunction differ from diabetic females with normal LV function in the prevalence of this set of electrophysiological parameters. PMID:26550530
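The chi-square comparisons of proportions reported above use the standard Pearson statistic for a 2x2 contingency table, which is simple to state in code. This is a generic sketch (no continuity correction), not the authors' statistical software, and the example counts are hypothetical.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    using the closed form n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c)**2 / ((a + b) * (c + d) * (a + c) * (b + d))

# A balanced table carries no association; a diagonal-heavy one does.
print(chi_square_2x2(10, 10, 10, 10))   # 0.0
print(chi_square_2x2(30, 10, 10, 30))   # 20.0
```

With one degree of freedom, a statistic of 20.0 corresponds to P well below 0.001, the same order as the group comparisons in the abstract.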
Cancer Detection in Microarray Data Using a Modified Cat Swarm Optimization Clustering Approach
M, Pandi; R, Balamurugan; N, Sadhasivam
2017-12-29
Objective: A better understanding of functional genomics can be obtained by extracting patterns hidden in gene expression data. This could have paramount implications for cancer diagnosis, gene treatments and other domains. Clustering may reveal natural structures and identify interesting patterns in underlying data. The main objective of this research was to derive a heuristic approach to detecting highly co-expressed genes related to cancer from gene expression data with minimum Mean Squared Error (MSE). Methods: A modified CSO algorithm using Harmony Search (MCSO-HS) was applied for clustering cancer gene expression data. Experimental results were analyzed using two cancer gene expression benchmark datasets, for leukaemia and for breast cancer. Result: The results indicated MCSO-HS to be better than HS and CSO by 13% and 9%, respectively, with the leukaemia dataset; with the breast cancer dataset the improvement was 22% and 17%, respectively, in terms of MSE. Conclusion: The results showed MCSO-HS to outperform HS and CSO on both benchmark datasets. To validate the clustering results, this work was tested with internal and external cluster validation indices. The clusters were also biologically validated with gene ontology in terms of function, process and component.
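The MSE objective that the clustering search minimizes is simple to state in code. This sketch only evaluates the objective for a given assignment; the MCSO-HS search itself is not reproduced, and the data below are hypothetical.

```python
import numpy as np

def clustering_mse(X, centroids, labels):
    """Mean squared error of a clustering: average squared Euclidean distance
    of each point to its assigned centroid (the quantity MCSO-HS minimizes)."""
    diffs = X - centroids[labels]
    return float(np.mean(np.sum(diffs**2, axis=1)))

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
cents = np.array([[0.0, 0.5], [10.0, 10.5]])
labels = np.array([0, 0, 1, 1])
print(clustering_mse(X, cents, labels))  # 0.25
```

A lower value means tighter, more co-expressed clusters, which is why MSE serves as the comparison metric between MCSO-HS, HS, and CSO.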
Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C
2015-01-16
The optimisation of the resolution in high-performance liquid chromatography is traditionally performed attending only to the time information. However, even under the optimal conditions, some peak pairs may remain unresolved. Such incompletely resolved peaks can still be recovered by deconvolution, which can be carried out with greater guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the peak purity (analyte peak fraction free of overlapping) and the multivariate selectivity (figure of merit derived from the net analyte signal) concepts. These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful for finding experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds that remained chromatographically unresolved using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gong, Changfei; Han, Ce; Gan, Guanghui; Deng, Zhenxiang; Zhou, Yongqiang; Yi, Jinling; Zheng, Xiaomin; Xie, Congying; Jin, Xiance
2017-04-01
Dynamic myocardial perfusion CT (DMP-CT) imaging provides quantitative functional information for diagnosis and risk stratification of coronary artery disease by calculating myocardial perfusion hemodynamic parameter (MPHP) maps. However, the level of radiation delivered by dynamic sequential scan protocol can be potentially high. The purpose of this work is to develop a pre-contrast normal-dose scan induced structure tensor total variation regularization based on the penalized weighted least-squares (PWLS) criteria to improve the image quality of DMP-CT with a low-mAs CT acquisition. For simplicity, the present approach was termed as ‘PWLS-ndiSTV’. Specifically, the ndiSTV regularization takes into account the spatial-temporal structure information of DMP-CT data and further exploits the higher order derivatives of the objective images to enhance denoising performance. Subsequently, an effective optimization algorithm based on the split-Bregman approach was adopted to minimize the associative objective function. Evaluations with modified dynamic XCAT phantom and preclinical porcine datasets have demonstrated that the proposed PWLS-ndiSTV approach can achieve promising gains over other existing approaches in terms of noise-induced artifacts mitigation, edge details preservation, and accurate MPHP maps calculation.
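The PWLS criterion referred to above has a standard generic form. As a sketch (the exact weights, system-matrix notation, and the ndiSTV regularizer are not spelled out in this summary, so the notation below is generic):

```latex
\hat{x} = \arg\min_{x}\; (y - Ax)^{\mathsf{T}} \Sigma^{-1} (y - Ax) + \beta\, R(x)
```

where y is the measured projection data, A the system matrix, Σ a diagonal matrix of data-variance weights (here informed by the pre-contrast normal-dose scan), R the regularizer (the ndiSTV term), and β the parameter balancing data fidelity against smoothing; the split-Bregman approach minimizes this objective.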
Stability Criteria Analysis for Landing Craft Utility
2017-12-01
Nomenclature (excerpt): m² = Square Meter; m/s = Meters per Second; m/s² = Meters per Second Squared; η = Vertical Displacement of Sea Water Free Surface; η3 = Ship's Heave Displacement; η5 = Ship's Pitch Angle; p(ξ) = Rayleigh Distribution Probability Function; POSSE = Program of Ship Salvage Engineering; pk = Spectrum Constant; γ = JONSWAP Wave Spectrum Peak Factor; Γ(λ) = Gamma Probability Function; Δ = Ship's Displacement; Δω = Small Frequency
Enabling quaternion derivatives: the generalized HR calculus
Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.
2015-01-01
Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555
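As a concrete anchor for the discussion, the basic HR derivative (which the GHR calculus generalizes via quaternion rotations) is commonly written as follows for q = q_a + i q_b + j q_c + k q_d; this is the standard textbook form, not necessarily the paper's exact notation:

```latex
\frac{\partial f}{\partial q} = \frac{1}{4}\left(
  \frac{\partial f}{\partial q_a}
- \frac{\partial f}{\partial q_b}\, i
- \frac{\partial f}{\partial q_c}\, j
- \frac{\partial f}{\partial q_d}\, k \right)
```

The GHR derivatives replace the fixed basis i, j, k with rotated versions i^μ = μ i μ⁻¹ (and similarly for j, k), which is what restores workable product and chain rules in the non-commutative setting.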
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
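Why the residual distribution matters for the likelihood choice can be illustrated with a toy comparison of Gaussian and Laplace log-likelihoods, each with its plug-in maximum-likelihood scale. The formal generalized likelihood of Schoups and Vrugt is far richer (skew, kurtosis, autocorrelation) and is not reproduced here; this sketch only shows that heavy-tailed residuals favor a non-Gaussian likelihood.

```python
import numpy as np

def gaussian_loglik(res):
    """Gaussian log-likelihood with MLE variance; maximizing it is
    equivalent to minimizing the sum of squared residuals."""
    n, s2 = res.size, np.mean(res**2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def laplace_loglik(res):
    """Laplace log-likelihood with MLE scale; maximizing it is equivalent
    to minimizing the sum of absolute residuals (heavier tails)."""
    n, b = res.size, np.mean(np.abs(res))
    return -n * (np.log(2 * b) + 1.0)

rng = np.random.default_rng(4)
heavy = rng.standard_t(df=2, size=2000)      # heavy-tailed residuals
normal = rng.standard_normal(2000)           # well-behaved residuals
print(laplace_loglik(heavy) > gaussian_loglik(heavy),
      gaussian_loglik(normal) > laplace_loglik(normal))  # True True
```

Each distribution wins on its own kind of residuals, which is exactly why diagnosing the residuals should precede the choice of objective or likelihood function.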
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2013-07-01
For the facilitation of analysis and elimination of the operator dependence in estimating the myocardial function in echocardiography, we have previously developed a method for automated identification of the heart wall. However, there are misclassified regions because the magnitude-squared coherence (MSC) function of echo signals, which is one of the features in the previous method, is sensitively affected by the clutter components such as multiple reflection and off-axis echo from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we proposed a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.
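The magnitude-squared coherence feature used above can be sketched with a Welch-style estimator. The segment length, window, and test signal below are illustrative assumptions, not the authors' processing parameters.

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence |S_xy|^2 / (S_xx * S_yy), estimated by
    averaging Hann-windowed FFT segments (Welch-style, no overlap)."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    nf = nperseg // 2 + 1
    Sxx, Syy = np.zeros(nf), np.zeros(nf)
    Sxy = np.zeros(nf, dtype=complex)
    for i in range(nseg):
        X = np.fft.rfft(win * x[i * nperseg:(i + 1) * nperseg])
        Y = np.fft.rfft(win * y[i * nperseg:(i + 1) * nperseg])
        Sxx += np.abs(X)**2
        Syy += np.abs(Y)**2
        Sxy += X * np.conj(Y)
    return np.abs(Sxy)**2 / (Sxx * Syy)

# Two channels sharing a 62.5 Hz tone plus independent noise (fs = 1 kHz):
# the MSC approaches 1 at the tone and stays low elsewhere.
rng = np.random.default_rng(1)
fs, t = 1000.0, np.arange(0, 4.0, 1e-3)
s = np.sin(2 * np.pi * 62.5 * t)
x = s + 0.5 * rng.standard_normal(t.size)
y = s + 0.5 * rng.standard_normal(t.size)
C = msc(x, y)
freqs = np.fft.rfftfreq(256, d=1 / fs)
```

Clutter such as multiple reflections adds correlated components to both channels, inflating the MSC; this is why the abstract pairs the feature with an adaptive MTI filter to suppress clutter before the coherence is computed.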
Kundu, Tapas K.; Barde, Pradip B.; Jindal, Ghanshyam D.; Motiwala, Farooq F.
2017-01-01
Background. The status of autonomic homoeostasis in hemostatic disturbances due to hemophilia needs to be studied. Objectives. To compare autonomic nervous system markers measured by heart rate variability (HRV) and blood flow variability (BFV) in hemophiliacs and a healthy age-matched control population using a medical analyzer system. Design. Cross-sectional study. Settings. Motiwala Homoeopathy Medical College, and Hemophilia Clinics, Nashik. Subjects. Eighty subjects. Interventions. Nil. Outcome Measures. Autonomic function markers for HRV and BFV. Results. Among the 80 subjects, the BFV time domain measure, root mean square of successive NN (normal-to-normal) interval differences (RMSSD), was significantly higher among hemophiliacs than nonhemophiliacs. The frequency domain analysis parameter, low frequency, for both HRV and BFV was significantly higher among hemophiliacs as compared with nonhemophiliacs. Conclusions. Hemophiliacs were shown to have higher autonomic activity as compared with healthy controls. Homoeopathic medicines used as an adjunct were associated with a decrease in parasympathetic modulations. PMID:28719973
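The RMSSD time-domain measure mentioned above is defined directly on the successive NN intervals; the interval values below are hypothetical.

```python
import math

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN-interval differences (ms),
    a standard time-domain HRV index of parasympathetic modulation."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([800, 810, 790, 805]), 2))  # 15.55
```

Because RMSSD is driven by beat-to-beat differences rather than the overall mean rate, it isolates short-term (vagally mediated) variability.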
NASA Astrophysics Data System (ADS)
Natarajan, Sundararajan
2014-12-01
The main objectives of the paper are to (1) present an overview of nonlocal integral elasticity and Aifantis gradient elasticity theory and (2) discuss the application of partition of unity methods to study the response of low-dimensional structures. We present different choices of approximation functions for gradient elasticity, namely Lagrange interpolants, moving least-squares approximants and non-uniform rational B-splines. Next, we employ these approximation functions to study the response of nanobeams based on Euler-Bernoulli and Timoshenko theories as well as to study nanoplates based on first-order shear deformation theory. The response of nanobeams and nanoplates is studied using Eringen's nonlocal elasticity theory. The influence of the nonlocal parameter, the beam and the plate aspect ratio and the boundary conditions on the global response is numerically studied. The influence of a crack on the axial vibration and buckling characteristics of nanobeams is also numerically studied.
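A one-dimensional moving least-squares approximant, one of the approximation choices listed above, can be sketched as a pointwise weighted least-squares fit. The Gaussian weight and linear basis are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def mls_value(x_eval, x_nodes, f_nodes, h=0.3):
    """1D moving least-squares value with linear basis p = [1, x] and a
    Gaussian weight of support scale h: a local weighted LS fit is solved
    at each evaluation point, then evaluated there."""
    w = np.exp(-((x_nodes - x_eval) / h)**2)
    P = np.column_stack([np.ones_like(x_nodes), x_nodes])
    A = P.T @ (w[:, None] * P)              # weighted moment matrix
    b = P.T @ (w * f_nodes)
    coef = np.linalg.solve(A, b)
    return coef[0] + coef[1] * x_eval

x_nodes = np.linspace(0.0, 1.0, 21)
f_nodes = 2 * x_nodes + 1                   # linear data: reproduced exactly
print(round(float(mls_value(0.5, x_nodes, f_nodes)), 6))  # 2.0
```

Linear consistency (exact reproduction of linear fields) is the property that makes MLS approximants admissible shape functions in partition of unity methods.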
NASA Astrophysics Data System (ADS)
Ogam, Erick; Fellah, Z. E. A.
2011-09-01
A wave-fluid saturated poroelastic structure interaction model based on the modified Biot theory (MBT) and plane-wave decomposition using orthogonal cylindrical functions is developed. The model is employed to recover, from real data acquired in an anechoic chamber, the poromechanical properties of a soft cellular melamine cylinder subjected to audible acoustic radiation. The inverse problem of acoustic diffraction is solved by constructing the objective functional given by the total square of the difference between predictions from the MBT interaction model and diffracted field data from experiment. The ability to retrieve the intrinsic poromechanical parameters from the diffracted acoustic fields indicates that a wave initially propagating in a light fluid (air) medium is able to carry, in the absence of mechanical excitation of the specimen, information on the macroscopic mechanical properties which depend on the microstructural and intrinsic properties of the solid phase.
Hubble Space Telescope faint object camera instrument handbook (Post-COSTAR), version 5.0
NASA Technical Reports Server (NTRS)
Nota, A. (Editor); Jedrzejewski, R. (Editor); Greenfield, P. (Editor); Hack, W. (Editor)
1994-01-01
The faint object camera (FOC) is a long-focal-ratio, photon-counting device capable of taking high-resolution two-dimensional images of the sky up to 14 by 14 arcseconds in size with pixel dimensions as small as 0.014 by 0.014 arcseconds, in the 1150 to 6500 Å wavelength range. Its performance approaches that of an ideal imaging system at low light levels. The FOC is the only instrument on board the Hubble Space Telescope (HST) to fully use the spatial resolution capabilities of the optical telescope assembly (OTA) and is one of the European Space Agency's contributions to the HST program.
Adams, J; Adler, C; Ahammed, Z; Allgower, C; Amonett, J; Anderson, B D; Anderson, M; Averichev, G S; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Boucham, A; Brandin, A; Bravar, A; Cadman, R V; Caines, H; Calderónde la Barca Sánchez, M; Cardenas, A; Carroll, J; Castillo, J; Castro, M; Cebra, D; Chaloupka, P; Chattopadhyay, S; Chen, Y; Chernenko, S P; Cherney, M; Chikanian, A; Choi, B; Christie, W; Coffin, J P; Cormier, T M; Corral, M M; Cramer, J G; Crawford, H J; Derevschikov, A A; Didenko, L; Dietel, T; Draper, J E; Dunin, V B; Dunlop, J C; Eckardt, V; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Fachini, P; Faine, V; Faivre, J; Fatemi, R; Filimonov, K; Finch, E; Fisyak, Y; Flierl, D; Foley, K J; Fu, J; Gagliardi, C A; Gagunashvili, N; Gans, J; Gaudichet, L; Germain, M; Geurts, F; Ghazikhanian, V; Grachov, O; Grigoriev, V; Guedon, M; Guertin, S M; Gushin, E; Hallman, T J; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Heppelmann, S; Herston, T; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horsley, M; Huang, H Z; Humanic, T J; Igo, G; Ishihara, A; Ivanshin, Yu I; Jacobs, P; Jacobs, W W; Janik, M; Johnson, I; Jones, P G; Judd, E G; Kaneta, M; Kaplan, M; Keane, D; Kiryluk, J; Kisiel, A; Klay, J; Klein, S R; Klyachko, A; Kollegger, T; Konstantinov, A S; Kopytine, M; Kotchenda, L; Kovalenko, A D; Kramer, M; Kravtsov, P; Krueger, K; Kuhn, C; Kulikov, A I; Kunde, G J; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Lansdell, C P; Lasiuk, B; Laue, F; Lauret, J; Lebedev, A; Lednický, R; Leontiev, V M; LeVine, M J; Li, Q; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Z; Liu, Q J; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Ludlam, T; Lynn, D; Ma, J; Magestro, D; Majka, R; Margetis, S; Markert, C; Martin, L; Marx, J; Matis, H S; Matulenko, Yu A; McShane, T S; 
Meissner, F; Melnick, Yu; Meschanin, A; Messer, M; Miller, M L; Milosevich, Z; Minaev, N G; Mitchell, J; Moore, C F; Morozov, V; de Moura, M M; Munhoz, M G; Nelson, J M; Nevski, P; Nikitin, V A; Nogach, L V; Norman, B; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Paic, G; Pandey, S U; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Perevoztchikov, V; Peryt, W; Petrov, V A; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potrebenikova, E; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevski, O V; Romero, J L; Rose, A; Roy, C; Rykov, V; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schüttauf, A; Schweda, K; Seger, J; Seliverstov, D; Seyboth, P; Shahaliev, E; Shestermanov, K E; Shimanskii, S S; Simon, F; Skoro, G; Smirnov, N; Snellings, R; Sorensen, P; Sowinski, J; Spinka, H M; Srivastava, B; Stephenson, E J; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Struck, C; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; de Toledo, A Szanto; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Thein, D; Thomas, J H; Thompson, M; Tikhomirov, V; Tokarev, M; Tonjes, M B; Trainor, T A; Trentalange, S; Tribble, R E; Trofimov, V; Tsai, O; Ullrich, T; Underwood, D G; Van Buren, G; Vander Molen, A M; Vasilevski, I M; Vasiliev, A N; Vigdor, S E; Voloshin, S A; Wang, F; Ward, H; Watson, J W; Wells, R; Westfall, G D; Whitten, C; Wieman, H; Willson, R; Wissink, S W; Witt, R; Wood, J; Xu, N; Xu, Z; Yakutin, A E; Yamamoto, E; Yang, J; Yepes, P; Yurevich, V I; Zanevski, Y V; Zborovský, I; Zhang, H; Zhang, W M; Zoulkarneev, R; Zubarev, A N
2003-05-02
The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au+Au collisions at √s_NN = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.
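For reference, the balance function is conventionally defined (following Bass, Danielewicz and Pratt, the construction this measurement uses) in terms of counts of like-sign and unlike-sign pairs at relative rapidity δ:

```latex
B(\delta) = \frac{1}{2}\left[
  \frac{N_{+-}(\delta) - N_{++}(\delta)}{N_{+}} +
  \frac{N_{-+}(\delta) - N_{--}(\delta)}{N_{-}}
\right]
```

Here N_{+-}(δ) counts pairs with a positive and a negative particle separated by δ, N_{++} and N_{--} the like-sign pairs, and N_± the single-particle multiplicities; a narrower B(δ) means balancing charges are emitted closer together in rapidity, the signature of late hadronization.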
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
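The contrast between the classical and inverse calibration procedures can be made concrete on synthetic two-component mixture data. The pure spectra, concentrations, and noise level below are hypothetical; the point is only the structural difference between the two regressions, not a reproduction of the paper's error analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
S = np.array([[1.0, 0.5, 0.1, 0.0],       # analyte pure spectrum (assumed)
              [0.2, 1.0, 0.8, 0.3]])      # interferent pure spectrum
C = rng.uniform(0, 1, (20, 2))            # calibration concentrations
X = C @ S + 0.001 * rng.standard_normal((20, 4))   # measured spectra

# Classical LS: model spectra from concentrations (X = C S), then invert
# the estimated mixing model to predict concentrations.
S_hat = np.linalg.lstsq(C, X, rcond=None)[0]
c_cls = X @ np.linalg.pinv(S_hat)

# Inverse LS: regress the analyte concentration directly on the spectra.
b = np.linalg.lstsq(X, C[:, 0], rcond=None)[0]
c_ils = X @ b

err_cls = float(np.max(np.abs(c_cls[:, 0] - C[:, 0])))
err_ils = float(np.max(np.abs(c_ils - C[:, 0])))
```

With near-noiseless data both routes recover the concentrations, but they estimate different objects (pure spectra versus a regression vector), which is the root of the discrepancy with net analyte signal theory discussed above.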
From direct-space discrepancy functions to crystallographic least squares.
Giacovazzo, Carmelo
2015-01-01
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach leads also to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
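The standard crystallographic least-squares target that the paper re-derives and modifies is commonly written as follows; this is the textbook form, with the scaling factor K shown explicitly as discussed above:

```latex
S(\mathbf{p}, K) = \sum_{\mathbf{h}} w_{\mathbf{h}}
  \left( |F^{\mathrm{obs}}_{\mathbf{h}}| - K\,|F^{\mathrm{calc}}_{\mathbf{h}}(\mathbf{p})| \right)^{2}
```

where the sum runs over reflections h, p are the structural parameters, and the weight w_h is classically taken inversely proportional to the variance of the observed amplitude.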
Multilayer DNA Origami Packed on a Square Lattice
Ke, Yonggang; Douglas, Shawn M.; Liu, Minghui; Sharma, Jaswinder; Cheng, Anchi; Leung, Albert; Liu, Yan; Shih, William M.; Yan, Hao
2009-01-01
Molecular self-assembly using DNA as a structural building block has proven to be an efficient route to the construction of nanoscale objects and arrays of increasing complexity. Using the remarkable “scaffolded DNA origami” strategy, Rothemund demonstrated that a long single-stranded DNA from a viral genome (M13) can be folded into a variety of custom two-dimensional (2D) shapes using hundreds of short synthetic DNA molecules as staple strands. More recently, we generalized a strategy to build custom-shaped, three-dimensional (3D) objects formed as pleated layers of helices constrained to a honeycomb lattice, with precisely controlled dimensions ranging from 10 to 100 nm. Here we describe a more compact design for 3D origami, with layers of helices packed on a square lattice, that can be folded successfully into structures of designed dimensions in a one-step annealing process, despite the increased density of DNA helices. A square lattice provides a more natural framework for designing rectangular structures, the option for a more densely packed architecture, and the ability to create surfaces that are more flat than is possible with the honeycomb lattice. Thus enabling the design and construction of custom 3D shapes from helices packed on a square lattice provides a general foundational advance for increasing the versatility and scope of DNA nanotechnology. PMID:19807088
Systematic search for UV-excess quasar candidates in 40 square degrees at the North Galactic Pole.
NASA Astrophysics Data System (ADS)
Moreau, O.; Reboul, H.
1995-05-01
We have developed a procedure (so-called PAPA) for the measurement of magnitudes (accurate to about 0.1 mag) and positions (with accuracy better than 0.5 arcsec) of all the objects present on photographic plates digitised with the MAMA machine. This homogeneous procedure was applied to four Schmidt plates - in U, B and twice V - covering the Palomar-Sky-Survey field PS +30deg 13h00m, a 40-square-degree zone at the North Galactic Pole. A general-interest exhaustive tricolour catalogue of 19542 star-like objects down to V=20.0 has been produced, and we selected 1681 quasar candidates on the basis of ultraviolet excess and, when possible, the absence of any measurable proper motion. The astrometric and photometric catalogue of the candidates is given in electronic form. A first multi-object spectroscopy of a few candidates confirms the validity of the selection.
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced, so the fitting can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
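The core numerical idea, restricting the least-squares solution to the dominant SVD modes of the response matrix, can be sketched generically. The matrix below is synthetic; the PEP-II derivative matrix and the MIA data handling are of course not reproduced.

```python
import numpy as np

def svd_truncated_lstsq(A, b, k):
    """Least-squares solution restricted to the k dominant SVD modes of A,
    discarding weak modes whose tiny singular values would amplify noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
A[:, -1] = A[:, 0] + 1e-10 * rng.standard_normal(50)  # near-degenerate column
x_true = np.ones(10)
b = A @ x_true
x = svd_truncated_lstsq(A, b, k=9)        # drop the one ill-conditioned mode
residual = float(np.linalg.norm(A @ x - b))
```

Dropping the weakest mode sacrifices essentially nothing in the fit residual while removing the direction along which the inversion would blow up, which is what makes the iterative fitting converge quickly on large systems.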
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 um. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
Characterizing Literacy and Cognitive Function during Pregnancy and Postpartum.
Yee, Lynn M; Kamel, Leslie A; Quader, Zara; Rajan, Priya V; Taylor, Shaneah M; O'Conor, Rachel; Wolf, Michael S; Simon, Melissa A
2017-07-01
Objective The objective of this study was to characterize health literacy and cognitive function in a diverse cohort of pregnant women. Methods Pregnant and postpartum women underwent in-depth assessments of health literacy/numeracy and the cognitive domains of verbal ability, working memory, long-term memory, processing speed, and inductive reasoning. Differences by demographic characteristics and gestational age were assessed using chi-square tests and multivariable logistic regression. Results In this cohort of pregnant (N = 77) or postpartum (N = 24) women, 41.6% had limited health literacy/numeracy. Women were more likely to score in the lowest quartile for literacy and verbal ability if they were less educated, younger, nonwhite, or had Medicaid. These factors were associated with low scores for long-term memory, processing speed, and inductive reasoning. Although there were no differences in literacy or cognitive function by parity or gestational age, postpartum women were more likely to score in the lowest quartile for processing speed (adjusted odds ratio [aOR]: 3.79, 95% confidence interval [CI]: 1.32-10.93) and inductive reasoning (aOR: 4.07, 95% CI: 1.21-13.70). Conclusion Although postpartum status was associated with reduced inductive reasoning and processing speed, there were no differences in cognitive function across pregnancy. Practice Implications Postpartum maternal learning may require enhanced support. In addition, cognitive skills and health literacy may be mediators of inequities in perinatal outcomes.
Plowes, Nicola J.R; Adams, Eldridge S
2005-01-01
Lanchester's models of attrition describe casualty rates during battles between groups as functions of the numbers of individuals and their fighting abilities. Originally developed to describe human warfare, Lanchester's square law has been hypothesized to apply broadly to social animals as well, with important consequences for their aggressive behaviour and social structure. According to the square law, the fighting ability of a group is proportional to the square of the number of individuals, but rises only linearly with fighting ability of individuals within the group. By analyzing mortality rates of fire ants (Solenopsis invicta) fighting in different numerical ratios, we provide the first quantitative test of Lanchester's model for a non-human animal. Casualty rates of fire ants were not consistent with the square law; instead, group fighting ability was an approximately linear function of group size. This implies that the relative numbers of casualties incurred by two fighting groups are not strongly affected by relative group sizes and that battles do not disproportionately favour group size over individual prowess. PMID:16096093
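The two laws being tested can be contrasted with a toy forward-Euler attrition simulation (hypothetical unit fighting abilities; this is not the authors' statistical analysis of the ant data). Under the square law the quantity a1*x^2 - a2*y^2 is approximately conserved, so a 2:1 numerical advantage leaves roughly sqrt(2^2 - 1^2) ≈ 1.73 times the smaller force as survivors; under the linear law the conserved quantity is a1*x - a2*y.

```python
def battle(n1, n2, a1=1.0, a2=1.0, law="square", dt=1e-3):
    """Simulate Lanchester attrition until one side falls below one member.

    square law: each side's casualty rate is proportional to the *number*
    of enemies (concentrated fire), so group strength scales as a * N^2.
    linear law: rate proportional to the product of both group sizes
    (one-on-one encounters), so strength rises only linearly with N.
    """
    x, y = float(n1), float(n2)
    while x >= 1 and y >= 1:
        if law == "square":
            dx, dy = -a2 * y, -a1 * x
        else:  # "linear"
            dx, dy = -a2 * x * y, -a1 * x * y
        x, y = x + dx * dt, y + dy * dt
    return x, y  # survivors; the winner keeps a positive remainder

# Square law: 200 vs 100 leaves about sqrt(200^2 - 100^2) ~ 173 survivors.
x, y = battle(200, 100, law="square")
```

The fire-ant result in the abstract corresponds to casualty ratios closer to the linear-law branch than the square-law branch.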
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weights of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
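The weighting idea can be sketched as a generalized least-squares step in which each residual is down-weighted by the variance contributed by its uncertain input (pumping) data. This is a hypothetical illustration of the concept, not the authors' IUWLS implementation; the weight model w_i = 1/(sigma_obs^2 + sigma_input,i^2) is an assumption.

```python
import numpy as np

def iuwls(X, y, sigma_obs=1.0, sigma_input=0.0):
    """One weighted least-squares step with input-uncertainty weights.

    Rows whose source/sink data are uncertain get a large effective error
    variance and therefore a small weight in the normal equations.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    si2 = np.broadcast_to(np.square(np.asarray(sigma_input, float)), y.shape)
    w = 1.0 / (sigma_obs ** 2 + si2)
    XtW = X.T * w                     # row-wise weighting of the design matrix
    return np.linalg.solve(XtW @ X, XtW @ y)

# Synthetic illustration: the last two observations carry a bias from
# unmetered pumping; their large input uncertainty suppresses the bias.
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = X @ np.array([2.0, 1.0])
y[4:] += 10.0
sig_in = np.array([0.0, 0.0, 0.0, 0.0, 100.0, 100.0])
beta = iuwls(X, y, sigma_obs=1.0, sigma_input=sig_in)      # near [2, 1]
beta_ols = iuwls(X, y, sigma_obs=1.0, sigma_input=0.0)     # biased slope
```

With uniform weights the fit reduces to OLS and inherits the bias, mirroring the OLS-versus-IUWLS comparison in the abstract.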
Bøcher, Peder Klith; McCloy, Keith R
2006-02-01
In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects, and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of sample scene phase. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.
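The ALV behaviour can be reproduced on a small synthetic checkerboard. The sketch below (an illustration of the general technique, not the authors' exact processing chain) coarsens the scene by pixel aggregation and computes the mean 3x3 local variance at each pixel size; with 4-pixel squares (period 8), the ALV peaks by pixel size 4, i.e. half the object spacing, and collapses to zero when pixels average over a full period.

```python
import numpy as np

def average_local_variance(img, window=3):
    """Mean of the local variances computed in sliding square windows."""
    h, w = img.shape
    vals = []
    for i in range(h - window + 1):
        for j in range(w - window + 1):
            vals.append(img[i:i + window, j:j + window].var())
    return float(np.mean(vals))

def coarsen(img, f):
    """Aggregate f x f pixel blocks by averaging (simulated resampling)."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# Checkerboard of 4x4 black/white squares: fundamental period 8 pixels.
tile = np.kron([[0, 1], [1, 0]], np.ones((4, 4)))
scene = np.tile(tile, (8, 8)).astype(float)
alv = {f: average_local_variance(coarsen(scene, f)) for f in (1, 2, 4, 8)}
```

At pixel size 8 every aggregated pixel averages to 0.5, so the local variance vanishes, which is the coarse-resolution tail of the ALV plot.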
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will propagate into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, which combines geostatistical principles with optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the choice of variogram model function and the number of structures (m) also affect the model quality, a program has been written in MATLAB that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method is used, and the best model is introduced to the user as the output. To check the capability of the proposed objective function and procedure, three case studies are presented.
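A minimal version of the approach, a weighted least-squares objective minimized by simulated annealing, can be sketched as follows. This is a toy single-structure sketch under assumed Cressie-style weights N(h)/gamma_model(h)^2, not the paper's improved objective function or its MATLAB program.

```python
import math
import random

def spherical(h, c0, c, a):
    """Spherical variogram model: nugget c0, partial sill c, range a."""
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)

def wls_cost(params, lags, gammas, npairs):
    """Weighted least squares with assumed weights N(h) / gamma_model^2."""
    c0, c, a = params
    cost = 0.0
    for h, g, n in zip(lags, gammas, npairs):
        m = spherical(h, c0, c, a)
        cost += n * (g - m) ** 2 / max(m, 1e-12) ** 2
    return cost

def fit_sa(lags, gammas, npairs, n_iter=20000, t0=1.0, seed=0):
    """Toy simulated-annealing fit of (c0, c, a) with a linear cooling."""
    rng = random.Random(seed)
    best = cur = (0.1, 1.0, max(lags) / 2)
    best_c = cur_c = wls_cost(cur, lags, gammas, npairs)
    for i in range(n_iter):
        t = t0 * (1 - i / n_iter) + 1e-6
        cand = tuple(max(1e-6, p * (1 + rng.uniform(-0.2, 0.2))) for p in cur)
        cc = wls_cost(cand, lags, gammas, npairs)
        if cc < cur_c or rng.random() < math.exp((cur_c - cc) / t):
            cur, cur_c = cand, cc
            if cc < best_c:
                best, best_c = cand, cc
    return best

# Synthetic experimental variogram generated from a known model.
lags = [1, 2, 4, 6, 8, 10, 12]
gam = [spherical(h, 0.2, 1.0, 10.0) for h in lags]
npairs = [200] * len(lags)
c0, c, a = fit_sa(lags, gam, npairs)
```

In the paper's setting the annealer additionally searches over the model type and the number of nested structures, with cross-validation used for the final selection.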
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient algorithm with linear computational complexity is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
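For context, the NLMS baseline against which TLMS is compared can be sketched in a few lines; the TLMS itself additionally corrects for noise on the *input*, which this sketch does not do. The filter order, step size, and test system below are illustrative choices.

```python
import numpy as np

def nlms(x, d, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS for system identification (baseline sketch).

    w is adapted so that the filter output w @ u tracks the desired
    signal d; the update is the LMS gradient step normalized by the
    tap-vector energy.
    """
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most-recent-first tap vector
        e = d[n] - w @ u                   # a-priori error
        w += mu * e * u / (u @ u + eps)    # normalized gradient step
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])        # unknown FIR system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)]            # noiseless desired signal
w = nlms(x, d)                             # converges to h
```

With noise added to `x` as well as `d`, this estimate becomes biased, which is exactly the regime where the total least squares formulation of the abstract is claimed to help.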
A new method for gravity field recovery based on frequency analysis of spherical harmonics
NASA Astrophysics Data System (ADS)
Cai, Lin; Zhou, Zebing
2017-04-01
Existing methods for gravity field recovery are mostly based on the space-wise and time-wise approaches, whose core processes are constructing the observation equations and solving them by the least-squares method; it should be noted that least squares yields only an approximation. On the other hand, in 1-D data (time series) analysis we can directly and precisely obtain the coefficients of the harmonics by computing the fast Fourier transform (FFT). The question of whether we can likewise directly and precisely obtain the spherical harmonic coefficients by computing the 2-D FFT of satellite gravity mission measurements is therefore of great significance, since it may lead to a new understanding of the signal components of the gravity field and allow us to determine it quickly by taking advantage of the FFT. As in the 1-D case, the 2-D FFT of the satellite measurements can be computed rapidly. If we can determine the relationship between the spherical harmonics and the 2-D Fourier frequencies, together with the transfer function from measurements to spherical harmonic coefficients, the question above can be solved. The objective of this research project is thus to establish a new method, based on frequency analysis of spherical harmonics, that computes the coefficients of the gravity field directly, in contrast to recovery by least squares. In 1-D FFT there is a one-to-one correspondence between the frequency spectrum and the time series, and the 2-D FFT behaves similarly; however, any spherical harmonic of degree or order higher than one contains multiple 2-D frequencies, and these frequencies may be aliased. Fortunately, the elements and ratios of these frequencies can be determined, so the spherical harmonic coefficients can be computed from the 2-D FFT. This relationship can be written as a set of equations, equivalent to a matrix, that is fixed and can be derived in advance; it has now been determined.
Preliminary results, computing only the lower-degree spherical harmonics, indicate that the difference between the input (EGM2008) and the output (recovered coefficients) is smaller than 5E-17, while the machine precision of the software used (MATLAB) is 2.2204E-16.
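The premise can be illustrated in the simplest periodic setting: on a doubly periodic equiangular grid, a single separable harmonic term concentrates on a handful of 2-D FFT bins whose coefficients can be read off directly, with no least-squares inversion. This flat-torus sketch is only an analogue; as the abstract notes, a true spherical harmonic of a given degree and order spreads over multiple, possibly aliased, 2-D frequencies.

```python
import numpy as np

N = 64
theta = 2 * np.pi * np.arange(N) / N           # "latitude-like" angle
phi = 2 * np.pi * np.arange(N) / N             # longitude
grid = np.cos(2 * theta)[:, None] * np.cos(3 * phi)[None, :]

# Normalized 2-D FFT: cos(2*theta)*cos(3*phi) produces exactly four
# peaks at wavenumbers (+/-2, +/-3), each with amplitude 1/4.
F = np.fft.fft2(grid) / N ** 2
peaks = {(i, j) for i, j in zip(*np.nonzero(np.abs(F) > 1e-10))}
```

The coefficient of the harmonic term is recovered exactly (to machine precision) from the peak amplitudes, which mirrors the sub-machine-precision agreement with EGM2008 reported above.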
Promising Results from Three NASA SBIR Solar Array Technology Development Programs
NASA Technical Reports Server (NTRS)
Eskenazi, Mike; White, Steve; Spence, Brian; Douglas, Mark; Glick, Mike; Pavlick, Ariel; Murphy, David; O'Neill, Mark; McDanal, A. J.; Piszczor, Michael
2005-01-01
Results from three NASA SBIR solar array technology programs are presented. The programs discussed are: 1) Thin Film Photovoltaic UltraFlex Solar Array; 2) Low Cost/Mass Electrostatically Clean Solar Array (ESCA); and 3) Stretched Lens Array SquareRigger (SLASR). The purpose of the Thin Film UltraFlex (TFUF) Program is to mature and validate the use of advanced flexible thin-film photovoltaic blankets as the electrical subsystem element within an UltraFlex solar array structural system. In this program, operational prototype flexible array segments, using United Solar amorphous silicon cells, are being manufactured and tested for the flight-qualified UltraFlex structure. In addition, large (e.g., 10 kW GEO) TFUF wing systems are being designed and analyzed. Thermal cycle and electrical test and analysis results from the TFUF program are presented. The purpose of the second program, Low Cost/Mass Electrostatically Clean Solar Array (ESCA) System, is to develop an Electrostatically Clean Solar Array meeting NASA's design requirements and to ready this technology for commercialization and use on the NASA MMS and GED missions. The ESCA designs developed use flight-proven materials and processes to create an ESCA system that yields low cost, low mass, high reliability, and high power density, and is adaptable to any cell type and coverglass thickness. All program objectives, which included developing specifications, creating ESCA concepts, concept analysis and trade studies, producing detailed designs of the most promising ESCA treatments, manufacturing ESCA demonstration panels, and LEO (2,000 cycles) and GEO (1,350 cycles) thermal cycle testing of the down-selected designs, were successfully achieved.
The purpose of the third program entitled, "High Power Platform for the Stretched Lens Array," is to develop an extremely lightweight, high efficiency, high power, high voltage, and low stowed volume solar array suitable for very high power (multi-kW to MW) applications. These objectives are achieved by combining two cutting edge technologies, the SquareRigger solar array structure and the Stretched Lens Array (SLA). The SLA SquareRigger solar array is termed SLASR. All program objectives, which included developing specifications, creating preliminary designs for a near-term SLASR, detailed structural, mass, power, and sizing analyses, fabrication and power testing of a functional flight-like SLASR solar blanket, were successfully achieved.
Observations of GEO Debris with the Magellan 6.5-m Telescopes
NASA Technical Reports Server (NTRS)
Seitzer, Patrick; Burkhardt, Andrew; Cardonna, Tommaso; Lederer, Susan M.; Cowardin, Heather; Barker, Edwin S.; Abercromby, Kira J.
2012-01-01
Optical observations of geosynchronous orbit (GEO) debris are important to address two questions: 1. What is the distribution function of objects at GEO as a function of brightness? With some assumptions, this can be used to infer a size distribution. 2. Can we determine the likely composition of individual GEO debris pieces from studies of the spectral reflectance of these objects? In this paper we report on optical observations with the 6.5-m Magellan telescopes at Las Campanas Observatory in Chile that attempt to answer both questions. Imaging observations over a 0.5 degree diameter field-of-view have detected a significant population of optically faint debris candidates with R > 19th magnitude, corresponding to a size smaller than 20 cm assuming an albedo of 0.175. Many of these objects show brightness variations larger than a factor of 2, suggesting irregular shapes, albedo variations, or both. The object detection rate (per square degree per hour) shows an increase over the rate measured in the 0.6-m MODEST observations, implying an increase in the population at optically fainter levels. Assuming that the albedo distribution is the same for both samples, this corresponds to an increase in the population of smaller size debris. To study the second issue, calibrated reflectance spectroscopy has been obtained of a sample of GEO and near-GEO objects with orbits in the public U.S. Space Surveillance Network catalog. With a 6.5-m telescope, the exposure times are short (30 seconds or less) and provide simultaneous wavelength coverage from 4500 to 8000 Angstroms. If the observed objects are tumbling, then simultaneous coverage and short exposure times are essential for a realistic assessment of the object's spectral signature. We will compare the calibrated spectra with lab-based measurements of simple spacecraft surfaces composed of a single material.
Correspondence between spanning trees and the Ising model on a square lattice
NASA Astrophysics Data System (ADS)
Viswanathan, G. M.
2017-06-01
An important problem in statistical physics concerns the fascinating connections between partition functions of lattice models studied in equilibrium statistical mechanics on the one hand and graph-theoretical enumeration problems on the other hand. We investigate the nature of the relationship between the number of spanning trees and the partition function of the Ising model on the square lattice. The spanning tree generating function T(z) gives the spanning tree constant when evaluated at z = 1, while giving the lattice Green function when differentiated. It is known that for the infinite square lattice the partition function Z(K) of the Ising model evaluated at the critical temperature K = K_c is related to T(1). Here we show that this idea in fact generalizes to all real temperatures. We prove that [Z(K) sech(2K)]^2 = k exp[T(k)], where k = 2 tanh(2K) sech(2K). The identical Mahler measure connects the two seemingly disparate quantities T(z) and Z(K). In turn, the Mahler measure is determined by the random walk structure function. Finally, we show that the above correspondence does not generalize in a straightforward manner to nonplanar lattices.
An Aggregated Method for Determining Railway Defects and Obstacle Parameters
NASA Astrophysics Data System (ADS)
Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat
2018-03-01
A method combining image-blur analysis and stereo vision to determine the distance to objects (including external defects of railway tracks) and the speed of moving objects/obstacles is proposed. To estimate the deviation of the distance as a function of blur, a statistical approach and logarithmic, exponential, and linear standard functions are used. The statistical approach includes least-squares estimation and the method of least modules. The accuracy of determining the distance to the object, its speed, and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. The method is based on a physical effect: the distance determined for the object from the obtained image depends on the focal length and aperture of the lens. In calculating the blur-spot diameter it is assumed that blur occurs at a point equally in all directions. According to the proposed approach, it is possible to determine the distance to the studied object and its blur by analyzing a series of images obtained with the video detector under different settings. The article proposes and scientifically substantiates new and improved methods for detecting the parameters of static and moving objects of control, and compares the results of the various methods against experiments. It is shown that the aggregated method gives the best approximation to the real distances.
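The physical effect the method relies on can be written down with a standard thin-lens defocus model. This is an assumed textbook model, not the paper's calibrated blur functions: the blur-circle diameter grows with the object's departure from the focal plane, and inverting it gives two distance branches (nearer or farther than the plane of focus).

```python
def blur_diameter(s, s_f, f, A):
    """Blur-circle diameter (same length units as A) for an object at
    distance s, with a lens of focal length f focused at distance s_f
    and aperture diameter A (thin-lens sketch)."""
    return A * f * abs(s - s_f) / (s * (s_f - f))

def distance_from_blur(c, s_f, f, A, far_side=True):
    """Invert the blur model for the object distance; far_side picks the
    branch beyond the focal plane, its negation the nearer branch."""
    k = c * (s_f - f) / (A * f)
    return s_f / (1 - k) if far_side else s_f / (1 + k)

# Round trip: an object 5 m away, lens f = 50 mm, aperture 25 mm,
# focused at 2 m, is recovered from its blur-circle diameter.
c = blur_diameter(5.0, 2.0, 0.05, 0.025)
s = distance_from_blur(c, 2.0, 0.05, 0.025, far_side=True)
```

The two-branch ambiguity is one reason the paper aggregates blur analysis with stereo vision, which disambiguates the side of the focal plane.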
NASA Technical Reports Server (NTRS)
Periaux, J.
1979-01-01
The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed-point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^{-1}-type Sobolev space.
Square-core bundles for astronomical imaging
NASA Astrophysics Data System (ADS)
Bryant, Julia J.; Bland-Hawthorn, Joss
2012-09-01
Optical fibre imaging bundles (hexabundles) are proving to be the next logical step for large galaxy surveys, as they offer spatially-resolved spectroscopy of galaxies and can be used with conventional fibre positioners. Hexabundles have been effectively demonstrated in the Sydney-AAO Multi-object IFS (SAMI) instrument at the Anglo-Australian Telescope [5]. Based on the success of hexabundles that have circular cores, we have characterised a bundle made instead from square-core fibres. Square cores naturally pack more evenly, which reduces the interstitial holes and can increase the covering, or filling, fraction. Furthermore, the regular packing simplifies the process of combining and dithering the final images. We discuss the relative issues of filling fraction, focal ratio degradation (FRD), and cross-talk, and find that square-core bundles perform well enough to warrant further development as a format for imaging fibre bundles.
Integral Equations and Scattering Solutions for a Square-Well Potential.
ERIC Educational Resources Information Center
Bagchi, B.; Seyler, R. G.
1979-01-01
Derives Green's functions and integral equations for scattering solutions subject to a variety of boundary conditions. Exact solutions are obtained for the case of a finite spherical square-well potential, and properties of these solutions are discussed. (Author/HM)
Dynamic Polymorphic Reconfiguration to Effectively Cloak a Circuit’s Function
2011-03-24
Hubble Space Telescope: Faint object camera instrument handbook. Version 2.0
NASA Technical Reports Server (NTRS)
Paresce, Francesco (Editor)
1990-01-01
The Faint Object Camera (FOC) is a long focal ratio, photon counting device designed to take high resolution two dimensional images of areas of the sky up to 44 by 44 arcseconds squared in size, with pixel dimensions as small as 0.0007 by 0.0007 arcseconds squared in the 1150 to 6500 A wavelength range. The basic aim of the handbook is to make relevant information about the FOC available to a wide range of astronomers, many of whom may wish to apply for HST observing time. The FOC, as presently configured, is briefly described, and some basic performance parameters are summarized. Also included are detailed performance parameters and instructions on how to derive approximate FOC exposure times for the proposed targets.
MHD natural convection in open inclined square cavity with a heated circular cylinder
NASA Astrophysics Data System (ADS)
Hosain, Sheikh Anwar; Alim, M. A.; Saha, Satrajit Kumar
2017-06-01
MHD natural convection in an open cavity is very important in many scientific and engineering problems because of its applications in the design of electronic devices, solar thermal receivers, uncovered flat-plate solar collectors having rows of vertical strips, geothermal reservoirs, etc. Several experimental and numerical investigations of natural convection in open cavities have been presented over the past two decades. Here, MHD natural convection and fluid flow in a two-dimensional open inclined square cavity with a heated circular cylinder is considered. The wall opposite the opening side of the cavity is kept at constant heat flux q, while the fluid interacting with the aperture is maintained at an ambient temperature T∞. The top and bottom walls are kept at low and high temperature, respectively, and fluids with different Prandtl numbers are considered. The properties of the fluid are assumed to be constant. As a result, a buoyancy force is created inside the cavity due to the temperature difference, and natural convection forms inside the cavity. A Computational Fluid Dynamics (CFD) code is used to discretize the solution domain and present the numerical results in graphical form; triangular meshes are used to obtain the solution of the problem. The streamlines and isotherms are produced, and the heat transfer parameter Nu is obtained. The results are presented in graphical as well as tabular form. They show that the heat flux decreases with increasing inclination of the cavity, and that the heat flux is an increasing function of Prandtl number Pr and a decreasing function of Hartmann number Ha. It is observed that fluid moves counterclockwise around the cylinder in the cavity, and various recirculations form around the cylinder. Almost all isotherm lines are concentrated at the lower right corner of the cavity.
The object of this work is to develop a mathematical model of the MHD natural convection flow around a heated circular cylinder at the centre of an inclined open square cavity.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
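The workhorse of inexact augmented Lagrange multiplier solvers for nuclear norm problems is the singular value thresholding (SVT) step, sketched below. This shows only that one proximal building block under an assumed threshold, not the full constrained LRR algorithm of the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, tau * ||.||_*, applied at M. Shrinks every singular value by
    tau and discards those that fall below zero, reducing the rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding a matrix with singular values (3, 0.5) at tau = 1
# removes the weak direction entirely and leaves a rank-1 matrix.
M = np.diag([3.0, 0.5])
L = svt(M, 1.0)
```

Within an inexact ALM loop, this step alternates with updates of the sparse/error term and of the Lagrange multipliers until the constraint residual is small.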
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier that has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, little work has examined the classification performance of k-NN under different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowsky, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, using the cosine and Euclidean (and Minkowsky) distance functions performs the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For medical domain datasets including categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
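A k-NN classifier with a pluggable distance function is short enough to sketch directly; the Chi square distance below uses the common form sum_i (a_i - b_i)^2 / (a_i + b_i) for nonnegative features, which is an assumption about the paper's exact definition, and the tiny dataset is hypothetical.

```python
import numpy as np

def chi_square_distance(a, b, eps=1e-12):
    """Chi-square distance between nonnegative feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum((a - b) ** 2 / (a + b + eps)))

def knn_predict(X_train, y_train, x, k=3, dist=chi_square_distance):
    """Plain k-NN: majority vote among the k nearest training points
    under the supplied distance function."""
    d = [dist(row, x) for row in X_train]
    nearest = np.argsort(d)[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical toy data: swapping `dist` for a Euclidean or cosine
# function changes the neighbourhood, which is the paper's comparison.
X = [[1, 0, 0], [2, 1, 0], [0, 1, 2], [0, 0, 3]]
y = ["a", "a", "b", "b"]
pred = knn_predict(X, y, [0, 1, 1], k=3)
```

Because the distance enters only through the neighbour ranking, comparing metrics requires no other change to the classifier, which makes this kind of study cheap to run.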
Effects of prosodically-modulated sub-phonetic variation on lexical competition
Salverda, Anne Pier; Dahan, Delphine; Tanenhaus, Michael K.; Crosswhite, Katherine; Masharov, Mikhail; McDonough, Joyce
2007-01-01
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically-conditioned phonetic variation. PMID:17141751
[Application of genetic algorithm in blending technology for extractions of Cortex Fraxini].
Yang, Ming; Zhou, Yinmin; Chen, Jialei; Yu, Minying; Shi, Xiufeng; Gu, Xijun
2009-10-01
To explore the feasibility of a genetic algorithm (GA) for multi-objective blending technology for extracts of Cortex Fraxini. With the optimization objective defined as a combination of fingerprint similarity and the root-mean-square error of multiple key constituents, a new multi-objective optimization model for 10 batches of Cortex Fraxini extracts was built, and the blending coefficients were obtained by the genetic algorithm. The quality of the 10 batches of Cortex Fraxini extracts after blending was evaluated with fingerprint similarity and root-mean-square error as indexes. The quality after blending was clearly improved: compared with the fingerprint of the control sample, the similarity increased while the degree of variation decreased, and the relative deviation of the key constituents was less than 10%. This shows that the genetic algorithm works well for multi-objective blending of Cortex Fraxini extracts. The method can serve as a reference for controlling the quality of Cortex Fraxini extracts, and genetic algorithms are advisable for blending extracts of Chinese medicines.
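The blending optimization can be sketched with a toy GA in which a chromosome is a vector of normalized batch proportions and the fitness is the root-mean-square error of the blended constituent levels against a target. This hypothetical sketch optimizes only the RMSE term; the paper's objective additionally includes fingerprint similarity, and the two batches and target below are invented.

```python
import random

def ga_blend(batches, target, pop=40, gens=200, seed=1):
    """Toy genetic algorithm for batch blending (illustrative only)."""
    rng = random.Random(seed)
    n = len(batches)

    def rmse(w):
        s = sum(w)
        w = [x / s for x in w]             # normalize proportions to 1
        m = len(target)
        blend = [sum(w[i] * batches[i][j] for i in range(n)) for j in range(m)]
        return (sum((b - t) ** 2 for b, t in zip(blend, target)) / m) ** 0.5

    popn = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=rmse)
        elite = popn[: pop // 4]           # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            p, q = rng.sample(elite, 2)
            cut = rng.randrange(1, n) if n > 1 else 0   # one-point crossover
            child = p[:cut] + q[cut:]
            i = rng.randrange(n)                        # multiplicative mutation
            child[i] = max(1e-9, child[i] * (1 + rng.uniform(-0.3, 0.3)))
            children.append(child)
        popn = elite + children
    best = min(popn, key=rmse)
    s = sum(best)
    return [x / s for x in best], rmse(best)

# Two hypothetical batches whose 50:50 mix exactly matches the target.
batches = [[1.0, 2.0], [3.0, 4.0]]
target = [2.0, 3.0]
w, err = ga_blend(batches, target)
```

Extending the fitness with a fingerprint-similarity term, as in the paper, only changes `rmse` into a weighted combination of the two criteria.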
NASA Astrophysics Data System (ADS)
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not hold. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and, therefore, the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving the nonlinear least squares problems involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves significant reduction in computing time.
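The core trick of variable projection can be sketched in a few lines: because the model is linear in the coefficients once the nonlinear parameters are fixed, those coefficients can be solved by ordinary least squares inside the residual function, leaving the outer optimizer to search only over the nonlinear parameters. This is a generic variable projection sketch on a synthetic two-exponential model, not the paper's matrix-decomposition formulation:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 200)
# Ground truth: y = 2*exp(-1.0*t) + 0.5*exp(-3.0*t) plus a little noise.
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)

def residual(alpha):
    # Basis matrix depends only on the nonlinear parameters (decay rates).
    Phi = np.exp(-np.outer(t, alpha))
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # project out linear params
    return Phi @ c - y

fit = least_squares(residual, x0=[0.5, 5.0], bounds=(1e-3, 10.0))
alpha = np.sort(fit.x)
print(alpha)   # recovered decay rates
```

The outer search is two-dimensional instead of four-dimensional, which is exactly the dimensionality reduction the variable projection functional provides.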
Improvements on a non-invasive, parameter-free approach to inverse form finding
NASA Astrophysics Data System (ADS)
Landkammer, P.; Caspari, M.; Steinmann, P.
2017-08-01
Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2 )-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.
Evaluation of Measurement Tools for Tobacco Product Displays: Is there an App for that?
Combs, Todd B.; Moreland-Russell, Sarah; Roche, Jason
2015-01-01
Tobacco product displays are a pervasive presence in convenience stores, supermarkets, pharmacies, and other retailers nationwide. The influence that tobacco product displays have on purchases and tobacco product initiation, particularly on young people and other vulnerable populations, is well known. An objective measurement tool that is valid, reliable, and feasible to use is needed to assess product displays in the retail setting. This study reports on the relative accuracy of various tools that measure area and/or distance in photos and thus could be applied to product displays. We compare results of repeated trials using five tools. Three tools are smartphone apps that measure objects in photos taken on the device; these were narrowed down from a list of 284 candidate apps. Another tool uses photos taken with any device and calculates relative area via a built-in function in the Microsoft Office Suite. The fifth uses photos taken with the Narrative Clip, a “life-logging” wearable camera. To evaluate validity and reliability, we assess each instrument's measurements and calculate intra-class correlation coefficients. Mean differences between observed measurements (via tape measure) and those from the five tools range from just over one square foot to just over two square feet. Most instruments produce reliable estimates, though some are sensitive to the size of the display. Results of this study indicate a need for future research to test innovative measurement tools. This paper also solicits further discussion on how best to transform anecdotal knowledge of product displays as targeted and disproportionate marketing tactics into a scientific evidence base for public policy change. PMID:29188220
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction model which has tensor input and output. The objective of this study is to build a prediction model of the relationship between sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that on the premise of predicting two dimensional variables, HOPLS had the same predictive ability and a lower dispersion degree compared with partial least squares (PLS).
Detail view looking down at mosaics of everyday objects next ...
Detail view looking down at mosaics of everyday objects next to Living Trailer (rear steps seen frame left). "They Last" tile in center surrounded by tiles, irons, glasses, toy guns, license plates, bottle caps, and plastic parts. The mosaic was created in sections, as squares and linear strips; because cement was mixed and objects were collected over time, the edges of these sections and the variation of objects are noticeable. View looking north. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA
NASA Astrophysics Data System (ADS)
Smith, Eric Ryan; Farrow, Darcie A.; Jonas, David M.
2005-07-01
Four-wave-mixing nonlinear-response functions are given for intermolecular and intramolecular vibrations of a perpendicular dimer and intramolecular vibrations of a square-symmetric molecule containing a doubly degenerate state. A two-dimensional particle-in-a-box model is used to approximate the electronic wave functions and obtain harmonic potentials for nuclear motion. Vibronic interactions due to symmetry-lowering distortions along Jahn-Teller active normal modes are discussed. Electronic dephasing due to nuclear motion along both symmetric and asymmetric normal modes is included in these response functions, but population transfer between states is not. As an illustration, these response functions are used to predict the pump-probe polarization anisotropy in the limit of impulsive excitation.
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signals can be used for the diagnosis of hearing loss, so they have an important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with the weighting matrices applied in a local sense, to obtain a smaller estimated variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated based on the least squares estimation principle using the local weighting matrices. The simulation results showed that the estimated variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.
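As a generic illustration of why weighting the least-squares loss helps, the sketch below estimates the amplitudes of a known-frequency component from noisy data whose noise level varies over the record; the weights are the inverse noise variances. This is a simplified stand-in, not the paper's LWLSE algorithm (synthetic signal, single fixed frequency):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
t = np.arange(n) / n
f = 25.0                                       # known stimulus frequency
x = 0.8 * np.cos(2 * np.pi * f * t) + 0.3 * np.sin(2 * np.pi * f * t)

# Heteroscedastic noise: the second half of the record is noisier.
sigma = np.where(t < 0.5, 0.05, 0.5)
y = x + sigma * rng.standard_normal(n)

A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
W = 1.0 / sigma**2                             # weights = inverse noise variance

# Weighted normal equations: (A^T W A) beta = A^T W y
beta = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * y))
print(beta)   # estimated cosine/sine amplitudes
```

Down-weighting the noisy half of the record lowers the variance of the amplitude estimates relative to an unweighted fit, which is the same motivation behind the locally weighted scheme in the paper.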
Richards, Selena; Miller, Robert; Gemperline, Paul
2008-02-01
An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) were applied through the application of a row-wise penalty least squares function. NWAY P-ALS was applied to the multi-batch near-infrared (NIR) data acquired from the base catalyzed esterification reaction of acetic anhydride in order to resolve the concentration and spectral profiles of l-butanol with the reaction constituents. Application of the NWAY P-ALS approach resulted in the reduction of the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank deficiency problems of the measurement matrix. The results were compared with the multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantages had been gained through using the weighted least squares function of NWAY P-ALS over the MCR-ALS resolution.
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
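The iterative strategy can be caricatured on a toy 1D problem: a PWLS objective with a quadratic roughness penalty leads to a symmetric positive-definite linear system that a conjugate-gradient solver handles directly. The sketch below uses an identity system matrix and synthetic data, so it only illustrates the linear-algebra structure, not the authors' sinogram model:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

rng = np.random.default_rng(3)
n = 200
truth = np.sin(np.linspace(0, 3 * np.pi, n))
y = truth + 0.2 * rng.standard_normal(n)       # noisy measurements
w = np.full(n, 25.0)                           # statistical weights (1/variance)

# Quadratic roughness penalty R = D^T D built from second differences.
D = diags([1.0, -2.0, 1.0], offsets=[0, 1, 2], shape=(n - 2, n))
beta = 50.0
H = diags(w) + beta * (D.T @ D)                # SPD system matrix of the PWLS problem
b = w * y

x, info = cg(H, b, maxiter=1000)               # conjugate-gradient solve
rmse = np.sqrt(np.mean((x - truth) ** 2))
print(info, rmse)
```

Because the quadratic objective yields a fixed SPD matrix, sparsity and preconditioning carry over directly, which is the computational appeal of PWLS over the penalized-likelihood objective discussed in the abstract.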
Code of Federal Regulations, 2014 CFR
2014-10-01
... excess of double the square footage of the original facility and all physical improvements. Constructing... square footage of the original facility and all physical improvements. Department means the Department of...) Results in substantial functional limitation in 3 or more of the following major life activities: (1) Self...
Code of Federal Regulations, 2013 CFR
2013-10-01
... excess of double the square footage of the original facility and all physical improvements. Constructing... square footage of the original facility and all physical improvements. Department means the Department of...) Results in substantial functional limitation in 3 or more of the following major life activities: (1) Self...
Code of Federal Regulations, 2012 CFR
2012-10-01
... excess of double the square footage of the original facility and all physical improvements. Constructing... square footage of the original facility and all physical improvements. Department means the Department of...) Results in substantial functional limitation in 3 or more of the following major life activities: (1) Self...
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
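A minimal version of the filtering idea, using NumPy's Legendre module on a synthetic noisy exponential: project the trace onto a low-order Legendre basis and evaluate the truncated expansion, which removes much of the noise without a phase shift. The degree and noise level here are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)
t = np.linspace(-1, 1, 500)                 # time axis mapped to the Legendre domain
signal = np.exp(-2.0 * (t + 1.0))           # noiseless single exponential
noisy = signal + 0.05 * rng.standard_normal(t.size)

coeffs = legendre.legfit(t, noisy, deg=8)   # least-squares fit in Legendre space
smoothed = legendre.legval(t, coeffs)       # truncated expansion = filtered trace

print(np.std(noisy - signal), np.std(smoothed - signal))
```

Because a smooth exponential is captured by a handful of Legendre coefficients while white noise spreads over all of them, truncation discards mostly noise.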
NASA Astrophysics Data System (ADS)
Sein, Lawrence T.
2011-08-01
Hammett parameters σ' were determined from vertical ionization potentials, vertical electron affinities, adiabatic ionization potentials, adiabatic electron affinities, HOMO, and LUMO energies of a series of N,N'-bis(3',4'-substituted-phenyl)-1,4-quinonediimines computed at the B3LYP/6-311+G(2d,p) level on B3LYP/6-31G* molecular geometries. These parameters were then least-squares fit as a function of literature Hammett parameters. For N,N'-bis(4'-substituted-phenyl)-1,4-quinonediimines, the least-squares fits demonstrated excellent linearity, with the square of Pearson's correlation coefficient (r2) greater than 0.98 for all isomers. For N,N'-bis(3'-substituted-3'-aminophenyl)-1,4-quinonediimines, the least-squares fits were less nearly linear, with r2 approximately 0.70 for all isomers when derived from calculated vertical ionization potentials, but those from calculated vertical electron affinities usually greater than 0.90.
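The regression step reported here is ordinary least squares with Pearson's r2. A hypothetical numerical sketch (the sigma values and the linear relation below are made up for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical data: a computed descriptor (e.g., a vertical ionization potential)
# assumed to vary linearly with literature Hammett sigma values.
sigma = np.array([-0.27, -0.17, 0.00, 0.06, 0.23, 0.45, 0.54, 0.78])
rng = np.random.default_rng(7)
descriptor = 2.0 * sigma + 5.1 + 0.02 * rng.standard_normal(sigma.size)

slope, intercept = np.polyfit(sigma, descriptor, 1)       # least-squares line
r2 = np.corrcoef(sigma, descriptor)[0, 1] ** 2            # square of Pearson's r
print(round(slope, 2), round(intercept, 2), round(r2, 3))
```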
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
Hessian-based norm regularization for image restoration with biomedical applications.
Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael
2012-03-01
We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
Growth curves for ostriches (Struthio camelus) in a Brazilian population.
Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P
2013-01-01
The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R(2) and the Akaike information criterion. The R(2) calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels and for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R(2) of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
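A hedged sketch of such a fit: generate synthetic weight-age data from a Gompertz curve and fit both sigmoids by nonlinear least squares (via SciPy's curve_fit, which uses a Gauss-Newton-type algorithm), scoring each with R2. The parameter values are invented for illustration, not the Brazilian population's estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    return a * np.exp(-b * np.exp(-k * t))   # a: asymptote, b: shape, k: rate

def logistic(t, a, b, k):
    return a / (1.0 + b * np.exp(-k * t))

rng = np.random.default_rng(5)
t = np.linspace(0, 383, 60)                  # days of age, hatch to 383 d
w = gompertz(t, 100.0, 4.0, 0.015) + 2.0 * rng.standard_normal(t.size)

for model, p0 in [(gompertz, [90, 3, 0.01]), (logistic, [90, 30, 0.02])]:
    p, _ = curve_fit(model, t, w, p0=p0, maxfev=10000)
    resid = w - model(t, *p)
    r2 = 1.0 - np.sum(resid**2) / np.sum((w - w.mean())**2)
    print(model.__name__, p.round(3), round(r2, 3))
```

Comparing R2 (or, better, an information criterion such as AIC, as the study does) across the candidate curves is what identifies the best-fitting growth model.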
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results comparable in quality to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
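One 1D subsystem of such a smoother can be sketched directly: an edge-aware weighted-least-squares objective yields a tridiagonal system solvable in linear time with a banded solver. The weighting function and parameters below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.linalg import solve_banded

def wls_smooth_1d(f, guidance, lam=20.0, sigma=0.1):
    """One 1D pass of an edge-preserving weighted-least-squares smoother.

    Solves (I + lam * L_w) u = f, where L_w is a 1D Laplacian whose link
    weights shrink across large jumps in the guidance signal.
    """
    n = f.size
    w = np.exp(-np.abs(np.diff(guidance)) / sigma)   # n-1 link weights
    upper = np.r_[0.0, -lam * w]                     # super-diagonal
    diag = 1.0 + lam * (np.r_[w, 0.0] + np.r_[0.0, w])
    lower = np.r_[-lam * w, 0.0]                     # sub-diagonal
    ab = np.vstack([upper, diag, lower])             # banded storage for (1, 1)
    return solve_banded((1, 1), ab, f)

rng = np.random.default_rng(6)
step = np.r_[np.zeros(100), np.ones(100)]            # signal with a sharp edge
noisy = step + 0.1 * rng.standard_normal(200)
u = wls_smooth_1d(noisy, guidance=step)
print(abs(u[:100].mean()), abs(u[100:].mean() - 1.0))
```

The near-zero link weight at the step keeps the edge sharp while both flat segments are smoothed, which is the behavior each 1D pass of the separable scheme provides.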
Total variation-based method for radar coincidence imaging with model mismatch for extended target
NASA Astrophysics Data System (ADS)
Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang
2017-11-01
Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as preferred; unfortunately, such precision is almost impossible due to the existence of model mismatch in practical applications. Although some conventional sparse recovery algorithms are proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore sought to derive the signal model of RCI with model mismatch by replacing the sparsity constraint item with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, due to the ability of TV regularization to recover sparse signal or image with sparse gradient, TV-TLS method is also applicable to sparse recovering. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm can achieve preferred imaging performance both in suppressing noise and in adapting to model mismatch.
The Taurus Spitzer Legacy Project
NASA Astrophysics Data System (ADS)
McCabe, Caer-Eve; Padgett, D. L.; Rebull, L.; Noriega-Crespo, A.; Carey, S.; Brooke, T.; Stapelfeldt, K. R.; Fukagawa, M.; Hines, D.; Terebey, S.; Huard, T.; Hillenbrand, L.; Guedel, M.; Audard, M.; Monin, J.; Guieu, S.; Knapp, G.; Evans, N. J., III; Menard, F.; Harvey, P.; Allen, L.; Wolf, S.; Skinner, S.; Strom, S.; Glauser, A.; Saavedra, C.; Koerner, D.; Myers, P.; Shupe, D.; Latter, W.; Grosso, N.; Heyer, M.; Dougados, C.; Bouvier, J.
2009-01-01
Without massive stars and dense stellar clusters, Taurus plays host to a distributed mode of low-mass star formation particularly amenable to observational and theoretical study. In 2005-2007, our team mapped the central 43 square degrees of the main Taurus clouds at wavelengths from 3.6 - 160 microns with the IRAC and MIPS cameras on the Spitzer Space Telescope. Together, these images form the largest contiguous Spitzer map of a single star-forming region (and any region outside the galactic plane). Our Legacy team has generated re-reduced mosaic images and source catalogs, available to the community via the Spitzer Science Center website http://ssc.spitzer.caltech.edu/legacy/all.html . This Spitzer survey is a central and crucial part of a multiwavelength study of the Taurus cloud complex that we have performed using XMM, CFHT, and the SDSS. The seven photometry data points from Spitzer allow us to characterize the circumstellar environment of each object, and, in conjunction with optical and NIR photometry, construct a complete luminosity function for the cloud members that will place constraints on the initial mass function. We present results drawing upon our catalog of several hundred thousand IRAC and thousands of MIPS sources. Initial results from our study of the Taurus clouds include new disks around brown dwarfs, new low luminosity YSO candidates, and new Herbig-Haro objects.
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
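The unequal weighting of negative and positive residuals can be sketched as a one-sided penalty: residuals on the violating side of a limit get a large weight and the rest almost none, which approximates an inequality constraint inside a sum-of-squares objective. The weight values below are arbitrary illustrative choices, not MODOPTIM's:

```python
import numpy as np

def one_sided_sse(simulated, limit, w_over=100.0, w_under=0.01):
    """Sum of squares with asymmetric weights, approximating 'stay below limit'."""
    r = np.asarray(simulated, dtype=float) - limit
    w = np.where(r > 0.0, w_over, w_under)   # heavy penalty only for exceedances
    return float(np.sum(w * r**2))

# A simulation that stays under the limit costs far less than one exceeding it.
print(one_sided_sse([9.0, 9.5], limit=10.0))
print(one_sided_sse([10.5, 11.0], limit=10.0))
```

An optimizer minimizing this objective is thus steered away from solutions that violate, say, a maximum chloride concentration, while remaining nearly indifferent below the limit.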
Object-based target templates guide attention during visual search.
Berggren, Nick; Eimer, Martin
2018-05-03
During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Shang, Jianyuan; Geva, Eitan
2007-04-26
The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.
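The central relation in this abstract invites a quick numerical illustration. The sketch below (Python/NumPy; the two-state switching probability and lifetime values are purely hypothetical, not taken from the paper) simulates a lifetime trajectory and forms the ratio of autocorrelation functions that, per the result described above, gives the delay-time autocorrelation function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state model: the fluorescence lifetime hops between two values
# (hypothetical numbers) with a small switching probability per time step.
tau_states = np.array([1.0, 3.0])   # lifetimes in ns, illustrative only
p_switch = 0.01
n_steps = 200_000

state = np.cumsum(rng.random(n_steps) < p_switch) % 2
tau = tau_states[state]

def acf(x, max_lag):
    """Normalized autocorrelation <x(0)x(T)>/<x>^2 for lags 0..max_lag-1."""
    m = x.mean()
    return np.array([(x[:x.size - lag] * x[lag:]).mean()
                     for lag in range(max_lag)]) / m**2

max_lag = 500
c_tau = acf(tau, max_lag)        # autocorrelation of the lifetime
c_tau2 = acf(tau**2, max_lag)    # autocorrelation of the squared lifetime

# The paper's stated result: the delay-time autocorrelation function equals
# the squared-lifetime autocorrelation divided by the lifetime autocorrelation.
c_delay = c_tau2 / c_tau
```

At zero lag the ratio exceeds 1, and it decays toward 1 on the time scale of the conformational switching, which is how the delay-time statistics report on conformational dynamics.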
Contragenic functions on spheroidal domains
NASA Astrophysics Data System (ADS)
García-Ancona, Raybel; Morais, Joao; Porter, R. Michael
2018-05-01
We construct bases of polynomials for the spaces of square-integrable harmonic functions which are orthogonal to the monogenic and antimonogenic $\mathbb{R}^3$-valued functions defined in a prolate or oblate spheroid.
Kable, J. A.; Coles, C. D.; Keen, C. L.; Uriu-Adams, J. Y.; Jones, K. L.; Yevtushok, L.; Kulikovsky, Y.; Wertelecki, W.; Pedersen, T. L.; Chambers, C. D.
2015-01-01
Objectives The potential of micronutrients to ameliorate the impact of prenatal alcohol exposure was explored in a clinical trial conducted in Ukraine. Cardiac orienting responses during a habituation/dishabituation learning paradigm were obtained from 6–12-month-olds to assess neurophysiological encoding and memory of environmental events. Materials and methods Women who differed in prenatal alcohol use were recruited during pregnancy and assigned to a group (no study-provided supplements, multivitamin/mineral supplement, or multivitamin/mineral supplement plus choline supplement). An infant habituation/dishabituation paradigm was used to assess outcomes in the offspring. Ten trials were used for the habituation and five for the dishabituation condition. Heart rate was collected for 30 sec prior to stimulus onset and then 12 sec post-stimulus onset. Difference values (ΔHR) were computed for the first three trials of each condition and aggregated for analysis. Gestational blood samples were collected to assess maternal nutritional status and changes as a function of the intervention. Results Choline supplementation resulted in a greater ΔHR on the visual habituation (Wald Chi-Square (1, 149) = 10.9, p < .001, eta-squared = .043) trials for all infants and for the infants with no prenatal alcohol exposure on the dishabituation (Wald Chi-Square (1, 139) = 6.1, p < .013, eta-squared = .065) trials. The latency of the response was reduced in both conditions (Habituation: Wald Chi-Square (1, 150) = 9.0, p < .003, eta-squared = .056; Dishabituation: Wald Chi-Square (1, 137) = 4.9, p < .027, eta-squared = .032) for all infants whose mothers received choline supplementation. Change in gestational choline level was positively related (r = .19) to ΔHR during habituation trials, and levels of one choline metabolite, dimethylglycine (DMG), predicted ΔHR during habituation trials (r = .23) and latency of responses (r = −.20). 
A trend was found between DMG and ΔHR on the dishabituation trials (r = .19) and latency of the response (r = −.18). Multivitamin/mineral or multivitamin/mineral plus choline supplementation did not significantly affect cardiac orienting responses to the auditory stimuli. Conclusion Choline supplementation when administered together with routinely recommended multivitamin/mineral prenatal supplements during pregnancy may provide a beneficial impact to basic learning mechanisms involved in encoding and memory of environmental events in alcohol-exposed pregnancies as well as non- or low alcohol-exposed pregnancies. Changes in nutrient status of the mother suggested that this process may be mediated by the breakdown of choline to betaine and then to DMG. One mechanism by which choline supplementation may positively affect brain development is through prevention of fetal alcohol-related depletion of DMG, a metabolic nutrient that can protect against overproduction of glycine, during critical periods of neurogenesis. PMID:26493109
QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization
Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo
2011-01-01
Background The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities” and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities that correlates strongly with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes.
Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics. PMID:21267077
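The QAP objective at the heart of QAPgrid can be sketched on a toy instance. The snippet below is only a minimal illustration, not the paper's implementation: the similarity matrix, the grid-distance choice, and the exhaustive search (standing in for the Memetic Algorithm on this tiny instance) are all hypothetical simplifications.

```python
import itertools
import numpy as np

# Hypothetical toy instance: 4 objects with a pairwise similarity matrix,
# to be placed on a 2x2 grid. Objects 0-1 and 2-3 are highly similar.
similarity = np.array([
    [0.0, 0.9, 0.1, 0.1],
    [0.9, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 0.8],
    [0.1, 0.1, 0.8, 0.0],
])
grid = [(0, 0), (0, 1), (1, 0), (1, 1)]   # positions on the square grid
dist = np.array([[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in grid]
                 for a in grid])          # Manhattan distance between positions

def qap_cost(perm):
    """QAP objective: placing similar objects far apart costs more, so
    minimizing this draws similar objects together on the grid."""
    return sum(similarity[i, j] * dist[perm[i], perm[j]]
               for i in range(4) for j in range(4))

# Exhaustive search over all assignments stands in for the Memetic Algorithm.
best = min(itertools.permutations(range(4)), key=qap_cost)
```

The optimal layouts are exactly those that keep the two high-similarity pairs on adjacent grid cells, which is the "similar objects end up close" behavior the abstract describes.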
Direct measurement of the resistivity weighting function
NASA Astrophysics Data System (ADS)
Koon, D. W.; Chan, Winston K.
1998-12-01
We have directly measured the resistivity weighting function—the sensitivity of a four-wire resistance measurement to local variations in resistivity—for a square specimen of photoconducting material. This was achieved by optically perturbing the local resistivity of the specimen while measuring the effect of this perturbation on its four-wire resistance. The weighting function we measure for a square geometry with electrical leads at its corners agrees well with calculated results, displaying two symmetric regions of negative weighting which disappear when van der Pauw averaging is performed.
Response functions for sine- and square-wave modulations of disparity.
NASA Technical Reports Server (NTRS)
Richards, W.
1972-01-01
Depth sensations cannot be elicited by modulations of disparity that are more rapid than about 6 Hz, regardless of the modulation amplitude. Vergence tracking also fails at similar modulation rates, suggesting that this portion of the oculomotor system is limited by the behavior of disparity detectors. For sinusoidal modulations of disparity between 1/2 and 2 deg, most depth-response functions exhibit a low-frequency decrease that is not observed with square-wave modulations.
VizieR Online Data Catalog: VEGAS-SSS photometry of NGC3115 (Cantiello+, 2015)
NASA Astrophysics Data System (ADS)
Cantiello, M.; Capaccioli, M.; Napolitano, N.; Grado, A.; Limatola, L.; Paolillo, M.; Iodice, E.; Romanowsky, A. J.; Forbes, D. A.; Raimondo, G.; Spavone, M.; La Barbera, F.; Puzia, T. H.; Schipani, P.
2015-03-01
We present g and i band photometry for ~47000 extended and point-like objects in the ~0.8 square degree area centred on NGC3115. For ~30000 objects in the catalogue, structural parameters are also available. For each object we provide equatorial coordinates, galactocentric distance from the photometric center of NGC3115, magnitudes in the g and i bands (SDSS calibrated), colour, local extinction, and structural parameters. (1 data file).
Corner-cutting mining assembly
Bradley, J.A.
1981-07-01
This invention resulted from a contract with the United States Department of Energy and relates to a mining tool. More particularly, the invention relates to an assembly capable of drilling a hole having a square cross-sectional shape with radiused corners. In mining operations in which conventional auger-type drills are used to form a series of parallel, cylindrical holes in a coal seam, a large amount of coal remains in place in the seam because the shape of the holes leaves thick webs between the holes. A higher percentage of coal can be mined from a seam by a means capable of drilling holes having a substantially square cross section. It is an object of this invention to provide an improved mining apparatus by means of which the amount of coal recovered from a seam deposit can be increased. Another object of the invention is to provide a drilling assembly which cuts corners in a hole having a circular cross section. These objects and other advantages are attained by a preferred embodiment of the invention.
A hybrid framework for quantifying the influence of data in hydrological model calibration
NASA Astrophysics Data System (ADS)
Wright, David P.; Thyer, Mark; Westra, Seth; McInerney, David
2018-06-01
Influence diagnostics aim to identify a small number of influential data points that have a disproportionate impact on the model parameters and/or predictions. The key issues with current influence diagnostic techniques are that the regression-theory approaches do not provide hydrologically relevant influence metrics, while the case-deletion approaches are computationally expensive to calculate. The main objective of this study is to introduce a new two-stage hybrid framework that overcomes these challenges, by delivering hydrologically relevant influence metrics in a computationally efficient manner. Stage one uses computationally efficient regression-theory influence diagnostics to identify the most influential points based on Cook's distance. Stage two then uses case-deletion influence diagnostics to quantify the influence of points using hydrologically relevant metrics. To illustrate the application of the hybrid framework, we conducted three experiments on 11 hydro-climatologically diverse Australian catchments using the GR4J hydrological model. The first experiment investigated how many data points from stage one need to be retained in order to reliably identify those points that have the highest influence on hydrologically relevant metrics. We found that a choice of 30-50 is suitable for hydrological applications similar to those explored in this study (30 points identified the most influential data 98% of the time and reduced the required recalibrations by 99% for a 10 year calibration period). The second experiment found little evidence of a change in the magnitude of influence with increasing calibration period length from 1, 2, 5 to 10 years. Even for 10 years the impact of influential points can still be high (>30% influence on maximum predicted flows). The third experiment compared the standard least squares (SLS) objective function with the weighted least squares (WLS) objective function on a 10 year calibration period.
In two out of three flow metrics there was evidence that SLS, with the assumption of homoscedastic residual error, identified data points with higher influence (largest changes of 40%, 10%, and 44% for the maximum, mean, and low flows, respectively) than WLS, with the assumption of heteroscedastic residual errors (largest changes of 26%, 6%, and 6% for the maximum, mean, and low flows, respectively). The hybrid framework complements existing model diagnostic tools and can be applied to a wide range of hydrological modelling scenarios.
Yang, X; Le, D; Zhang, Y L; Liang, L Z; Yang, G; Hu, W J
2016-10-18
To explore a crown form classification method for the upper central incisor which is more objective and scientific than the traditional classification method, based on a standardized photography technique, and to analyze the relationship between the crown form of upper central incisors and papilla filling in periodontally healthy Chinese Han-nationality youth. In the study, 180 periodontally healthy Chinese youth (75 males and 105 females) aged 20-30 (24.3±4.5) years were included. With the standardized upper central incisor photography technique, pictures of 360 upper central incisors were obtained. Each tooth was classified as triangular, ovoid or square independently by 13 experienced specialists in prosthodontics, and the final classification result was decided by the majority of evaluators in order to ensure objectivity. The standardized digital photo was also used to evaluate the gingival papilla filling situation. The papilla filling result was recorded as present or absent according to naked eye observation. The papilla filling rates of different crown forms were analyzed. Statistical analyses were performed with SPSS 19.0. The proportions of triangular, ovoid and square forms of the upper central incisor in Chinese Han-nationality youth were 31.4% (113/360), 37.2% (134/360) and 31.4% (113/360), respectively, and no statistical difference was found between the males and females. The average κ value between each pair of evaluators was 0.381. The average κ value rose to 0.563 when compared with the final classification result. In the study, 24 upper central incisors without contact were excluded, and the papilla filling rates of triangular, ovoid and square crowns were 56.4% (62/110), 69.6% (87/125), and 76.2% (77/101), respectively. The papilla filling rate of the square form was higher (P=0.007). The proportion of clinical crown forms of the upper central incisor in Chinese Han-nationality youth is obtained.
Compared with triangle form, square form is found to favor a gingival papilla that fills the interproximal embrasure space. The consistency of the present classification method for upper central incisor is not satisfying, which indicates that a new classification method, more scientific and objective than the present one, is to be found.
A Space Crisis. Alaska State Museum.
ERIC Educational Resources Information Center
Alaska State Museum, Juneau.
The 24,000 square foot Alaska State Museum is experiencing a space crisis which hinders its ability to effectively meet present demands. The museum's collection has more than tripled from 5,600 objects 17 years ago to 23,000 objects today. Available storage and exhibition space is filled and only 10% of the collection is on exhibit. The reason for…
Eye Movements during Multiple Object Tracking: Where Do Participants Look?
ERIC Educational Resources Information Center
Fehd, Hilda M.; Seiffert, Adriane E.
2008-01-01
Similar to the eye movements you might make when viewing a sports game, this experiment investigated where participants tend to look while keeping track of multiple objects. While eye movements were recorded, participants tracked either 1 or 3 of 8 red dots that moved randomly within a square box on a black background. Results indicated that…
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
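The general shape of a multivariate quadratic quality loss can be sketched briefly. The example below is an assumption-laden illustration: the targets, specification limits, and weights are invented for a hypothetical two-response yarn problem, and the paper's exact loss formulation may differ.

```python
import numpy as np

# Hypothetical two-response example (e.g. yarn strength and hairiness).
# Targets and specification limits are illustrative, not from the paper.
targets = np.array([15.0, 4.0])   # desired values of each quality characteristic
lsl     = np.array([12.0, 2.0])   # lower specification limits
usl     = np.array([18.0, 6.0])   # upper specification limits
weights = np.array([0.6, 0.4])    # relative importance, summing to 1

def quality_loss(y):
    """Weighted quadratic (Taguchi-type) loss, normalized by the spec range
    so that responses measured in different units are comparable."""
    z = (np.asarray(y) - targets) / (usl - lsl)
    return float(np.sum(weights * z**2))
```

A parameter setting whose responses sit on target incurs zero loss; deviations in any response are penalized quadratically, so minimizing the aggregate loss trades the conflicting yarn characteristics off against each other.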
Intrinsic random functions for mitigation of atmospheric effects in terrestrial radar interferometry
NASA Astrophysics Data System (ADS)
Butt, Jemil; Wieser, Andreas; Conzett, Stefan
2017-06-01
The benefits of terrestrial radar interferometry (TRI) for deformation monitoring are restricted by the influence of changing meteorological conditions contaminating the potentially highly precise measurements with spurious deformations. This is especially the case when the measurement setup includes long distances between instrument and objects of interest and the topography affecting atmospheric refraction is complex. These situations are typically encountered in geo-monitoring in mountainous regions, e.g. with glaciers, landslides or volcanoes. We propose and explain an approach for the mitigation of atmospheric influences based on the theory of intrinsic random functions of order k (IRF-k), generalizing existing approaches based on ordinary least squares estimation of trend functions. This class of random functions retains convenient computational properties allowing for rigorous statistical inference while still permitting the modelling of stochastic spatial phenomena which are non-stationary in mean and variance. We explore the correspondence between the properties of the IRF-k and the properties of the measurement process. In an exemplary case study, we find that our method reduces the time needed to obtain reliable estimates of glacial movements from 12 h down to 0.5 h compared to simple temporal averaging procedures.
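The ordinary least-squares trend removal that the IRF-k approach generalizes can be sketched as follows. Everything in this snippet is synthetic and illustrative: the ranges, trend coefficients, deformation magnitude, and the assumed mask of stable areas are not taken from the case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example: interferometric phase delays that grow smoothly with
# range because of the atmosphere, plus a localized true deformation signal.
ranges = np.linspace(500.0, 4000.0, 200)          # metres to each scatterer
atmosphere = 1e-4 * ranges + 2e-8 * ranges**2     # smooth atmospheric trend
deformation = np.where((ranges > 2000) & (ranges < 2200), 0.05, 0.0)
phase = atmosphere + deformation + rng.normal(0, 0.005, ranges.size)

# Baseline approach the paper generalizes: ordinary least-squares fit of a
# low-order polynomial trend over stable areas, subtracted from the phase.
stable = (ranges < 2000) | (ranges > 2200)        # assume deforming area known
coeffs = np.polyfit(ranges[stable], phase[stable], deg=2)
residual = phase - np.polyval(coeffs, ranges)     # deformation plus noise
```

The residual isolates the deformation once the trend is removed; the IRF-k machinery replaces this fixed polynomial-trend assumption with a stochastic model that also handles non-stationary variance.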
Functional Class in Children with Idiopathic Dilated Cardiomyopathy. A pilot Study
Tavares, Aline Cristina; Bocchi, Edimar Alcides; Guimarães, Guilherme Veiga
2016-01-01
Background Idiopathic dilated cardiomyopathy (IDCM) is the most common cardiac cause of pediatric deaths; its mortality descriptors are a low left ventricular ejection fraction (LVEF) and low functional capacity (FC). FC is never self-reported by children. Objective The aims of this study were (i) to evaluate whether functional classifications according to the children, parents and medical staff were associated, and (ii) to evaluate whether there was a correlation between VO2 max and Weber's classification. Method Prepubertal children with IDCM and HF (from previous IDCM, with preserved LVEF) were selected, evaluated and compared. All children were assessed by testing, CPET and functional class classification. Results The chi-square test showed an association between CFm and CFp (1, n = 31) = 20.6; p = 0.002. There was no significant association between CFp and CFc (1, n = 31) = 6.7; p = 0.4. CFm and CFc were not associated either (1, n = 31) = 1.7; p = 0.8. Weber's classification was associated with CFm (1, n = 19) = 11.8; p = 0.003, with CFp (1, n = 19) = 20.4; p = 0.0001, and with CFc (1, n = 19) = 6.4; p = 0.04. Conclusion Drawings were helpful for children's self NYHA classification, which was associated with Weber's stratification. PMID:27168472
Reclamation of Bay wetlands and disposal of dredge spoils: meeting two goals simultaneously
Hostettler, Frances D.; Pereira, Wilfred E.; Kvenvolden, Keith A.; Jones, David R.; Murphy, Fred
1997-01-01
San Francisco Bay is one of the world's largest urbanized estuarine systems, with a watershed that drains about 40 percent of the State of California. Its freshwater and saltwater marshes comprise approximately 125 square kilometers (48 square miles), compared to 2,200 square kilometers (850 square miles) before California began rapid development in 1850. This staggering reduction in tidal wetlands of approximately 95 percent has resulted in significant loss of habitat for many species of fish and wildlife. The need for wetlands is well documented: healthy and adequate wetlands are critical to the proper functioning of an estuarine ecosystem like San Francisco Bay.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. It combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allow for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
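The sequential-accumulation idea can be illustrated with a small sketch. This is not the authors' algorithm, only a generic sequential QR update (NumPy's `qr` uses Householder reflections internally) applied to a hypothetical streaming least-squares problem in place of the modal identification task.

```python
import numpy as np

rng = np.random.default_rng(2)

def qr_update(R, qty, A_new, y_new):
    """Fold a new block of measurement rows into the triangular factor by
    re-triangularizing the stacked system. numpy.linalg.qr performs the
    factorization via Householder reflections."""
    stacked_A = np.vstack([R, A_new])
    stacked_y = np.concatenate([qty, y_new])
    Q, R_next = np.linalg.qr(stacked_A)
    return R_next, Q.T @ stacked_y

# Toy problem: recover parameters x from streaming measurements y = A x + noise.
n_params = 3
x_true = np.array([1.0, -2.0, 0.5])

R = np.zeros((0, n_params))
qty = np.zeros(0)
for _ in range(20):                      # 20 sequential data batches
    A = rng.normal(size=(5, n_params))
    y = A @ x_true + rng.normal(0, 0.01, 5)
    R, qty = qr_update(R, qty, A, y)

x_hat = np.linalg.solve(R, qty[:n_params])
```

Only the small triangular factor and its transformed right-hand side are kept between batches, which is the appeal of sequential accumulation: the full measurement history never needs to be stored.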
Abe, Takumi; Tsuji, Taishi; Kitano, Naruki; Muraki, Toshiaki; Hotta, Kazushi; Okura, Tomohiro
2015-01-01
The purpose of this study was to investigate whether the degree of improvement in cognitive function achieved with an exercise intervention in community-dwelling older Japanese women is affected by the participant's baseline cognitive function and age. Eighty-eight women (mean age: 70.5±4.2 years) participated in a prevention program for long-term care. They completed the Square-Stepping Exercise (SSE) program once a week, 120 minutes/session, for 11 weeks. We assessed participants' cognitive function using 5 cognitive tests (5-Cog) before and after the intervention. We defined cognitive function as the 5-Cog total score and defined the change in cognitive function as the 5-Cog post-score minus the pre-score. We divided participants into four groups based on age (≤69 years or ≥70 years) and baseline cognitive function level (above vs. below the median cognitive function level). We conducted two-way analysis of variance. All 4 groups improved significantly in cognitive function after the intervention. There were no baseline cognitive function level×age interactions and no significant main effects of age, although significant main effects of baseline cognitive function level (P=0.004, η(2)=0.09) were observed. The Square-Stepping Exercise is an effective exercise for improving cognitive function. These results suggest that older adults with cognitive decline are more likely to improve their cognitive function with exercise than those who start the intervention with high cognitive function. Furthermore, during an exercise intervention, baseline cognitive function level may have more of an effect than a participant's age on the degree of cognitive improvement.
Wilson, Robert S.; Hebert, Liesi E.; Scherr, Paul A.; Evans, Denis A.; Mendes de Leon, Carlos F.
2011-01-01
Objectives. Few studies have explicitly tested whether the health disadvantage among older Blacks is consistent across the entire range of education. We examined racial differences in the cross-sectional association of education with physical and cognitive function performance in older adults. Methods. Participants included over 9,500 Blacks and Whites, aged ≥65 years, from the Chicago Health and Aging Project {64% Black, 60% women, mean age = 73.0 (standard deviation [SD] = 6.9), mean education = 12.2 (SD = 3.5)}. Physical function was assessed using 3 physical performance tests, and cognitive function was assessed with 4 performance-based tests; composite measures were created and used in analyses. Results. In multiple regression models that controlled for age, age-squared, sex, and race, and their interactions, Whites and those with higher education (>12 years) performed significantly better on both functional health measures. The association of education with each indicator of functional health was similar in older Blacks and Whites with low levels (≤12 years) of education. However, at higher levels of education, there was a significantly more positive association between years of education and these functional health outcomes among Blacks than Whites. Discussion. Results from this biracial population-based sample in the Midwest suggest that Blacks may enjoy greater returns in functional health for additional education beyond high school. PMID:21402644
Fundamental Properties of the Red Square
NASA Astrophysics Data System (ADS)
Tuthill, Peter; Barnes, Peter; Cohen, Martin; Schmidt, Timothy
2007-04-01
This proposal follows the exciting recent discovery of the Red Square, the first near-sibling to the illustrious Red Rectangle; and also a potential example progenitor system for supernovae such as SN 1987A. Exploiting the unique extremely wide bandwidth correlator available at Mopra, we propose to rapidly and efficiently explore the molecular environment of this unique new object at 3 mm. This should reveal the fundamental properties of the gas in the underlying stellar system, and will provide the necessary springboard for future spatially-resolved work with interferometers.
Samalin, Ludovic; Boyer, Laurent; Murru, Andrea; Pacchiarotti, Isabella; Reinares, María; Bonnin, Caterina Mar; Torrent, Carla; Verdolini, Norma; Pancheri, Corinna; de Chazeron, Ingrid; Boucekine, Mohamed; Geoffroy, Pierre-Alexis; Bellivier, Frank; Llorca, Pierre-Michel; Vieta, Eduard
2017-03-01
Many patients with bipolar disorder (BD) experience residual symptoms during their inter-episodic periods. The study aimed to analyse the relationship between residual depressive symptoms, sleep disturbances and self-reported cognitive impairment as determinants of psychosocial functioning in a large sample of euthymic BD patients. This was a cross-sectional study of 468 euthymic BD outpatients. We evaluated the residual depressive symptoms with the Bipolar Depression Rating Scale, the sleep disturbances with the Pittsburgh Sleep Quality Index, the perceived cognitive performance using visual analogic scales and functioning with the Functioning Assessment Short Test. Structural equation modelling (SEM) was used to describe the relationships among the residual depressive symptoms, sleep disturbances, perceived cognitive performance and functioning. SEM showed good fit with normed chi-square=2.46, comparative fit index=0.94, root mean square error of approximation=0.05 and standardized root mean square residuals=0.06. This model revealed that residual depressive symptoms (path coefficient=0.37) and perceived cognitive performance (path coefficient=0.27) were the most important features significantly related to psychosocial functioning. Sleep disturbances were indirectly associated with functioning via residual depressive symptoms and perceived cognitive performance (path coefficient=0.23). This study contributes to a better understanding of the determinants of psychosocial functioning during the inter-episodic periods of BD patients. These findings should facilitate decision-making in therapeutics to improve the functional outcomes of BD during this period. Copyright © 2017 Elsevier B.V. All rights reserved.
A pilot trial of square biphasic pulse deep brain stimulation for dystonia: The BIP dystonia study.
Almeida, Leonardo; Martinez-Ramirez, Daniel; Ahmed, Bilal; Deeb, Wissam; Jesus, Sol De; Skinner, Jared; Terza, Matthew J; Akbar, Umer; Raike, Robert S; Hass, Chris J; Okun, Michael S
2017-04-01
DBS for dystonia often yields inconsistent benefits and requires more energy-demanding stimulation settings. Studies suggest that square biphasic pulses could provide significant clinical benefit; however, they have not been explored in dystonia patients. To assess safety and tolerability of square biphasic DBS in dystonia patients. This study included primary generalized or cervical dystonia patients with bilateral GPi DBS. Square biphasic pulses were implemented and patients were assessed at baseline, immediately postwashout, post-30-minute washout, 1 hour post- and 2 hours postinitiation of investigational settings. Ten participants completed the study. There were no patient-reported or clinician-observed side effects. There was improvement across time on the Toronto Western Spasmodic Torticollis Rating Scale (χ2 = 10.7; P = 0.031). Similar improvement was detected in objective gait measurements. Square biphasic stimulation appears safe and feasible in dystonia patients with GPi DBS. Further studies are needed to evaluate possible effectiveness, particularly in cervical and gait features. © 2016 International Parkinson and Movement Disorder Society.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational cost of correlation-based centroiding methods used for point-source Shack-Hartmann wavefront sensors. Four typical similarity functions are compared, i.e., the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF), using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces the computational cost by 90%. A comprehensive simulation indicates that CCF exhibits better performance than the other functions under various light-level conditions. In addition, the effectiveness of the fast search algorithms has been verified.
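As an illustrative reconstruction (not the authors' code), the similarity-function search described above can be sketched with the SDF measure and an exhaustive scan of candidate shifts of a reference template; the spot size, search radius, and function names here are assumptions for the sketch:

```python
import numpy as np

def gaussian_spot(cx, cy, size=32, sigma=2.0):
    """Synthetic Gaussian spot centred at (cx, cy), as in the spot model."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def sdf_centroid(image, template, search=4):
    """Locate the spot by minimising the square difference function (SDF)
    between the image and every shifted copy of the reference template."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(template, dy, axis=0), dx, axis=1)
            sdf = np.sum((image - shifted) ** 2)  # SDF similarity score
            if sdf < best:
                best, best_shift = sdf, (dx, dy)
    return best_shift

template = gaussian_spot(16, 16)
image = gaussian_spot(18, 15)          # spot displaced by (+2, -1)
print(sdf_centroid(image, template))   # recovers the integer shift
```

A fast search such as OS would replace the exhaustive double loop with a few axis-aligned probes, which is the kind of saving the paper quantifies.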
2015-07-01
[Fragmentary abstract: only snippets survive around a missing figure. They discuss the radius of gyration, Fig. 5 (variation of the root-mean-square (RMS) displacement of the protein's center of mass with temperature), the examination of the global motion through the RMS displacement of the center of mass, and the monitoring of physical quantities during the course of the simulation, including the energy of each residue, its mobility, and its mean-square displacement.]
Terahertz emission from thermally-managed square intrinsic Josephson junction microstrip antennas
NASA Astrophysics Data System (ADS)
Klemm, Richard; Davis, Andrew; Wang, Qing
We show for thin square microstrip antennas that the transverse magnetic electromagnetic cavity modes are greatly restricted in number by the point-group symmetry of a square. For the ten lowest-frequency emissions, we present plots of the orthonormal wave functions and of the angular distributions of the emission power obtained from the uniform Josephson current source and from an electromagnetic cavity mode excited in the intrinsic Josephson junctions between the layers of a highly anisotropic layered superconductor.
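As a toy illustration of why square symmetry thins the emission spectrum (not the paper's actual computation): for an ideal thin square cavity, the TM(n, m) mode frequencies scale as sqrt(n² + m²), so the (n, m) and (m, n) modes are degenerate and the list of distinct normalized frequencies is short:

```python
import numpy as np

# Distinct normalized TM(n, m) frequencies of an ideal square cavity,
# n, m = 0..3 (excluding the trivial (0, 0) mode). Degenerate (n, m)/(m, n)
# pairs collapse into a single entry because of the square's symmetry.
freqs = sorted({round(float(np.hypot(n, m)), 6)
                for n in range(4) for m in range(4) if (n, m) != (0, 0)})
print(freqs)   # 9 distinct values survive out of 15 index pairs
```

The restriction the paper derives is stronger (it also involves the point-group-allowed wave functions), but the degeneracy counting gives the flavour.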
Mercier Franco, Luís Fernando; Castier, Marcelo; Economou, Ioannis G
2017-12-07
We show that the Zwanzig first-order perturbation theory can be obtained directly from a truncated Taylor series expansion of a two-body perturbation theory and that such truncation provides a more accurate prediction of thermodynamic properties than the full two-body perturbation theory. This unexpected result is explained by the quality of the resulting approximation for the fluid radial distribution function. We prove that the first-order and the two-body perturbation theories are based on different approximations for the fluid radial distribution function. To illustrate the calculations, the square-well fluid is adopted. We develop an analytical expression for the two-body perturbed Helmholtz free energy for the square-well fluid. The equation of state obtained using such an expression is compared to the equation of state obtained from the first-order approximation. The vapor-liquid coexistence curve and the supercritical compressibility factor of a square-well fluid are calculated using both equations of state and compared to Monte Carlo simulation data. Finally, we show that the approximation for the fluid radial distribution function given by the first-order perturbation theory provides closer values to the ones calculated via Monte Carlo simulations. This explains why such theory gives a better description of the fluid thermodynamic behavior.
Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting
2018-01-21
Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed near-perfect polynomial functions in phantom tests mimicking human bodies, as identified by recent NIRS studies. More importantly, our study shows that, among second- to sixth-order polynomials, the fourth-order polynomial function offered the most distinguished performance, with stable, low-computation-burden fitting calibration (R-square > 0.99 for all probes), as evaluated by R-square, the sum of squares due to error, and the residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
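The fourth-order polynomial calibration the authors evaluate can be sketched as follows; the drift coefficients and noise level are invented for illustration, not taken from the device:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.5, 500)                       # hours of recording
drift = 0.02*t**4 - 0.15*t**3 + 0.3*t**2 + 0.1*t + 1.0  # hypothetical baseline
signal = drift + rng.normal(0.0, 0.005, t.size)      # slow drift + sensor noise

coeffs = np.polyfit(t, signal, deg=4)                # fourth-order fit
fitted = np.polyval(coeffs, t)

ss_res = np.sum((signal - fitted) ** 2)              # sum of squares due to error
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_square = 1.0 - ss_res / ss_tot                     # goodness of fit
corrected = signal - fitted                          # baseline-removed trace
print(round(r_square, 4))
```

Subtracting the fitted polynomial yields the baseline-corrected trace; the paper's criterion is that this fit reaches R-square > 0.99 while staying cheap enough for online use.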
Ahmmed, Shamim M; Suteria, Naureen S; Garbin, Valeria; Vanapalli, Siva A
2018-01-01
The transport of deformable objects, including polymer particles, vesicles, and cells, has been a subject of interest for several decades, with the majority of experimental and theoretical studies focused on circular tubes. Due to advances in microfluidics, there is a need to study the transport of individual deformable particles in rectangular microchannels, where corner flows can be important. In this study, we report measurements of the hydrodynamic mobility of confined polymeric particles, vesicles, and cancer cells in a linear microchannel with a square cross-section. Our operating conditions are such that the mobility is measured as a function of geometric confinement over the range 0.3 < λ < 1.5 and at particle Reynolds numbers within 0.1 < Re_p < 2.5. The experimental mobility data for each of these systems are compared with the circular-tube theory of Hetsroni, Haber, and Wacholder [J. Fluid Mech. 41, 689-705 (1970)], with modifications made for a square cross-section. For polymeric particles, we find that the mobility data agree well with the theory over a large confinement range, but the theory underpredicts the mobility of vesicles. The mobility of vesicles is higher in a square channel than in a circular tube and does not depend significantly on membrane mechanical properties. The mobility of cancer cells is in good agreement with the theory up to λ ≈ 0.8, after which it deviates. Comparison of the mobility data of the three systems reveals that cancer cells have higher mobility than rigid particles but lower than vesicles, suggesting that the cell membrane's frictional properties lie between those of a solid-like interface and a fluid bilayer. We further explain the differences in the mobility of the three systems by considering their shape deformation and surface flow on the interface. The results of this study may find potential applications in drug delivery and biomedical diagnostics.
NASA Astrophysics Data System (ADS)
Shabani, H.; Sánchez-Ortiga, E.; Preza, C.
2016-03-01
Surpassing the resolution of optical microscopy defined by the Abbe diffraction limit, while simultaneously achieving optical sectioning, is a challenging problem, particularly for live cell imaging of thick samples. Among a few developing techniques, structured illumination microscopy (SIM) addresses this challenge by imposing higher frequency information into the observable frequency band confined by the optical transfer function (OTF) of a conventional microscope, either doubling the spatial resolution or filling the missing cone, depending on the spatial frequency of the pattern, when the patterned illumination is two-dimensional. Standard reconstruction methods for SIM decompose the low and high frequency components from the recorded low-resolution images and then combine them to reach a high-resolution image. In contrast, model-based approaches rely on iterative optimization to minimize the error between estimated and forward images. In this paper, we study the performance of both groups of methods by simulating fluorescence microscopy images from different types of objects (ranging from simulated two-point sources to extended objects). These simulations are used to investigate the methods' effectiveness at restoring objects with various types of power spectrum as the modulation frequency of the patterned illumination changes from zero to the incoherent cut-off frequency of the imaging system. Our results show that increasing the amount of imposed information by using a higher modulation frequency of the illumination pattern does not always yield better restoration performance, which was found to depend on the underlying object. Results from model-based restoration show performance improvement, quantified by an up to 62% drop in the mean square error compared to standard reconstruction, with increasing modulation frequency. However, we found cases for which results obtained with standard reconstruction methods do not follow the same trend.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loughran, B; Singh, V; Jain, A
Purpose: Although generalized linear system analytic metrics such as GMTF and GDQE can evaluate the performance of the whole imaging system, including detector, scatter, and focal spot, a simplified task-specific measured metric may help to better compare detector systems. Methods: Low quantum-noise images of a neuro-vascular stent with a modified ANSI head phantom were obtained from the average of many exposures taken with the high-resolution Micro-Angiographic Fluoroscope (MAF) and with a Flat Panel Detector (FPD). The square of the Fourier Transform of each averaged image, equivalent to the measured product of the system GMTF and the object function in spatial-frequency space, was then divided by the normalized noise power spectra (NNPS) for each respective system to obtain a task-specific generalized signal-to-noise ratio. A generalized measured relative object detectability (GM-ROD) was obtained by taking the ratio of the integrals of the resulting expressions for the two detector systems, giving an overall metric that enables a realistic system comparison for the given detection task. Results: The GM-ROD provides a comparison of the relative performance of detector systems from actual measurements of the object function as imaged by those detector systems. This metric includes noise correlations and the spatial frequencies relevant to the specific object. Additionally, the integration bounds for the GM-ROD can be selected to emphasize the higher frequency band of each detector if high-resolution image details are to be evaluated. Examples of this new metric are discussed with a comparison of the MAF to the FPD for neuro-vascular interventional imaging. Conclusion: The GM-ROD is a new directly measured task-specific metric that can provide clinically relevant comparison of the relative performance of imaging systems. Supported by NIH Grant 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
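A minimal numerical sketch of the metric as described (the object's power spectrum divided by the NNPS, integrated, then taken as a ratio between detectors); the function names, array sizes, and flat synthetic NNPS are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def gm_snr(avg_image, nnps):
    """Generalized SNR integrand: squared Fourier transform of the averaged
    object image divided by the detector's measured NNPS."""
    spectrum = np.abs(np.fft.fft2(avg_image)) ** 2
    return spectrum / nnps

def gm_rod(img_a, nnps_a, img_b, nnps_b):
    """Generalized measured relative object detectability: ratio of the
    integrated generalized SNR of detector A to that of detector B."""
    return gm_snr(img_a, nnps_a).sum() / gm_snr(img_b, nnps_b).sum()

rng = np.random.default_rng(1)
obj = rng.random((64, 64))            # stand-in averaged object image
nnps_a = np.ones((64, 64))            # flat NNPS for detector A
nnps_b = 2.0 * np.ones((64, 64))      # detector B: twice the noise power
ratio = gm_rod(obj, nnps_a, obj, nnps_b)
print(ratio)
```

With identical object images, halving a detector's noise power doubles its GM-ROD; restricting the `.sum()` to a high-frequency band would emphasize fine detail, as the abstract notes.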
Tele-Autonomous control involving contact. Final Report Thesis; [object localization]
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Featured points (point-to-point matching) and featured unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties that arise when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. It is then discussed how object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties.
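For the point-to-point matching case, the least-squares location estimate can be sketched with the standard SVD (Kabsch) solution; note the report itself uses a dual-number-quaternion formulation that solves position and orientation jointly, so this is an illustrative alternative with invented example geometry:

```python
import numpy as np

def locate_object(model_pts, sensed_pts):
    """Least-squares rigid transform (R, t) mapping model features onto
    sensed features, minimising sum ||R m_i + t - s_i||^2."""
    mc, sc = model_pts.mean(axis=0), sensed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (sensed_pts - sc)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard reflections
    R = Vt.T @ D @ U.T
    t = sc - R @ mc
    return R, t

model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])      # 90 deg about z
sensed = model @ Rz.T + np.array([2.0, 3.0, 1.0])        # rotate then shift
R, t = locate_object(model, sensed)
print(np.allclose(R, Rz), np.allclose(t, [2, 3, 1]))
```

Redundant (more than minimal) feature correspondences enter the same sums, which is how extra features sharpen the estimate in the report's formulation as well.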
Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters
NASA Astrophysics Data System (ADS)
Bates, Bryson C.; Townley, Lloyd R.
1988-05-01
In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function is dependent on the observed and computed peak discharges for several storms on the catchment, information on the structure of observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models. Bates (this volume) and Bates and Townley (this volume).
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error is reduced from 0.9844 inches for the starting configuration to 0.00367 inches by the end of the third optimization run.
NASA Astrophysics Data System (ADS)
Guenanou, A.; Houmat, A.
2018-05-01
The optimum stacking sequence design for the maximum fundamental frequency of symmetrically laminated composite circular plates with curvilinear fibres is investigated for the first time using a layer-wise optimization method. The design variables are two fibre orientation angles per layer. The fibre paths are constructed using the method of shifted paths. The first-order shear deformation plate theory and a curved square p-element are used to calculate the objective function. The blending function method is used to model accurately the geometry of the circular plate. The equations of motion are derived using Lagrange's method. The numerical results are validated by means of a convergence test and comparison with published values for symmetrically laminated composite circular plates with rectilinear fibres. The material parameters, boundary conditions, number of layers and thickness are shown to influence the optimum solutions to different extents. The results should serve as a benchmark for optimum stacking sequences of symmetrically laminated composite circular plates with curvilinear fibres.
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method to study the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations from the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the International Reference Ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10^11 electrons/m^3 is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.
A Case Study Examination of Structure and Function in a State Health Department Chronic Disease Unit
2015-01-01
Objectives. I explored the structural and operational practices of the chronic disease prevention and control unit of a state health department and proposed a conceptual model of structure, function, and effectiveness for future study. Methods. My exploratory case study examined 7 elements of organizational structure and practice. My interviews with staff and external stakeholders of a single chronic disease unit yielded quantitative and qualitative data that I coded by perspective, process, relationship, and activity. I analyzed these for patterns and emerging themes. Results. Chi-square analysis revealed significant associations of collaboration with goal ambiguity, political support, and responsiveness, and of evidence-based decision-making with goal ambiguity and responsiveness. Conclusions. Although my study design did not permit conclusions about causality, my findings suggested that some elements of the model might facilitate effectiveness for chronic disease units and should be studied further. My findings might have important implications for identifying levers around which capacity can be built that may strengthen effectiveness. PMID:25689211
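The chi-square analysis mentioned can be sketched on a contingency table; the counts below are hypothetical, since the paper does not reproduce its raw tables:

```python
import numpy as np

def chi_square_independence(table):
    """Pearson chi-square test of independence for a contingency table.
    Returns the statistic and degrees of freedom; compare against a
    chi-square critical value (3.84 for df = 1 at alpha = 0.05)."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    stat = np.sum((table - expected) ** 2 / expected)
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, df

# Hypothetical counts: respondents rating collaboration high/low crossed
# with perceiving goal ambiguity as present/absent.
obs = [[20, 5],
       [8, 17]]
stat, df = chi_square_independence(obs)
print(round(stat, 2), df)
```

Here the statistic exceeds the 3.84 cutoff, so the association would be judged significant at the 5% level, which is the kind of result the abstract reports.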
NASA Astrophysics Data System (ADS)
Raos, B. J.; Simpson, M. C.; Doyle, C. S.; Murray, A. F.; Graham, E. S.; Unsworth, C. P.
2018-06-01
Objective. Recent literature suggests that astrocytes form organized functional networks and communicate through transient changes in cytosolic Ca2+. Traditional techniques to investigate network activity, such as pharmacological blocking or genetic knockout, are difficult to restrict to individual cells. The objective of this work is to develop cell-patterning techniques to physically manipulate astrocytic interactions to enable the study of Ca2+ in astrocytic networks. Approach. We investigate how an in vitro cell-patterning platform that utilizes geometric patterns of parylene-C on SiO2 can be used to physically isolate single astrocytes and small astrocytic networks. Main results. We report that single astrocytes are effectively isolated on 75 × 75 µm square parylene nodes, whereas multi-cellular astrocytic networks are isolated on larger nodes, with the mean number of astrocytes per cluster increasing as a function of node size. Additionally, we report that astrocytes in small multi-cellular clusters exhibit spatio-temporal clustering of Ca2+ transients. Finally, we report that the frequency and regularity of Ca2+ transients was positively correlated with astrocyte connectivity. Significance. The significance of this work is to demonstrate how patterning hNT astrocytes replicates spatio-temporal clustering of Ca2+ signalling that is observed in vivo but not in dissociated in vitro cultures. We therefore highlight the importance of the structure of astrocytic networks in determining ensemble Ca2+ behaviour.
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis...orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis...problem. Introduction: Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture
Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian
2013-07-09
In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes of this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.
Extremely red objects in the fields of high redshift radio galaxies
NASA Technical Reports Server (NTRS)
Persson, S. E.; Mccarthy, P. J.; Dressler, Alan; Matthews, Keith
1993-01-01
We are engaged in a program of infrared imaging photometry of high redshift radio galaxies. The observations are being done using NICMOS2 and NICMOS3 arrays on the DuPont 100-inch telescope at Las Campanas Observatory. In addition, Persson and Matthews are measuring the spectral energy distributions of normal cluster galaxies in the redshift range 0 to 1. These measurements are being done with a 58 x 62 InSb array on the Palomar 5-m telescope. During the course of these observations we have imaged roughly 20 square arcminutes of sky to limiting magnitudes greater than 20 in the J, H, and K passbands (3 sigma in 3 square arcseconds). We have detected several relatively bright, extremely red, extended objects during the course of this work. Because the radio galaxy program requires Thuan-Gunn gri photometry, we are able to construct rough photometric energy distributions for many of the objects. A sample of the galaxy magnitudes within 4 arcseconds diameter is given. All the detections are real; either the objects show up at several wavelengths, or in subsets of the data. The reddest object in the table, 9ab'B' was found in a field of galaxies in a rich cluster at z = 0.4; 9ab'A' lies 8 arcseconds from it.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-03-01
When using a genetic algorithm to solve the problem of too-short-arc (TSA) orbit determination, the methods for outlier editing are no longer applicable, because the computing processes of the genetic algorithm and the classical method differ. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. A comparison of the results of different loss functions makes clear that the least median of squares and least trimmed squares methods can greatly improve the robustness of TSA determination, and both have a high breakdown point.
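The effect described (swapping the loss inside the fitness function to change robustness) can be illustrated on a toy line fit with gross outliers; the data, parameter grids, and brute-force search are invented stand-ins, since the paper applies a genetic algorithm to orbital elements:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)   # true line: slope 2, intercept 1
y[::8] += 30.0                                     # gross outliers in the "arc"

def fit(loss):
    """Brute-force line fit over a parameter grid, scoring each candidate
    with the supplied loss on its residuals (the paper embeds the loss in
    a genetic algorithm's fitness function instead)."""
    candidates = [(m, c) for m in np.linspace(1.0, 3.0, 101)
                         for c in np.linspace(-2.0, 8.0, 101)]
    return min(candidates, key=lambda p: loss(y - (p[0] * x + p[1])))

ls = fit(lambda r: np.sum(r ** 2))        # classical least squares
lms = fit(lambda r: np.median(r ** 2))    # least median of squares
print(ls, lms)
```

Least squares is dragged toward the outliers, while least median of squares recovers the underlying line, which is the high-breakdown behaviour the abstract reports.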
Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K
2001-01-01
We studied the brain activation patterns in two visual image processing tasks requiring judgements on object construction (FIT task) or object sameness (SAME task). Eight right-handed healthy human subjects (four women and four men) performed the two tasks in a randomized block design while 5-mm, multislice functional images of the whole brain were acquired on a 4-tesla system using blood-oxygenation-level-dependent (BOLD) contrast. Pairs of objects were picked randomly from a set of 25 oriented fragments of a square and presented to the subjects approximately every 5 sec. In the FIT task, subjects had to indicate, by pushing one of two buttons, whether the two fragments could match to form a perfect square, whereas in the SAME task they had to decide whether the fragments were the same or not. In a control task, preceding and following each of the two tasks above, a single square was presented at the same rate and subjects pushed either of the two keys at random. Functional activation maps were constructed based on a combination of conservative criteria. The areas with activated pixels were identified using Talairach coordinates and anatomical landmarks, and the number of activated pixels was determined for each area. Altogether, 379 pixels were activated. The counts of activated pixels did not differ significantly between the two tasks or between the two genders. However, there were significantly more activated pixels in the left (n = 218) than in the right side of the brain (n = 161). Of the 379 activated pixels, 371 were located in the cerebral cortex. The Talairach coordinates of these pixels were analyzed with respect to their overall distribution in the two tasks. These distributions differed significantly between the two tasks. With respect to individual dimensions, the two tasks differed significantly in the anterior--posterior and superior--inferior distributions but not in the left--right (including mediolateral, within the left or right side) distribution.
Specifically, the FIT distribution was, overall, more anterior and inferior than that of the SAME task. A detailed analysis of the counts and spatial distributions of activated pixels was carried out for 15 brain areas (all in the cerebral cortex) in which consistent activation (in ≥ 3 subjects) was observed (n = 323 activated pixels). We found the following. Except for the inferior temporal gyrus, which was activated exclusively in the FIT task, all other areas showed activation in both tasks but to different extents. Based on the extent of activation, areas fell within two distinct groups (FIT or SAME) depending on which pixel count (i.e., FIT or SAME) was greater. The FIT group consisted of the following areas, in decreasing FIT/SAME order (brackets indicate ties): GTi, GTs, GC, GFi, GFd, [GTm, GF], GO. The SAME group consisted of the following areas, in decreasing SAME/FIT order: GOi, LPs, Sca, GPrC, GPoC, [GFs, GFm]. These results indicate that there are distributed, graded, and partially overlapping patterns of activation during performance of the two tasks. We attribute these overlapping patterns of activation to the engagement of partially shared processes. Activated pixels fell into three types of clusters: FIT-only (111 pixels), SAME-only (97 pixels), and FIT + SAME (115 pixels). Pixels contained in FIT-only and SAME-only clusters were distributed approximately equally between the left and right hemispheres, whereas pixels in the FIT + SAME clusters were located mostly in the left hemisphere. With respect to gender, the left-right distribution of activated pixels was very similar in women and men for the SAME-only and FIT + SAME clusters but differed in the FIT-only case, in which there was a prominent left-side preponderance for women, in contrast to a right-side preponderance for men.
We conclude that (a) cortical mechanisms common for processing visual object construction and discrimination involve mostly the left hemisphere, (b) cortical mechanisms specific for these tasks engage both hemispheres, and (c) in object construction only, men engage predominantly the right hemisphere whereas women show a left-hemisphere preponderance.
Thompson, Wesley K.; Charo, Lindsey; Vahia, Ipsit V.; Depp, Colin; Allison, Matthew; Jeste, Dilip V.
2014-01-01
Objectives To determine if measures of successful-aging are associated with sexual activity, satisfaction, and function in older post-menopausal women. Design Cross-sectional study using self-report surveys; analyses include chi-square and t-tests and multiple linear regression analyses. Setting Community-dwelling older post-menopausal women in the greater San Diego Region. Participants 1,235 community-dwelling women aged 60-89 years participating at the San Diego site of the Women's Health Initiative. Measurements Demographics and self-report measures of sexual activity, function, and satisfaction and successful aging. Results Sexual activity and functioning (desire, arousal, vaginal tightness, use of lubricants, and ability to climax) were negatively associated with age, as were physical and mental health. In contrast, sexual satisfaction and self-rated successful aging and quality of life remained unchanged across age groups. Successful aging measures were positively associated with sexual measures, especially self-rated quality of life and sexual satisfaction. Conclusions Self-rated successful aging, quality of life, and sexual satisfaction appear to be stable in the face of declines in physical health, some cognitive abilities, and sexual activity and function and are positively associated with each other across ages 60-89 years. PMID:21797827
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and probability inequalities for random variables with values in a Hilbert space.
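A minimal sketch of an online pairwise update with a least-squares loss, in the spirit of OPERA: the hypothesis is kept as a kernel expansion in the RKHS and updated by stochastic gradient steps on pairwise residuals. The pairing scheme (each new point paired with one random earlier point), the Gaussian kernel, and the constant step size are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y, sigma=1.0):
    """Gaussian RBF kernel on R^d."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def online_pairwise_ls(X, Y, eta=0.5, sigma=1.0):
    """Online pairwise learning with a least-squares loss (sketch).

    The hypothesis lives in the RKHS as f(x) = sum_j alpha_j k(c_j, x).
    At step t the new point is paired with one random earlier point and a
    stochastic-gradient step is taken on ((f(x_t) - f(x_i)) - (y_t - y_i))^2,
    which adds +/- copies of the kernel centred at the two points.
    """
    alpha, centers = [], []
    for t in range(1, len(X)):
        i = rng.integers(t)                          # random earlier example
        x_t, x_i = X[t], X[i]
        f = lambda x: sum(a * k(c, x, sigma) for a, c in zip(alpha, centers))
        resid = (f(x_t) - f(x_i)) - (Y[t] - Y[i])    # pairwise residual
        alpha += [-eta * resid, eta * resid]         # gradient step in the RKHS
        centers += [x_t, x_i]
    return alpha, centers
```

Note that, as in the paper's unconstrained setting, no projection onto a bounded domain is performed.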
Vertical Impact of a Sphere Falling into Water
ERIC Educational Resources Information Center
Cross, Rod
2016-01-01
The nature of the drag force on an object moving through a fluid is well documented and many experiments have been described to allow students to measure the force. For low speed flows the drag force is proportional to the velocity of the object, while at high flow speeds the drag force is proportional to the velocity squared. The basic physics…
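The two regimes can be written down directly. A small sketch using the standard textbook formulas (the viscosity, air density, and sphere drag coefficient of about 0.47 are assumed illustrative values):

```python
import math

def stokes_drag(radius, velocity, mu=1.8e-5):
    """Low-speed (Stokes) drag on a sphere: F = 6*pi*mu*r*v, linear in v."""
    return 6 * math.pi * mu * radius * velocity

def quadratic_drag(radius, velocity, rho=1.2, cd=0.47):
    """High-speed drag: F = 0.5*rho*Cd*A*v^2, quadratic in v (Cd ~ 0.47 for a sphere)."""
    area = math.pi * radius ** 2
    return 0.5 * rho * cd * area * velocity ** 2
```

Doubling the speed doubles the Stokes force but quadruples the quadratic-drag force, which is the distinction students can test experimentally.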
Sparse and stable Markowitz portfolios.
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-07-28
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
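A hedged sketch of the l1-penalized least-squares idea behind the paper: minimise ||target - R w||^2 plus a penalty proportional to the sum of absolute portfolio weights, here via plain iterative soft-thresholding (ISTA). The paper additionally imposes a budget constraint (weights summing to one), which this sketch omits for simplicity; the solver and parameter values are illustrative, not the authors'.

```python
import numpy as np

def sparse_portfolio(R, target, tau=0.01, n_iter=2000):
    """Sketch of an l1-penalized least-squares portfolio.

    Minimises 0.5*||target - R @ w||^2 + tau*||w||_1 by ISTA:
    a gradient step on the quadratic term followed by soft-thresholding,
    which drives small weights exactly to zero (sparsity).
    """
    T, n = R.shape
    eta = 1.0 / np.linalg.norm(R, 2) ** 2        # step size 1/L, L = ||R||_2^2
    w = np.zeros(n)
    for _ in range(n_iter):
        grad = R.T @ (R @ w - target)            # gradient of the quadratic term
        z = w - eta * grad
        w = np.sign(z) * np.maximum(np.abs(z) - eta * tau, 0.0)  # soft-threshold
    return w
```

With the penalty active, assets contributing little to tracking the target return series receive exactly zero weight, which is what keeps the number of active positions small.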
NASA Astrophysics Data System (ADS)
1990-01-01
The Rayovac TANDEM is an advanced technology combination work light and general purpose flashlight that incorporates several NASA technologies. The TANDEM functions as two lights in one. It features a long range spotlight and wide angle floodlight; simple one-hand electrical switching changes the beam from spot to flood. TANDEM developers made particular use of NASA's extensive research in ergonomics in the TANDEM's angled handle, convenient shape and different orientations. The shatterproof, water resistant plastic casing also draws on NASA technology, as does the shape and beam distance of the square diffused flood. TANDEM's heavy duty magnet that permits the light to be affixed to any metal object borrows from NASA research on rare earth magnets that combine strong magnetic capability with low cost. Developers used a NASA-developed ultrasonic welding technique in the light's interior.
NASA Technical Reports Server (NTRS)
Kolb, Edward W.
1989-01-01
A Friedmann-Robertson-Walker cosmology with energy density decreasing with expansion as 1/R-squared, where R is the Robertson-Walker scale factor, is studied. In such a model the universe expands with constant velocity; hence the term coasting cosmology. Observational consequences of such a model include the age of the universe, the luminosity distance-redshift relation (the Hubble diagram), the angular diameter distance-redshift relation, and the galaxy number count as a function of redshift. These observations are used to limit the parameters of the model. Among the interesting consequences of the model are the possibility of an ever-expanding closed universe, a model universe with multiple images of the same object at different redshifts, a universe with Omega - 1 not equal to 0 that is stable under expansion, and a closed universe with radius smaller than 1/H(0).
Spin supercurrent and effect of quantum phase transition in the two-dimensional XY model
NASA Astrophysics Data System (ADS)
Lima, L. S.
2018-04-01
We have investigated the influence of a quantum phase transition on spin transport in the spin-1 two-dimensional XY model on the square lattice, with easy-plane, single-ion, and exchange anisotropy. We analyze the effect of the phase transition from the Néel phase to the paramagnetic phase on the AC spin conductivity. Our results show little influence of the quantum phase transition on the conductivity. We also obtain conventional spin transport for ω > 0 and ideal spin transport in the limit of DC conductivity, and therefore superfluid spin transport for the DC current in this limit. We carried out a diagrammatic expansion of the Green's function in order to include the effect of exciton-exciton scattering on the results.
Statistical analysis of trypanosomes' motility
NASA Astrophysics Data System (ADS)
Zaburdaev, Vasily; Uppaluri, Sravanti; Pfohl, Thomas; Engstler, Markus; Stark, Holger; Friedrich, Rudolf
2010-03-01
The trypanosome is a parasite that causes sleeping sickness. The way it moves in the bloodstream and penetrates various obstacles is an area of active research. Our goal was to investigate the free motion of trypanosomes in a planar geometry. Our analysis of trypanosome trajectories reveals two correlation times: one associated with the fast motion of the body, and a second with the slower rotational diffusion of the trypanosome treated as a point object. We propose a system of Langevin equations to model such motion. One of its peculiarities is the presence of multiplicative noise, which predicts a higher level of noise at higher trypanosome velocities. Theoretical and numerical results give a comprehensive description of the experimental data, such as the mean squared displacement, the velocity distribution, and the auto-correlation function.
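The multiplicative-noise idea can be sketched with a toy Euler-Maruyama simulation: the noise amplitude is assumed to grow with the instantaneous speed, and the mean squared displacement is then estimated from the trajectory. The functional form of the noise term and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_steps=10000, dt=0.01, gamma=1.0, sigma0=0.2, sigma1=0.5):
    """Toy Langevin model with multiplicative noise (sketch).

    dv = -gamma*v*dt + (sigma0 + sigma1*|v|)*dW : the noise level is
    higher at higher velocity, the qualitative feature reported here.
    """
    v, x = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        noise = (sigma0 + sigma1 * abs(v)) * rng.normal(0.0, np.sqrt(dt))
        v += -gamma * v * dt + noise     # damped velocity with state-dependent noise
        x += v * dt                      # integrate position
        xs[i] = x
    return xs

def msd(xs, lag):
    """Mean squared displacement at a given time lag."""
    d = xs[lag:] - xs[:-lag]
    return np.mean(d ** 2)
```

The resulting trajectory shows the usual crossover: nearly ballistic growth of the MSD at short lags and diffusive growth beyond the velocity correlation time 1/gamma.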
ERIC Educational Resources Information Center
Jones, Richard C.; Jones, Richard N.
1994-01-01
Explores the effects of the square-cube law that predicts the physical consequences of increasing or decreasing an object's size. Uses examples to discuss the economy of scales, common misconceptions, and applications of scaling laws. (JRH)
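The law itself is one line of arithmetic: areas scale as the square of the linear size, volumes (and hence weights) as the cube. A minimal illustration:

```python
def scale(length_factor):
    """Square-cube law: areas scale as L^2, volumes (and weights) as L^3."""
    return length_factor ** 2, length_factor ** 3

# Doubling an object's linear size quadruples its cross-sectional area
# but multiplies its volume (hence weight) by eight, so the stress on a
# supporting cross-section grows by a factor of 8/4 = 2.
area_factor, volume_factor = scale(2)
```

This is why a scaled-up animal or structure cannot simply keep the same proportions, the central point of the article's examples.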
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
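A toy version of the genetic-algorithm idea for nonlinear least squares: parameter vectors are individuals, the residual sum of squares is the fitness, and each generation keeps the best half and refills the population with Gaussian-mutated copies. This is an illustrative sketch (truncation selection with mutation only, no crossover), not the algorithm studied in the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def ga_least_squares(x, y, model, bounds, pop=60, gens=200, sigma=0.1):
    """Toy genetic algorithm for nonlinear least-squares estimation.

    bounds is a list of (low, high) per parameter; fitness is the
    residual sum of squares of model(x, *params) against y.
    """
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))          # initial population
    for _ in range(gens):
        rss = np.array([np.sum((y - model(x, *p)) ** 2) for p in P])
        elite = P[np.argsort(rss)[: pop // 2]]            # keep the best half
        children = elite + rng.normal(0.0, sigma, elite.shape)
        P = np.vstack([elite, np.clip(children, lo, hi)]) # elitism + mutation
    rss = np.array([np.sum((y - model(x, *p)) ** 2) for p in P])
    return P[np.argmin(rss)]
```

Because the population explores the whole bounded parameter box, such a scheme can escape the local minima that trap derivative-based least-squares solvers, at the cost of many more function evaluations.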
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
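The filtering claim can be sketched with a least-squares projection onto a low-dimensional Legendre basis: smooth signals such as exponentials are captured by a few low-order polynomials, while most of the noise lives in the discarded higher orders. The degree and the mapping of the time axis onto [-1, 1] are illustrative choices, not the paper's full procedure.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_filter(t, y, degree=8):
    """Denoise a sampled signal by least-squares projection onto the
    first degree+1 Legendre polynomials (sketch of Legendre-space
    filtering; exponentials are smooth, so low orders suffice).
    """
    # map the time axis onto [-1, 1], the natural Legendre domain
    u = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    coeffs = L.legfit(u, y, degree)      # least-squares Legendre coefficients
    return L.legval(u, coeffs)           # reconstruct the filtered signal
```

Unlike a causal lowpass filter, this projection is symmetric in time and therefore introduces no phase shift, the property highlighted in the abstract.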
Siudem, Grzegorz; Fronczak, Agata; Fronczak, Piotr
2016-10-10
In this paper, we provide the exact expression for the coefficients in the low-temperature series expansion of the partition function of the two-dimensional Ising model on the infinite square lattice. This is equivalent to exact determination of the number of spin configurations at a given energy. With these coefficients, we show that the ferromagnetic-to-paramagnetic phase transition in the square lattice Ising model can be explained through equivalence between the model and the perfect gas of energy clusters model, in which the passage through the critical point is related to the complete change in the thermodynamic preferences on the size of clusters. The combinatorial approach reported in this article is very general and can be easily applied to other lattice models.
Siudem, Grzegorz; Fronczak, Agata; Fronczak, Piotr
2016-01-01
In this paper, we provide the exact expression for the coefficients in the low-temperature series expansion of the partition function of the two-dimensional Ising model on the infinite square lattice. This is equivalent to exact determination of the number of spin configurations at a given energy. With these coefficients, we show that the ferromagnetic–to–paramagnetic phase transition in the square lattice Ising model can be explained through equivalence between the model and the perfect gas of energy clusters model, in which the passage through the critical point is related to the complete change in the thermodynamic preferences on the size of clusters. The combinatorial approach reported in this article is very general and can be easily applied to other lattice models. PMID:27721435
Algorithms for Nonlinear Least-Squares Problems
1988-09-01
The objective function is f(x) = (1/2) * sum_i r_i(x)^2, where each r_i(x) is a smooth function mapping R^n to R; J denotes the m x n Jacobian matrix of f, and g the gradient of the nonlinear least-squares objective. The report presents convergence results and detailed convergence analysis for the Gauss-Newton method, including a class of nonlinear least-squares problems that includes zero-residual problems; J_k^+ denotes the pseudo-inverse of J_k.
Approximating a retarded-advanced differential equation that models human phonation
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2017-11-01
In [1, 2, 3] we obtained the numerical solution of a linear mixed-type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases, by collocation, least squares, and finite element methods using a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) with Gaussian basis functions to solve numerically a nonlinear mixed-type equation with symmetric delay and advance which models human phonation. The preliminary results are promising: we obtain an accuracy comparable with the previous results.
Four-Dimensional Golden Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.
2015-02-25
The Golden search technique is a method for searching a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets the minimum to within an arbitrarily small distance. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, and (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness-of-fit parameter such as chi-square, convergence does not depend on the noise being correctly estimated or on the function correctly following the chi-square statistic. And, (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
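The bracketing idea is easiest to see in one dimension; a minimal golden-section search sketch (the multi-dimensional version in the record applies this kind of interval subdivision across four parameters):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b].

    Only function values are used (no derivatives), and the stopping
    rule depends only on the bracket length, not on f's magnitude or
    shape, matching the advantages listed above.
    """
    invphi = (math.sqrt(5) - 1) / 2                  # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                              # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                        # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the same golden-ratio factor while reusing one interior function value, so the cost is one evaluation per step.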
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques, if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much stronger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
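The first of the three estimators can be sketched directly: Tikhonov-regularise the noise covariance C by adding alpha times the identity, then apply the standard weighted least-squares formula. This is a minimal illustration of the formula only; the parameter choice rule and the inversion-free variant discussed above are not reproduced here.

```python
import numpy as np

def wls_regularised_cov(A, y, C, alpha):
    """Weighted least squares with a Tikhonov-regularised noise covariance.

    C is replaced by C + alpha*I before inversion, giving the weight
    matrix W; the estimate is then x = (A^T W A)^{-1} A^T W y.
    """
    W = np.linalg.inv(C + alpha * np.eye(C.shape[0]))   # regularised weights
    N = A.T @ W @ A                                     # normal matrix
    return np.linalg.solve(N, A.T @ W @ y)
```

For a well-conditioned C the estimate is insensitive to alpha (with C = I, any alpha merely rescales W and cancels); the difficulty described above arises precisely because the real noise covariance is nearly singular.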
Huang, Chenyu; Ogawa, Rei
2014-05-01
Joint scar contractures are characterized by tight bands of soft tissue that bridge the 2 ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. In total, 20 Japanese patients with joint scar contractures on the axillary (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degree), 4-flap (45 degree), and 5-flap (60 degree) Z-plasties in terms of theoretical web reconstruction efficacy. All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degree; β = 90 degree) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin.
Huang, Chenyu
2014-01-01
Background: Joint scar contractures are characterized by tight bands of soft tissue that bridge the 2 ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. Methods: In total, 20 Japanese patients with joint scar contractures on the axillary (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degree), 4-flap (45 degree), and 5-flap (60 degree) Z-plasties in terms of theoretical web reconstruction efficacy. Results: All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degree; β = 90 degree) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. Conclusions: When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin. PMID:25289342
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
Self-reported walking ability predicts functional mobility performance in frail older adults.
Alexander, N B; Guire, K E; Thelen, D G; Ashton-Miller, J A; Schultz, A B; Grunawalt, J C; Giordani, B
2000-11-01
To determine how self-reported physical function relates to performance in each of three mobility domains: walking, stance maintenance, and rising from chairs. Cross-sectional analysis of older adults. University-based laboratory and community-based congregate housing facilities. Two hundred twenty-one older adults (mean age, 79.9 years; range, 60-102 years) without clinical evidence of dementia (mean Folstein Mini-Mental State score, 28; range, 24-30). We compared the responses of these older adults on a questionnaire battery used by the Established Populations for the Epidemiologic Study of the Elderly (EPESE) project to performance on mobility tasks of graded difficulty. Responses to the EPESE battery included: (1) whether assistance was required to perform seven Katz activities of daily living (ADL) items, specifically with walking and transferring; (2) three Rosow-Breslau items, including the ability to walk up stairs and walk a half mile; and (3) five Nagi items, including difficulty stooping, reaching, and lifting objects. The performance measures included the ability to perform, and time taken to perform, tasks in three summary score domains: (1) walking ("Walking," seven tasks, including walking with an assistive device, turning, stair climbing, tandem walking); (2) stance maintenance ("Stance," six tasks, including unipedal, bipedal, tandem, and maximum lean); and (3) chair rise ("Chair Rise," six tasks, including rising from a variety of seat heights with and without the use of hands for assistance). A total score combines scores in each Walking, Stance, and Chair Rise domain. We also analyzed how cognitive/behavioral factors such as depression and self-efficacy related to the residuals from the self-report and performance-based ANOVA models. Rosow-Breslau items have the strongest relationship with the three performance domains, Walking, Stance, and Chair Rise (eta-squared ranging from 0.21 to 0.44).
These three performance domains are as strongly related to one Katz ADL item, walking (eta-squared ranging from 0.15 to 0.33) as all of the Katz ADL items combined (eta-squared ranging from 0.21 to 0.35). Tests of problem solving and psychomotor speed, the Trails A and Trails B tests, are significantly correlated with the residuals from the self-report and performance-based ANOVA models. Compared with the rest of the EPESE self-report items, self-report items related to walking (such as Katz walking and Rosow-Breslau items) are better predictors of functional mobility performance on tasks involving walking, stance maintenance, and rising from chairs. Compared with other self-report items, self-reported walking ability may be the best predictor of overall functional mobility.
Feature Detection and Curve Fitting Using Fast Walsh Transforms for Shock Tracking: Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2017-01-01
Walsh functions form an orthonormal basis set consisting of square waves. Square waves make the system well suited for detecting and representing functions with discontinuities. Given a uniform distribution of 2^p cells on a one-dimensional element, it has been proven that the inner product of the Walsh Root function for group p with every polynomial of degree < or = (p - 1) across the element is identically zero. It has also been proven that the magnitude and location of a discontinuous jump, as represented by a Heaviside function, are explicitly identified by its Fast Walsh Transform (FWT) coefficients. These two proofs enable an algorithm that quickly provides a Weighted Least Squares fit to distributions across the element that include a discontinuity. The detection of a discontinuity enables analytic relations to locally describe its evolution and provide increased accuracy. Time-accurate examples are provided for advection, Burgers equation, and Riemann problems (diaphragm burst) in closed tubes and de Laval nozzles. New algorithms to detect up to two C0 and/or C1 discontinuities within a single element are developed for application to the Riemann problem, in which a contact discontinuity and shock wave form after the diaphragm bursts.
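A small illustration of why a square-wave basis pinpoints jumps: transforming a signal with the Hadamard matrix (the naturally ordered relative of the sequency-ordered Walsh basis used in the paper, an assumption of this sketch) sends a mid-element Heaviside step into a single nonzero coefficient besides the mean.

```python
import numpy as np
from scipy.linalg import hadamard

def fwt(signal):
    """Walsh-Hadamard transform via the Hadamard matrix (natural
    ordering); the signal length must be a power of two.
    """
    n = len(signal)
    return hadamard(n) @ signal / n   # normalised transform coefficients
```

A constant signal has all its energy in the zeroth coefficient, while a step that jumps at the element midpoint excites exactly one additional square-wave mode, whose sign and magnitude encode the jump.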
Comparing the tensile strength of square and reversing half-hitch alternating post knots
Wu, Vincent; Sykes, Edward A.; Mercer, Dale; Hopman, Wilma M.; Tang, Ephraim
2017-01-01
Background Square knots are the gold standard in hand-tie wound closure, but are difficult to reproduce in deep cavities, inadvertently resulting in slipknots. The reversing half-hitch alternating post (RHAP) knot has been suggested as an alternative owing to its nonslip nature and reproducibility in limited spaces. We explored whether the RHAP knot is noninferior to the square knot by assessing tensile strength. Methods We conducted 10 trials for each baseline and knot configuration, using 3–0 silk and 3–0 polyglactin 910 sutures. We compared tensile strength between knot configurations at the point of knot failure between slippage and breakage. Results Maximal failure strength (mean ± SD) in square knots was reached with 4-throw in both silk (30 ± 1.5 N) and polyglactin 910 (39 ± 12 N). For RHAP knots, maximal failure strength was reached at 5-throw for both silk (31 ± 1.5 N) and polyglactin 910 (41 ± 13 N). In both sutures, there were no strength differences between 3-throw square and 4-throw RHAP, between 4-throw square and 5-throw RHAP, or between 5-throw square and 6-throw RHAP knots. Polyglactin 910 sutures, in all knot configurations, were more prone to slippage than silk sutures (p < 0.001). Conclusion The difference in mean tensile strength could be attributed to the proportion of knot slippage versus breakage, which is material-dependent. Future studies can re-evaluate findings in monofilament sutures and objectively assess the reproducibility of square and RHAP knots in deep cavities. Our results indicate that RHAP knots composed of 1 extra throw provide equivalent strength to square knots and may be an alternative when performing hand-ties in limited cavities with either silk or polyglactin 910 sutures. PMID:28327276
Comparing the tensile strength of square and reversing half-hitch alternating post knots.
Wu, Vincent; Sykes, Edward A; Mercer, Dale; Hopman, Wilma M; Tang, Ephraim
2017-06-01
Square knots are the gold standard in hand-tie wound closure, but are difficult to reproduce in deep cavities, inadvertently resulting in slipknots. The reversing half-hitch alternating post (RHAP) knot has been suggested as an alternative owing to its nonslip nature and reproducibility in limited spaces. We explored whether the RHAP knot is noninferior to the square knot by assessing tensile strength. We conducted 10 trials for each baseline and knot configuration, using 3-0 silk and 3-0 polyglactin 910 sutures. We compared tensile strength between knot configurations at the point of knot failure between slippage and breakage. Maximal failure strength (mean ± SD) in square knots was reached with 4-throw in both silk (30 ± 1.5 N) and polyglactin 910 (39 ± 12 N). For RHAP knots, maximal failure strength was reached at 5-throw for both silk (31 ± 1.5 N) and polyglactin 910 (41 ± 13 N). In both sutures, there were no strength differences between 3-throw square and 4-throw RHAP, between 4-throw square and 5-throw RHAP, or between 5-throw square and 6-throw RHAP knots. Polyglactin 910 sutures, in all knot configurations, were more prone to slippage than silk sutures ( p < 0.001). The difference in mean tensile strength could be attributed to the proportion of knot slippage versus breakage, which is material-dependent. Future studies can re-evaluate findings in monofilament sutures and objectively assess the reproducibility of square and RHAP knots in deep cavities. Our results indicate that RHAP knots composed of 1 extra throw provide equivalent strength to square knots and may be an alternative when performing hand-ties in limited cavities with either silk or polyglactin 910 sutures.
Testing and modelling of the SVOM MXT narrow field lobster-eye telescope
NASA Astrophysics Data System (ADS)
Feldman, Charlotte; Pearson, James; Willingale, Richard; Sykes, John; Drumm, Paul; Houghton, Paul; Bicknell, Chris; Osborne, Julian; Martindale, Adrian; O'Brien, Paul; Fairbend, Ray; Schyns, Emile; Petit, Sylvain; Roudot, Romain; Mercier, Karine; Le Duigou, Jean-Michel; Gotz, Diego
2017-08-01
The Space-based multi-band astronomical Variable Objects Monitor (SVOM) is a French-Chinese space mission to be launched in 2021 with the goal of studying gamma-ray bursts, the most powerful stellar explosions in the Universe. The Microchannel X-ray Telescope (MXT) on-board SVOM is an X-ray focusing telescope with a detector-limited field of view of ~1 square degree, working in the 0.2-10 keV energy band. The MXT is a narrow-field-optimised lobster-eye telescope, designed to promptly detect and accurately locate gamma-ray burst afterglows. The breadboard MXT optic comprises an array of square-pore micro pore optics (MPOs), which are slumped to a spherical radius of 2 m, giving a focal length of 1 m and an intrinsic field of view of ~6 degrees. We present details of the baseline design and results from the ongoing X-ray tests of the breadboard and structural thermal model MPOs performed at the University of Leicester and at Panter. In addition, we present details of modelling and analysis which reveal the factors that limit the angular resolution, the characteristics of the point spread function, and the efficiency and collecting area of the currently available MPOs.
Denk, W; Webb, W W; Hudspeth, A J
1989-01-01
By optically probing with a focused, low-power laser beam, we measured the spontaneous deflection fluctuations of the sensory hair bundles on frog saccular hair cells with a sensitivity of about 1 pm/square root of Hz. The preparation was illuminated by two orthogonally polarized laser beams separated by only about 0.2 microns at their foci in the structure under investigation. Slight movement of the object from one beam toward the other caused a change of the phase difference between the transmitted beams and an intensity modulation at the detector where the beams interfered. Maintenance of the health of the cells and function of the transduction mechanism were occasionally confirmed by measuring the intracellular resting potential and the sensitivity of transduction. The root-mean-square (rms) displacement of approximately 3.5 nm at a hair bundle's tip suggests a stiffness of about 350 microN/m, in agreement with measurements made with a probe attached to a bundle's tip. The spectra resemble those of overdamped harmonic oscillators with roll-off frequencies between 200 and 800 Hz. Because the roll-off frequencies depended strongly on the viscosity of the bathing medium, we conclude that hair-bundle motion is mainly damped by the surrounding fluid. PMID:2787510
NASA Astrophysics Data System (ADS)
Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir
2017-10-01
In this article, we describe an innovative non-invasive method of Fetal Phonocardiography (fPCG) using fiber-optic sensors and an adaptive algorithm for the measurement of fetal heart rate (fHR). Conventional PCG is based on non-invasive scanning of acoustic signals by means of a microphone placed on the thorax; in fPCG, the microphone is placed on the maternal abdomen. Our solution is based on patent-pending non-invasive scanning of acoustic signals by means of a fiber-optic interferometer. Fiber-optic sensors are resistant to technical artifacts such as electromagnetic interference (EMI), so they can be used in situations where it is impossible to use conventional EFM methods, e.g., during Magnetic Resonance Imaging (MRI) examination or in case of delivery in water. The adaptive evaluation system is based on the Recursive Least Squares (RLS) algorithm. Based on real measurements performed on five volunteers with their written consent, we created a simplified dynamic signal model of the distribution of heartbeat sounds (HS) through the human body. This model allows us to verify the RLS algorithm of the proposed adaptive system. The functionality of the proposed non-invasive adaptive system was verified by objective parameters such as Sensitivity (S+) and Signal-to-Noise Ratio (SNR).
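The RLS core of such an adaptive system can be sketched in a few lines: given an observed mixture d and a correlated reference x, the filter recursively learns weights so that its output tracks the part of d explained by x, leaving the residual as the enhanced signal. The filter order, forgetting factor, and initialisation below are generic textbook choices, not the authors' configuration.

```python
import numpy as np

def rls_filter(d, x, order=4, lam=0.99, delta=100.0):
    """Recursive least squares (RLS) adaptive filter, a minimal sketch.

    lam is the forgetting factor, P the running inverse-correlation
    estimate (initialised to delta*I), and e the a-priori error, i.e.
    the residual after the reference-correlated part is removed.
    """
    N = len(d)
    w = np.zeros(order)
    P = delta * np.eye(order)
    e = np.zeros(N)
    for n in range(order - 1, N):
        u = x[n - order + 1 : n + 1][::-1]   # regressor, newest sample first
        k = P @ u / (lam + u @ P @ u)        # gain vector
        e[n] = d[n] - w @ u                  # a-priori error
        w = w + k * e[n]                     # weight update
        P = (P - np.outer(k, u @ P)) / lam   # inverse-correlation update
    return w, e
```

On a noiseless test mixture generated by a known FIR filter, the weights converge to that filter and the residual decays toward zero, which is the behaviour exploited to isolate fetal heart sounds.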
Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua
2016-10-01
Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, MECT images reconstructed by analytical approaches often suffer from poor signal-to-noise ratio and strong streak artifacts. To address this challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images, and thus provides more robust measures of image variation, eliminating the patchy artifacts often observed with total variation (TV) regularization. An alternating optimization algorithm is adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Simulation-Based Approach to Determining Electron Transfer Rates Using Square-Wave Voltammetry.
Dauphin-Ducharme, Philippe; Arroyo-Currás, Netzahualcóyotl; Kurnik, Martin; Ortega, Gabriel; Li, Hui; Plaxco, Kevin W
2017-05-09
The efficiency with which square-wave voltammetry differentiates faradaic and charging currents makes it a particularly sensitive electroanalytical approach, as evidenced by its ability to measure nanomolar or even picomolar concentrations of electroactive analytes. Because of the relative complexity of the potential sweep it uses, however, the extraction of detailed kinetic and mechanistic information from square-wave data remains challenging. In response, we demonstrate here a numerical approach by which square-wave data can be used to determine electron transfer rates. Specifically, we have developed a numerical approach in which we fit the height and shape of voltammograms collected over a range of square-wave frequencies and amplitudes to simulated voltammograms computed as functions of the heterogeneous rate constant and the electron transfer coefficient. As validation, we have used the approach to determine electron transfer kinetics for both freely diffusing and diffusionless surface-tethered species, in all cases obtaining kinetics in good agreement with values derived using non-square-wave methods.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, a global and relative indicator of the quality of reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that using the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable or not, at no cost to the compression ratio.
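The two criteria compared in this abstract are easy to state precisely. A minimal sketch (variable names are mine; PRD is taken in its common unnormalized form):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference: relative and dimensionless."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((o - r) ** 2) / np.sum(o ** 2))

def rmse(original, reconstructed):
    """Root-mean-square error, expressed in the signal's own units (e.g. uV)."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return np.sqrt(np.mean((o - r) ** 2))
```

Note that PRD is invariant under a common rescaling of both signals, whereas RMSE scales with the signal amplitude; that is precisely why RMSE can be mapped onto clinical noise limits stated in microvolts while PRD cannot.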
Realization of a thermal cloak-concentrator using a metamaterial transformer.
Liu, Ding-Peng; Chen, Po-Jung; Huang, Hsin-Haou
2018-02-06
By combining rotating squares with auxetic properties, we developed a metamaterial transformer capable of realizing metamaterials with tunable functionalities. We investigated the use of a metamaterial transformer-based thermal cloak-concentrator that can change from a cloak to a concentrator when the device configuration is transformed. We established that the proposed dual-functional metamaterial can either thermally protect a region (cloak) or focus heat flux in a small region (concentrator). The dual functionality was verified by finite element simulations and validated by experiments with a specimen composed of copper, epoxy, and rotating squares. This work provides an effective and efficient method for controlling the gradient of heat, in addition to providing a reference for other thermal metamaterials to possess such controllable functionalities by adapting the concept of a metamaterial transformer.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, Xin-ran; Wang, Xin
2017-04-01
When the genetic algorithm is used to solve the problem of too-short-arc (TSA) orbit determination, the original method for outlier deletion is no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions into the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by introducing different loss functions. Through a comparison of the calculations with multiple loss functions, it is found that the least median square (LMS) and least trimmed square (LTS) estimations can greatly improve the robustness of TSA orbit determination, and that they have a high breakdown point.
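The LMS and LTS loss functions mentioned above are simple to write down. The sketch below uses a grid search as a stand-in for the genetic algorithm's fitness evaluation and fits a constant to data containing one gross outlier; the data values are invented for illustration:

```python
import statistics

def lms_cost(residuals):
    """Least median of squares: the fitness is the median squared residual."""
    return statistics.median(r * r for r in residuals)

def lts_cost(residuals, h):
    """Least trimmed squares: sum of the h smallest squared residuals."""
    return sum(sorted(r * r for r in residuals)[:h])

def fit_constant(data, cost, candidates):
    """Grid search standing in for the GA: pick the candidate whose residual
    cost (fitness) is smallest."""
    return min(candidates, key=lambda c: cost([x - c for x in data]))

data = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]   # one gross outlier at 50.0
grid = [i / 100 for i in range(0, 5001)]    # candidate constants 0.00 .. 50.00

ls = fit_constant(data, lambda r: sum(x * x for x in r), grid)   # plain SSR
lms = fit_constant(data, lms_cost, grid)
lts = fit_constant(data, lambda r: lts_cost(r, h=4), grid)
```

The SSR fit is dragged toward the outlier (near the mean, about 9.17 here), while the LMS and LTS fits stay near the inlier cluster around 1.0, illustrating the high breakdown point claimed in the abstract.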
Enhancing Pseudo-Telepathy in the Magic Square Game
Pawela, Łukasz; Gawron, Piotr; Puchała, Zbigniew; Sładkowski, Jan
2013-01-01
We study the possibility of reversing an action of a quantum channel. Our principal objective is to find a specific channel that reverses as accurately as possible an action of a given quantum channel. To achieve this goal we use semidefinite programming. We show the benefits of our method using the quantum pseudo-telepathy Magic Square game with noise. Our strategy is to move the pseudo-telepathy region to higher noise values. We show that it is possible to reverse the action of a noise channel using semidefinite programming. PMID:23762246
Image Discrimination Models Predict Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1994-01-01
Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single channel model with a contrast sensitivity function filter, and the root-mean-square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
Lee, Rob
2017-01-01
In January 2017, a large wind turbine blade was installed temporarily in a city square as a public artwork. At first sight, media photographs of the installation appeared to be fakes – the blade looks like it could not really be part of the scene. Close inspection of the object shows that its paradoxical visual appearance can be attributed to unconscious assumptions about object shape and light source direction. PMID:28596821
Child Maltreatment and Executive Functioning in Middle Adulthood: A Prospective Examination
Nikulina, Valentina; Widom, Cathy Spatz
2013-01-01
Objective There is extensive evidence of negative consequences of childhood maltreatment for IQ, academic achievement, and post-traumatic stress disorder (PTSD) and increased attention to neurobiological consequences. However, few prospective studies have assessed the long-term effects of abuse and neglect on executive functioning. The current study examines whether childhood abuse and neglect predicts components of executive functioning and nonverbal reasoning ability in middle adulthood and whether PTSD moderates this relationship. Method Using a prospective cohort design, a large sample (N = 792) of court-substantiated cases of childhood physical and sexual abuse and neglect (ages 0-11) and matched controls were followed into adulthood (mean age = 41). Executive functioning was assessed with the Trail Making B test and non-verbal reasoning with Matrix Reasoning. PTSD (DSM-III-R lifetime diagnosis) was assessed at age 29. Data were analyzed using ordinary least squares regressions, controlling for age, sex, and race and possible confounds of IQ, depression, and excessive alcohol use. Results In multivariate analyses, childhood maltreatment overall and childhood neglect predicted poorer executive functioning and non-verbal reasoning at age 41, whereas physical and sexual abuse did not. A past history of PTSD did not mediate or moderate these relations. Conclusions Childhood maltreatment and neglect specifically have a significant long-term impact on important aspects of adult neuropsychological functioning. These findings suggest the need for targeted efforts dedicated to interventions for neglected children. PMID:23876115
Some Results on Mean Square Error for Factor Score Prediction
ERIC Educational Resources Information Center
Krijnen, Wim P.
2006-01-01
For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γρ = Θ^(1/2) Λρ′ Ψρ^(…
ERIC Educational Resources Information Center
Osler, James Edward
2013-01-01
This paper discusses the implementation of the Tri-Squared Test as an advanced statistical measure used to verify and validate the research outcomes of Educational Technology software. A mathematical and epistemological rational is provided for the transformative process of qualitative data into quantitative outcomes through the Tri-Squared Test…
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). ... developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung ... mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended
The Least-Squares Estimation of Latent Trait Variables.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi
This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by maximum likelihood method…
Squared Euclidean distance: a statistical test to evaluate plant community change
Raymond D. Ratliff; Sylvia R. Mori
1993-01-01
The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
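The SED resemblance function itself is one line. A sketch, with invented species-abundance vectors for one sampling unit measured at two times:

```python
def squared_euclidean(u, v):
    """Squared Euclidean distance (SED) between two sampling-unit (SU)
    vectors, e.g. species abundances recorded at two sampling times."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Hypothetical abundances of four species in one SU at two sampling dates:
su_time1 = [12, 5, 0, 3]
su_time2 = [10, 7, 1, 3]
change = squared_euclidean(su_time1, su_time2)   # contribution of this SU
```

In the procedure described, such distances computed across SUs and times form the sample from which community change is tested.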
An Object-Independent ENZ Metamaterial-Based Wideband Electromagnetic Cloak
Islam, Sikder Sunbeam; Faruque, Mohammad Rashed Iqbal; Islam, Mohammad Tariqul
2016-01-01
A new, metamaterial-based electromagnetic cloaking operation is proposed in this study. The metamaterial exhibits a sharp transmittance in the C-band of the microwave spectrum with negative effective property of permittivity at that frequency. Two metal arms were placed on an FR-4 substrate to construct a double-split-square shape structure. The size of the resonator was maintained to achieve the effective medium property of the metamaterial. Full wave numerical simulation was performed to extract the reflection and transmission coefficients for the unit cell. Later on, a single layer square-shaped cloak was designed using the proposed metamaterial unit cell. The cloak hides a metal cylinder electromagnetically, where the material exhibits epsilon-near-zero (ENZ) property. Cloaking operation was demonstrated adopting the scattering-reduction technique. The measured result was provided to validate the characteristics of the metamaterial and the cloak. Some object size- and shape-based analyses were performed with the cloak, and a common cloaking region was revealed over more than 900 MHz in the C-band for the different objects. PMID:27634456
Anomalous structural transition of confined hard squares.
Gurin, Péter; Varga, Szabolcs; Odriozola, Gerardo
2016-11-01
Structural transitions are examined in quasi-one-dimensional systems of freely rotating hard squares confined between two parallel walls. We find two competing phases: one is a fluid in which the squares have two sides parallel to the walls, while the other is a solidlike structure with a zigzag arrangement of the squares. Using the transfer matrix method we show that the configuration space consists of subspaces of fluidlike and solidlike phases, which are connected by low-probability microstates of mixed structures. The existence of these connecting states makes the thermodynamic quantities continuous and precludes the possibility of a true phase transition. However, the thermodynamic functions indicate a strong tendency toward a phase transition, and our replica exchange Monte Carlo simulation study detects several important markers of a first-order phase transition. Distinguishing a phase transition from a structural change is practically impossible with simulations and experiments in systems such as confined hard squares.
NASA Astrophysics Data System (ADS)
Sturrock, P. A.
2008-01-01
Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
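The extension described here replaces the analytic chi-square distribution with an empirical one generated under the constant-value hypothesis. A schematic Monte Carlo version (the GALLEX-GNO error model is not reproduced; the `sampler` argument stands in for whatever non-normal pdf applies):

```python
import random

def chi_square_stat(values, sigmas):
    """Chi-square statistic of the measurements about their weighted mean."""
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return sum(((v - mean) / s) ** 2 for v, s in zip(values, sigmas))

def mc_pvalue(values, sigmas, sampler, n_trials=20000, seed=1):
    """Empirical p-value of the chi-square statistic when the per-measurement
    error pdf is given by sampler(rng, sigma) and need not be normal.  The
    statistic is shift-invariant, so deviations are simulated about zero."""
    rng = random.Random(seed)
    observed = chi_square_stat(values, sigmas)
    hits = 0
    for _ in range(n_trials):
        sim = [sampler(rng, s) for s in sigmas]
        if chi_square_stat(sim, sigmas) >= observed:
            hits += 1
    return hits / n_trials

# Illustrative (invented) flux measurements with a Gaussian sampler; any
# asymmetric pdf could be substituted for rng.gauss below.
flux = [65.0, 80.0, 72.0, 90.0]
errs = [8.0, 9.0, 8.5, 10.0]
p_example = mc_pvalue(flux, errs, lambda rng, s: rng.gauss(0.0, s),
                      n_trials=5000)
```

With a Gaussian sampler the empirical p-value reproduces the standard chi-square test; the point of the extension is that the sampler can be replaced by the actual, non-normal measurement pdf.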
LaFranchi, Stephen H.; Maliga, Zoltan; Lui, Julian C.; Moon, Jennifer E.; McDeed, Cailin; Henke, Katrin; Zonana, Jonathan; Kingman, Garrett A.; Pers, Tune H.; Baron, Jeffrey; Rosenfeld, Ron G.; Hirschhorn, Joel N.; Harris, Matthew P.; Hwa, Vivian
2012-01-01
Context: Microcephalic primordial dwarfism (MPD) is a rare, severe form of human growth failure in which growth restriction is evident in utero and continues into postnatal life. Single causative gene defects have been identified in a number of patients with MPD, and all involve genes fundamental to cellular processes including centrosome functions. Objective: The objective of the study was to find the genetic etiology of a novel presentation of MPD. Design: The design of the study was whole-exome sequencing performed on two affected sisters in a single family. Molecular and functional studies of a candidate gene were performed using patient-derived primary fibroblasts and a zebrafish morpholino oligonucleotide knockdown model. Patients: Two sisters presented with a novel subtype of MPD, including severe intellectual disabilities. Main Outcome Measures: NIN, encoding Ninein, a centrosomal protein critically involved in asymmetric cell division, was identified as a candidate gene, and functional impacts in fibroblasts and zebrafish were studied. Results: From 34,606 genomic variants, two very rare missense variants in NIN were identified. Both probands were compound heterozygotes. In the zebrafish, ninein knockdown led to specific and novel defects in the specification and morphogenesis of the anterior neuroectoderm, resulting in a deformity of the developing cranium with a small, squared skull highly reminiscent of the human phenotype. Conclusion: We identified a novel clinical subtype of MPD in two sisters who have rare variants in NIN. We show, for the first time, that reduction of ninein function in the developing zebrafish leads to specific deficiencies of brain and skull development, offering a developmental basis for the myriad phenotypes in our patients. PMID:22933543
Unusual square roots in the ghost-free theory of massive gravity
NASA Astrophysics Data System (ADS)
Golovnev, Alexey; Smirnov, Fedor
2017-06-01
A crucial building block of ghost-free massive gravity is the square-root function of a matrix. This is a problematic entity from the viewpoint of existence and uniqueness. We accurately describe the freedom in choosing a square root of a (non-degenerate) matrix; it has discrete and, in special cases, continuous parts. When continuous freedom is present, the usual perturbation theory in terms of matrices can be critically ill-defined for some choices of the square root. We consider the new formulation of massive and bimetric gravity which deals directly with eigenvalues (in the disguise of elementary symmetric polynomials) instead of matrices. It allows for a meaningful discussion of perturbation theory in such cases, even though certain non-analytic features arise.
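The discrete freedom described above comes from the sign chosen for each eigenvalue's root when a square root is assembled. A sketch for diagonalizable matrices (the continuous freedom at degenerate eigenvalues is noted in a comment but not enumerated):

```python
import itertools
import numpy as np

def discrete_square_roots(A):
    """Square roots of a diagonalizable matrix A obtained from the discrete
    freedom of choosing a sign for each eigenvalue's root.  The continuous
    freedom mentioned in the abstract appears only when eigenvalues are
    degenerate and is not enumerated here."""
    vals, vecs = np.linalg.eig(A)
    vinv = np.linalg.inv(vecs)
    roots = []
    for signs in itertools.product((1.0, -1.0), repeat=len(vals)):
        D = np.diag([s * np.sqrt(v) for s, v in zip(signs, vals)])
        roots.append(vecs @ D @ vinv)   # each root R satisfies R @ R = A
    return roots
```

For a 2x2 matrix with distinct positive eigenvalues this produces four distinct roots, of which only one (all plus signs) is the principal root that perturbation theory usually assumes.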
NASA Astrophysics Data System (ADS)
Osabe, Keiichi; Kawai, Kotaro
2017-03-01
In this study, photopolymer films for angular-multiplexing hologram recording were studied experimentally. The films contained acrylamide as a monomer, eosin Y as a sensitizer, and triethanolamine as a promoter in a polyvinyl alcohol matrix. In order to determine the appropriate thickness of the photopolymer films for angular multiplexing, photopolymer films with thicknesses of 29-503 μm were exposed to two intersecting beams of a YVO laser at a wavelength of 532 nm to form a holographic grating with a spatial frequency of 653 lines/mm. The diffraction efficiencies as a function of the incident angle of reconstruction were measured. A narrow angular bandwidth and high diffraction efficiency are required for angular multiplexing; hence, we define the Q value, which is the diffraction efficiency divided by half the bandwidth. The Q value depended on the thickness of the films and was calculated from the measured diffraction efficiencies. The Q value of a 297-μm-thick film was the highest of all the films; therefore, the angular multiplexing experiments were conducted using 300-μm-thick films. In the angular multiplexing experiments, the object beam transmitted by a square aperture was focused by a Fourier transform lens and interfered with a reference beam. The maximum order of angular multiplexing was four. The signal intensity corresponding to the square-aperture transmission and the noise intensity corresponding to transmission without the square aperture were measured. The signal intensities decreased as the order of angular multiplexing increased, and the noise intensities did not depend on the order of angular multiplexing.
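The Q-value selection can be sketched in a few lines. The film data below are invented placeholders, not the measured values from the study; only the selection logic is illustrated:

```python
def q_value(efficiency, half_bandwidth):
    """Q = peak diffraction efficiency divided by half the angular bandwidth."""
    return efficiency / half_bandwidth

# (thickness in um, peak diffraction efficiency, half angular bandwidth in deg)
# -- illustrative numbers only, chosen so the 297-um film wins as in the study.
films = [
    (29,  0.30, 3.00),
    (105, 0.55, 1.40),
    (297, 0.72, 0.60),
    (503, 0.60, 0.55),
]

best = max(films, key=lambda f: q_value(f[1], f[2]))
```

The figure of merit rewards high efficiency and narrow angular bandwidth simultaneously, which is exactly the trade-off that angular multiplexing requires.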
The ITE Land classification: Providing an environmental stratification of Great Britain.
Bunce, R G; Barr, C J; Gillespie, M K; Howard, D C
1996-01-01
The surface of Great Britain (GB) varies continuously in land cover from one area to another. The objective of any environmentally based land classification is to produce classes that match the patterns that are present by helping to define clear boundaries. The more appropriate the analysis and data used, the better the classes will fit the natural patterns. The observation of inter-correlations between ecological factors is the basis for interpreting ecological patterns in the field, and the Institute of Terrestrial Ecology (ITE) Land Classification formalises such subjective ideas. The data inevitably comprise a large number of factors in order to describe the environment adequately. Single factors, such as altitude, would only be useful on a national basis if they were the only dominant causative agent of ecological variation. The ITE Land Classification has defined 32 environmental categories called 'land classes', initially based on a sample of 1-km squares in Great Britain but subsequently extended to all 240 000 1-km squares. The original classification was produced using multivariate analysis of 75 environmental variables. The extension to all squares in GB was performed using a combination of logistic discrimination and discriminant functions. The classes have provided a stratification for successive ecological surveys, the results of which have characterised the classes in terms of botanical, zoological and landscape features. The classification has also been applied to integrate diverse datasets including satellite imagery, soils and socio-economic information. A variety of models have used the structure of the classification, for example to show potential land use change under different economic conditions. The principal data sets relevant for planning purposes have been incorporated into a user-friendly computer package, called the 'Countryside Information System'.
Using Remote Sensing Data to Evaluate Surface Soil Properties in Alabama Ultisols
NASA Technical Reports Server (NTRS)
Sullivan, Dana G.; Shaw, Joey N.; Rickman, Doug; Mask, Paul L.; Luvall, Jeff
2005-01-01
Evaluation of surface soil properties via remote sensing could facilitate soil survey mapping, erosion prediction and allocation of agrochemicals for precision management. The objective of this study was to evaluate the relationship between soil spectral signature and surface soil properties in conventionally managed row crop systems. High-resolution RS data were acquired over bare fields in the Coastal Plain, Appalachian Plateau, and Ridge and Valley provinces of Alabama using the Airborne Terrestrial Applications Sensor multispectral scanner. Soils ranged from sandy Kandiudults to fine-textured Rhodudults. Surface soil samples (0-1 cm) were collected from 163 sampling points for soil organic carbon, particle size distribution, and citrate dithionite extractable iron content. Surface roughness, soil water content, and crusting were also measured during sampling. Two methods of analysis were evaluated: 1) multiple linear regression using common spectral band ratios, and 2) partial least squares regression. Our data show that thermal infrared spectra are strongly and linearly related to soil organic carbon, sand and clay content. Soil organic carbon content was the most difficult to quantify in these highly weathered systems, where soil organic carbon was generally less than 1.2%. Estimates of sand and clay content were best using partial least squares regression at the Valley site, explaining 42-59% of the variability. In the Coastal Plain, sandy surfaces prone to crusting limited estimates of sand and clay content via partial least squares and regression with common band ratios. Estimates of iron oxide content were a function of mineralogy and best accomplished using specific band ratios, with regression explaining 36-65% of the variability at the Valley and Coastal Plain sites, respectively.
The use of least squares methods in functional optimization of energy use prediction models
NASA Astrophysics Data System (ADS)
Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.
2012-06-01
The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters such as the overall heat transfer coefficients of the wall, roof and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base-case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis, the objective being to minimize the difference between the correlation-predicted results and the VisualDOE simulation results. The optimized correlation reproduces the simulated results to within about 0.81%. In terms of the effects of the various parameters, the newly defined weighted surface area parameter was found to have the greatest effect on normalized annual energy use; insulating the roofs and walls also had a major effect on building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on expected energy use. The method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.
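The coefficient-fitting step reduces to an ordinary multivariate least-squares problem. A sketch with a synthetic stand-in for the 1625 simulated cases (the parameter names, ranges, and coefficients are invented, and the real correlation's functional form is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design parameters per simulated case, standing in for the
# envelope/thermal parameters named in the abstract:
# [U_wall, U_roof, U_glazing, glazing fraction, weighted surface area]
X = rng.uniform(0.0, 1.0, size=(1625, 5))

# Synthetic "simulation" output standing in for the VisualDOE results
# (intercept first, then one coefficient per parameter).
true_coeffs = np.array([120.0, 35.0, 28.0, 35.0, 12.0, 55.0])
y = true_coeffs[0] + X @ true_coeffs[1:]

def fit_correlation(X, y):
    """Least-squares fit of a linear correlation y ~ b0 + b.x."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

coeffs = fit_correlation(X, y)
```

On this noise-free synthetic data the fit recovers the generating coefficients exactly; on the real simulation data the same machinery minimizes the correlation-vs-simulation discrepancy, the 0.81% figure quoted above.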
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
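The 0.88 figure can be reproduced with the textbook PPP example, which (an assumption here, since the abstract does not restate it) takes two measurements of 1.5 and 1.0, each with a 10% independent error plus a fully correlated 20% error proportional to the measured values, combined by generalized least squares:

```python
def gls_common_mean(x, cov):
    """Generalized least-squares estimate of a common mean for two
    correlated measurements, using an explicit 2x2 matrix inverse."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w0 = inv[0][0] + inv[1][0]      # components of 1^T V^-1
    w1 = inv[0][1] + inv[1][1]
    return (w0 * x[0] + w1 * x[1]) / (w0 + w1)

# Classic PPP numbers (assumed): two measurements with 10% independent
# errors and a fully correlated 20% error proportional to each value.
x = [1.5, 1.0]
stat = [0.10 * v for v in x]
norm = [0.20 * v for v in x]
cov = [[stat[0] ** 2 + norm[0] ** 2, norm[0] * norm[1]],
       [norm[0] * norm[1], stat[1] ** 2 + norm[1] ** 2]]

mean = gls_common_mean(x, cov)   # about 0.882, below both measurements
```

The estimate falling below both measurements is the "puzzle"; the abstract's point is that whether 0.88 or 1.1 is correct depends on whether the common error is additive (proportional to the measured values, as modeled here) or multiplicative.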
Geodesic regression on orientation distribution functions with its application to an aging study.
Du, Jia; Goh, Alvina; Kushnarev, Sergey; Qiu, Anqi
2014-02-15
In this paper, we treat orientation distribution functions (ODFs) derived from high angular resolution diffusion imaging (HARDI) as elements of a Riemannian manifold and present a method for geodesic regression on this manifold. In order to find the optimal regression model, we pose this as a least-squares problem involving the sum-of-squared geodesic distances between observed ODFs and their model fitted data. We derive the appropriate gradient terms and employ gradient descent to find the minimizer of this least-squares optimization problem. In addition, we show how to perform statistical testing for determining the significance of the relationship between the manifold-valued regressors and the real-valued regressands. Experiments on both synthetic and real human data are presented. In particular, we examine aging effects on HARDI via geodesic regression of ODFs in normal adults aged 22 years old and above. © 2013 Elsevier Inc. All rights reserved.
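In the Euclidean special case the sum-of-squared-geodesic-distances objective reduces to ordinary least squares, so the gradient-descent step can be illustrated on a straight-line "geodesic" (a toy analogue only, not the Riemannian ODF computation of the paper):

```python
def fit_geodesic_analogue(ts, ys, lr=0.01, steps=5000):
    """Gradient descent on a mean-of-squared-distances objective.  In the
    Euclidean special case the 'geodesic' is the line y = a*t + b and the
    squared geodesic distance reduces to the squared residual."""
    a, b = 0.0, 0.0
    n = len(ts)
    for _ in range(steps):
        # gradient of (1/n) * sum((a*t + b - y)^2) w.r.t. a and b
        ga = sum(2.0 * (a * t + b - y) * t for t, y in zip(ts, ys)) / n
        gb = sum(2.0 * (a * t + b - y) for t, y in zip(ts, ys)) / n
        a -= lr * ga
        b -= lr * gb
    return a, b
```

On a manifold the residual, gradient, and update must all be replaced by their geodesic counterparts (log map, Riemannian gradient, exponential map), but the descent loop has the same shape.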
NASA Astrophysics Data System (ADS)
Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.
2016-04-01
Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) to calibrate a hydrologic model, and their efficacy in improving streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and the efficacy is superior for catchments with medium to high average runoff. Synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
Three-dimensional generalization of the Van Cittert-Zernike theorem to wave and particle scattering
NASA Astrophysics Data System (ADS)
Zarubin, Alexander M.
1993-07-01
Coherence properties of primary partially coherent radiations (light, X-rays and particles) elastically scattered from a 3D object consisting of a collection of electrons and nuclei are analyzed in the Fresnel diffraction region and in the far field. The behaviour of the cross-spectral density of the scattered radiation, transverse and longitudinal to the local direction of propagation, is shown to be described by the 3D Fourier and Fresnel transforms, respectively, of the generalized radiance function of a scattering secondary source associated with the object. A relativistically correct expression is derived for the mutual coherence function of radiation which takes account of the dispersive propagation of particle beams in vacuum. An effect of the spatial coherence of radiation on the temporal one is found; in the Fresnel diffraction region, in distinction to the far field, both the longitudinal spatial coherence and the spectral width of radiation affect the longitudinal coherence. A solution of the 3D inverse scattering problem for partially coherent radiation is presented. It is shown that the squared modulus of the scattering potential and its 2D projections can be reconstructed from measurements of the modulus and phase of the degree of transverse spatial coherence of the scattered radiation. The results provide a theoretical basis for new methods of image formation and structure analysis in X-ray, electron, ion, and neutron optics.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear, time-varying differential system models.
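A minimal sketch of the modulating-function idea, under simplifying assumptions: a hypothetical first-order model y' = -a*y + b*u, sine modulating functions that vanish at both endpoints so integration by parts eliminates derivatives and initial-condition terms, and ordinary (unweighted) least squares rather than the paper's adaptive weighted scheme:

```python
import numpy as np

# simulate a hypothetical first-order system  y' = -a*y + b*u
a_true, b_true = 2.0, 1.0
T, N = 10.0, 4000
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
u = np.sin(t)
y = np.zeros(N)
for i in range(N - 1):                 # simple Euler integration
    y[i + 1] = y[i] + dt * (-a_true * y[i] + b_true * u[i])

integ = lambda f: np.sum(f) * dt       # crude quadrature is enough here

# modulating functions phi_k(t) = sin(k*pi*t/T) vanish at t=0 and t=T, so
# integration by parts gives  int phi*y' dt = -int phi'*y dt  with no
# boundary (initial-condition) terms surviving
K = 6
A = np.zeros((K, 2))
c = np.zeros(K)
for k in range(1, K + 1):
    w = k * np.pi / T
    phi, dphi = np.sin(w * t), w * np.cos(w * t)
    c[k - 1] = -integ(dphi * y)                       # equals int phi*y' dt
    A[k - 1] = [-integ(phi * y), integ(phi * u)]      # columns for a and b

theta, *_ = np.linalg.lstsq(A, c, rcond=None)
a_hat, b_hat = theta
```

Each modulating function contributes one linear equation in (a, b), so no numerical differentiation of the noisy output is ever required.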
Direction information in multiple object tracking is limited by a graded resource.
Horowitz, Todd S; Cohen, Michael A
2010-10-01
Is multiple object tracking (MOT) limited by a fixed set of structures (slots), a limited but divisible resource, or both? Here, we answer this question by measuring the precision of the direction representation for tracked targets. The signature of a limited resource is a decrease in precision as the square root of the tracking load. The signature of fixed slots is a fixed precision. Hybrid models predict a rapid decrease to asymptotic precision. In two experiments, observers tracked moving disks and reported target motion direction by adjusting a probe arrow. We derived the precision of representation of correctly tracked targets using a mixture distribution analysis. Precision declined with target load according to the square-root law up to six targets. This finding is inconsistent with both pure and hybrid slot models. Instead, directional information in MOT appears to be limited by a continuously divisible resource.
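The square-root signature can be checked by fitting the exponent of the report variability versus tracking load; a hedged sketch on synthetic data (the resource model, baseline SD, and noise level are assumptions, not the authors' measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
loads = np.array([1, 2, 3, 4, 5, 6])     # number of tracked targets
sigma0 = 10.0                            # assumed baseline report SD (degrees)
# graded-resource model: report SD grows as the square root of the load
sd = sigma0 * np.sqrt(loads) * rng.normal(1.0, 0.02, loads.size)

# fit log(sd) = log(sigma0) + p*log(load); p near 0.5 supports a graded
# resource, while p near 0 (flat precision) would support fixed slots
X = np.column_stack([np.ones(loads.size), np.log(loads)])
coef, *_ = np.linalg.lstsq(X, np.log(sd), rcond=None)
p_hat = coef[1]
```

The fitted exponent `p_hat` discriminates the models: a pure slot account predicts a flat line (p ≈ 0) up to the slot limit, whereas the data pattern described above follows p ≈ 0.5.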
Methods of Fitting a Straight Line to Data: Examples in Water Resources
Hirsch, Robert M.; Gilroy, Edward J.
1984-01-01
Three methods of fitting straight lines to data are described and their purposes are discussed and contrasted in terms of their applicability in various water resources contexts. The three methods are ordinary least squares (OLS), least normal squares (LNS), and the line of organic correlation (OC). In all three methods the parameters are based on moment statistics of the data. When estimation of an individual value is the objective, OLS is the most appropriate. When estimation of many values is the objective and one wants the set of estimates to have the appropriate variance, then OC is most appropriate. When one wishes to describe the relationship between two variables and measurement error is unimportant, then OC is most appropriate. Where the error is important in descriptive problems or in calibration problems, then structural analysis techniques may be most appropriate. Finally, if the problem is one of describing some geographic trajectory, then LNS is most appropriate.
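All three slope estimates can be computed from the same moment statistics of the data; a sketch, with LNS implemented as the major axis of the covariance matrix (one standard reading of least normal squares):

```python
import numpy as np

def fit_lines(x, y):
    """Slopes of the three straight-line fits discussed above."""
    sx, sy = np.std(x), np.std(y)
    r = np.corrcoef(x, y)[0, 1]
    ols = r * sy / sx                 # ordinary least squares: vertical distances
    oc = np.sign(r) * sy / sx         # organic correlation: preserves variance
    # least normal squares (major axis): minimizes perpendicular distances,
    # via the dominant eigenvector of the covariance matrix
    evals, evecs = np.linalg.eigh(np.cov(x, y))
    v = evecs[:, np.argmax(evals)]
    lns = v[1] / v[0]
    return ols, lns, oc

x = np.arange(10.0)
ols, lns, oc = fit_lines(x, 2.0 * x)  # exact line: all three slopes agree
```

On noisy data the three diverge: |OLS slope| ≤ |OC slope| always (since |r| ≤ 1), which is exactly why OLS estimates have too little variance for some descriptive purposes.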
Comparing implementations of penalized weighted least-squares sinogram restoration
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-01-01
Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
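The quadratic PWLS objective reduces to a symmetric positive-definite linear system, which is what makes conjugate gradients attractive; a hedged toy sketch on a 1-D "sinogram row" with a first-difference roughness penalty (not the authors' preconditioned implementation; the weights, penalty strength, and signal are made up):

```python
import numpy as np

def conjugate_gradient(Amul, b, x0, iters=200, tol=1e-10):
    """Plain CG for SPD systems given as a mat-vec callable."""
    x = x0.copy()
    r = b - Amul(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = Amul(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 64
truth = np.sin(np.linspace(0, np.pi, n))      # smooth toy signal
w = rng.uniform(0.5, 2.0, n)                  # per-sample statistical weights
y = truth + rng.normal(0.0, 0.05, n)          # noisy measurement
beta = 0.5
D = np.diff(np.eye(n), axis=0)                # first-difference operator
R = D.T @ D                                   # quadratic roughness penalty

# minimizing (y-x)' W (y-x) + beta * x' R x  <=>  solving (W + beta*R) x = W y
Amul = lambda v: w * v + beta * (R @ v)
x_hat = conjugate_gradient(Amul, w * y, np.zeros(n))
```

Because the system matrix never has to be formed or inverted explicitly, the same loop scales to full sinograms where the direct matrix-inversion strategy becomes uncompetitive.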
A pilot study: the effects of music therapy interventions on middle school students' ESL skills.
Kennedy, Roy; Scott, Amanda
2005-01-01
The purpose of this study was to investigate the effects of music therapy techniques on the story retelling and speaking skills of English as a Second Language (ESL) middle school students. Thirty-four middle school students of Hispanic heritage, ages 10-12, in high and low-functioning groups participated in the study for 12 weeks. Pretest to posttest data yielded significant differences on the story retelling skills between the experimental and control groups. Chi Square comparisons on English speaking skills also yielded significant results over 3 months of music therapy intervention. A variety of music therapy techniques were used including music and movement, active music listening, group chanting and singing, musical games, rhythmic training, music and sign language, and lyric analysis and rewrite activities as supplemental activities to the ESL goals and objectives. Comparisons of individual subjects' scores indicated that all of the students in the experimental groups scored higher than the control groups on story retelling skills (with the exception of 1 pair of identical scores), regardless of high and low functioning placement. Monthly comparisons of the high and low functioning experimental groups indicated significant improvements in English speaking skills as well.
Tomonaga, Masaki; Imura, Tomoko
2010-07-08
Humans readily perceive whole shapes as intact when some portions of these shapes are occluded by another object. This type of amodal completion has also been widely reported among nonhuman animals and is related to pictorial depth perception. However, the effect of a cast shadow, a critical pictorial-depth cue for amodal completion, has been investigated only rarely from the comparative-cognitive perspective. In the present study, we examined this effect in chimpanzees and humans. Chimpanzees were slower in responding to a Pacman target with an occluding square than to the control condition, suggesting that participants perceptually completed the whole circle. When a cast shadow was added to the square, amodal completion occurred in both species. However, critical differences between the species emerged when the cast shadow was added to the Pacman figure, implying that the Pacman was in the sky casting a shadow on the square. The cast shadow prevented, to a significant extent, compulsory amodal completion in humans, but had no effect on chimpanzees. These results suggest that cast shadows played a critical role in enabling humans to infer the spatial relationship between the Pacman and the square. For chimpanzees, however, a cast shadow may be perceived as another "object". A limited role for cast shadows in the perception of pictorial depth has also been reported with respect to human cognitive development. Further studies on nonhuman primates using a comparative-developmental perspective will clarify the evolutionary origin of the role of cast shadows in visual perception.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
Fatone, Stefania; Caldwell, Ryan
2017-01-01
Background: Current transfemoral prosthetic sockets are problematic as they restrict function, lack comfort, and cause residual limb problems. Development of a subischial socket with lower proximal trim lines is an appealing way to address this problem and may contribute to improving quality of life of persons with transfemoral amputation. Objectives: The purpose of this study was to illustrate the use of a new subischial socket in two subjects. Study design: Case series. Methods: Two unilateral transfemoral prosthesis users participated in preliminary socket evaluations comparing functional performance of the new subischial socket to ischial containment sockets. Testing included gait analysis, socket comfort score, and performance-based clinical outcome measures (Rapid-Sit-To-Stand, Four-Square-Step-Test, and Agility T-Test). Results: For both subjects, comfort was better in the subischial socket, while gait and clinical outcomes were generally comparable between sockets. Conclusion: While these evaluations are promising regarding the ability to function in this new socket design, more definitive evaluation is needed. Clinical relevance: Using gait analysis, socket comfort score, and performance-based outcome measures, use of the Northwestern University Flexible Subischial Vacuum Socket was evaluated in two transfemoral prosthesis users. Socket comfort improved for both subjects, with function comparable to ischial containment sockets. PMID:28132589
Morais Ferreira, Janaína Madruga; Azevedo, Bruna Marcacini; Luccas, Valdecir; Bolini, Helena Maria André
2017-03-01
Functional food is a product containing nutrients that provide health benefits beyond basic nutrition. The objective of the present study was to evaluate the descriptive sensory profile and consumers' acceptance of functional (prebiotic) white chocolates with and without the addition of an antioxidant source (goji berry [GB]) and sucrose replacement. The descriptive sensory profile was determined by quantitative descriptive analysis (QDA) with trained assessors (n = 12), and the acceptance test was performed with 120 consumers. The correlation of descriptive and hedonic data was determined by partial least squares (PLS). The results of QDA indicated that GB reduces the perception of most aroma and flavor attributes, and enhances the bitter taste, bitter aftertaste, astringency, and most of the texture attributes. The consumers' acceptance of the chocolates was positive for all sensory characteristics, with acceptance scores above 6 on a 9-point scale. According to the PLS regression analysis, the descriptors cream color and cocoa butter flavor contributed positively to the acceptance of functional white chocolates. Therefore, prebiotic white chocolate with or without the addition of GB is innovative and can attract consumers, due to its functional properties, being a promising alternative for the food industry. © 2017 Institute of Food Technologists®.
Frijia, Stephane; Guhathakurta, Subhrajit; Williams, Eric
2012-02-07
Prior LCA studies take the operational phase to include all energy use within a residence, implying a functional unit of all household activities, but then exclude related supply chains such as production of food, appliances, and household chemicals. We argue that bounding the functional unit to provision of a climate controlled space better focuses the LCA on the building, rather than activities that occur within a building. The second issue explored in this article is how technological change in the operational phase affects life cycle energy. Heating and cooling equipment is replaced at least several times over the lifetime of a residence; improved efficiency of newer equipment affects life cycle energy use. The third objective is to construct parametric models to describe LCA results for a family of related products. We explore these three issues through a case study of energy use of residences: one-story and two-story detached homes, 1,500-3,500 square feet in area, located in Phoenix, Arizona, built in 2002 and retired in 2051. With a restricted functional unit and accounting for technological progress, approximately 30% of a building's life cycle energy can be attributed to materials and construction, compared to 0.4-11% in previous studies.
NASA Astrophysics Data System (ADS)
Curceac, S.; Ternynck, C.; Ouarda, T.
2015-12-01
Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting based on the bandwidth obtained by a cross validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
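The functional kernel estimator can be sketched compactly: curves are compared through a semi-metric, and a one-sided (asymmetrical) quadratic kernel turns distances into weights. A hedged toy version with an L2 semi-metric and a scalar response (the study uses richer semi-metrics, FPCA, and curve-valued forecasts; the sine-curve data here are invented):

```python
import numpy as np

def l2_semimetric(c1, c2, grid):
    """L2 distance between two discretized curves."""
    return np.sqrt(np.sum((c1 - c2) ** 2) * (grid[1] - grid[0]))

def functional_nw(train_curves, train_y, new_curve, grid, h):
    """Functional Nadaraya-Watson estimator with an asymmetrical quadratic
    kernel K(u) = (1 - u**2) on [0, 1] applied to nonnegative distances."""
    d = np.array([l2_semimetric(c, new_curve, grid) for c in train_curves])
    k = np.maximum(1.0 - (d / h) ** 2, 0.0)
    if k.sum() == 0:
        return train_y.mean()
    return np.sum(k * train_y) / np.sum(k)

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 1.0, 24)              # 24 "hourly" points per day
amps = rng.uniform(0.5, 2.0, 50)
curves = np.array([a * np.sin(2 * np.pi * grid) for a in amps])
y = amps ** 2                                 # scalar response tied to the curve
pred = functional_nw(curves, y, np.sin(2 * np.pi * grid), grid, h=0.3)
```

Training curves far from the query (distance beyond the bandwidth h) receive exactly zero weight, so the forecast is a local average over the most similar past days.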
What do you measure when you measure the Hall effect?
NASA Astrophysics Data System (ADS)
Koon, D. W.; Knickerbocker, C. J.
1993-02-01
A formalism for calculating the sensitivity of Hall measurements to local inhomogeneities of the sample material or the magnetic field is developed. This Hall weighting function g(x,y) is calculated for various placements of current and voltage probes on square and circular laminar samples. Unlike the resistivity weighting function, it is nonnegative throughout the entire sample, provided all probes lie at the edge of the sample. Singularities arise in the Hall weighting function near the current and voltage probes except in the case where these probes are located at the corners of a square. Implications of the results for cross, clover, and bridge samples, and for metal-insulator transition and quantum Hall studies, are discussed.
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition to guide the algorithm to a local minimum in normalized chi-squared, rather than in a Euclidean distance or a divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
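Uncertainty-weighted NMF can be sketched with multiplicative updates in which each data entry carries its own (inverse-variance) weight, so noisy measurements pull less on the factorization; this is a hedged toy version of the weighted least-squares idea, not the authors' implementation, and the data, rank, and weights are invented:

```python
import numpy as np

def wls_nmf(V, Q, rank, iters=500, eps=1e-9, seed=0):
    """Weighted least-squares NMF: minimize sum(Q * (V - W @ H)**2) via
    multiplicative updates; Q holds per-entry inverse-variance weights."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, rank))
    H = rng.uniform(0.1, 1.0, (rank, m))
    for _ in range(iters):
        WH = W @ H
        H *= (W.T @ (Q * V)) / (W.T @ (Q * WH) + eps)
        WH = W @ H
        W *= ((Q * V) @ H.T) / ((Q * WH) @ H.T + eps)
    return W, H

rng = np.random.default_rng(3)
W0 = rng.uniform(0.0, 1.0, (20, 3))
H0 = rng.uniform(0.0, 1.0, (3, 15))
V = W0 @ H0                                   # synthetic rank-3 nonneg data
Q = rng.uniform(0.5, 2.0, V.shape)            # per-entry weights (assumed)
W, H = wls_nmf(V, Q, rank=3)
err = np.sum(Q * (V - W @ H) ** 2)            # weighted reconstruction error
```

The multiplicative form keeps W and H nonnegative automatically, since every update multiplies by a ratio of nonnegative terms.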
Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.
Jiang, Zhixing; Zhang, David; Lu, Guangming
2018-04-19
Radial artery pulse diagnosis has been playing an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis is also of great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at the patient's wrist and make diagnoses based on subjective personal experience. With research on pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with fitting using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method can represent the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as auto-regression and Gaussian mixture models. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
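Fitting a truncated Fourier series by least squares is a linear problem in the sine/cosine coefficients; a hedged sketch on a synthetic periodic waveform (the waveform shape and harmonic count are assumptions, not the paper's pulse data):

```python
import numpy as np

def dfs_design(t, period, n_harmonics):
    """Design matrix for a truncated discrete Fourier series."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.column_stack(cols)

# synthetic "pulse-like" periodic waveform (hypothetical stand-in for data)
period = 1.0
t = np.linspace(0.0, period, 200, endpoint=False)
pulse = (np.sin(2 * np.pi * t)
         + 0.4 * np.sin(4 * np.pi * t + 0.8)
         + 0.1 * np.sin(6 * np.pi * t))

X = dfs_design(t, period, n_harmonics=5)
coef, *_ = np.linalg.lstsq(X, pulse, rcond=None)   # least-squares DFS fit
fit = X @ coef
rmse = np.sqrt(np.mean((fit - pulse) ** 2))
```

The coefficient vector `coef` plays the role of the feature vector described in the abstract: one number per sine/cosine sub-component plus a constant offset.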
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
Sensing Strategies for Disambiguating among Multiple Objects in Known Poses.
1985-08-01
Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo 855, August 1985. 545 Technology Square, Cambridge, MA 02139.
3D Object Recognition: Symmetry and Virtual Views
1992-12-01
Massachusetts Institute of Technology, Artificial Intelligence Laboratory and Center for Biological and Computational Learning, A.I. Memo No. 1409, C.B.C.L. Paper No. 76, December 1992. 545 Technology Square, Cambridge, MA. Research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory.
Controlling Sample Rotation in Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Stoneburner, J. D.
1985-01-01
Rotation of acoustically levitated object stopped or controlled according to phase-shift monitoring and control concept. Principle applies to square-cross-section levitation chamber with two perpendicular acoustic drivers operating at same frequency. Phase difference between X and Y acoustic excitation measured at one corner by measuring variation of acoustic amplitude sensed by microphone. Phase of driver adjusted to value that produces no rotation or controlled rotation of levitated object.
Natural Object Categorization.
1987-11-01
Massachusetts Institute of Technology, Artificial Intelligence Laboratory, AI-TR-1091, November 1987. 545 Technology Square, Cambridge, MA. Describes research done at the Department of Brain and Cognitive Sciences and the Artificial Intelligence Laboratory.
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance as indicated by the root mean square errors is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Correlated Noise: How it Breaks NMF, and What to Do About It.
Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D
2011-01-12
Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data.
A sigmoidal model for biosorption of heavy metal cations from aqueous media.
Özen, Rümeysa; Sayar, Nihat Alpagu; Durmaz-Sam, Selcen; Sayar, Ahmet Alp
2015-07-01
A novel multi-input single-output (MISO) black-box sigmoid model is developed to simulate the biosorption of heavy metal cations by the fission yeast from aqueous medium. Validation and verification of the model are done through statistical chi-squared hypothesis tests, and the model is evaluated by uncertainty and sensitivity analyses. The simulated results are in agreement with the data of the studied system, in which Schizosaccharomyces pombe biosorbs Ni(II) cations at various process conditions. Experimental data were obtained originally for this work using dead cells of an adapted variant of S. pombe and are represented by Freundlich isotherms. A process optimization scheme is proposed using the present model to build a novel application of a cost-merit objective function which would be useful to predict optimal operation conditions. Copyright © 2015. Published by Elsevier Inc.
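Freundlich isotherms linearize under logarithms, so their parameters can be estimated by ordinary least squares; a hedged sketch on made-up equilibrium data (Kf, n, the concentrations, and the noise level are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

# hypothetical equilibrium data following a Freundlich isotherm q = Kf * C**(1/n)
Kf_true, n_true = 1.8, 2.5
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # residual concentration (mg/L)
rng = np.random.default_rng(4)
q = Kf_true * C ** (1.0 / n_true) * rng.normal(1.0, 0.01, C.size)

# linearize: log q = log Kf + (1/n) * log C, then ordinary least squares
X = np.column_stack([np.ones(C.size), np.log(C)])
coef, *_ = np.linalg.lstsq(X, np.log(q), rcond=None)
Kf_hat, n_hat = np.exp(coef[0]), 1.0 / coef[1]
```

The fitted isotherm can then feed a process-optimization objective of the kind the abstract describes, with Kf and n as the sorption inputs.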
Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and that performance based on root mean square error appears to be acceptable for the intended AgRISTARS large area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Sparse and stable Markowitz portfolios
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-01-01
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
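The core of the approach, an L1 penalty added to a least-squares objective, can be sketched with proximal-gradient (ISTA) iterations; this toy version omits the paper's budget and no-short constraints, and the return data, target series, and penalty weight are simulated assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(R, target, lam, iters=2000):
    """Proximal gradient (ISTA) for:  min_w ||R @ w - target||^2 + lam*||w||_1."""
    L = 2.0 * np.linalg.norm(R, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(R.shape[1])
    for _ in range(iters):
        grad = 2.0 * R.T @ (R @ w - target)
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(5)
T, n = 200, 10
R = rng.normal(0.0, 1.0, (T, n))               # simulated (standardized) returns
w_true = np.zeros(n)
w_true[[0, 3]] = [1.0, -0.5]                   # a sparse "ideal" portfolio
target = R @ w_true + rng.normal(0.0, 0.1, T)  # desired return series plus noise
w = ista(R, target, lam=5.0)
n_active = int(np.sum(np.abs(w) > 1e-8))       # number of active positions
```

The soft-thresholding step is what zeroes out small weights, yielding the sparse, few-active-position portfolios the abstract describes.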
NASA Astrophysics Data System (ADS)
Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.
2018-03-01
This paper presents an Evolutionary Programming (EP) approach proposed to optimize the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection-system hidden failure. The data were collected from simulations of a hidden-failure probability model driven by historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with an objective function that minimizes the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers, are selected by the EP-ANN. The IEEE 14-bus system has been tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).
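The optimization loop can be sketched as follows. This is a minimal evolutionary programming routine (not the paper's exact scheme): each parent spawns one Gaussian-mutated offspring and the best half of the combined pool survives. The ANN-training MSE is replaced by a hypothetical analytic surrogate, since the real fitness would require training the network at every evaluation:

```python
import random

def evolutionary_programming(fitness, bounds, pop_size=30, generations=100, seed=1):
    """Classic EP for minimization: Gaussian mutation plus elitist truncation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for ind in pop:
            # Mutate each coordinate with noise scaled to its range, then clamp.
            child = [min(max(v + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                     for v, (lo, hi) in zip(ind, bounds)]
            offspring.append(child)
        pool = pop + offspring
        pool.sort(key=fitness)              # ascending: best (lowest MSE) first
        pop = pool[:pop_size]
    return pop[0]

# Stand-in fitness: in the paper's setting this would be the ANN's MSE as a
# function of (learning rate, momentum, ...); here a hypothetical bowl.
def surrogate_mse(p):
    return (p[0] - 0.3) ** 2 + (p[1] - 0.9) ** 2

best = evolutionary_programming(surrogate_mse, bounds=[(0.0, 1.0), (0.0, 1.0)])
```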
NASA Astrophysics Data System (ADS)
Surace, J.; Laher, R.; Masci, F.; Grillmair, C.; Helou, G.
2015-09-01
The Palomar Transient Factory (PTF) is a synoptic sky survey in operation since 2009. PTF utilizes a 7.1 square degree camera on the Palomar 48-inch Schmidt telescope to survey the sky primarily at a single wavelength (R-band) at a rate of 1000-3000 square degrees a night. The data are used to detect and study transient and moving objects such as gamma ray bursts, supernovae and asteroids, as well as variable phenomena such as quasars and Galactic stars. The data processing system at IPAC handles realtime processing and detection of transients, solar system object processing, high photometric precision processing and light curve generation, and long-term archiving and curation. This was developed under an extremely limited budget profile in an unusually agile development environment. Here we discuss the mechanics of this system and our overall development approach. Although a significant scientific installation in and of itself, PTF also serves as the prototype for our next generation project, the Zwicky Transient Facility (ZTF). Beginning operations in 2017, ZTF will feature a 50 square degree camera which will enable scanning of the entire northern visible sky every night. ZTF in turn will serve as a stepping stone to the Large Synoptic Survey Telescope (LSST), a major NSF facility scheduled to begin operations in the early 2020s.
Non-ellipsoidal inclusions as geological strain markers and competence indicators
NASA Astrophysics Data System (ADS)
Treagus, S. H.; Hudleston, P. J.; Lan, L.
1996-09-01
Geological objects that do not deform homogeneously with their matrix can be considered as inclusions with viscosity contrast. Such inclusions are generally treated as initially spherical or ellipsoidal. Theory shows that ellipsoidal inclusions deform homogeneously, so they maintain an ellipsoidal shape, regardless of the viscosity difference. However, non-ellipsoidal inclusions deform inhomogeneously, so will become irregular in shape. Geological objects such as porphyroblasts, porphyroclasts and sedimentary clasts are likely to be of this kind, with initially rectilinear, prismatic or superelliptical section shapes. We present two-dimensional finite-element models of deformed square inclusions, in pure shear (parallel or diagonal to the square), as a preliminary investigation of the deformation of non-ellipsoidal inclusions with viscosity contrast. Competent inclusions develop marked barrel shapes with horn-like corners, as described for natural ductile boudins, or slightly wavy rhombs. Incompetent inclusions develop 'dumb-bell' or bone shapes, with a surprising degree of bulging of the shortened edges, or rhomb to sheath shapes. The results lead to speculation for inclusions in the circle to square shape range, and for asymmetric orientations. Anticipated shapes range from asymmetric barrels, lemons or flags for competent inclusions, to ribbon or fish shapes for incompetent inclusions. We conclude that shapes of inclusions and clasts provide an important new type of strain marker and competence criterion.
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
An iterative algorithm for calculating stylus radius unambiguously
NASA Astrophysics Data System (ADS)
Vorburger, T. V.; Zheng, A.; Renegar, T. B.; Song, J.-F.; Ma, L.
2011-08-01
The stylus radius is an important specification for stylus instruments and is commonly provided by instrument manufacturers. However, it is difficult to measure the stylus radius unambiguously. Accurate profiles of the stylus tip may be obtained by profiling over an object sharper than itself, such as a razor blade. However, the stylus profile thus obtained is a partial arc, and unless the shape of the stylus tip is a perfect sphere or circle, the effective value of the radius depends on the length of the tip profile over which the radius is determined. We have developed an iterative, least-squares algorithm aimed at determining the effective least-squares stylus radius unambiguously. So far, the algorithm converges to reasonable results for the least-squares stylus radius. We suggest that the algorithm be considered for adoption in documentary standards describing the properties of stylus instruments.
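A single pass of such a fit can be illustrated with the classical algebraic (Kasa) least-squares circle fit; the paper's full algorithm additionally iterates over the arc length used, which is omitted here. The 2 µm tip radius, arc extent, and noise level below are hypothetical:

```python
import numpy as np

def circle_fit_lsq(x, y):
    """Algebraic least-squares (Kasa) circle fit: solve the linear model
    x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b), then
    radius = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Hypothetical partial arc (+/- 60 degrees about the apex) of a 2 um tip,
# with slight measurement noise.
rng = np.random.default_rng(0)
theta = np.linspace(np.pi / 2 - np.pi / 3, np.pi / 2 + np.pi / 3, 50)
x = 2.0 * np.cos(theta) + rng.normal(0.0, 0.005, theta.size)
y = 2.0 * np.sin(theta) + rng.normal(0.0, 0.005, theta.size)
cx, cy, radius = circle_fit_lsq(x, y)
```

Repeating this fit over shorter and longer portions of the arc is what exposes the ambiguity the authors set out to remove.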
Closed-form analysis of fiber-matrix interface stresses under thermo-mechanical loadings
NASA Technical Reports Server (NTRS)
Naik, Rajiv A.; Crews, John H., Jr.
1992-01-01
Closed-form techniques for calculating fiber-matrix (FM) interface stresses, using repeating square and diamond regular arrays, were presented for a unidirectional composite under thermo-mechanical loadings. An Airy's stress function micromechanics approach from the literature, developed for calculating overall composite moduli, was extended in the present study to compute FM interface stresses for a unidirectional graphite/epoxy (AS4/3501-6) composite under thermal, longitudinal, transverse, transverse shear, and longitudinal shear loadings. Comparison with finite element results indicates excellent agreement of the FM interface stresses for the square array. Under thermal and longitudinal loading, the square array has the same FM peak stresses as the diamond array. The square array predicted higher stress concentrations under transverse normal and longitudinal shear loadings than the diamond array. Under transverse shear loading, the square array had a higher stress concentration while the diamond array had a higher radial stress concentration. Stress concentration factors under transverse shear and longitudinal shear loadings were very sensitive to fiber volume fraction. The present analysis provides a simple way to calculate accurate FM interface stresses for both the square and diamond array configurations.
Cardiovascular Autonomic Dysfunction in Patients with Morbid Obesity
de Sant Anna Junior, Maurício; Carneiro, João Regis Ivar; Carvalhal, Renata Ferreira; Torres, Diego de Faria Magalhães; da Cruz, Gustavo Gavina; Quaresma, José Carlos do Vale; Lugon, Jocemir Ronaldo; Guimarães, Fernando Silva
2015-01-01
Background Morbid obesity is directly related to deterioration in cardiorespiratory capacity, including changes in cardiovascular autonomic modulation. Objective This study aimed to assess the cardiovascular autonomic function in morbidly obese individuals. Methods Cross-sectional study, including two groups of participants: Group I, composed of 50 morbidly obese subjects, and Group II, composed of 30 nonobese subjects. The autonomic function was assessed by heart rate variability in the time domain (standard deviation of all normal R-R intervals [SDNN]; square root of the mean of the squared differences between successive R-R intervals [RMSSD]; and percentage of successive R-R interval differences greater than 50 milliseconds [pNN50]), and in the frequency domain (high frequency [HF]; low frequency [LF]: integration of the power spectral density function over the high-frequency and low-frequency ranges, respectively). Between-group comparisons were performed by the Student’s t-test, with a level of significance of 5%. Results Obese subjects had lower values of SDNN (40.0 ± 18.0 ms vs. 70.0 ± 27.8 ms; p = 0.0004), RMSSD (23.7 ± 13.0 ms vs. 40.3 ± 22.4 ms; p = 0.0030), pNN50 (14.8 ± 10.4 % vs. 25.9 ± 7.2%; p = 0.0061) and HF (30.0 ± 17.5 Hz vs. 51.7 ± 25.5 Hz; p = 0.0023) than controls. Mean LF/HF ratio was higher in Group I (5.0 ± 2.8 vs. 1.0 ± 0.9; p = 0.0189), indicating changes in the sympathovagal balance. No statistical difference in LF was observed between Group I and Group II (50.1 ± 30.2 Hz vs. 40.9 ± 23.9 Hz; p = 0.9013). Conclusion Morbidly obese individuals have increased sympathetic activity and reduced parasympathetic activity, featuring cardiovascular autonomic dysfunction. PMID:26536979
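The time-domain indices named above follow directly from an R-R interval series. The sketch below, with a made-up ten-beat series, shows the standard definitions of SDNN, RMSSD, and pNN50:

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain HRV indices from a list of normal R-R intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all normal R-R intervals.
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    # RMSSD: root mean square of successive differences.
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: percentage of successive differences exceeding 50 ms.
    nn50 = sum(1 for d in diffs if abs(d) > 50)
    pnn50 = 100.0 * nn50 / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Hypothetical ten-beat R-R series (ms).
metrics = hrv_time_domain([800, 810, 790, 860, 820, 795, 805, 870, 815, 800])
```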
Fruit fly optimization based least square support vector regression for blind image restoration
NASA Astrophysics Data System (ADS)
Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei
2014-11-01
The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration methods require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, such priors are unavailable in many real image processing tasks, so the recovery must be treated as a blind image restoration scenario. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in PSF and noise energy, blurred images can be quite different. It is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes a LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA. The fitness function of FOA is calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect.
Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up the restoration and performs better. Both objective and subjective restoration performance are studied in the comparison experiments.
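The hyperparameter search can be sketched as follows. This is a simplified fruit fly optimization loop (the canonical FOA works through a smell-concentration transform of distance, omitted here), and the LSSVR restoration-error fitness is replaced by a hypothetical analytic surface, since evaluating the real fitness would require training the regressor at each step:

```python
import random

def foa_minimize(f, iters=200, pop=20, seed=2):
    """Minimal fruit-fly-style search in 2-D: each iteration, a swarm of
    flies takes random steps around the current location, and the swarm
    relocates to the best fly found so far."""
    rng = random.Random(seed)
    x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
    best_val = f((x, y))
    for _ in range(iters):
        for _ in range(pop):
            cx = x + rng.uniform(-0.5, 0.5)
            cy = y + rng.uniform(-0.5, 0.5)
            val = f((cx, cy))
            if val < best_val:              # keep only improving moves
                best_val, x, y = val, cx, cy
    return (x, y), best_val

# Stand-in for the LSSVR error surface over its two hyperparameters
# (regularization constant, kernel width); hypothetical analytic bowl.
err = lambda p: (p[0] - 1.5) ** 2 + (p[1] - 0.5) ** 2
best_params, best_err = foa_minimize(err)
```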
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
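The P-OLS estimator has the closed form of a generalized ridge regression, which can be sketched directly. The details below (piecewise-linear "hat" basis, knot count, second-order difference penalty, smoothing parameter) are illustrative choices, not those of the paper:

```python
import numpy as np

def hat_basis(t, knots):
    # Piecewise-linear ("hat") basis functions centred on equally spaced knots.
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]) / h)

def penalized_ols(B, y, lam, order=2):
    """Penalized OLS with a quadratic roughness penalty: minimizes
    ||y - B c||^2 + lam * ||D c||^2 with D a difference operator, giving the
    generalized ridge estimator c = (B'B + lam D'D)^{-1} B'y."""
    D = np.diff(np.eye(B.shape[1]), n=order, axis=0)
    return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

# Toy example: smooth signal plus noise, fitted with 15 basis functions.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
knots = np.linspace(0.0, 1.0, 15)
B = hat_basis(t, knots)
y = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.05, t.size)
fit = B @ penalized_ols(B, y, lam=0.1)
```

Increasing `lam` shrinks the fitted curve toward a straight line; `lam = 0` recovers unpenalized OLS on the basis.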
NASA Technical Reports Server (NTRS)
Laughlin, Daniel
2008-01-01
Persistent Immersive Synthetic Environments (PISE) are not just connection points, they are meeting places. They are the new public squares, village centers, malt shops, malls and pubs all rolled into one. They come with a sense of 'thereness' that engages the mind like a real place does. Learning starts with code: the code defines 'objects'; the objects exist in computer space, known as the 'grid'; the objects and space combine to create a 'place'; a 'world' is created. Before long, the grid and code become obscure, and the 'world' maintains focus.
Dawson, Neil; Thompson, Rhiannon J.; McVie, Allan; Thomson, David M.; Morris, Brian J.; Pratt, Judith A.
2012-01-01
Objective: In the present study, we employ mathematical modeling (partial least squares regression, PLSR) to elucidate the functional connectivity signatures of discrete brain regions in order to identify the functional networks subserving PCP-induced disruption of distinct cognitive functions and their restoration by the procognitive drug modafinil. Methods: We examine the functional connectivity signatures of discrete brain regions that show overt alterations in metabolism, as measured by semiquantitative 2-deoxyglucose autoradiography, in an animal model (subchronic phencyclidine [PCP] treatment), which shows cognitive inflexibility with relevance to the cognitive deficits seen in schizophrenia. Results: We identify the specific components of functional connectivity that contribute to the rescue of this cognitive inflexibility and to the restoration of overt cerebral metabolism by modafinil. We demonstrate that modafinil reversed both the PCP-induced deficit in the ability to switch attentional set and the PCP-induced hypometabolism in the prefrontal (anterior prelimbic) and retrosplenial cortices. Furthermore, modafinil selectively enhanced metabolism in the medial prelimbic cortex. The functional connectivity signatures of these regions identified a unifying functional subsystem underlying the influence of modafinil on cerebral metabolism and cognitive flexibility that included the nucleus accumbens core and locus coeruleus. In addition, these functional connectivity signatures identified coupling events specific to each brain region, which relate to known anatomical connectivity. Conclusions: These data support clinical evidence that modafinil may alleviate cognitive deficits in schizophrenia and also demonstrate the benefit of applying PLSR modeling to characterize functional brain networks in translational models relevant to central nervous system dysfunction. PMID:20810469
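The PLSR machinery used above can be sketched with the NIPALS algorithm for a scalar response (PLS1). This is a generic illustration on synthetic data, not the authors' brain-connectivity pipeline; with as many components as predictors, PLS1 reduces to ordinary least squares, which the example exploits:

```python
import numpy as np

def pls1(X, y, n_components=2):
    """Bare-bones PLS1 (NIPALS): extract components maximizing covariance
    between X scores and y, then express the fit as regression coefficients
    B = W (P'W)^{-1} q on the centred predictors."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)           # weight vector
        t = Xk @ w                          # score
        p = Xk.T @ t / (t @ t)              # X loading
        qk = yk @ t / (t @ t)               # y loading
        Xk = Xk - np.outer(t, p)            # deflate
        yk = yk - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Synthetic data: 5 predictors, sparse true coefficients, small noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(0.0, 0.01, 100)
beta = pls1(X, y, n_components=5)
```

In practice fewer components than predictors are retained, which is where PLSR's regularizing behaviour comes from.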
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-01-01
Purpose: A high-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm.
After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324
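The ESF → LSF → MTF chain described in the methods can be illustrated generically (the hexagonal-sampling and edge-registration details of the paper are omitted). The Gaussian edge blur below is a hypothetical stand-in for a measured edge profile:

```python
import numpy as np
from math import erf

# Hypothetical finely sampled edge-spread function (ESF): an ideal edge
# blurred by a Gaussian of sigma = 20 um, sampled every 1 um.
x = np.arange(-256.0, 256.0, 1.0)                   # position, um
sigma = 20.0
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * np.sqrt(2.0)))) for xi in x])

lsf = np.gradient(esf, x)                           # line-spread function
lsf = lsf / lsf.sum()                               # unit area, so MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))                      # presampled MTF
freq = np.fft.rfftfreq(x.size, d=1.0) * 1000.0      # cycles/mm (d is in um)
```

For this Gaussian blur the result should track the analytic MTF exp(−2π²σ²f²); a real edge measurement would replace `esf` with registered, binned edge data.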
Adaptive object tracking via both positive and negative models matching
NASA Astrophysics Data System (ADS)
Li, Shaomei; Gao, Chao; Wang, Yawen
2015-03-01
To improve the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, the object is relocated via SIFT feature matching and voting when drift occurs, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
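The frame-by-frame tracking step can be sketched with a minimal 1-D bootstrap particle filter. The paper's tracker is 2-D with an appearance likelihood from the PLS classifier; the Gaussian likelihood, random-walk motion model, and track below are hypothetical simplifications:

```python
import math
import random

def particle_filter(observations, n_particles=500, motion_std=0.5, obs_std=0.3, seed=3):
    """Minimal bootstrap particle filter for a 1-D state observed with noise."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: diffuse particles under a random-walk motion model.
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # Update: weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(p * w for p, w in zip(particles, weights)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Hypothetical object coordinate drifting across 30 frames, observed with noise.
rng_obs = random.Random(7)
true_track = [0.1 * k for k in range(30)]
obs = [v + rng_obs.gauss(0.0, 0.3) for v in true_track]
est = particle_filter(obs)
```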
Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Askan, A.; /Carnegie Mellon U.; Akcelik, V.
2009-04-30
We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill posedness and multiple minima. To overcome ill posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
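The role of the total-variation term can be illustrated on a toy 1-D problem: a piecewise-constant "medium" observed through a Gaussian blur is recovered by gradient descent on a data-misfit-plus-smoothed-TV objective. All operators, sizes, and constants below are hypothetical stand-ins for the paper's wave-propagation setting:

```python
import numpy as np

def tv_objective(m, A, d, alpha, eps=1e-2):
    """Data misfit plus smoothed total-variation penalty:
    ||A m - d||^2 + alpha * sum_i sqrt((m_{i+1} - m_i)^2 + eps^2)."""
    r = A @ m - d
    dm = np.diff(m)
    return r @ r + alpha * np.sum(np.sqrt(dm ** 2 + eps ** 2))

def tv_gradient(m, A, d, alpha, eps=1e-2):
    dm = np.diff(m)
    w = dm / np.sqrt(dm ** 2 + eps ** 2)    # derivative of the smoothed TV term
    g = 2.0 * A.T @ (A @ m - d)
    g[:-1] -= alpha * w
    g[1:] += alpha * w
    return g

# Toy 1-D "medium": a piecewise-constant profile seen through a Gaussian blur.
rng = np.random.default_rng(0)
n = 60
m_true = np.where(np.arange(n) < 30, 1.0, 2.0)
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
d = A @ m_true + rng.normal(0.0, 0.01, n)

alpha, eps = 0.1, 1e-2
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2 + alpha * 4.0 / eps)
m = np.zeros(n)
for _ in range(3000):
    m = m - step * tv_gradient(m, A, d, alpha)
```

The TV penalty damps oscillatory components of `m` while leaving the jump at the midpoint largely intact, which is the behaviour the paper relies on for discontinuous media.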
HETDEX: The Physical Properties of [O II] Emitters
NASA Astrophysics Data System (ADS)
Ciardullo, Robin; Gronwall, C.; Blanc, G.; Gebhardt, K.; Jogee, S.; HETDEX Collaboration
2012-01-01
Beginning in Fall 2012, the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will map out 300 square degrees of sky via a blind integral-field spectroscopic survey. While the main goal of the project is to measure the power spectrum of 800,000 Lyα emitters between 1.9 < z < 3.5, the survey will also identify 1,000,000 [O II] emitting galaxies with z < 0.5. Together, these data will provide an unprecedented view of the emission-line universe and allow us not only to examine the history of star formation, but also to study the properties of star-forming galaxies as a function of environment. To prepare for HETDEX, a 3 year pilot survey was undertaken with a prototype integral-field spectrograph (VIRUS-P) on the McDonald 2.7-m telescope. This program, which tested the HETDEX instrumentation, data reduction, target properties, observing procedures, and ancillary data requirements, produced R=800 spectra between 350 nm and 580 nm for 169 square arcmin of sky in the COSMOS, GOODS-N, MUNICS-S2, and XMM-LSS fields. The survey found 397 emission-line objects, including 104 Lyα emitters between 1.9 < z < 3.8 and 284 [O II] galaxies with z < 0.56. We present the properties of the [O II] emitters found in this survey, and detail their line strengths, internal extinction, and emission-line luminosity function. We use these data to show that over the past 5 Gyr, star formation in the universe has decreased linearly, in both an absolute and a relative sense. We compare the star formation rates measured via [O II] fluxes to those determined via the rest-frame ultraviolet, explore the extinction corrections for our sample, and discuss the implications of our work for the main HETDEX survey.
Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, with a view to lowering trackside equipment and maintenance costs, a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large ratios of the correlation distance to the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
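A minimal sketch of the LMMSE estimation step this abstract evaluates, assuming a zero-mean Gauss-Markov (exponential) spatial correlation. The station positions, correlation distance, and noise level below are invented for illustration; the paper's geometry and parameters differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D station positions (km) and exponential (Gauss-Markov)
# correlation with correlation distance d_c.
x = np.array([0.0, 10.0, 25.0, 40.0, 60.0])
d_c = 50.0
C_dd = np.exp(-np.abs(x[:, None] - x[None, :]) / d_c)  # signal covariance
sigma_n = 0.3
R = C_dd + sigma_n**2 * np.eye(len(x))  # measurement covariance

# Simulate a true DC field plus noise, then apply the LMMSE estimator
# dc_hat = C_dd R^{-1} y (zero-mean case).
true_dc = rng.multivariate_normal(np.zeros(len(x)), C_dd)
y = true_dc + sigma_n * rng.standard_normal(len(x))
dc_hat = C_dd @ np.linalg.solve(R, y)
```

The estimator matrix C_dd R⁻¹ has all eigenvalues below one, i.e. it shrinks the noisy measurements toward zero by an amount governed by the correlation-distance-to-separation ratio, which is exactly the sensitivity the paper studies.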
Effect of Different Phases of Menstrual Cycle on Heart Rate Variability (HRV)
Singh, K. D.; Kumar, Avnish
2015-01-01
Background Heart Rate Variability (HRV), which is a measure of the cardiac autonomic tone, displays physiological changes throughout the menstrual cycle. The functions of the ANS in various phases of the menstrual cycle were examined in some studies. Aims and Objectives The aim of our study was to observe the effect of the menstrual cycle on cardiac autonomic function parameters in healthy females. Materials and Methods A cross-sectional (observational) study was conducted on 50 healthy females, in the age group of 18-25 years. Heart Rate Variability (HRV) was recorded by Physio Pac (PC-2004). The data consisted of Time Domain Analysis and Frequency Domain Analysis in the menstrual, proliferative, and secretory phases of the menstrual cycle. The data collected were analysed statistically using Student's paired t-test. Results The difference in mean heart rate, LF power%, LFnu and HFnu in the menstrual and proliferative phases was found to be statistically significant. The difference in mean RR, mean HR, RMSSD (the square root of the mean of the squares of the successive differences between adjacent NNs), NN50 (the number of pairs of successive NNs that differ by more than 50 ms), pNN50 (the proportion of NN50 divided by the total number of NNs), VLF (very low frequency) power, LF (low frequency) power, LF power%, HF power%, LF/HF ratio, LFnu and HFnu was found to be statistically significant in the proliferative and secretory phases. The difference in mean RR, mean HR, LFnu and HFnu was found to be statistically significant in the secretory and menstrual phases. Conclusion From the study it can be concluded that sympathetic nervous activity in the secretory phase is greater than in the proliferative phase, whereas parasympathetic nervous activity is predominant in the proliferative phase. PMID:26557512
NASA Technical Reports Server (NTRS)
Carrier, Alain C.; Aubrun, Jean-Noel
1993-01-01
New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.
The Healthy Mind, Healthy Mobility Trial: A Novel Exercise Program for Older Adults.
Gill, Dawn P; Gregory, Michael A; Zou, Guangyong; Liu-Ambrose, Teresa; Shigematsu, Ryosuke; Hachinski, Vladimir; Fitzgerald, Clara; Petrella, Robert J
2016-02-01
More evidence is needed to conclude that a specific program of exercise and/or cognitive training warrants prescription for the prevention of cognitive decline. We examined the effect of a group-based standard exercise program for older adults, with and without dual-task training, on cognitive function in older adults without dementia. We conducted a proof-of-concept, single-blinded, 26-wk randomized controlled trial whereby participants recruited from preexisting exercise classes at the Canadian Centre for Activity and Aging in London, Ontario, were randomized to the intervention group (exercise + dual-task [EDT]) or the control group (exercise only [EO]). Each week (2 or 3 d·wk⁻¹), both groups accumulated a minimum of 50 min of aerobic exercise (target 75 min) from standard group classes and completed 45 min of beginner-level square-stepping exercise. The EDT group was also required to answer cognitively challenging questions while doing beginner-level square-stepping exercise (i.e., dual-task training). The effect of interventions on standardized global cognitive function (GCF) scores at 26 wk was compared between the groups using the linear mixed effects model approach. Participants (n = 44; 68% female; mean [SD] age: 73.5 [7.2] yr) had, on average, objective evidence of cognitive impairment (Montreal Cognitive Assessment scores, mean [SD]: 24.9 [1.9]) but not dementia (Mini-Mental State Examination scores, mean [SD]: 28.8 [1.2]). After 26 wk, the EDT group showed greater improvement in GCF scores compared with the EO group (difference between groups in mean change [95% CI]: 0.20 SD [0.01-0.39], P = 0.04). A 26-wk group-based exercise program combined with dual-task training improved GCF in community-dwelling older adults without dementia.
Functional Relationships and Regression Analysis.
ERIC Educational Resources Information Center
Preece, Peter F. W.
1978-01-01
Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…
Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.
Fessler, J A; Booth, S D
1999-01-01
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
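For the approximately shift-invariant case this abstract contrasts with, a circulant preconditioner can be applied in O(n log n) via the FFT. The sketch below shows that classic step on a toy first column; it is not the paper's shift-variant preconditioner.

```python
import numpy as np

def circulant_precond_solve(first_col, r):
    """Apply the inverse of a circulant matrix C (defined by its first
    column) to a residual r via the FFT.

    A circulant matrix is diagonalized by the DFT, so solving C z = r
    reduces to elementwise division in the frequency domain.
    """
    eig = np.fft.fft(first_col)  # eigenvalues of C
    return np.real(np.fft.ifft(np.fft.fft(r) / eig))

# Toy circulant Hessian approximation: identity-plus-blur stencil.
n = 8
c = np.zeros(n)
c[0], c[1], c[-1] = 2.0, -0.5, -0.5
r = np.arange(n, dtype=float)
z = circulant_precond_solve(c, r)
```

In emission tomography the weighted least-squares Hessian is only approximately circulant at best, which is why this cheap step degrades under Poisson-weighted, shift-variant problems and motivates the preconditioners the paper proposes.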
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets
NASA Astrophysics Data System (ADS)
Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo
2016-05-01
In this paper, a combined numerical-experimental methodology for the identification of elastic moduli of orthotropic media is presented. Special attention is given to laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentations of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities to the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves a more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of the laminated electrical steel sheets. It is shown that the method can successfully identify the orthotropic elastic moduli that reproduce the measured natural frequencies and frequency response functions in finite element analyses with reasonable accuracy.
Nonparametric methods for drought severity estimation at ungauged sites
NASA Astrophysics Data System (ADS)
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
Nonlocalized clustering: a new concept in nuclear cluster structure physics.
Zhou, Bo; Funaki, Y; Horiuchi, H; Ren, Zhongzhou; Röpke, G; Schuck, P; Tohsaki, A; Xu, Chang; Yamada, T
2013-06-28
We investigate the α+16O cluster structure in the inversion-doublet band (Kπ = 0₁±) states of 20Ne with an angular-momentum-projected version of the Tohsaki-Horiuchi-Schuck-Röpke (THSR) wave function, which was successful "in its original form" for the description of, e.g., the famous Hoyle state. In contrast with the traditional view on clusters as localized objects, especially in inversion doublets, we find that these single THSR wave functions, which are based on the concept of nonlocalized clustering, can well describe the Kπ = 0₁− band and the Kπ = 0₁+ band. For instance, they have 99.98% and 99.87% squared overlaps for the 1− and 3− states (99.29%, 98.79%, and 97.75% for the 0+, 2+, and 4+ states), respectively, with the corresponding exact solution of the α+16O resonating group method. These astounding results shed a completely new light on the physics of low-energy nuclear cluster states in nuclei: the clusters are nonlocalized and move around in the whole nuclear volume, only avoiding mutual overlap due to the Pauli blocking effect.
Acoustic design by topology optimization
NASA Astrophysics Data System (ADS)
Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole
2008-11-01
Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling, or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and almost 30 dB when using two barriers, is achieved compared to conventional sound barriers.
First and second order derivatives for optimizing parallel RF excitation waveforms.
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations. Copyright © 2015 Elsevier Inc. All rights reserved.
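The piecewise-constant Bloch solution this abstract builds on treats each constant-field interval as a rotation of the magnetization vector. A single such rotation can be sketched with the Rodrigues formula (in plain vector form rather than the paper's quaternion calculus; the pulse below is an illustrative hard pulse, not an optimized waveform).

```python
import numpy as np

def rotate(m, axis, angle):
    """Rodrigues rotation of magnetization m about a unit axis by the
    given angle -- the elementary building block of the concatenated-
    rotation solution for piecewise-constant fields."""
    axis = axis / np.linalg.norm(axis)
    return (m * np.cos(angle)
            + np.cross(axis, m) * np.sin(angle)
            + axis * np.dot(axis, m) * (1 - np.cos(angle)))

# Hard 90-degree pulse about x applied to equilibrium magnetization:
m0 = np.array([0.0, 0.0, 1.0])
m1 = rotate(m0, np.array([1.0, 0.0, 0.0]), np.pi / 2)
# m1 is [0, -1, 0]: the magnetization is tipped into the transverse plane.
```

A full excitation is then the concatenation of such rotations, one per constant-field interval, and the magnitude least-squares objective compares the magnitude of the final transverse magnetization to the desired pattern voxel by voxel.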
First and second order derivatives for optimizing parallel RF excitation waveforms
NASA Astrophysics Data System (ADS)
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations.
Combined optimization of image-gathering and image-processing systems for scene feature detection
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.
1987-01-01
The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
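The desired characteristic ∇²G referred to in this abstract is the standard Laplacian-of-Gaussian. A textbook discrete version (circularly symmetric, i.e. *not* the paper's optimal, lattice-compensated detector) can be written as:

```python
import numpy as np

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel del^2 G, the classic
    edge-detection characteristic (textbook form; kernel size and
    sigma here are illustrative choices)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2 * sigma**2))
    return (r2 - 2 * sigma**2) / sigma**4 * g

k = log_kernel(9, 1.5)
```

Convolving an image with this kernel responds to intensity curvature; the paper's contribution is that the *optimal* version of this characteristic, accounting for the acquisition optics and sampling lattice, loses the circular symmetry this textbook kernel has.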
Point-spread function reconstruction in ground-based astronomy by l1-lp model.
Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing
2012-11-01
In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally degraded by atmospheric turbulence, and hence images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l1-lp (p = 1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.
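The Tikhonov-regularized least-squares baseline that the l1-lp model is compared against reduces to solving regularized normal equations. A generic dense sketch follows; the paper's actual system involves wavefront gradient operators, which are not reproduced here, so the matrix and data below are illustrative.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 via the normal
    equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Illustrative overdetermined system with mild noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = tikhonov_solve(A, b, lam=1e-3)
```

With lam = 0 this reduces to ordinary least squares; a positive lam damps noise at the cost of bias, which is the smoothing behavior the l1-lp model improves upon at edges.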
NASA Astrophysics Data System (ADS)
Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.
2018-03-01
Application of existing geostatistical theory to stream networks raises a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on Euclidean distance, which is not permissible for stream data, where stream distance must be used. To overcome this, an autocovariance model based on stream distance was developed using a convolution kernel (moving average construction) approach. Spatial models for stream networks are widely used for environmental monitoring of river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyze the predictive capability of ordinary kriging for two environmental variables, potential of hydrogen (pH) and temperature. The empirical results show that: (1) the best-fit autocovariance function for temperature and pH of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); and (2) the spatial correlation between upstream and downstream locations of the Citarik River decreases with separation distance.
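The linear autocovariance the study selects can be written directly as a function of stream distance. The "linear-with-sill" form and the parameter values below are illustrative assumptions; the study's fitted parameters are not given in the abstract.

```python
import numpy as np

def linear_with_sill_autocov(h, sill, range_):
    """Linear-with-sill autocovariance as a function of stream distance h:
    it decreases linearly from the sill at h = 0 to zero at the range,
    and stays zero beyond it (one of the standard stream-network models)."""
    h = np.asarray(h, dtype=float)
    return sill * np.clip(1.0 - h / range_, 0.0, None)

h = np.array([0.0, 5.0, 10.0, 20.0])
out = linear_with_sill_autocov(h, sill=2.0, range_=10.0)
# -> [2. 1. 0. 0.]
```

This matches the abstract's second finding qualitatively: correlation between upstream and downstream points decreases as the stream distance between them grows, vanishing beyond the range.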
NASA Technical Reports Server (NTRS)
Aksay, Ilhan A. (Inventor); Pan, Shuyang (Inventor); Prud'Homme, Robert K. (Inventor)
2016-01-01
A nanocomposite composition having a silicone elastomer matrix having therein a filler loading of greater than 0.05 weight percentage, based on total nanocomposite weight, wherein the filler is functional graphene sheets (FGS) having a surface area of from 300 square meters per gram to 2630 square meters per gram; and a method for producing the nanocomposite and uses thereof.
An Analysis of Advertising Effectiveness for U.S. Navy Recruiting
1997-09-01
This thesis estimates the effect of Navy television advertising on enlistment rates of high quality male recruits (Armed Forces Qualification Test…Joint advertising is for all Armed Forces), Joint journal, and Joint direct mail advertising are explored. Enlistments are modeled as a function of…several factors including advertising, recruiters, and economic. Regression analyses (Ordinary Least Squares and Two Stage Least Squares) explore the
Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.
ERIC Educational Resources Information Center
Willett, John B.; Singer, Judith D.
In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…
Generalized Least Squares Estimators in the Analysis of Covariance Structures.
ERIC Educational Resources Information Center
Browne, Michael W.
This paper concerns situations in which a p x p covariance matrix is a function of an unknown q x 1 parameter vector y-sub-o. Notation is defined in the second section, and some algebraic results used in subsequent sections are given. Section 3 deals with asymptotic properties of generalized least squares (G.L.S.) estimators of y-sub-o. Section 4…
Song, Guicheng; Wang, Miaomiao; Zeng, Bin; Zhang, Jing; Jiang, Chenliang; Hu, Qirui; Geng, Guangtao; Tang, Canming
2015-05-01
Pollen tube growth in styles was strongly inhibited by temperature above 35 °C, and the yield of cotton decreased because of the adverse effect of high temperatures during square development. High-temperature stress during flowering influences the square development of upland cotton (Gossypium hirsutum L.) and cotton yield. Although it is well known that square development is sensitive to high temperature, the high-temperature-sensitive stages of square development and the effects of high temperature on pollen tube growth in the styles are unknown. The effect of high temperature on anther development corresponding to pollen vigor is unknown during anther development. The objectives of this study were to identify the stages of square development that are sensitive to high temperatures (37/30 and 40/34 °C), to determine whether the abnormal development of squares influenced by high temperature is responsible for the variation in the in vitro germination percent of pollen grains at anthesis, to identify the effect of high temperature on pollen germination in the styles, and to determine pollen thermotolerance heterosis. Our results show that the stages from the sporogenous cell to tetrad stage (square length <6.0 mm) were the most sensitive to high temperature, and the corresponding pollen viability at anthesis was consistent with the changes in the square development stage. The thermotolerance of hybrid F1 pollen showed heterosis, and pollen viability could be used as a criterion for screening for high-temperature-tolerant cultivars. These results can be used in breeding to develop new cotton cultivars that can withstand high-temperature conditions, particularly in a future warmer climate.
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. 
This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
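The Square-root Normal model preferred by this study amounts to OLS on the square root of cost with predictions squared back. This synthetic sketch (invented data, a single covariate standing in for the study's age/sex/diagnostic risk adjusters) shows why the transformation helps on right-skewed cost.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic right-skewed "cost" that is linear on the square-root scale.
n = 500
x = rng.uniform(0, 1, n)
cost = (1.0 + 3.0 * x + rng.normal(0, 0.2, n)) ** 2

X = np.column_stack([np.ones(n), x])

# Square-root Normal model: OLS on sqrt(cost), back-transform by squaring.
beta_sqrt, *_ = np.linalg.lstsq(X, np.sqrt(cost), rcond=None)
pred_sqrt = (X @ beta_sqrt) ** 2

# Plain OLS on untransformed cost, for comparison.
beta_ols, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred_ols = X @ beta_ols

rmse = lambda p: np.sqrt(np.mean((cost - p) ** 2))
```

Because the mean cost is curved in the covariate, the linear OLS fit carries a systematic lack-of-fit error that the square-root model avoids, mirroring the RMSE ranking reported in the study.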
Chi-square-based scoring function for categorization of MEDLINE citations.
Kastrin, A; Peterlin, B; Hristovski, D
2010-01-01
Text categorization has been used in biomedical informatics for identifying documents containing relevant topics of interest. We developed a simple method that uses a chi-square-based scoring function to determine the likelihood of MEDLINE citations containing a genetically relevant topic. Our procedure requires construction of a genetic and a nongenetic domain document corpus. We used MeSH descriptors assigned to MEDLINE citations for this categorization task. We compared frequencies of MeSH descriptors between the two corpora by applying the chi-square test. A MeSH descriptor was considered to be a positive indicator if its relative observed frequency in the genetic domain corpus was greater than its relative observed frequency in the nongenetic domain corpus. The output of the proposed method is a list of scores for all the citations, with the highest score given to those citations containing MeSH descriptors typical of the genetic domain. Validation was done on a set of 734 manually annotated MEDLINE citations. It achieved a predictive accuracy of 0.87 with 0.69 recall and 0.64 precision. We evaluated the method by comparing it to three machine-learning algorithms (support vector machines, decision trees, naïve Bayes). Although the differences were not statistically significant, the results showed that our chi-square scoring performs as well as the compared machine-learning algorithms. We suggest that chi-square scoring is an effective solution to help categorize MEDLINE citations. The algorithm is implemented in the BITOLA literature-based discovery support system as a preprocessor for the gene symbol disambiguation process.
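The descriptor-scoring idea can be sketched as a signed 2x2 chi-square statistic per term. Corpus construction and MeSH handling are reduced here to plain token lists; the sign convention (positive when the term is relatively more frequent in the genetic corpus) follows the abstract, but the rest is a simplification.

```python
from collections import Counter

def chi_square_scores(genetic_docs, nongenetic_docs):
    """Score each descriptor by a 2x2 chi-square statistic on its
    document frequency in the genetic vs. nongenetic corpus, signed
    positive when it is a positive (genetic-domain) indicator."""
    g_n, ng_n = len(genetic_docs), len(nongenetic_docs)
    g_cnt = Counter(t for d in genetic_docs for t in set(d))
    ng_cnt = Counter(t for d in nongenetic_docs for t in set(d))
    scores = {}
    for term in set(g_cnt) | set(ng_cnt):
        a, b = g_cnt[term], ng_cnt[term]   # docs containing the term
        c, d = g_n - a, ng_n - b           # docs lacking the term
        n = g_n + ng_n
        num = n * (a * d - b * c) ** 2
        den = (a + b) * (c + d) * g_n * ng_n
        chi2 = num / den if den else 0.0
        sign = 1 if a / g_n >= b / ng_n else -1
        scores[term] = sign * chi2
    return scores

gen = [["gene", "mutation"], ["gene", "risk"]]
non = [["risk", "therapy"], ["therapy"]]
s = chi_square_scores(gen, non)
```

A citation score is then simply an aggregate of the scores of its descriptors, so citations carrying many positive-indicator descriptors rank highest.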
Göbel, Silke M
2015-01-01
Most adults and children in cultures where reading text progresses from left to right also count objects from the left to the right side of space. The reverse is found in cultures with a right-to-left reading direction. The current set of experiments investigated whether vertical counting in the horizontal plane is also influenced by reading direction. Participants were either from a left-to-right reading culture (UK) or from a mixed (left-to-right and top-to-bottom) reading culture (Hong Kong). In Experiment 1, native English-speaking children and adults and native Cantonese-speaking children and adults performed three object counting tasks. Objects were presented flat on a table in a horizontal, vertical, and square display. Independent of culture, the horizontal array was mostly counted from left to right. While the majority of English-speaking children counted the vertical display from bottom to top, the majority of the Cantonese-speaking children as well as both Cantonese- and English-speaking adults counted the vertical display from top to bottom. This pattern was replicated in the counting pattern for squares: all groups except the English-speaking children started counting with the top left coin. In Experiment 2, Cantonese-speaking adults counted a square array of objects after they read a text presented to them either in left-to-right or in top-to-bottom reading direction. Most Cantonese-speaking adults started counting the array by moving horizontally from left to right. However, significantly more Cantonese-speaking adults started counting with a top-to-bottom movement after reading the text presented in a top-to-bottom reading direction than in a left-to-right reading direction. Our results show clearly that vertical counting in the horizontal plane is influenced by longstanding as well as more recent experience of reading direction.
Hazard Function Estimation with Cause-of-Death Data Missing at Random
Wang, Qihua; Dinse, Gregg E.; Liu, Chunling
2010-01-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
An efficient voting algorithm for finding additive biclusters with random background.
Xiao, Jing; Wang, Lusheng; Liu, Xiaowen; Jiang, Tao
2008-12-01
The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L − 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 − 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with high accuracy and speed.
Jacobson, Robert B.; Colvin, Michael E.; Bulliner, Edward A.; Pickard, Darcy; Elliott, Caroline M.
2018-06-07
Management actions intended to increase growth and survival of pallid sturgeon (Scaphirhynchus albus) age-0 larvae on the Lower Missouri River require a comprehensive understanding of the geomorphic habitat template of the river. The study described here had two objectives relating to where channel-reconfiguration projects should be located to optimize effectiveness. The first objective was to develop a bend-scale (that is, at the scale of individual bends, defined as “cross-over to cross-over”) geomorphic classification of the Lower Missouri River to help in the design of monitoring and evaluation of such projects. The second objective was to explore whether geomorphic variables could provide insight into varying capacities of bends to intercept drifting larvae. The bend-scale classification was based on geomorphic and engineering variables for 257 bends from Sioux City, Iowa, to the confluence with the Mississippi River near St. Louis, Missouri. We used k-means clustering to identify groupings of bends that shared the same characteristics. Separate 3-, 4-, and 6-cluster classifications were developed and mapped. The three classifications are nested in a hierarchical structure. We also explored capacities of bends to intercept larvae through evaluation of linear models that predicted persistent sand area or catch per unit effort (CPUE) of age-0 sturgeon as a function of the same geomorphic variables used in the classification. All highly ranked models that predict persistent sand area contained mean channel width and standard deviation of channel width as significant variables. Some top-ranked models also included contributions of channel sinuosity and density of navigation structures. The sand-area prediction models have r-squared values of 0.648–0.674. In contrast, the highest-ranking CPUE models have r-squared values of 0.011–0.170, indicating much more uncertainty for the biological response variable. 
Whereas the persistent sand model documents that the physical processes of transport and accumulation are systematic and predictable, the poor performance of the CPUE models indicates that additional processes will need to be considered to predict biological transport and accumulation.
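The k-means clustering step used to group bends can be illustrated with a minimal Lloyd's-algorithm sketch. The two-column feature matrix and the deterministic first-k initialization are simplifications for the example, not the study's geomorphic variables or settings:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means: returns cluster labels and centroids.
    Deterministic first-k initialization is used here for brevity."""
    centroids = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # distance of every row (e.g., a bend's feature vector) to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned rows
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Running separate k = 3, 4, and 6 clusterings on the same standardized feature matrix, as in the study, yields the nested hierarchy of classifications.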
Stochastic Least-Squares Petrov–Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov–Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ²-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ²-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining the weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
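The core weighted least-squares idea can be shown on a deterministic toy system: minimize ||W(Ax − b)||₂ over x for a diagonal weighting W. This is only the algebraic kernel of the formulation; the stochastic subspace machinery of the paper is omitted, and the system below is made up:

```python
import numpy as np

def weighted_lspg_solve(A, b, w):
    """Minimize || diag(w) (A x - b) ||_2 over x: a weighted
    least-squares solve, the algebraic core of the LSPG idea."""
    W = np.diag(w)
    x, *_ = np.linalg.lstsq(W @ A, W @ b, rcond=None)
    return x
```

Changing the weights w changes which residual components the solution favors, which is how the method adapts to different target weighted norms.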
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of the periodicity of non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
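The iterative fit-and-subtract scheme can be sketched as follows. This is a greedy simplification: the trial-frequency grid replaces the principal-component step of the paper, and the Scilab implementation details are not reproduced.

```python
import numpy as np

def fit_sine_at(t, y, f):
    """Linear least-squares fit of a*sin(2*pi*f*t) + b*cos(2*pi*f*t) at a
    fixed trial frequency f; returns (amplitude, fitted values)."""
    M = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.hypot(*coef), M @ coef

def iterative_sine_regression(t, y, trial_freqs, n_components):
    """Repeatedly fit and subtract the sine with the largest amplitude,
    yielding components in decreasing order of significance (a sketch,
    not the authors' algorithm)."""
    resid, found = y.copy(), []
    for _ in range(n_components):
        best = max(trial_freqs, key=lambda f: fit_sine_at(t, resid, f)[0])
        amp, fit = fit_sine_at(t, resid, best)
        found.append((best, amp))
        resid = resid - fit
    return found, resid
```

On a synthetic two-tone signal, the dominant sine is extracted first and the residual shrinks toward the noise floor.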
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and comparable performance in others, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
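Quantile regression replaces squared error with the asymmetric "pinball" (check) loss, which is what ties the estimator to the tau-th conditional quantile. A minimal version of that loss (the RKHS machinery and the data sparsity constraint are not shown):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Pinball (check) loss for the tau-th quantile: under-prediction is
    weighted by tau, over-prediction by (1 - tau)."""
    r = y - pred
    return np.mean(np.maximum(tau * r, (tau - 1) * r))
```

For tau = 0.9, under-predicting by one unit costs 0.9 while over-predicting by one unit costs only 0.1, pulling the fit toward the upper tail.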
Linear response to nonstationary random excitation.
NASA Technical Reports Server (NTRS)
Hasselman, T.
1972-01-01
Development of a method for computing the mean-square response of linear systems to nonstationary random excitation of the form y(t) = f(t) x(t), in which x(t) is a stationary process and f(t) is a deterministic modulating function. The method is suitable for application to multidegree-of-freedom systems when the mean-square response at a point due to excitation applied at another point is desired. Both the stationary process, x(t), and the modulating function, f(t), may be arbitrary. The method utilizes a fundamental component of transient response dependent only on x(t) and the system, and independent of f(t), to synthesize the total response. The role played by this component is analogous to that played by the Green's function or impulse response function in the convolution integral.
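The modulation model itself has a simple pointwise consequence: for stationary x(t), the mean square of the excitation is E[y²(t)] = f²(t) E[x²]. A toy ensemble check with white-noise x (this illustrates the excitation model only, not the paper's multidegree-of-freedom response synthesis):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 50)
f = np.exp(-t)                          # deterministic modulating envelope (made up)
x = rng.normal(size=(100000, t.size))   # many realizations of a unit-variance stationary x(t)
ms = np.mean((f * x) ** 2, axis=0)      # ensemble mean square of y(t) = f(t) x(t)
```

With E[x²] = 1, the ensemble mean square tracks f²(t) to within sampling error.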
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as the parameter vector x) given prior information on these parameters and a likelihood which gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation to be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in traditional adjustment procedures based on chi-square minimization) and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal through the resonance to the continuum range, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented.
The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of the choice of probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
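The sampling rule pdf(posterior) ∝ pdf(prior) × likelihood can be illustrated with a minimal Metropolis sampler on a one-parameter toy model. All numbers below are invented for the sketch (a Gaussian prior and likelihood, not evaluated nuclear data), and this is a generic Markov-chain sampler, not the authors' BMC algorithm:

```python
import numpy as np

def log_post(x, data, sigma=0.5):
    """Unnormalized log posterior: standard normal prior times a
    Gaussian likelihood with known sigma (toy choices)."""
    log_prior = -0.5 * x**2
    log_like = -0.5 * np.sum((data - x) ** 2) / sigma**2
    return log_prior + log_like

def metropolis(data, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis: propose, then accept with probability
    min(1, posterior ratio); first half of the chain is discarded as burn-in."""
    rng = np.random.default_rng(seed)
    x, chain = 0.0, []
    lp = log_post(x, data)
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop, data)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain[n_steps // 2:])
```

For this conjugate toy case the posterior mean is known in closed form, which gives a direct check on the chain.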
On one-sided filters for spectral Fourier approximations of discontinuous functions
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Shu, Chi-Wang
1991-01-01
The existence of one-sided filters, for spectral Fourier approximations of discontinuous functions, which can recover spectral accuracy up to the discontinuity from one side, was proved. A least squares procedure was also used to construct such a filter and test it numerically on several discontinuous functions.
Image interpolation via regularized local linear regression.
Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang
2011-12-01
The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, replacing the OLS error norm with the moving least squares (MLS) error norm leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the ℓ2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
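The ℓ2-penalized least squares at the core of RLLR has the familiar ridge closed form. A minimal version (the MLS weighting and the manifold smoothness term of the full method are omitted, so this is only the regularization backbone, not RLLR itself):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam I)^{-1} X^T y.
    lam = 0 recovers ordinary least squares; lam > 0 shrinks the
    coefficients toward zero for stability."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Increasing lam trades fidelity for stability, which is exactly the overfitting control the complexity penalty provides in the interpolation scheme.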
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
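The deconvolution idea can be sketched in one dimension: build a convolution matrix from a Gaussian PSF and recover the spectrum by regularized least squares. The dense Tikhonov solve below stands in for the paper's sparse 2D system solved with the regularized LSQR algorithm, and the PSF width and test spectrum are invented:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Dense n x n convolution matrix for a 1-D Gaussian PSF,
    row-normalized so blurring preserves total flux."""
    x = np.arange(n)[:, None] - np.arange(n)[None, :]
    A = np.exp(-0.5 * (x / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

def deconvolve(A, b, lam=1e-3):
    """Solve min ||A s - b||^2 + lam ||s||^2 via the augmented
    least-squares system (Tikhonov regularization)."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    s, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return s
```

In practice the 2D convolution matrix is sparse, which is why an iterative sparse solver such as LSQR is preferred over the dense factorization used here.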
Influence of anisotropy on percolation and jamming of linear k-mers on square lattice with defects
NASA Astrophysics Data System (ADS)
Tarasevich, Yu Yu; Laptev, V. V.; Burmistrov, A. S.; Shinyaeva, T. S.
2015-09-01
By means of Monte Carlo simulation, we study the layers produced by the random sequential adsorption of linear rigid objects (k-mers, also known as rigid or stiff rods, sticks, needles) onto the square lattice with defects in the presence of an external field. The value of k varies from 2 to 32. The point defects, randomly and uniformly placed on the substrate, hinder adsorption of the elongated objects. The external field biases the otherwise isotropic deposition of the particles; consequently, the deposited layers are anisotropic. We study the influence of the defect concentration, the length of the objects, and the external field on the percolation threshold and the jamming concentration. Our main findings are (i) the critical defect concentration at which percolation never occurs, even in the jammed state, decreases for short k-mers (k < 16) and increases for long k-mers (k > 16) as anisotropy increases, (ii) the corresponding critical k-mer concentration decreases with anisotropy growth, (iii) the jamming concentration decreases drastically with growth of k-mer length for any anisotropy, (iv) for short k-mers, the percolation threshold is almost insensitive to the defect concentration for any anisotropy.
Homogeneous buoyancy-generated turbulence
NASA Technical Reports Server (NTRS)
Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.
1992-01-01
Using a theoretical analysis of fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of infinite fluid at an initial instant is examined. It is shown that analytical results together with numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. Results of numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.
Theoretical and experimental studies of error in square-law detector circuits
NASA Technical Reports Server (NTRS)
Stanley, W. D.; Hearn, C. P.; Williams, J. B.
1984-01-01
Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for the analysis of a series of spectra acquired as a function of an external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
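The moving-window least squares derivative can be sketched directly: at each point, fit a low-order polynomial to the surrounding window and take its linear coefficient as the slope. Unlike classic Savitzky-Golay weights, this works for nonuniform spacing; the window size and polynomial degree below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def lsmw_derivative(x, y, half_window=3, degree=2):
    """Least squares moving-window first derivative for (possibly
    nonuniformly spaced) data: fit a local polynomial in each window,
    centered on x[i] so the linear coefficient is the slope there."""
    n = len(x)
    dydx = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        c = np.polyfit(x[lo:hi] - x[i], y[lo:hi], degree)
        dydx[i] = c[-2]  # coefficient of the linear term
    return dydx
```

For data that are exactly polynomial of the chosen degree, the local fit is exact, so the recovered derivative matches the analytic one even on a nonuniform grid.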
Square sugars: challenges and synthetic strategies.
Hazelard, Damien; Compain, Philippe
2017-05-10
Square sugars (4-membered ring carbohydrate mimetics) are at the intersection of several important topics concerning the recent emergence, in medicinal chemistry, of glycomimetic drugs and small ring systems. Monosaccharide mimetics containing oxetane, azetidine, thietane or cyclobutane rings present a number of synthetic challenges that are a powerful driving force for innovation in organic synthesis. In addition to the inherent issues associated with 4-membered rings, the high density of functional groups and asymmetric centres found in glycomimetics further complicates the matter and requires efficient stereoselective methodologies. The purpose of this review is to present an overview of the elegant strategies that have been developed to synthesize the different types of square sugars.
Hugelier, Siewert; Vitale, Raffaele; Ruckebusch, Cyril
2018-03-01
This article explores smoothing with edge-preserving properties as a spatial constraint for the resolution of hyperspectral images with multivariate curve resolution-alternating least squares (MCR-ALS). For each constrained component image (distribution map), irrelevant spatial details and noise are smoothed applying an L1- or L0-norm penalized least squares regression, highlighting in this way big changes in intensity of adjacent pixels. The feasibility of the constraint is demonstrated on three different case studies, in which the objects under investigation are spatially clearly defined, but have significant spectral overlap. This spectral overlap is detrimental for obtaining a good resolution and additional spatial information should be provided. The final results show that the spatial constraint enables better image (map) abstraction, artifact removal, and better interpretation of the results obtained, compared to a classical MCR-ALS analysis of hyperspectral images.
NASA Astrophysics Data System (ADS)
Lalneihpuii, R.; Shrivastava, Ruchi; Mishra, Raj Kumar
2018-05-01
Using a statistical mechanical model with a square-well (SW) interatomic potential within the framework of the mean spherical approximation, we determine the composition-dependent microscopic correlation functions, interdiffusion coefficients, surface tension and chemical ordering in Ag-Cu melts. Further, the Dzugutov universal scaling law of normalized diffusion is verified with the SW potential in binary mixtures. We find that the excess entropy scaling law is valid for SW binary melts. The partial and total structure factors in the attractive and repulsive regions of the interacting potential are evaluated and then Fourier transformed to get partial and total radial distribution functions. A good agreement between theoretical and experimental values for the total structure factor and the reduced radial distribution function is observed, which consolidates our model calculations. The well-known Bhatia-Thornton correlation functions are also computed for Ag-Cu melts. The concentration-concentration correlations in the long wavelength limit in liquid Ag-Cu alloys have been analytically derived through the long wavelength limit of the partial correlation functions, and applied to demonstrate the chemical ordering and interdiffusion coefficients in binary liquid alloys. We also investigate the concentration-dependent viscosity coefficients and surface tension using the computed diffusion data in these alloys. Our computed results for structure, transport and surface properties of liquid Ag-Cu alloys obtained with the square-well interatomic interaction are fully consistent with their corresponding experimental values.
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least squares method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noises, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and the mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than the Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
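The flavour of the experiment can be reproduced in a few lines: fit a line by least squares under Gaussian versus heavy-tailed noise and compare the parameter error. A Student-t with low degrees of freedom is used below as a convenient heavy-tailed stand-in for Lévy-stable noise, and the line, noise level, and seed are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
true = 2.0 * x + 1.0  # hypothetical linear test function

def fit_error(noise):
    """Sum of absolute errors in the fitted slope and intercept."""
    a, b = np.polyfit(x, true + noise, 1)
    return abs(a - 2.0) + abs(b - 1.0)

err_gauss = fit_error(0.05 * rng.normal(size=x.size))
err_heavy = fit_error(0.05 * rng.standard_t(df=1.2, size=x.size))
```

Because squared-error fitting weights large residuals heavily, a single extreme draw from the heavy-tailed noise can dominate the fit, which is the sensitivity the study quantifies.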
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
Influence of the least-squares phase on optical vortices in strongly scintillated beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Mingzhou; Roux, Filippus S.; National Laser Centre, CSIR, P.O. Box 395, Pretoria 0001
2009-07-15
The optical vortices that exist in strongly scintillated beams make it difficult for conventional adaptive optics systems to remove the phase distortions. When the least-squares reconstructed phase is removed, the vortices still remain. However, we found that the removal of the least-squares phase induces a portion of the vortices to be annihilated during subsequent propagation, causing a reduction in the total number of vortices. This can be understood in terms of the restoration of equilibrium between explicit vortices, which are visible in the phase function, and vortex bound states, which are somehow encoded in the continuous phase fluctuations. Numerical simulations are provided to show that the total number of optical vortices in a strongly scintillated beam can be reduced significantly after a few steps of least-squares phase corrections.
The Mean Life Squared Relationship for Abundances of Extinct Radioactivities
NASA Technical Reports Server (NTRS)
Lodders, K.; Cameron, A. G. W.
2004-01-01
We discovered that the abundances of now extinct radioactivities (relative to stable reference isotopes) in meteorites vary as a function of their mean lifetimes squared. This relationship applies to chondrites, achondrites, and irons, but not to calcium-aluminum inclusions (CAIs). Certain meteorites contain excesses in isotopic abundances from the decay of radioactive isotopes with half-lives much less than the age of the solar system. These short-lived radioactivities are now extinct, but they were alive when meteorites assembled in the early solar system. The origin of these radioactivities and the processes which control their abundances in the solar nebula are still not well understood. Some clues may come from our finding that the meteoritic abundances of now extinct radioactivities (relative to stable reference isotopes) vary as a function of their mean lifetimes squared. This relationship applies to chondrites, achondrites, and irons, but not to CAIs. This points to at least two different processes establishing the abundances of short-lived isotopes found in the meteoritic record.
2011-01-01
Background Generalized anxiety disorder (GAD) is the most frequent anxiety disorder in primary care patients. It is known that painful physical symptoms (PPS) are associated with GAD, regardless of the presence of comorbid major depressive disorder (MDD). However, the specific role of such symptoms in patients' functional impairment is not well understood. The objective of the present study is to assess functional impairment related to the presence of PPS in patients with GAD. Methods This is a post hoc analysis of a cross-sectional study. Functioning, in the presence (overall pain score >30; Visual Analog Scale) or absence of PPS, was assessed using the Sheehan Disability Scale (SDS) in three groups of patients: 1) GAD and comorbid MDD (GAD+MDD+), 2) GAD without comorbid MDD (GAD+MDD-), 3) controls (GAD-MDD-). ANCOVA models were used. Results Of those patients with GAD+MDD+ (n = 559), 436 (78.0%) had PPS, compared with GAD+MDD- (249 of 422, 59%) and controls (95 of 336, 28.3%). Functioning worsened in both GAD groups in the presence of PPS (SDS least squares mean total score: 16.1 vs. 9.8, p < 0.0001, GAD+MDD+; 14.3 vs. 8.2, p < 0.0001, GAD+MDD-). The presence of PPS was significantly associated with lower productivity. Conclusions Functional impairment related to the presence of PPS was considerable. Clinical implications should be considered. PMID:21510887
Active Control of the Forced and Transient Response of a Finite Beam. M.S. Thesis
NASA Technical Reports Server (NTRS)
Post, John Theodore
1989-01-01
When studying structural vibrations resulting from a concentrated source, many structures may be modelled as a finite beam excited by a point source. The theoretical limit on cancelling the resulting beam vibrations by utilizing another point source as an active controller is explored. Three different types of excitation are considered: harmonic, random, and transient. In each case, a cost function is defined and minimized for numerous parameter variations. For the case of harmonic excitation, the cost function is obtained by integrating the mean squared displacement over a region of the beam in which control is desired. A controller is then found to minimize this cost function in the control interval. The control interval and controller location are continuously varied for several frequencies of excitation. The results show that control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam, but control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside of the control region. For random excitation, the cost function is realized by integrating the expected value of the displacement squared over the interval of the beam in which control is desired. This is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation, whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. A cost function representative of the beam vibration is obtained by integrating the transient displacement squared over a region of the beam and over all time. The form of the controller is chosen a priori as either one or two delayed pulses.
Delays constrain the controller to be causal. The best possible control is then examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses. The two-pulse controller gives better performance than a single-pulse controller, but the cost of finding the optimal delay times for the additional controllers increases as the square of the number of control pulses.
On the best mean-square approximations to a planet's gravitational potential
NASA Astrophysics Data System (ADS)
Lobkova, N. I.
1985-02-01
The continuous problem of approximating the gravitational potential of a planet in the form of polynomials of solid spherical functions is considered. The best mean-square polynomials, referred to different parts of space, are compared with each other. The harmonic coefficients corresponding to the surface of a planet are shown to be unstable with respect to the degree of the polynomial and to differ from the Stokes constants.
Brownian self-driven particles on the surface of a sphere
NASA Astrophysics Data System (ADS)
Apaza, Leonardo; Sandoval, Mario
2017-08-01
We present the dynamics of overdamped Brownian self-propelled particles moving on the surface of a sphere. The effect of self-propulsion on the diffusion of these particles is elucidated by determining their angular (azimuthal and polar) mean-square displacement. Short- and long-time analytical expressions for their angular mean-square displacement are offered. Finally, the particles' steady marginal angular probability density functions are also derived.
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
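The quadratic-tensor details are specific to Schnabel and Frank's method, but the general shape of an iterative nonlinear least-squares solver can be sketched with plain Gauss-Newton. The sketch below is for orientation only: the exponential model, the data, and the step-halving safeguard (a crude stand-in for a trust region) are illustrative choices, not taken from the paper.

```python
import math

def sse(r):
    """Sum of squared residuals."""
    return sum(ri * ri for ri in r)

def gauss_newton(residual, jacobian, x0, iters=50):
    """Gauss-Newton for a 2-parameter nonlinear least-squares problem,
    with step halving as a crude safeguard in place of a trust region."""
    a, b = x0
    for _ in range(iters):
        r = residual(a, b)
        J = jacobian(a, b)
        # Normal equations (J^T J) dx = -J^T r, solved by hand for the 2x2 case.
        g11 = sum(j0 * j0 for j0, _ in J)
        g12 = sum(j0 * j1 for j0, j1 in J)
        g22 = sum(j1 * j1 for _, j1 in J)
        c1 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        c2 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (c1 * g22 - c2 * g12) / det
        db = (g11 * c2 - g12 * c1) / det
        step = 1.0  # halve the step until the residual norm decreases
        while step > 1e-8 and sse(residual(a + step * da, b + step * db)) > sse(r):
            step *= 0.5
        a, b = a + step * da, b + step * db
    return a, b

# Recover a = 2.0, b = -0.5 from noise-free data y = a * exp(b * x).
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]
residual = lambda a, b: [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
jacobian = lambda a, b: [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
a_hat, b_hat = gauss_newton(residual, jacobian, (1.0, -1.0))
print(round(a_hat, 3), round(b_hat, 3))
```

The quadratic-tensor model augments exactly this linearized step with second-order tensor terms, and the paper's box constraints bound each unknown during the update.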
Majorana-Hubbard model on the square lattice
NASA Astrophysics Data System (ADS)
Affleck, Ian; Rahmani, Armin; Pikulin, Dmitry
2017-09-01
We study a tight-binding model of interacting Majorana (Hermitian) modes on a square lattice. The model may have an experimental realization in a superconducting-film-topological-insulator heterostructure in a magnetic field. We find a rich phase diagram, as a function of interaction strength, including an emergent superfluid phase with spontaneous breaking of an emergent U(1) symmetry, separated by a supersymmetric transition from a gapless normal phase.
NASA Astrophysics Data System (ADS)
Liu, Fei; He, Yong
2008-02-01
Visible and near infrared (Vis/NIR) transmission spectroscopy and chemometric methods were utilized to predict the pH values of cola beverages. Five varieties of cola were prepared, and 225 samples (45 for each variety) were selected for the calibration set, while 75 samples (15 for each variety) were used for the validation set. Savitzky-Golay smoothing and standard normal variate (SNV) followed by a first derivative were used as the pre-processing methods. Partial least squares (PLS) analysis was employed to extract the principal components (PCs), which were used as the inputs of a least squares-support vector machine (LS-SVM) model according to their accumulative reliabilities. LS-SVM with a radial basis function (RBF) kernel and a two-step grid search technique was then applied to build the regression model, with a comparison against PLS regression. The correlation coefficient (r), root mean square error of prediction (RMSEP) and bias were 0.961, 0.040 and 0.012 for PLS, and 0.975, 0.031 and 4.697×10⁻³ for LS-SVM, respectively. Both methods achieved satisfactory precision. The results indicated that Vis/NIR spectroscopy combined with chemometric methods could be applied as an alternative way to predict the pH of cola beverages.
NASA Astrophysics Data System (ADS)
Wang, Dong
2018-05-01
Thanks to the great efforts made by Antoni (2006), spectral kurtosis has been recognized as a milestone for characterizing non-stationary signals, especially bearing fault signals. The main idea of spectral kurtosis is to use the fourth standardized moment, namely kurtosis, as a function of spectral frequency so as to indicate how repetitive transients caused by a bearing defect vary with frequency. Moreover, spectral kurtosis is defined based on an analytic bearing fault signal constructed from either a complex filter or Hilbert transform. On the other hand, another attractive work was reported by Borghesani et al. (2014) to mathematically reveal the relationship between the kurtosis of an analytical bearing fault signal and the square of the squared envelope spectrum of the analytical bearing fault signal for explaining spectral correlation for quantification of bearing fault signals. More interestingly, it was discovered that the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum corresponds to the raw 4th order moment. Inspired by the aforementioned works, in this paper, we mathematically show that: (1) spectral kurtosis can be decomposed into squared envelope and squared L2/L1 norm so that spectral kurtosis can be explained as spectral squared L2/L1 norm; (2) spectral L2/L1 norm is formally defined for characterizing bearing fault signals and its two geometrical explanations are made; (3) spectral L2/L1 norm is proportional to the square root of the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum; (4) some extensions of spectral L2/L1 norm for characterizing bearing fault signals are pointed out.
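The paper's decomposition concerns the full spectral quantity, but the core intuition, namely that an L2/L1 norm ratio, like kurtosis, grows when an envelope contains repetitive transients, can be checked on a toy envelope. The envelope values and transient spacing below are invented purely for illustration.

```python
import math

def l2_l1_norm(env):
    """L2/L1 norm ratio of a non-negative envelope; larger for sparse,
    impulsive envelopes and smaller for flat ones."""
    l2 = math.sqrt(sum(e * e for e in env))
    l1 = sum(abs(e) for e in env)
    return l2 / l1

def envelope_kurtosis(env):
    """Fourth-moment ratio <e^4> / <e^2>^2 of an envelope."""
    n = len(env)
    m2 = sum(e * e for e in env) / n
    m4 = sum(e ** 4 for e in env) / n
    return m4 / (m2 * m2)

n = 1000
flat = [1.0] * n                   # constant envelope: no transients
impulsive = [0.05] * n             # weak background ...
for k in range(0, n, 100):
    impulsive[k] = 1.0             # ... plus a transient every 100 samples

print(l2_l1_norm(flat) < l2_l1_norm(impulsive))            # → True
print(envelope_kurtosis(flat) < envelope_kurtosis(impulsive))  # → True
```

Both indicators rank the transient-rich envelope above the flat one, which is the behavior the spectral versions exploit band by band.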
Wu, Xialu; Ding, Nini; Zhang, Wenhua; Xue, Fei; Hor, T S Andy
2015-07-20
The use of simple self-assembly methods to direct or engineer porosity or channels of desirable functionality is a major challenge in the field of metal-organic frameworks. We herein report a series of frameworks obtained by modifying the square-ring structure of [{Cu2(5-dmpy)2(L1)2(H2O)(MeOH)}2{ClO4}4]·4MeOH (1·4MeOH, 5-dmpy = 5,5'-dimethyl-2,2'-bipyridine, HL1 = 4-pyridinecarboxylic acid). Use of pyridyl carboxylates as directional spacers in a bipyridyl-chelated Cu(II) system led to the growth of the square unit into other configurations, namely square ring, square chain, and square tunnel. Another remarkable characteristic is that the novel use of two isomers of pyridinyl-acrylic acid directs selectively to two different extreme tubular forms: aligned stacking of discrete hexagonal rings and crack-free one-dimensional continuum polymers. This provides a unique example of two extreme forms of copper nanotubes from two isomeric spacers. All of the reactions are performed in a one-pot self-assembly process at room temperature, while the topological selectivity is exclusively determined by the skeletal characteristics of the spacers.
Use of partial least squares regression to impute SNP genotypes in Italian cattle breeds.
Dimauro, Corrado; Cellesi, Massimo; Gaspa, Giustino; Ajmone-Marsan, Paolo; Steri, Roberto; Marras, Gabriele; Macciotta, Nicolò P P
2013-06-05
The objective of the present study was to test the ability of the partial least squares regression technique to impute genotypes from low-density single nucleotide polymorphism (SNP) panels, i.e. 3K or 7K, to a high-density panel with 50K SNP. No pedigree information was used. Data consisted of 2093 Holstein, 749 Brown Swiss and 479 Simmental bulls genotyped with the Illumina 50K Beadchip. First, a single-breed approach was applied by using only data from Holstein animals. Then, to enlarge the training population, data from the three breeds were combined and a multi-breed analysis was performed. Accuracies of genotypes imputed using the partial least squares regression method were compared with those obtained by using the Beagle software. The impact of genotype imputation on breeding value prediction was evaluated for milk yield, fat content and protein content. In the single-breed approach, the accuracy of imputation using partial least squares regression was around 90% and 94% for the 3K and 7K platforms, respectively; corresponding accuracies obtained with Beagle were around 85% and 90%. Moreover, the computing time required by the partial least squares regression method was on average around 10 times lower than that required by Beagle. Using the partial least squares regression method in the multi-breed approach resulted in lower imputation accuracies than using single-breed data. The impact of SNP-genotype imputation on the accuracy of direct genomic breeding values was small. The correlation between estimates of genetic merit obtained by using imputed versus actual genotypes was around 0.96 for the 7K chip. The results of the present work suggest that the partial least squares regression imputation method could be useful for imputing SNP genotypes when pedigree information is not available.
The Case against Secondary Task Analyses of Mental Workload.
1980-01-10
…different attributes of one object (e.g., its color, form and size) than one attribute of three objects (e.g., red, green and blue or square, circle and… RED printed in colored ink, e.g. green. The subject is instructed to report the ink color, ignoring the color word. This is quite difficult for most… Directions in Cognitive Psychology. London: Routledge & Kegan Paul (in press). Baddeley, A. D. The capacity for generating information by randomization
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Qinggang; Kusoglu, Ahmet; Lucas, Ivan T.
2011-08-01
The objective of this effort was to correlate the local surface ionic conductance of a Nafion® 212 proton-exchange membrane with its bulk and interfacial transport properties as a function of water content. Both macroscopic and microscopic proton conductivities were investigated at different relative humidity levels, using electrochemical impedance spectroscopy and current-sensing atomic force microscopy (CSAFM). We were able to identify small ion-conducting domains that grew with humidity at the surface of the membrane. Numerical analysis of the surface ionic conductance images recorded at various relative humidity levels helped determine the fractional area of ion-conducting active sites. A simple square-root relationship between the fractional conducting area and observed interfacial mass-transport resistance was established. Furthermore, the relationship between the bulk ionic conductivity and surface ionic conductance pattern of the Nafion® membrane was examined.
Siskos, Michael G; Choudhary, M Iqbal; Gerothanassis, Ioannis P
2017-03-07
The exact knowledge of hydrogen atomic positions of O-H···O hydrogen bonds in solution and in the solid state has been a major challenge in structural and physical organic chemistry. The objective of this review article is to summarize recent developments in the refinement of labile hydrogen positions with the use of: (i) density functional theory (DFT) calculations after a structure has been determined by X-ray diffraction from single crystals or from powders; (ii) ¹H-NMR chemical shifts as constraints in DFT calculations; and (iii) the root-mean-square deviation between experimentally determined and DFT-calculated ¹H-NMR chemical shifts, exploiting the great sensitivity of ¹H-NMR shielding to hydrogen bonding properties.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented; these separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer that is used is given; and results which illustrate application of the method are presented.
Fuzzy support vector machines for adaptive Morse code recognition.
Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh
2006-11-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, so that different points make different contributions to the decision function learned by the support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.
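The variable-degree variable-step-size refinement is specific to the paper, but the underlying least-mean-square (LMS) update is standard. Below is a minimal fixed-step sketch that identifies a hypothetical 2-tap system; the tap values, step size, and iteration count are illustrative, not from the paper.

```python
import random

random.seed(0)
true_w = [0.6, -0.3]        # "unknown" 2-tap FIR system to identify
w = [0.0, 0.0]              # adaptive filter taps, initially zero
mu = 0.05                   # fixed step size (the paper adapts this)
x_prev = 0.0
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)            # input sample
    d = true_w[0] * x + true_w[1] * x_prev   # desired (system) output
    y = w[0] * x + w[1] * x_prev             # adaptive filter output
    e = d - y                                # instantaneous error
    w[0] += mu * e * x                       # LMS gradient update
    w[1] += mu * e * x_prev
    x_prev = x

print(round(w[0], 2), round(w[1], 2))  # → 0.6 -0.3
```

With no measurement noise the taps converge essentially exactly; the variable-step variants trade off this convergence speed against steady-state misadjustment when noise is present.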
Image reconstruction of IRAS survey scans
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. Romke
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
An electrical analogy to Mie scattering
Caridad, José M.; Connaughton, Stephen; Ott, Christian; Weber, Heiko B.; Krstić, Vojislav
2016-01-01
Mie scattering is an optical phenomenon that appears when electromagnetic waves, in particular light, are elastically scattered by a spherical or cylindrical object. A transfer of this phenomenon onto electron states in ballistic graphene has been proposed theoretically, assuming a well-defined incident wave scattered by a perfectly cylindrical nanometer-scale potential, but experimental fingerprints are lacking. We present an experimental demonstration of an electrical analogue of Mie scattering by using graphene as a conductor, with circular potentials arranged in a square two-dimensional array. The tabletop experiment is carried out under the seemingly unfavourable conditions of diffusive transport at room temperature. Nonetheless, when a canted arrangement of the array with respect to the incident current is chosen, cascaded Mie scattering robustly results in a transverse voltage. Its response to electrostatic gating and variation of the potentials convincingly underscores Mie scattering as the underlying mechanism. The findings presented here encourage the design of functional electronic metamaterials. PMID:27671003
Efficient similarity-based data clustering by optimal object to cluster reallocation.
Rossignol, Mathias; Lagrange, Mathieu; Cont, Arshia
2018-01-01
We present an iterative flat hard clustering algorithm designed to operate on arbitrary similarity matrices, with the only constraint that these matrices be symmetric. Although functionally very close to kernel k-means, our proposal performs a maximization of average intra-class similarity, instead of a squared-distance minimization, in order to remain closer to the semantics of similarities. We show that this approach permits relaxing some conditions on usable affinity matrices, such as positive semi-definiteness, as well as opening possibilities for the computational optimization required for large datasets. Systematic evaluation on a variety of data sets shows that, compared with kernel k-means and spectral clustering methods, the proposed approach gives equivalent or better performance while running much faster. Most notably, it significantly reduces memory access, which makes it a good choice for large data collections. Material enabling the reproducibility of the results is made available online.
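A minimal sketch in the spirit of (but far simpler than) the proposed algorithm: each object is repeatedly reallocated to the cluster with the highest average similarity to it, until no move improves anything. The similarity matrix and starting labels below are invented for illustration.

```python
def reallocate_clusters(S, labels, n_iter=20):
    """Greedy object-to-cluster reallocation maximizing, for each object,
    its average similarity to the members of its cluster. S is a symmetric
    similarity matrix given as a list of lists."""
    n = len(S)
    for _ in range(n_iter):
        changed = False
        for i in range(n):
            best, best_score = labels[i], float("-inf")
            for c in sorted(set(labels)):
                members = [j for j in range(n) if labels[j] == c and j != i]
                if not members:
                    continue  # never score an (otherwise) empty cluster
                score = sum(S[i][j] for j in members) / len(members)
                if score > best_score:
                    best, best_score = c, score
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break  # converged: no object wants to move
    return labels

# Two obvious blocks: objects 0-2 are mutually similar, as are 3-5.
S = [[1.0 if (i < 3) == (j < 3) else 0.1 for j in range(6)] for i in range(6)]
labels = reallocate_clusters(S, [0, 1, 0, 1, 0, 1])  # deliberately bad start
print(labels)  # → [0, 0, 0, 1, 1, 1]
```

The paper's algorithm adds the machinery needed to make such reallocation passes efficient and cache-friendly on large similarity matrices; this sketch only shows the reallocation semantics.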
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd, soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
In this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum absolute error; the relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) during parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization.
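The two criteria can be stated in a few lines. The toy residual vectors below (invented for illustration) show that the criteria can rank the same pair of candidate fits differently, which is why the choice matters for calibration.

```python
def sse(errors):
    """Sum of squared errors criterion."""
    return sum(e * e for e in errors)

def max_abs(errors):
    """Maximum error in absolute value criterion."""
    return max(abs(e) for e in errors)

# Fit A: moderate errors everywhere; Fit B: tiny errors plus one outlier.
fit_a = [0.75, -0.75, 0.75, -0.75]
fit_b = [0.0, 0.0, 0.0, 1.25]

print(sse(fit_a), sse(fit_b))          # → 2.25 1.5625 (SSE prefers fit B)
print(max_abs(fit_a), max_abs(fit_b))  # → 0.75 1.25 (max-abs prefers fit A)
```

The max-abs criterion guards the single worst point, while SSE trades a larger worst-case error for smaller errors elsewhere; the article's contribution is initializing the optimizer well under either choice.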
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
Lorey, Britta; Pilgramm, Sebastian; Bischoff, Matthias; Stark, Rudolf; Vaitl, Dieter; Kindermann, Stefan; Munzert, Jörn; Zentgraf, Karen
2011-01-01
The present study examined the neural basis of vivid motor imagery with parametric functional magnetic resonance imaging. 22 participants performed motor imagery (MI) of six different right-hand movements that differed in pointing-accuracy demands and object involvement: either none, two big, or two small squares had to be pointed at in alternation, either with or without an object grasped with the fingers. After each imagery trial, participants rated the perceived vividness of motor imagery on a 7-point scale. Results showed that increased perceived imagery vividness was parametrically associated with increasing neural activation within the left putamen, the left premotor cortex (PMC), the posterior parietal cortex of the left hemisphere, the left primary motor cortex, the left somatosensory cortex, and the left cerebellum. Within the right hemisphere, activation was found within the right cerebellum, the right putamen, and the right PMC. It is concluded that the perceived vividness of MI is parametrically associated with neural activity within sensorimotor areas. The results corroborate the hypothesis that MI is an outcome of neural computations based on movement representations located within motor areas. PMID:21655298
Improved linearity using harmonic error rejection in a full-field range imaging system
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2008-02-01
Full-field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity-modulated illumination source and a gain-modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead requiring only a change to the signal generation and timing electronics.
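The four-measurement arctangent step, and the error that a square (odd-harmonic-rich) waveform introduces into it, can be sketched as follows. The sampling convention and the test phase are illustrative assumptions, not the paper's exact signal chain.

```python
import math

def phase_from_four_samples(samples):
    """Phase of a sinusoidal correlation waveform from samples taken at
    0, 90, 180 and 270 degrees of modulation phase."""
    a0, a1, a2, a3 = samples
    return math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)

true_phase = 1.0  # radians; proportional to target distance

# Ideal sinusoidal waveform: the estimate is exact.
sin_samples = [math.cos(true_phase + k * math.pi / 2) for k in range(4)]
print(round(phase_from_four_samples(sin_samples), 6))  # → 1.0

# Square waveform (only the sign survives): odd harmonics quantize the
# estimate, demonstrating the nonlinearity the paper's sampling cancels.
sq_samples = [math.copysign(1.0, math.cos(true_phase + k * math.pi / 2))
              for k in range(4)]
print(round(phase_from_four_samples(sq_samples), 6))  # → 0.785398 (pi/4)
</test sentinel>```

With only four samples of a square wave the arctangent collapses to multiples of pi/4, so the range estimate is badly nonlinear; the authors' harmonic-rejection sampling removes this error without extra acquisition time.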
NASA Astrophysics Data System (ADS)
Zhang, Changjiang; Dai, Lijie; Ma, Leiming; Qian, Jinfang; Yang, Bo
2017-10-01
An objective technique is presented for estimating tropical cyclone (TC) inner-core two-dimensional (2-D) surface wind field structure using infrared satellite imagery and machine learning. For a TC with an eye, the eye contour is first segmented by a geodesic active contour model, from which the eye circumference is obtained as the TC eye size. A mathematical model is then established between the eye size and the radius of maximum wind obtained from past official TC reports to derive the 2-D surface wind field within the TC eye. Meanwhile, composite information about the latitude of the TC center, surface maximum wind speed, TC age, and critical wind radii of 34- and 50-kt winds can be combined to build another mathematical model for deriving the inner-core wind structure. After that, least squares support vector machine (LSSVM), radial basis function neural network (RBFNN), and linear regression methods are introduced, respectively, into the two mathematical models, which are then tested with sensitivity experiments on real TC cases. Verification shows that the inner-core 2-D surface wind field structure estimated by LSSVM is better than that of RBFNN and linear regression.
NASA Astrophysics Data System (ADS)
Li, Ziru; Zhang, Xusheng
2008-12-01
Infrared thermal imaging (ITI) is a promising imaging technique for the health care field of traditional Chinese medicine (TCM). Successful application demands respecting the characteristics and regularities of human-body ITI and designing rigorous trials. First, the influence of time must be taken into account, as the ITI of the human body varies markedly with time. Second, relative magnitude is preferred as the index of the image features. Third, scatter diagrams and the method of least squares can provide important information for evaluating the health care effect. A double-blind placebo-controlled randomized trial was undertaken to study the influence of Shengsheng capsule, a TCM health food with immune-modulating function, on the ITI of the human body. The results showed that the effect of Shengsheng capsule on people with a weak constitution or in a period of weakness could be reflected objectively by ITI. The relative efficacy rate was 81.3% for the trial group and 30.0% for the control group; there was a significant difference between the two groups (P=0.003). Thus the sensitivity and objectivity of ITI are of great importance to the health care field of TCM.
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. Under a probabilistic hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst-case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, in which a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
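For orientation only, here is a single-machine simplification (the article's setting is the much harder m-machine permutation flow shop): the WSPT rule orders jobs by processing time divided by weight, and the objective sums weighted squared completion times. The job data are invented.

```python
def weighted_quadratic_cost(jobs, order):
    """jobs: list of (processing_time, weight); returns sum of w_j * C_j^2
    for the given processing order on a single machine."""
    t, cost = 0.0, 0.0
    for j in order:
        p, w = jobs[j]
        t += p              # completion time C_j accumulates processing times
        cost += w * t * t   # weighted quadratic completion time
    return cost

jobs = [(4.0, 1.0), (1.0, 2.0), (3.0, 3.0)]  # (p_j, w_j), illustrative values

# WSPT: ascending ratio p_j / w_j  (ratios here: 4.0, 0.5, 1.0).
wspt = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])
print(wspt)                                      # → [1, 2, 0]
print(weighted_quadratic_cost(jobs, wspt))       # → 114.0
print(weighted_quadratic_cost(jobs, [0, 2, 1]))  # → 291.0 (a worse order)
```

In the flow shop the completion times couple across machines, which is why the article needs the consistency condition for asymptotic optimality and metaheuristics for moderate instances.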
Improvement of Raman lidar algorithm for quantifying aerosol extinction
NASA Technical Reports Server (NTRS)
Russo, Felicita; Whiteman, David; Demoz, Belay; Hoff, Raymond
2005-01-01
Aerosols are particles of different composition and origin and influence the formation of clouds, which are important in the atmospheric radiative balance. At present there is high uncertainty about the effect of aerosols on climate, mainly because aerosol presence in the atmosphere can be highly variable in space and time. Monitoring of aerosols in the atmosphere is necessary to better understand many of these uncertainties. A lidar (an instrument that uses light to detect the extent of atmospheric aerosol loading) can be particularly useful for monitoring aerosols in the atmosphere, since it is capable of recording the scattered intensity as a function of altitude from molecules and aerosols. One lidar method (the Raman lidar) makes use of the different wavelength changes that occur when light interacts with the varying chemistry and structure of atmospheric aerosols. One quantity that is indicative of aerosol presence is the aerosol extinction, which quantifies the amount of attenuation (removal of photons), due to scattering, that light undergoes when propagating in the atmosphere. It can be directly measured with a Raman lidar using the wavelength dependence of the received signal. In order to calculate aerosol extinction from Raman scattering data it is necessary to evaluate the rate of change (derivative) of a Raman signal with respect to altitude. Since derivatives are defined for continuous functions, they cannot be computed directly on the experimental data, which are not continuous. The most popular technique for finding the functional behavior of experimental data is the least-squares fit. This procedure finds a polynomial function that best approximates the experimental data. The typical approach in the lidar community is to make an a priori assumption about the functional behavior of the data in order to calculate the derivative.
It has been shown in previous work that the use of the chi-square technique to determine the most likely functional behavior of the data prior to actually calculating the derivative eliminates the need for making a priori assumptions. We note that the a priori choice of a model itself can lead to larger uncertainties as compared to the method that is validated here. In this manuscript, the chi-square technique that determines the most likely functional behavior is validated through numerical simulation and by application to a large body of Raman lidar measurements. In general, we show that the chi-square approach to evaluate aerosol extinction yields lower extinction uncertainty than the traditional technique. We also use the technique to study the feasibility of developing a general characterization of the extinction uncertainty that could permit the uncertainty in Raman lidar aerosol extinction measurements to be estimated accurately without the use of the chi-square technique.
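Not the authors' full procedure, but a minimal stdlib sketch of its key ingredient: candidate polynomial fits are compared by reduced chi-square (the value closest to 1 wins), and only then is the fitted polynomial differentiated. The synthetic data, noise level, and candidate degrees are illustrative assumptions.

```python
import random

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations
    (adequate for the very low degrees used on short signal windows)."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef                               # coef[i] multiplies x**i

def reduced_chi_square(xs, ys, coef, sigma):
    """Chi-square of the fit divided by its degrees of freedom."""
    resid = (y - sum(c * x ** i for i, c in enumerate(coef)) for x, y in zip(xs, ys))
    return sum((r / sigma) ** 2 for r in resid) / (len(xs) - len(coef))

# Synthetic "signal": a quadratic trend plus Gaussian noise of known sigma.
random.seed(1)
sigma = 0.05
xs = [0.1 * i for i in range(30)]
ys = [1.0 + 0.5 * x - 0.2 * x * x + random.gauss(0.0, sigma) for x in xs]

# Choose the degree whose reduced chi-square is closest to 1, then differentiate.
fits = {d: polyfit(xs, ys, d) for d in (1, 2)}
best = min(fits, key=lambda d: abs(reduced_chi_square(xs, ys, fits[d], sigma) - 1.0))
print(best)  # → 2
slope_mid = fits[best][1] + 2 * fits[best][2] * 1.5  # derivative at x = 1.5
```

Letting the chi-square statistic select the model, instead of assuming a functional form a priori, is exactly the step the manuscript validates for the extinction derivative.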
Regression Analysis: Instructional Resource for Cost/Managerial Accounting
ERIC Educational Resources Information Center
Stout, David E.
2015-01-01
This paper describes a classroom-tested instructional resource, grounded in principles of active learning and constructivism, that embraces two primary objectives: to "demystify" for accounting students technical material from statistics regarding ordinary least-squares (OLS) regression analysis--material that students may find obscure or…
VizieR Online Data Catalog: OGLE II SMC eclipsing binaries (Wyrzykowski+, 2004)
NASA Astrophysics Data System (ADS)
Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M. K.; Zebrun, K.; Soszinski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.
2009-03-01
We present a new version of the OGLE-II catalog of eclipsing binary stars detected in the Small Magellanic Cloud, based on the Difference Image Analysis catalog of variable stars in the Magellanic Clouds, containing data collected from 1997 to 2000. We found 1351 eclipsing binary stars in the central 2.4 square degrees of the SMC; 455 stars are newly discovered objects not found in the previous release of the catalog. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. The full catalog with individual photometry is accessible from the OGLE Internet archive at ftp://sirius.astrouw.edu.pl/ogle/ogle2/var_stars/smc/ecl . Regular observations of the SMC fields started on June 26, 1997 and covered about 2.4 square degrees of the central parts of the SMC. Reductions of the photometric data collected up to the end of May 2000 were performed with the Difference Image Analysis (DIA) package. (1 data file).
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple and effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations required by most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its performance has been verified by computer simulations, which show that the wavefront reconstruction errors can be reduced by two orders of magnitude.
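For intuition, the simplest form of the tilt problem is that a tilted plane reference adds a linear phase ramp to the reconstructed phase. The sketch below removes such a ramp by fitting and subtracting a plane. Note the hedge: this toy uses an ordinary least-squares plane fit on an already-unwrapped phase map purely for illustration, whereas the algorithm in the abstract specifically avoids least-squares equations and works on the interferograms directly.

```python
import numpy as np

def remove_linear_tilt(phase):
    """Fit and subtract a best-fit plane (linear phase ramp) from a
    reconstructed, unwrapped phase map.  Illustrative only: uses a
    least-squares plane fit, unlike the method in the abstract."""
    ny, nx = phase.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # design matrix for the plane a*x + b*y + c
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(phase.size)])
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return phase - (A @ coeffs).reshape(ny, nx)
```

Applied to a phase map that is a pure ramp plus a constant, the result is zero to machine precision.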
On the classification of weakly integral modular categories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul; Galindo, César; Ng, Siu-Hung
In this paper we classify all modular categories of dimension 4m, where m is an odd square-free integer, and all rank 6 and rank 7 weakly integral modular categories. This completes the classification of weakly integral modular categories through rank 7. In particular, our results imply that all integral modular categories of rank at most 7 are pointed (that is, every simple object has dimension 1). All the non-integral (but weakly integral) modular categories of ranks 6 and 7 have dimension 4m, with m an odd square-free integer, so their classification is an application of our main result. The classification of rank 7 integral modular categories is facilitated by an analysis of two group actions on modular categories: the Galois group of the field generated by the entries of the S-matrix and the group of isomorphism classes of invertible objects. We derive some valuable arithmetic consequences from these actions.
VOSA: SED building and analysis of thousands of stars in the framework of Gaia
NASA Astrophysics Data System (ADS)
Rodrigo, C.; Solano, E.; Bayo, A.
2014-07-01
VOSA (http://svo2.cab.inta-csic.es/theory/vosa/) is a web-based tool designed to combine private photometric measurements with data available in VO services distributed worldwide to build the observational spectral energy distributions (SEDs) of hundreds of objects. VOSA also accesses various collections of models to simulate the equivalent theoretical SEDs, allows the user to decide the range of physical parameters to explore, performs the SED comparison, provides the best-fitting models to the user following two different approaches (chi-square and Bayesian fitting), and, for stellar sources, compares these parameters with isochrones and evolutionary tracks to estimate masses and ages. In particular, VOSA offers the advantage of deriving physical parameters using all the available photometric information instead of a restricted subset of colors. VOSA was first released in 2008 and its functionalities are described in Bayo et al. (2008). At the time of writing there are more than 300 active users of VOSA who have published more than 60 refereed papers. In the framework of the GENIUS (https://gaia.am.ub.es/Twiki/bin/view/GENIUS) project we are upgrading VOSA to, on the one hand, provide seamless access to Gaia data and, on the other, handle thousands of objects at a time. In this poster, the main functionalities to be implemented in the Gaia context will be described. The poster can be found at: http://svo.cab.inta-csic.es/files/svo//Public/SVOPapers/posters/vosa-poster3.pdf.
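The chi-square grid comparison such a tool performs can be illustrated in miniature: given observed fluxes with uncertainties and a grid of model SEDs sampled at the same photometric bands, pick the grid point with the smallest chi-square. All names below are illustrative, not VOSA's actual interface.

```python
import numpy as np

def chi2_best_model(obs_flux, obs_err, model_fluxes, model_params):
    """Return the parameter of the grid model minimizing chi-square
    against the observed photometry.  model_fluxes has one row per
    grid model; toy sketch, not VOSA's API."""
    chi2 = np.sum(((obs_flux - model_fluxes) / obs_err) ** 2, axis=1)
    best = int(np.argmin(chi2))
    return model_params[best], float(chi2[best])
```

For example, photometry generated from one grid model plus small perturbations is matched back to that model's parameter.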
Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.
1987-09-18
breakdown point and influence function. The proposed sequence of predictive encoders attains strictly positive breakdown point and uniformly bounded... influence function, at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Jiangye
Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is difficult to automatically identify solar panels in images, since they are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of manually labeled training data.
NASA Technical Reports Server (NTRS)
Chen, Fang-Jenq
1997-01-01
Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of one part in 40,000 is achievable without tedious laboratory calibrations of the camera.
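As a toy analogue of such an iterative least-squares adjustment, the sketch below uses Gauss-Newton iterations to recover a magnification s and a single radial distortion coefficient k1 from matched object/image points, under the assumed model img = s·obj·(1 + k1·(s·r)²) with r the object-plane radius. This model and both parameter names are assumptions for illustration; the real algorithm solves the full three-dimensional transformation equations with more distortion terms.

```python
import numpy as np

def fit_camera_model(obj_xy, img_xy, n_iter=15):
    """Gauss-Newton least-squares fit of magnification s and radial
    distortion k1 in  img = s*obj*(1 + k1*(s*r)^2),  r = |obj|.
    Minimal sketch of an iterative adjustment, not the full method."""
    s, k1 = 1.0, 0.0
    r2 = np.sum(obj_xy**2, axis=1)[:, None]     # squared object radii
    for _ in range(n_iter):
        pred = s * obj_xy * (1 + k1 * s**2 * r2)
        resid = (img_xy - pred).ravel()
        # Jacobian of the prediction w.r.t. (s, k1)
        J_s = (obj_xy * (1 + 3 * k1 * s**2 * r2)).ravel()
        J_k = (s**3 * obj_xy * r2).ravel()
        J = np.column_stack([J_s, J_k])
        ds, dk = np.linalg.lstsq(J, resid, rcond=None)[0]
        s, k1 = s + ds, k1 + dk
    return s, k1
```

On synthetic matched points generated with a known s and k1, the iteration converges to the true parameters to within machine precision for mild distortion.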